Category Archives: BigBang

Time for An Over-The-Counter Morality Pill?

Stories of people who risk life and limb to help a stranger, and of those who turn a blind eye, are as current as they are ancient. Almost daily, the 24-hour news cycle carries a heartwarming story of someone doing good for another; and seemingly just as often comes a story of indifference. Social and psychological researchers have studied this behavior in humans and animals for decades. However, only recently has progress been made in identifying some underlying factors. Peter Singer, a professor of bioethics at Princeton University, and researcher Agata Sagan recap some current understanding.

All of this leads to a conundrum: would it be ethical to market a “morality” pill that would make us do more good more often?

[div class=attrib]From the New York Times:[end-div]

Last October, in Foshan, China, a 2-year-old girl was run over by a van. The driver did not stop. Over the next seven minutes, more than a dozen people walked or bicycled past the injured child. A second truck ran over her. Eventually, a woman pulled her to the side, and her mother arrived. The child died in a hospital. The entire scene was captured on video and caused an uproar when it was shown by a television station and posted online. A similar event occurred in London in 2004, as have others, far from the lens of a video camera.

Yet people can, and often do, behave in very different ways.

A news search for the words “hero saves” will routinely turn up stories of bystanders braving oncoming trains, swift currents and raging fires to save strangers from harm. Acts of extreme kindness, responsibility and compassion are, like their opposites, nearly universal.

Why are some people prepared to risk their lives to help a stranger when others won’t even stop to dial an emergency number?

Scientists have been exploring questions like this for decades. In the 1960s and early ’70s, famous experiments by Stanley Milgram and Philip Zimbardo suggested that most of us would, under specific circumstances, voluntarily do great harm to innocent people. During the same period, John Darley and C. Daniel Batson showed that even some seminary students on their way to give a lecture about the parable of the Good Samaritan would, if told that they were running late, walk past a stranger lying moaning beside the path. More recent research has told us a lot about what happens in the brain when people make moral decisions. But are we getting any closer to understanding what drives our moral behavior?

Here’s what much of the discussion of all these experiments missed: Some people did the right thing. A recent experiment (about which we have some ethical reservations) at the University of Chicago seems to shed new light on why.

Researchers there took two rats who shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could eat up a quantity of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.

Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

[div class=attrib]Read more here.[end-div]

A Theory of Everything? Nah

A peer-reviewed journal recently published a 100-page scientific paper describing a theory of everything that unifies quantum theory and relativity (a long-sought goal) with the origin of life, evolution and cosmology. And, best of all, the paper contains no mathematics.

The paper, written by a faculty member at Case Western Reserve University, raises interesting issues about the peer review process and the viral spread of information, whether it's correct or not.

[div class=attrib]From Ars Technica:[end-div]

Physicists have been working for decades on a “theory of everything,” one that unites quantum mechanics and relativity. Apparently, they were being too modest. Yesterday saw publication of a press release claiming a biologist had just published a theory accounting for all of that—and handling the origin of life and the creation of the Moon in the bargain. Better yet, no math!

Where did such a crazy theory originate? In the mind of a biologist at a respected research institution, Case Western Reserve University Medical School. Amazingly, he managed to get his ideas published, then amplified by an official press release. At least two sites with poor editorial control then reposted the press release—verbatim—as a news story.

Gyres all the way down

The theory in question springs from the brain of one Erik Andrulis, a CWRU faculty member who has a number of earlier papers on fairly standard biochemistry. The new paper was accepted by an open access journal called Life, meaning that you can freely download a copy of its 105 pages if you’re so inclined. Apparently, the journal is peer-reviewed, which is a bit of a surprise; even accepting that the paper makes a purely theoretical proposal, it is nothing like science as I’ve ever seen it practiced.

The basic idea is that everything, from subatomic particles to living systems, is based on helical systems the author calls “gyres,” which transform matter, energy, and information. These transformations then determine the properties of various natural systems, living and otherwise. What are these gyres? It’s really hard to say; even Andrulis admits that they’re just “a straightforward and non-mathematical core model” (although he seems to think that’s a good thing). Just about everything can be derived from this core model; the author cites “major phenomena including, but not limited to, quantum gravity, phase transitions of water, why living systems are predominantly CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), homochirality of sugars and amino acids, homeoviscous adaptation, triplet code, and DNA mutations.”

He’s serious about the “not limited to” part; one of the sections describes how gyres could cause the Moon to form.

Is this a viable theory of everything? The word “boson,” the particle that carries forces, isn’t in the text at all. “Quark” appears once—in the title of one of the 800 references. The only subatomic particle Andrulis describes is the electron; he skips from there straight up to oxygen. Enormous gaps exist everywhere one looks.

[div class=attrib]Read more here.[end-div]

Inside the Weird Teenage Brain

[div class=attrib]From the Wall Street Journal:[end-div]

“What was he thinking?” It’s the familiar cry of bewildered parents trying to understand why their teenagers act the way they do.

How does the boy who can thoughtfully explain the reasons never to drink and drive end up in a drunken crash? Why does the girl who knows all about birth control find herself pregnant by a boy she doesn’t even like? What happened to the gifted, imaginative child who excelled through high school but then dropped out of college, drifted from job to job and now lives in his parents’ basement?

Adolescence has always been troubled, but for reasons that are somewhat mysterious, puberty is now kicking in at an earlier and earlier age. A leading theory points to changes in energy balance as children eat more and move less.

At the same time, first with the industrial revolution and then even more dramatically with the information revolution, children have come to take on adult roles later and later. Five hundred years ago, Shakespeare knew that the emotionally intense combination of teenage sexuality and peer-induced risk could be tragic—witness “Romeo and Juliet.” But, on the other hand, if not for fate, 13-year-old Juliet would have become a wife and mother within a year or two.

Our Juliets (as parents longing for grandchildren will recognize with a sigh) may experience the tumult of love for 20 years before they settle down into motherhood. And our Romeos may be poetic lunatics under the influence of Queen Mab until they are well into graduate school.

What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.

The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again.

The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.

Recent studies in the neuroscientist B.J. Casey’s lab at Cornell University suggest that adolescents aren’t reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

From an evolutionary point of view, this all makes perfect sense. One of the most distinctive evolutionary features of human beings is our unusually long, protected childhood. Human children depend on adults for much longer than those of any other primate. That long protected period also allows us to learn much more than any other animal. But eventually, we have to leave the safe bubble of family life, take what we learned as children and apply it to the real adult world.

Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.

[div class=attrib]Read more here.[end-div]

Our Beautiful Home

A composite image of the beautiful blue planet, taken through NASA’s eyes on January 4, 2012. It’s so gorgeous that theDiagonal’s editor wishes he lived there.

[div class=attrib]Image of Earth from NASA’s Earth observing satellite Suomi NPP. Courtesy of NASA/NOAA/GSFC/Suomi NPP/VIIRS/Norman Kuring.[end-div]

Defying Gravity using Science

Gravity-defying feats have long been a favored pastime for magicians and illusionists. Well, science has now caught up to and surpassed our friends with sleight of hand. Check out this astonishing video (after the 10-second ad) of a “quantum locked”, levitating superconducting disc, courtesy of New Scientist.

[div class=attrib]From the New Scientist:[end-div]

For centuries, con artists have convinced the masses that it is possible to defy gravity or walk through walls. Victorian audiences gasped at tricks of levitation involving crinolined ladies hovering over tables. Even before then, fraudsters and deluded inventors were proudly displaying perpetual-motion machines that could do impossible things, such as make liquids flow uphill without consuming energy. Today, magicians still make solid rings pass through each other and become interlinked – or so it appears. But these are all cheap tricks compared with what the real world has to offer.

Cool a piece of metal or a bucket of helium to near absolute zero and, in the right conditions, you will see the metal levitating above a magnet, liquid helium flowing up the walls of its container or solids passing through each other. “We love to observe these phenomena in the lab,” says Ed Hinds of Imperial College, London.

This weirdness is not mere entertainment, though. From these strange phenomena we can tease out all of chemistry and biology, find deliverance from our energy crisis and perhaps even unveil the ultimate nature of the universe. Welcome to the world of superstuff.

This world is a cold one. It only exists within a few degrees of absolute zero, the lowest temperature possible. Though you might think very little would happen in such a frozen place, nothing could be further from the truth. This is a wild, almost surreal world, worthy of Lewis Carroll.

One way to cross its threshold is to cool liquid helium to just above 2 kelvin. The first thing you might notice is that you can set the helium rotating, and it will just keep on spinning. That’s because it is now a “superfluid”, a liquid state with no viscosity.

Another interesting property of a superfluid is that it will flow up the walls of its container. Lift a bucketful of superfluid helium out of a vat of the stuff, and it will flow up the sides of the bucket, over the lip and down the outside, rejoining the fluid it was taken from.

[div class=attrib]Read more here.[end-div]

Handedness Shapes Perception and Morality

A group of new research studies shows that our left- or right-handedness shapes our perception of “goodness” and “badness”.

[div class=attrib]From Scientific American:[end-div]

A series of studies led by psychologist Daniel Casasanto suggests that one thing that may shape our choice is the side of the menu an item appears on. Specifically, Casasanto and his team have shown that for left-handers, the left side of any space connotes positive qualities such as goodness, niceness, and smartness. For right-handers, the right side of any space connotes these same virtues. He calls this idea that “people with different bodies think differently, in predictable ways” the body-specificity hypothesis.

In one of Casasanto’s experiments, adult participants were shown pictures of two aliens side by side and instructed to circle the alien that best exemplified an abstract characteristic. For example, participants may have been asked to circle the “more attractive” or “less honest” alien. Of the participants who showed a directional preference (most participants did), the majority of right-handers attributed positive characteristics more often to the aliens on the right whereas the majority of left-handers attributed positive characteristics more often to aliens on the left.

Handedness was found to predict choice in experiments mirroring real-life situations as well. When participants read near-identical product descriptions on either side of a page and were asked to indicate the products they wanted to buy, most righties chose the item described on the right side while most lefties chose the product on the left. Similarly, when subjects read side-by-side resumes from two job applicants presented in a random order, they were more likely to choose the candidate described on their dominant side.

Follow-up studies on children yielded similar results. In one experiment, children were shown a drawing of a bookshelf with a box to the left and a box to the right. They were then asked to think of a toy they liked and a toy they disliked and choose the boxes in which they would place the toys. Children tended to choose to place their preferred toy in the box to their dominant side and the toy they did not like to their non-dominant side.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Drawing Hands by M. C. Escher, 1948, Lithograph. Courtesy of Wikipedia.[end-div]

From Nine Dimensions to Three

Over the last 40 years or so, physicists and cosmologists have sought to construct a single grand theory that describes our entire universe, from the subatomic soup of particles and forces to the vast constructs of our galaxies, and everything in between and beyond. Yet a major stumbling block has been how to reconcile the quantum theories that have so successfully described, and predicted, the microscopic with our current understanding of gravity. String theory is one such attempt to develop a unified theory of everything, but it remains jumbled with many possible solutions and, currently, lies beyond experimental verification.

Recently, however, theorists in Japan announced a computer simulation that shows how our current 3-dimensional universe may have evolved from the 9-dimensional space hypothesized by string theory.

[div class=attrib]From Interactions:[end-div]

A group of three researchers from KEK, Shizuoka University and Osaka University has for the first time revealed the way our universe was born with 3 spatial dimensions from 10-dimensional superstring theory, in which spacetime has 9 spatial directions and 1 temporal direction. This result was obtained by numerical simulation on a supercomputer.

[Abstract]

According to Big Bang cosmology, the universe originated in an explosion from an invisibly tiny point. This theory is strongly supported by observation of the cosmic microwave background and the relative abundance of elements. However, a situation in which the whole universe is a tiny point exceeds the reach of Einstein’s general theory of relativity, and for that reason it has not been possible to clarify how the universe actually originated.

In superstring theory, which is considered to be the “theory of everything”, all the elementary particles are represented as various oscillation modes of very tiny strings. Among those oscillation modes, there is one that corresponds to a particle that mediates gravity, and thus the general theory of relativity can be naturally extended to the scale of elementary particles. Therefore, it is expected that superstring theory allows the investigation of the birth of the universe. However, actual calculation has been intractable because the interaction between strings is strong, so all investigation thus far has been restricted to discussing various models or scenarios.

Superstring theory predicts a space with 9 dimensions, which poses the big puzzle of how this can be consistent with the 3-dimensional space that we live in.

A group of 3 researchers, Jun Nishimura (associate professor at KEK), Asato Tsuchiya (associate professor at Shizuoka University) and Sang-Woo Kim (project researcher at Osaka University) has succeeded in simulating the birth of the universe, using a supercomputer for calculations based on superstring theory. This showed that the universe had 9 spatial dimensions at the beginning, but only 3 of these underwent expansion at some point in time.

This work will be published soon in Physical Review Letters.

[The content of the research]

In this study, the team established a method for calculating large matrices (in the IKKT matrix model), which represent the interactions of strings, and calculated how the 9-dimensional space changes with time. In the figure, the spatial extents in 9 directions are plotted against time.

If one goes far enough back in time, space is indeed extended in 9 directions, but then at some point only 3 of those directions start to expand rapidly. This result demonstrates, for the first time, that the 3-dimensional space that we are living in indeed emerges from the 9-dimensional space that superstring theory predicts.
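As an aside for technically minded readers, the “spatial extent” being tracked has a concrete meaning in matrix models: one forms the 9x9 “moment of inertia” tensor T_ij = Tr(A_i A_j)/N from the nine matrices A_i, and its eigenvalues measure the squared extent of space in each direction. The Python sketch below is a toy illustration of that observable only, using random stand-in matrices rather than the team's actual Monte Carlo simulation:

import numpy as np

N = 32   # matrix size; the real simulations use much larger matrices
D = 9    # number of spatial directions in the superstring matrix model

# Random traceless Hermitian matrices stand in for one Monte Carlo sample.
rng = np.random.default_rng(0)
A = []
for _ in range(D):
    M = rng.normal(size=(N, N)) + 1j * rng.normal(size=(N, N))
    H = (M + M.conj().T) / 2             # make it Hermitian
    H -= (np.trace(H) / N) * np.eye(N)   # remove the trace part
    A.append(H)

# "Moment of inertia" tensor T_ij = Tr(A_i A_j) / N.
T = np.array([[np.trace(Ai @ Aj).real / N for Aj in A] for Ai in A])

# Its eigenvalues are the squared extents of space in the 9 directions;
# in the actual simulation, 3 of them start growing at some point in time.
extents = np.sort(np.linalg.eigvalsh(T))[::-1]
print(extents)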

This calculation was carried out on the supercomputer Hitachi SR16000 (theoretical performance: 90.3 TFLOPS) at the Yukawa Institute for Theoretical Physics of Kyoto University.

[The significance of the research]

It is almost 40 years since superstring theory was proposed as the theory of everything, extending the general theory of relativity to the scale of elementary particles. However, its validity and its usefulness remained unclear due to the difficulty of performing actual calculations. The newly obtained solution to the space-time dimensionality puzzle strongly supports the validity of the theory.

Furthermore, the establishment of a new method to analyze superstring theory using computers opens up the possibility of applying this theory to various problems. For instance, it should now be possible to provide a theoretical understanding of the inflation that is believed to have taken place in the early universe, and also the accelerating expansion of the universe, whose discovery earned the Nobel Prize in Physics this year. It is expected that superstring theory will develop further and play an important role in solving such puzzles in particle physics as the existence of the dark matter that is suggested by cosmological observations, and the Higgs particle, which is expected to be discovered by LHC experiments.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A visualization of strings. Courtesy of R. Dijkgraaf / Universe Today.[end-div]

Weight Loss and the Coordinated Defense Mechanism

New research into obesity and weight loss shows why it is so hard to keep weight lost through dieting from returning. The good news is that weight (re-)gain is not all due to a simple lack of self-control or laziness. The bad news is that keeping one's weight down may be much more difficult than assumed, thanks to the body's complex defense mechanism.

Tara Parker-Pope over at the Well blog reviews some of the new findings, which seem to point the finger at a group of hormones and specific genes that work together to help us regain those lost pounds.

[div class=attrib]From the New York Times:[end-div]

For 15 years, Joseph Proietto has been helping people lose weight. When these obese patients arrive at his weight-loss clinic in Australia, they are determined to slim down. And most of the time, he says, they do just that, sticking to the clinic’s program and dropping excess pounds. But then, almost without exception, the weight begins to creep back. In a matter of months or years, the entire effort has come undone, and the patient is fat again. “It has always seemed strange to me,” says Proietto, who is a physician at the University of Melbourne. “These are people who are very motivated to lose weight, who achieve weight loss most of the time without too much trouble and yet, inevitably, gradually, they regain the weight.”

Anyone who has ever dieted knows that lost pounds often return, and most of us assume the reason is a lack of discipline or a failure of willpower. But Proietto suspected that there was more to it, and he decided to take a closer look at the biological state of the body after weight loss.

Beginning in 2009, he and his team recruited 50 obese men and women. The men weighed an average of 233 pounds; the women weighed about 200 pounds. Although some people dropped out of the study, most of the patients stuck with the extreme low-calorie diet, which consisted of special shakes called Optifast and two cups of low-starch vegetables, totaling just 500 to 550 calories a day for eight weeks. Ten weeks in, the dieters lost an average of 30 pounds.

At that point, the 34 patients who remained stopped dieting and began working to maintain the new lower weight. Nutritionists counseled them in person and by phone, promoting regular exercise and urging them to eat more vegetables and less fat. But despite the effort, they slowly began to put on weight. After a year, the patients already had regained an average of 11 of the pounds they struggled so hard to lose. They also reported feeling far more hungry and preoccupied with food than before they lost the weight.

While researchers have known for decades that the body undergoes various metabolic and hormonal changes while it’s losing weight, the Australian team detected something new. A full year after significant weight loss, these men and women remained in what could be described as a biologically altered state. Their still-plump bodies were acting as if they were starving and were working overtime to regain the pounds they lost. For instance, a gastric hormone called ghrelin, often dubbed the “hunger hormone,” was about 20 percent higher than at the start of the study. Another hormone associated with suppressing hunger, peptide YY, was also abnormally low. Levels of leptin, a hormone that suppresses hunger and increases metabolism, also remained lower than expected. A cocktail of other hormones associated with hunger and metabolism all remained significantly changed compared to pre-dieting levels. It was almost as if weight loss had put their bodies into a unique metabolic state, a sort of post-dieting syndrome that set them apart from people who hadn’t tried to lose weight in the first place.

“What we see here is a coordinated defense mechanism with multiple components all directed toward making us put on weight,” Proietto says. “This, I think, explains the high failure rate in obesity treatment.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Pulsars Signal the Beat

Cosmology meets music. The German band Reimhaus samples the regular pulse of pulsars in its music. A pulsar is the rapidly spinning remnant of an exploded star — as the pulsar spins it emits a detectable beam of energy with a very regular beat, sometimes sub-second.

[div class=attrib]From Discover:[end-div]

Some pulsars spin hundreds of times per second, some take several seconds to spin once. If you take that pulse of light and translate it into sound, you get a very steady thumping beat with very precise timing. So making it into a song is a natural thought.

But we certainly didn’t take it as far as the German band Reimhaus did, making a music video out of it! They used several pulsars for their song “Echoes, Silence, Pulses & Waves”. So here’s the cosmic beat:

[tube]86IeHiXEZ3I[/tube]
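The translation from spin to beat, incidentally, is simple arithmetic: one pulse per rotation, so the tempo in beats per minute is just 60 divided by the rotation period in seconds. A quick Python sketch, using illustrative round-number periods rather than the actual pulsars Reimhaus sampled:

# Tempo of a pulsar: one pulse per rotation, so BPM = 60 / period.
def period_to_bpm(period_s):
    return 60.0 / period_s

# Illustrative periods only (not the pulsars used in the song).
for name, period_s in [("slow pulsar", 2.0), ("one-second pulsar", 1.0),
                       ("millisecond pulsar", 0.005)]:
    print(f"{name}: {period_to_bpm(period_s):,.0f} BPM")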

The First Interplanetary Travel Reservations

[div class=attrib]From Wired:[end-div]

Today, space travel is closer to reality for ordinary people than it has ever been. Though currently only the super rich are actually getting to space, several companies have more affordable commercial space tourism in their sights and at least one group is going the non-profit DIY route into space.

But more than a decade before it was even proven that man could reach space, average people were more positive about their own chances of escaping Earth’s atmosphere. This may have been partly thanks to the Interplanetary Tour Reservation desk at the American Museum of Natural History.

In 1950, to promote its new space exhibit, the AMNH had the brilliant idea to ask museum visitors to sign up to reserve their space on a future trip to the moon, Mars, Jupiter or Saturn. They advertised the opportunity in newspapers and magazines and received letters requesting reservations from around the world. The museum pledged to pass their list on to whichever entity headed to each destination first.

Today, to promote its newest space exhibit, “Beyond Planet Earth: The Future of Space Exploration,” the museum has published some of these requests. The letters manage to be interesting, hopeful, funny and poignant all at once. Some even included sketches of potential space capsules, rockets and spacesuits. The museum shared some of its favorites with Wired for this gallery.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Hayden Planetarium Space Tours Schedule. Courtesy of American Museum of Natural History / Wired.[end-div]

A Most Beautiful Equation

Many mathematicians, and many who are not mathematically oriented, would consider Albert Einstein's equation stating energy-mass equivalence to be singularly simple and beautiful. Indeed, E = mc^2 is perhaps one of the few equations to have entered the general public consciousness. However, a number of other, less well known mathematical constructs convey this level of significance and fundamental beauty as well. Wired lists several to consider.

[div class=attrib]From Wired:[end-div]

Even for those of us who finished high school algebra on a wing and a prayer, there’s something compelling about equations. The world’s complexities and uncertainties are distilled and set in orderly figures, with a handful of characters sufficing to capture the universe itself.

For your enjoyment, the Wired Science team has gathered nine of our favorite equations. Some represent the universe; others, the nature of life. One represents the limit of equations.

We do advise, however, against getting any of these equations tattooed on your body, much less branded. An equation t-shirt would do just fine.

The Beautiful Equation: Euler’s Identity

e^(iπ) + 1 = 0

Also called Euler’s relation, or the Euler equation of complex analysis, this bit of mathematics enjoys accolades across geeky disciplines.

Swiss mathematician Leonhard Euler first wrote the equality, which links together geometry, algebra, and five of the most essential symbols in math — 0, 1, i, pi and e — that are essential tools in scientific work.

Theoretical physicist Richard Feynman was a huge fan and called it a “jewel” and a “remarkable” formula. Fans today refer to it as “the most beautiful equation.”
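It is also an identity you can watch come out (almost) exactly in a few lines of code. A minimal check with Python's standard cmath module; floating-point arithmetic leaves only a residue of about 1e-16:

import cmath

# Euler's identity: e^(i*pi) + 1 should equal 0.
result = cmath.exp(1j * cmath.pi) + 1
print(result)               # ~1.22e-16j, i.e. zero to machine precision
print(abs(result) < 1e-12)  # True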

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Euler’s Relation. Courtesy of Wired.[end-div]

Can Anyone Say “Neuroaesthetics”?

As in all other branches of science, there seem to be fascinating new theories, research and discoveries in neuroscience on a daily, if not hourly, basis. With this in mind, brain and cognitive researchers have recently turned their attention to the science of art, or more specifically to addressing the question “how does the human brain appreciate art?” Yes, welcome to the world of “neuroaesthetics”.

[div class=attrib]From Scientific American:[end-div]

The notion of “the aesthetic” is a concept from the philosophy of art of the 18th century according to which the perception of beauty occurs by means of a special process distinct from the appraisal of ordinary objects. Hence, our appreciation of a sublime painting is presumed to be cognitively distinct from our appreciation of, say, an apple. The field of “neuroaesthetics” has adopted this distinction between art and non-art objects by seeking to identify brain areas that specifically mediate the aesthetic appreciation of artworks.

However, studies from neuroscience and evolutionary biology challenge this separation of art from non-art. Human neuroimaging studies have convincingly shown that the brain areas involved in aesthetic responses to artworks overlap with those that mediate the appraisal of objects of evolutionary importance, such as the desirability of foods or the attractiveness of potential mates. Hence, it is unlikely that there are brain systems specific to the appreciation of artworks; instead there are general aesthetic systems that determine how appealing an object is, be that a piece of cake or a piece of music.

We set out to understand which parts of the brain are involved in aesthetic appraisal. We gathered 93 neuroimaging studies of vision, hearing, taste and smell, and used statistical analyses to determine which brain areas were most consistently activated across these 93 studies. We focused on studies of positive aesthetic responses, and left out the sense of touch, because there were not enough studies to arrive at reliable conclusions.

The results showed that the most important part of the brain for aesthetic appraisal was the anterior insula, a part of the brain that sits within one of the deep folds of the cerebral cortex. This was a surprise. The anterior insula is typically associated with emotions of negative quality, such as disgust and pain, making it an unusual candidate for being the brain’s “aesthetic center.” Why would a part of the brain known to be important for the processing of pain and disgust turn out to be the most important area for the appreciation of art?

[div class=attrib]Read entire article here.[end-div]

[div class=attrib]Image: The Birth of Venus by Sandro Botticelli. Courtesy of Wikipedia.[end-div]

A Great Mind Behind the Big Bang

Davide Castelvecchi over at Degrees of Freedom visits with one of the founding fathers of modern cosmology, Alan Guth.

Now a professor of physics at MIT, Guth originated the widely accepted theory of the inflationary universe. Guth's idea, backed by subsequent supporting mathematics, was that the nascent universe passed through a phase of exponential expansion. In 2009, he was awarded the Isaac Newton Medal by the British Institute of Physics.

[div class=attrib]From Scientific American:[end-div]

On the night of December 6, 1979 — 32 years ago today — Alan Guth had the “spectacular realization” that would soon turn cosmology on its head. He imagined a mind-bogglingly brief event, at the very beginning of the big bang, during which the entire universe expanded exponentially, going from microscopic to cosmic size. That night was the birth of the concept of cosmic inflation.

Such an explosive growth, supposedly fueled by a mysterious repulsive force, could solve in one stroke several of the problems that had plagued the young theory of the big bang. It would explain why space is so close to being spatially flat (the “flatness problem”) and why the energy distribution in the early universe was so uniform even though it would not have had the time to level out uniformly (the “horizon problem”), as well as solve a riddle in particle physics: why there seem to be no magnetic monopoles, or in other words why no one has ever isolated “N” and “S” poles the way we can isolate “+” and “-” electrostatic charges; theory suggested that magnetic monopoles should be pretty common.

In fact, as he himself narrates in his highly recommendable book, The Inflationary Universe, at the time Guth was a particle physicist (on a stint at the Stanford Linear Accelerator Center, and struggling to find a permanent job) and his idea came to him while he was trying to solve the monopole problem.

Twenty-five years later, in the summer of 2004, I asked Guth — by then a full professor at MIT and a leading figure of cosmology — for his thoughts on his legacy and how it fit with the discovery of dark energy and the most recent ideas coming out of string theory.

The interview was part of my reporting for a feature on inflation that appeared in the December 2004 issue of Symmetry magazine. (It was my first feature article, other than the ones I had written as a student, and it’s still one of my favorites.)

To celebrate “inflation day,” I am reposting, in a slightly edited form, the transcript of that interview.

DC: When you first had the idea of inflation, did you anticipate that it would turn out to be so influential?

AG: I guess the answer is no. But by the time I realized that it was a plausible solution to the monopole problem and to the flatness problem, I became very excited about the fact that, if it was correct, it would be a very important change in cosmology. But at that point, it was still a big if in my mind. Then there was a gradual process of coming to actually believe that it was right.

DC: What’s the situation 25 years later?

AG: I would say that inflation is the conventional working model of cosmology. There’s still more data to be obtained, and it’s very hard to really confirm inflation in detail. For one thing, it’s not really a detailed theory, it’s a class of theories. Certainly the details of inflation we don’t know yet. I think that it’s very convincing that the basic mechanism of inflation is correct. But I don’t think people necessarily regard it as proven.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Guth. Courtesy of Scientific American.[end-div]

Remembering Lynn Margulis: Pioneering Evolutionary Biologist

The world lost pioneering biologist Lynn Margulis on November 22.

One of her key contributions to biology, and in fact, to our overall understanding of the development of complex life, was her theory of the symbiotic origin of the nucleated cell, or symbiogenesis. Almost 50 years ago Margulis first argued that such complex nucleated, or eukaryotic, cells were formed from the association of different kinds of bacteria. Her idea was both radical and beautiful: that separate organisms, in this case ancestors of modern bacteria, would join together in a permanent relationship to form a new entity, a complex single cell.

Until fairly recently this idea was mostly dismissed by the scientific establishment. Nowadays her pioneering ideas on cell evolution through symbiosis are hailed as a fundamental scientific breakthrough.

We feature some excerpts of Margulis’ writings below:

[div class=attrib]From the Edge:[end-div]

At any fine museum of natural history — say, in New York, Cleveland, or Paris — the visitor will find a hall of ancient life, a display of evolution that begins with the trilobite fossils and passes by giant nautiloids, dinosaurs, cave bears, and other extinct animals fascinating to children. Evolutionists have been preoccupied with the history of animal life in the last five hundred million years. But we now know that life itself evolved much earlier than that. The fossil record begins nearly four thousand million years ago! Until the 1960s, scientists ignored fossil evidence for the evolution of life, because it was uninterpretable.

I work in evolutionary biology, but with cells and microorganisms. Richard Dawkins, John Maynard Smith, George Williams, Richard Lewontin, Niles Eldredge, and Stephen Jay Gould all come out of the zoological tradition, which suggests to me that, in the words of our colleague Simon Robson, they deal with a data set some three billion years out of date. Eldredge and Gould and their many colleagues tend to codify an incredible ignorance of where the real action is in evolution, as they limit the domain of interest to animals — including, of course, people. All very interesting, but animals are very tardy on the evolutionary scene, and they give us little real insight into the major sources of evolution’s creativity. It’s as if you wrote a four-volume tome supposedly on world history but beginning in the year 1800 at Fort Dearborn and the founding of Chicago. You might be entirely correct about the nineteenth-century transformation of Fort Dearborn into a thriving lakeside metropolis, but it would hardly be world history.

By “codifying ignorance” I refer in part to the fact that they miss four out of the five kingdoms of life. Animals are only one of these kingdoms. They miss bacteria, protoctista, fungi, and plants. They take a small and interesting chapter in the book of evolution and extrapolate it into the entire encyclopedia of life. Skewed and limited in their perspective, they are not wrong so much as grossly uninformed.

Of what are they ignorant? Chemistry, primarily, because the language of evolutionary biology is the language of chemistry, and most of them ignore chemistry. I don’t want to lump them all together, because, first of all, Gould and Eldredge have found out very clearly that gradual evolutionary changes through time, expected by Darwin to be documented in the fossil record, are not the way it happened. Fossil morphologies persist for long periods of time, and after stasis, discontinuities are observed. I don’t think these observations are even debatable. John Maynard Smith, an engineer by training, knows much of his biology secondhand. He seldom deals with live organisms. He computes and he reads. I suspect that it’s very hard for him to have insight into any group of organisms when he does not deal with them directly. Biologists, especially, need direct sensory communication with the live beings they study and about which they write.

Reconstructing evolutionary history through fossils — paleontology — is a valid approach, in my opinion, but paleontologists must work simultaneously with modern-counterpart organisms and with “neontologists” — that is, biologists. Gould, Eldredge, and Lewontin have made very valuable contributions. But the Dawkins-Williams-Maynard Smith tradition emerges from a history that I doubt they see in its Anglophone social context. Darwin claimed that populations of organisms change gradually through time as their members are weeded out, which is his basic idea of evolution through natural selection. Mendel, who developed the rules for genetic traits passing from one generation to another, made it very clear that while those traits reassort, they don’t change over time. A white flower mated to a red flower has pink offspring, and if that pink flower is crossed with another pink flower the offspring that result are just as red or just as white or just as pink as the original parent or grandparent. Species of organisms, Mendel insisted, don’t change through time. The mixture or blending that produced the pink is superficial. The genes are simply shuffled around to come out in different combinations, but those same combinations generate exactly the same types. Mendel’s observations are incontrovertible.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Lynn Margulis. Courtesy edge.org.[end-div]

The Mystery of Anaesthesia

Contemporary medical and surgical procedures have been completely transformed through the use of patient anaesthesia. Prior to the first use of diethyl ether as an anaesthetic in the United States in 1842, surgery, even for minor ailments, was often a painful process of last resort.

Nowadays the efficacy of anaesthesia is without question. Yet despite the development of ever more sophisticated compounds and methods of administration, little is known about how anaesthesia actually works.

Linda Geddes over at New Scientist has a fascinating article reviewing recent advancements in our understanding of anaesthesia, and its relevance in furthering our knowledge of consciousness in general.

[div class=attrib]From the New Scientist:[end-div]

I have had two operations under general anaesthetic this year. On both occasions I awoke with no memory of what had passed between the feeling of mild wooziness and waking up in a different room. Both times I was told that the anaesthetic would make me feel drowsy, I would go to sleep, and when I woke up it would all be over.

What they didn’t tell me was how the drugs would send me into the realms of oblivion. They couldn’t. The truth is, no one knows.

The development of general anaesthesia has transformed surgery from a horrific ordeal into a gentle slumber. It is one of the commonest medical procedures in the world, yet we still don’t know how the drugs work. Perhaps this isn’t surprising: we still don’t understand consciousness, so how can we comprehend its disappearance?

That is starting to change, however, with the development of new techniques for imaging the brain or recording its electrical activity during anaesthesia. “In the past five years there has been an explosion of studies, both in terms of consciousness, but also how anaesthetics might interrupt consciousness and what they teach us about it,” says George Mashour, an anaesthetist at the University of Michigan in Ann Arbor. “We’re at the dawn of a golden era.”

Consciousness has long been one of the great mysteries of life, the universe and everything. It is something experienced by every one of us, yet we cannot even agree on how to define it. How does the small sac of jelly that is our brain take raw data about the world and transform it into the wondrous sensation of being alive? Even our increasingly sophisticated technology for peering inside the brain has, disappointingly, failed to reveal a structure that could be the seat of consciousness.

Altered consciousness doesn’t only happen under a general anaesthetic of course – it occurs whenever we drop off to sleep, or if we are unlucky enough to be whacked on the head. But anaesthetics do allow neuroscientists to manipulate our consciousness safely, reversibly and with exquisite precision.

It was a Japanese surgeon who performed the first known surgery under anaesthetic, in 1804, using a mixture of potent herbs. In the west, the first operation under general anaesthetic took place at Massachusetts General Hospital in 1846. A flask of sulphuric ether was held close to the patient’s face until he fell unconscious.

Since then a slew of chemicals have been co-opted to serve as anaesthetics, some inhaled, like ether, and some injected. The people who gained expertise in administering these agents developed into their own medical specialty. Although long overshadowed by the surgeons who patch you up, the humble “gas man” does just as important a job, holding you in the twilight between life and death.

Consciousness may often be thought of as an all-or-nothing quality – either you’re awake or you’re not – but as I experienced, there are different levels of anaesthesia (see diagram). “The process of going into and out of general anaesthesia isn’t like flipping a light switch,” says Mashour. “It’s more akin to a dimmer switch.”

A typical subject first experiences a state similar to drunkenness, which they may or may not be able to recall later, before falling unconscious, which is usually defined as failing to move in response to commands. As they progress deeper into the twilight zone, they now fail to respond to even the penetration of a scalpel – which is the point of the exercise, after all – and at the deepest levels may need artificial help with breathing.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Replica of the inhaler used by William T. G. Morton in 1846 in the first public demonstration of surgery using ether. Courtesy of Wikipedia. [end-div]

The Debunking Handbook

A valuable resource if you ever find yourself having to counter and debunk myths and misinformation. It applies regardless of the type of myth in question: Santa, creationism, UFOs, political discourse, climate science denial, or science denial in general. You can find the download here.

[div class=attrib]From Skeptical Science:[end-div]

The Debunking Handbook, a guide to debunking misinformation, is now freely available to download. Although there is a great deal of psychological research on misinformation, there’s no summary of the literature that offers practical guidelines on the most effective ways of reducing the influence of myths. The Debunking Handbook boils the research down into a short, simple summary, intended as a guide for communicators in all areas (not just climate) who encounter misinformation.

The Handbook explores the surprising fact that debunking myths can sometimes reinforce the myth in peoples’ minds. Communicators need to be aware of the various backfire effects and how to avoid them, such as:

  • The Familiarity Backfire Effect
  • The Overkill Backfire Effect
  • The Worldview Backfire Effect

It also looks at a key element of successful debunking: providing an alternative explanation. The Handbook is designed to be useful to all communicators who have to deal with misinformation (e.g., not just climate myths).

[div class=attrib]Read more here.[end-div]

Cool Images of a Hot Star

Astronomers and planetary photographers, both amateur and professional, have been having an inspiring time recently watching the Sun. Some of the most gorgeous images of our nearest star come courtesy of photographer Alan Friedman. One such spectacular image shows several huge, 50,000-mile-high solar flares and groups of active sunspots larger than our planet. See more of Friedman's captivating images at his personal website.

According to MSNBC:

For the past couple of weeks, astronomers have been tracking groups of sunspots as they move across the sun’s disk. Those active regions have been shooting off flares and outbursts of electrically charged particles into space — signaling that the sun is ramping up toward the peak of its 11-year activity cycle. Physicists expect that peak, also known as “Solar Max,” to come in 2013.

A full frontal view from New York photographer Alan Friedman shows the current activity in detail, as seen in a particular wavelength known as hydrogen-alpha. The colors have been tweaked to make the sun look like a warm, fuzzy ball, with lacy prominences licking up from the edge of the disk.

Friedman focused on one flare in particular over the weekend: In the picture you see at right, the colors have been reversed to produce a dark sun and dusky prominence against the light background of space.

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Image: Powerful sunspots and gauzy-looking prominences can be seen in Alan Friedman’s photo of the sun, shown in hydrogen-alpha wavelengths. Courtesy of MSNBC / Copyright Alan Friedman, avertedimagination.com.[end-div]

The Infant Universe

Long before the first galaxy clusters and the first galaxies appeared in our universe, and before the first stars, came the first basic elements — hydrogen, helium and lithium.

Results from a just-published study identify these raw materials from what is theorized to be the universe's first few minutes of existence.

[div class=attrib]From Scientific American:[end-div]

By peering into the distance with the biggest and best telescopes in the world, astronomers have managed to glimpse exploding stars, galaxies and other glowing cosmic beacons as they appeared just hundreds of millions of years after the big bang. They are so far away that their light is only now reaching Earth, even though it was emitted more than 13 billion years ago.

Astronomers have been able to identify those objects in the early universe because their bright glow has remained visible even after a long, universe-spanning journey. But spotting the raw materials from which the first cosmic structures formed—the gas produced as the infant universe expanded and cooled in the first few minutes after the big bang—has not been possible. That material is not itself luminous, and everywhere astronomers have looked they have found not the primordial light-element gases hydrogen, helium and lithium from the big bang but rather material polluted by heavier elements, which form only in stellar interiors and in cataclysms such as supernovae.

Now a group of researchers reports identifying the first known pockets of pristine gas, two relics of those first minutes of the universe’s existence. The team found a pair of gas clouds that contain no detectable heavy elements whatsoever by looking at distant quasars and the intervening material they illuminate. Quasars are bright objects powered by a ravenous black hole, and the spectral quality of their light reveals what it passed through on its way to Earth, in much the same way that the lamp of a projector casts the colors of film onto a screen. The findings appeared online November 10 in Science.

“We found two gas clouds that show a significant abundance of hydrogen, so we know that they are there,” says lead study author Michele Fumagalli, a graduate student at the University of California, Santa Cruz. One of the clouds also shows traces of deuterium, also known as heavy hydrogen, the nucleus of which contains not only a proton, as ordinary hydrogen does, but also a neutron. Deuterium should have been produced in big bang nucleosynthesis but is easily destroyed, so its presence is indicative of a pristine environment. The amount of deuterium present agrees with theoretical predictions about the mixture of elements that should have emerged from the big bang. “But we don’t see any trace of heavier elements like carbon, oxygen and iron,” Fumagalli says. “That’s what tells us that this is primordial gas.”

The newfound gas clouds, as Fumagalli and his colleagues see them, existed about two billion years after the big bang, at an epoch of cosmic evolution known as redshift 3. (Redshift is a sort of cosmological distance measure, corresponding to the degree that light waves have been stretched on their trip across an expanding universe.) By that time the first generation of stars, initially comprising only the primordial light elements, had formed and were distributing the heavier elements they forged via nuclear fusion reactions into interstellar space.
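For readers who want the arithmetic behind that parenthetical: redshift z compares observed and emitted wavelengths,

1 + z = λ_observed / λ_emitted

so at redshift 3 the light from these clouds reaches Earth stretched to four times the wavelength at which it was emitted.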

But the new study shows that some nooks of the universe remained pristine long after stars had begun to spew heavy elements. “They have looked for these special corners of the universe, where things just haven’t been polluted yet,” says Massachusetts Institute of Technology astronomer Rob Simcoe, who did not contribute to the new study. “Everyplace else that we’ve looked in these environments, we do find these heavy elements.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Simulation by Ceverino, Dekel and Primack. Courtesy of Scientific American.[end-div]

One Pale Blue Dot, 55 Languages and 11 Billion Miles

It was Carl Sagan’s birthday last week (November 9, to be precise). He would have been 77 years old — he returned to “star-stuff” in 1996. Thoughts of this charming astronomer and cosmologist reminded us of a project with which he was intimately involved — the Voyager program.

In 1977, NASA launched two spacecraft to explore Jupiter and Saturn. The spacecraft performed so well that their missions were extended several times: first, to journey farther into the outer reaches of our solar system and explore the planets Neptune and Uranus; and second, to fly beyond our solar system into interstellar space. And, by all accounts, both craft are now close to this boundary. The farthest, Voyager I, is currently over 11 billion miles away. For a real-time check on its distance, visit JPL's Voyager site here. JPL is NASA's Jet Propulsion Lab in Pasadena, CA.

Some may recall that Carl Sagan presided over the selection and installation of content from the Earth onto a gold-plated disk that each Voyager carries on its continuing mission. The disk contains symbolic explanations of our planet and solar system, as well as images of its inhabitants and greetings spoken in 55 languages. After much wrangling over concerns about damaging Voyager's imaging instruments by peering back at the Sun, Sagan was instrumental in having NASA reorient Voyager I's camera back towards the Earth. This enabled the craft to snap one last set of images of our planet from its vantage point in deep space. One poignant image became known as the “Pale Blue Dot”, and Sagan penned some characteristically eloquent and philosophical words about this image in his book, Pale Blue Dot: A Vision of the Human Future in Space.

[div class=attrib]From Carl Sagan:[end-div]

From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Look again at that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

[div class=attrib]About the image from NASA:[end-div]

From Voyager’s great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters – violet, blue and green – and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification.

To ease identification we have drawn a gray circle around the image of the Earth.

[div class=attrib]Image courtesy of NASA / JPL.[end-div]

Growing Complex Organs From Scratch

In early 2010 a Japanese research team grew retina-like structures from a culture of mouse embryonic stem cells. Now, only a year later, the same team at the RIKEN Center for Developmental Biology announced their success in growing a much more complex structure following a similar process — a mouse pituitary gland. This is seen as another major step towards bioengineering replacement organs for human transplantation.

[div class=attrib]From Technology Review:[end-div]

The pituitary gland is a small organ at the base of the brain that produces many important hormones and is a key part of the body’s endocrine system. It’s especially crucial during early development, so the ability to simulate its formation in the lab could help researchers better understand how these developmental processes work. Disruptions in the pituitary have also been associated with growth disorders, such as gigantism, and vision problems, including blindness.

The study, published in this week’s Nature, moves the medical field even closer to being able to bioengineer complex organs for transplant in humans.

The experiment wouldn’t have been possible without a three-dimensional cell culture. The pituitary gland is an independent organ, but it can’t develop without chemical signals from the hypothalamus, the brain region that sits just above it. With a three-dimensional culture, the researchers could grow both types of tissue together, allowing the stem cells to self-assemble into a mouse pituitary. “Using this method, we could mimic the early mouse development more smoothly, since the embryo develops in 3-D in vivo,” says Yoshiki Sasai, the lead author of the study.

The researchers had a vague sense of the signaling factors needed to form a pituitary gland, but they had to figure out the exact components and sequence through trial and error. The winning combination consisted of two main steps, which required the addition of two growth factors and a drug to stimulate a developmental protein called sonic hedgehog (named after the video game). After about two weeks, the researchers had a structure that resembled a pituitary gland.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]New gland: After 13 days in culture, mouse embryonic stem cells had self-assembled the precursor pouch, shown here, that gives rise to the pituitary gland. Image courtesy of Technology Review / Nature.[end-div]

Why Do We Overeat? Supersizing and Social Status

[div class=attrib]From Wired:[end-div]

Human beings are notoriously terrible at knowing when we’re no longer hungry. Instead of listening to our stomach – a very stretchy container – we rely on all sorts of external cues, from the circumference of the dinner plate to the dining habits of those around us. If the serving size is twice as large (and American serving sizes have grown 40 percent in the last 25 years), we’ll still polish it off. And then we’ll go have dessert.

Consider a clever study done by Brian Wansink, a professor of marketing at Cornell. He used a bottomless bowl of soup – there was a secret tube that kept on refilling the bowl with soup from below – to demonstrate that how much people eat is largely dependent on how much you give them. The group with the bottomless bowl ended up consuming nearly 70 percent more than the group with normal bowls. What’s worse, nobody even noticed that they’d just slurped far more soup than normal.

Or look at this study, done in 2006 by psychologists at the University of Pennsylvania. One day, they left out a bowl of chocolate M&M’s in an upscale apartment building. Next to the bowl was a small scoop. The following day, they refilled the bowl with M&M’s but placed a much larger scoop beside it. The result would not surprise anyone who has ever finished a Big Gulp soda or a supersized serving of McDonald’s fries: when the scoop size was increased, people took 66 percent more M&M’s. Of course, they could have taken just as many candies on the first day; they simply would have had to take a few more scoops. But just as larger serving sizes cause us to eat more, the larger scoop made the residents more gluttonous.

Serving size isn’t the only variable influencing how much we consume. As M.F.K. Fisher noted, eating is a social activity, intermingled with many of our deeper yearnings and instincts. And this leads me to a new paper by David Dubois, Derek Rucker and Adam Galinsky, psychologists at HEC Paris and the Kellogg School of Management. The question they wanted to answer is why people opt for bigger serving sizes. If we know that we’re going to have a tough time not eating all those French fries, then why do we insist on ordering them? What drives us to supersize?

The hypothesis of Galinsky et al. is that supersizing is a subtle marker of social status.

Needless to say, this paper captures a tragic dynamic behind overeating. It appears that one of the factors causing us to consume too much food is a lack of social status, as we try to elevate ourselves by supersizing meals. Unfortunately, this only leads to rampant weight gain, which, as the researchers note, “jeopardizes future rank through the accompanying stigma of being overweight.” In other words, it’s a sad feedback loop of obesity, a downward spiral of bigger serving sizes that diminish the very status we’re trying to increase.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Super Size Me movie. Image courtesy of Wikipedia.[end-div]

The Hideous Sound of Chalk on a Blackboard

We promise: there is no screeching embedded audio of someone slowly dragging a piece of chalk, or worse, fingernails, across a blackboard. Though even the thought of this sound causes many to shudder. Why? A plausible explanation comes courtesy of Wired UK.

[div class=attrib]From Wired:[end-div]

Much time has been spent, over the past century, on working out exactly what it is about the sound of fingernails on a blackboard that’s so unpleasant. A new study pins the blame on psychology and the design of our ear canals.

Previous research on the subject suggested that the sound is acoustically similar to the warning call of a primate, but that theory was debunked after monkeys responded to amplitude-matched white noise and other high-pitched sounds, whereas humans did not. Another study, in 1986, manipulated a recording of blackboard scraping and found that the medium-pitched frequencies are the source of the adverse reaction, rather than the higher pitches (as previously thought). The work won author Randolph Blake an Ig Nobel Prize in 2006.

The latest study, conducted by musicologists Michael Oehler of the Macromedia University for Media and Communication in Cologne, Germany, and Christoph Reuter of the University of Vienna, looked at other sounds that generate a similar reaction — including chalk on slate, styrofoam squeaks, a plate being scraped by a fork, and the ol’ fingernails on blackboard.

Some participants were told the genuine source of the sound, and others were told that the sounds were part of a contemporary music composition. Researchers asked the participants to rank which were the worst, and also monitored physical indicators of distress — heart rate, blood pressure and the electrical conductivity of skin.

They found that disturbing sounds do cause a measurable physical reaction, with skin conductivity changing significantly, and that the frequencies involved with unpleasant sounds also lie firmly within the range of human speech — between 2,000 and 4,000 Hz. Removing those frequencies from the sound made them much easier to listen to. But, interestingly, removing the noisy, scraping part of the sound made little difference.
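To make the finding concrete, here is a minimal sketch (our illustration, not part of the study) of how one might attenuate the offending 2,000-4,000 Hz band in a recording, using a standard Butterworth band-stop filter from SciPy; the filter order and band edges are illustrative choices:

import numpy as np
from scipy.signal import butter, filtfilt

def soften_screech(audio, fs, low_hz=2000.0, high_hz=4000.0, order=4):
    # Design a Butterworth band-stop filter spanning the band the study
    # identified as most unpleasant (roughly the range of human speech).
    b, a = butter(order, [low_hz, high_hz], btype="bandstop", fs=fs)
    # filtfilt runs the filter forward and then backward, so the output
    # is not phase-shifted relative to the input.
    return filtfilt(b, a, audio)

# Example: tame one second of synthetic scraping noise at CD sample rate.
fs = 44100
scrape = np.random.randn(fs)
softened = soften_screech(scrape, fs)

Note the design choice: only the speech-range frequencies are removed, while the noisy, scrapey transients pass through, mirroring the study's observation that it is the band, not the scraping, that does the damage.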

A powerful psychological component was identified. If the listeners knew that the sound was fingernails on the chalkboard, they rated it as more unpleasant than if they were told it was from a musical composition. Even when they thought it was from music, however, their skin conductivity still changed consistently, suggesting that the physical part of the response remained.

[div class=attrib]Read the full article here.[end-div]

[div class=attrib]Images courtesy of Wired / Flickr.[end-div]

The Battle of Evidence and Science versus Belief and Magic

An insightful article over at the Smithsonian ponders the national (U.S.) decline in the trust of science. Regardless of the topic in question — climate change, health supplements, vaccinations, air pollution, “fracking”, evolution — and regardless of the specific position on a particular topic, scientific evidence continues to be questioned, ignored, revised, and politicized. And perhaps it is in this last issue, that of politics, that we may see a possible cause for a growing national pandemic of denialism. The increasingly fractured, fractious and rancorous nature of the U.S. political system threatens to undermine all debate and true skepticism, whether based on personal opinion or scientific fact.

[div class=attrib]From the Smithsonian:[end-div]

A group of scientists and statisticians led by the University of California at Berkeley set out recently to conduct an independent assessment of climate data and determine once and for all whether the planet has warmed in the last century and by how much. The study was designed to address concerns brought up by prominent climate change skeptics, and it was funded by several groups known for climate skepticism. Last week, the group released its conclusions: Average land temperatures have risen by about 1.8 degrees Fahrenheit since the middle of the 20th century. The result matched the previous research.

The skeptics were not happy and immediately claimed that the study was flawed.

Also in the news last week were the results of yet another study that found no link between cell phones and brain cancer. Researchers at the Institute of Cancer Epidemiology in Denmark looked at data from 350,000 cell phone users over an 18-year period and found they were no more likely to develop brain cancer than people who didn’t use the technology.

But those results still haven’t killed the calls for more monitoring of any potential link.

Study after study finds no link between autism and vaccines (and plenty of reason to worry about non-vaccinated children dying from preventable diseases such as measles). But a quarter of parents in a poll released last year said that they believed that “some vaccines cause autism in healthy children” and 11.5 percent had refused at least one vaccination for their child.

Polls say that Americans trust scientists more than, say, politicians, but that trust is on the decline. If we’re losing faith in science, we’ve gone down the wrong path. Science is no more than a process (as recent contributors to our “Why I Like Science” series have noted), and skepticism can be a good thing. But for many people that skepticism has grown to the point that they can no longer accept good evidence when they get it, with the result that “we’re now in an epidemic of fear like one I’ve never seen and hope never to see again,” says Michael Specter, author of Denialism, in his TEDTalk below.

If you’re reading this, there’s a good chance that you think I’m not talking about you. But here’s a quick question: Do you take vitamins? There’s a growing body of evidence that vitamins and dietary supplements are no more than a placebo at best and, in some cases, can actually increase the risk of disease or death. For example, a study earlier this month in the Archives of Internal Medicine found that consumption of supplements, such as iron and copper, was associated with an increased risk of death among older women. In a related commentary, several doctors note that the concept of dietary supplementation has shifted from preventing deficiency (there’s a good deal of evidence for harm if you’re low in, say, folic acid) to one of trying to promote wellness and prevent disease, and many studies are showing that more supplements do not equal better health.

But I bet you’ll still take your pills tomorrow morning. Just in case.

[div class=attrib]Read the entire article here.[end-div]

Science at its Best: The Universe is Expanding AND Accelerating

The 2011 Nobel Prize in Physics was recently awarded to three scientists: Adam Riess, Saul Perlmutter and Brian Schmidt. Their computations and observations of a very specific type of exploding star upended decades of commonly accepted belief about our universe by showing that its expansion is accelerating.

Prior to their observations, first publicly articulated in 1998, the general scientific consensus held that the universe’s expansion would either continue forever at an ever-slowing rate or halt and eventually fold back in on itself in a cosmic Big Crunch.

The discovery by Riess, Perlmutter and Schmidt laid the groundwork for the idea that a mysterious force called “dark energy” is fueling the acceleration. This dark energy is now believed to make up 75 percent of the universe. Direct evidence of dark energy is lacking, but most cosmologists now accept that universal expansion is indeed accelerating.

Re-published here are the notes and a page scan from Riess’s logbook that led to this year’s Nobel Prize, which show the value of the scientific process:

[div class=attrib]The original article is courtesy of Symmetry Breaking:[end-div]

In the fall of 1997, I was leading the calibration and analysis of data gathered by the High-z Supernova Search Team, one of two teams of scientists—the other was the Supernova Cosmology Project—trying to determine the fate of our universe: Will it expand forever, or will it halt and contract, resulting in the Big Crunch?

To find the answer, we had to determine the mass of the universe. It can be calculated by measuring how much the expansion of the universe is slowing.

First, we had to find cosmic candles—distant objects of known brightness—and use them as yardsticks. On this page, I checked the reliability of the supernovae, or exploding stars, that we had collected to serve as our candles. I found that the results they yielded for the present expansion rate of the universe (known as the Hubble constant) did not appear to be affected by the age or dustiness of their host galaxies.
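(An editorial aside, not part of Riess’s notes: the yardstick logic rests on the inverse-square law. In the standard astronomers’ form, an object of known absolute magnitude M and measured apparent magnitude m lies at a distance d, in parsecs, satisfying

\[ m - M = 5 \log_{10}\!\left(\frac{d}{10\ \text{pc}}\right), \]

so a supernova’s observed brightness immediately yields its distance.)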

Next, I used the data to calculate ΩM, the relative mass of the universe.

It was significantly negative!

The result, if correct, meant that the assumption of my analysis was wrong. The expansion of the universe was not slowing. It was speeding up! How could that be?

I spent the next few days checking my calculation. I found one could explain the acceleration by introducing a vacuum energy, also called the cosmological constant, that pushes the universe apart. In March 1998, we submitted these results, which were published in September 1998.
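(A second editorial aside, not from the logbook: a textbook relation shows why a negative best-fit ΩM points to acceleration. In a universe containing matter with density ΩM and a cosmological constant ΩΛ, the deceleration parameter is

\[ q_0 = \frac{\Omega_M}{2} - \Omega_\Lambda, \]

and a fit that assumes ΩΛ = 0 can only accommodate acceleration, q0 < 0, by driving ΩM negative. Reintroducing vacuum energy lets ΩM remain physical and positive.)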

Today, we know that 74 percent of the universe consists of this dark energy. Understanding its nature remains one of the most pressing tasks for physicists and astronomers alike.

Adam Riess, Johns Hopkins University

This discovery, like many others great and small, shows the true power of the scientific process. Scientific results are open to constant refinement, re-evaluation, refutation and re-interpretation. The process leads to inexorable progress towards greater knowledge and understanding, and eventually to truth that most skeptics can embrace. That is, until the next and better theory and corresponding results come along.

[div class=attrib]Image courtesy of Symmetry Breaking, Adam Riess.[end-div]

When Will I Die?

Would you like to know when you will die?

This is a fundamentally personal and moral question, one that many may prefer to leave unanswered. That said, while the scientific understanding of aging is making great strides, it cannot yet answer the question. Though that may be only a matter of time.

Giles Tremlett over at the Guardian gives us a personal account of the fascinating science of telomeres, the end-caps on our chromosomes, and why they potentially hold a key to that most fateful question.

[div class=attrib]From the Guardian:[end-div]

As a taxi takes me across Madrid to the laboratories of Spain’s National Cancer Research Centre, I am fretting about the future. I am one of the first people in the world to provide a blood sample for a new test, which has been variously described as a predictor of how long I will live, a waste of time or a handy indicator of how well (or badly) my body is ageing. Today I get the results.

Some newspapers, to the dismay of the scientists involved, have gleefully announced that the test – which measures the telomeres (the protective caps on the ends of my chromosomes) – can predict when I will die. Am I about to find out that, at least statistically, my days are numbered? And, if so, might new telomere research suggesting we can turn back the hands of the body’s clock and make ourselves “biologically younger” come to my rescue?

The test is based on the idea that biological ageing grinds at your telomeres. And, although time ticks by uniformly, our bodies age at different rates. Genes, environment and our own personal habits all play a part in that process. A peek at your telomeres is an indicator of how you are doing. Essentially, they tell you whether you have become biologically younger or older than other people born at around the same time.

The key measure, explains María Blasco, a 45-year-old molecular biologist, head of Spain’s cancer research centre and one of the world’s leading telomere researchers, is the number of short telomeres. Blasco, who is also one of the co-founders of the Life Length company which is offering the tests, says that short telomeres do not just provide evidence of ageing. They also cause it. Telomeres are often compared to the plastic caps on a shoelace, and there is a critical level at which the fraying becomes irreversible and triggers cell death. “Short telomeres are causal of disease because when they are below a [certain] length they are damaging for the cells. The stem cells of our tissues do not regenerate and then we have ageing of the tissues,” she explains. That, in a cellular nutshell, is how ageing works. Eventually, so many of our telomeres are short that some key part of our body may stop working.
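Rough, purely illustrative numbers (ours, not Life Length’s) give a feel for the arithmetic. If a telomere starts at about 10,000 base pairs, becomes critically short near 5,000, and loses on the order of 100 base pairs per cell division, then a cell lineage runs out of buffer after roughly

\[ N \approx \frac{10{,}000 - 5{,}000}{100} \approx 50\ \text{divisions}, \]

in the same ballpark as the few dozen divisions cultured human cells are classically observed to manage.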

The research is still in its early days but extreme stress, for example, has been linked to telomere shortening. I think back to a recent working day that took in three countries, three news stories, two international flights, a public lecture and very little sleep. Reasonable behaviour, perhaps, for someone in their 30s – but I am closer to my 50s. Do days like that shorten my expected, or real, life-span?

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Image: chromosomes capped by telomeres (white), courtesy of Wikipedia.[end-div]

Human Evolution Marches On

[div class=attrib]From Wired:[end-div]

Though ongoing human evolution is difficult to see, researchers believe they’ve found signs of rapid genetic changes among the recent residents of a small Canadian town.

Between 1800 and 1940, mothers in Ile aux Coudres, Quebec, gave birth at steadily younger ages, with the average age of first maternity dropping from 26 to 22. Increased fertility, and thus larger families, could have been especially useful in the rural settlement’s early history.

According to University of Quebec geneticist Emmanuel Milot and colleagues, other possible explanations, such as changing cultural or environmental influences, don’t fit. The changes appear to reflect biological evolution.

“It is often claimed that modern humans have stopped evolving because cultural and technological advancements have annihilated natural selection,” wrote Milot’s team in their Oct. 3 Proceedings of the National Academy of Sciences paper. “Our study supports the idea that humans are still evolving. It also demonstrates that microevolution is detectable over just a few generations.”

Milot’s team based their study on detailed birth, marriage and death records kept by the Catholic church in Ile aux Coudres, a small and historically isolated French-Canadian island town in the Gulf of St. Lawrence. It wasn’t just the fact that average first birth age — a proxy for fertility — dropped from 26 to 22 in 140 years that suggested genetic changes. After all, culture or environment might have been wholly responsible, as nutrition and healthcare are for recent, rapid changes in human height. Rather, it was how ages dropped that caught their eye.

The patterns fit with models of gene-influenced natural selection. Moreover, thanks to the detailed record-keeping, it was possible to look at other possible explanations. Had better nutrition been responsible, for example, improved rates of infant and juvenile mortality should have followed; they didn’t. Nor did the late-19th-century transition from farming to more diversified professions explain the pattern.
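The underlying test is the stock-in-trade of quantitative genetics (our gloss of the approach, not the paper’s full statistical model): a trait shifts under selection only insofar as it is heritable, per the breeder’s equation

\[ R = h^2 S, \]

where S is the selection differential (how strongly earlier-birthing mothers out-reproduce the average), h² is the trait’s heritability, and R is the expected change per generation. A purely environmental explanation amounts to h² = 0, which predicts no sustained generational response of the kind observed.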

[div class=attrib]Read more here.[end-div]

Faster Than Light Travel

The world of particle physics is agog with recent news of an experiment that shows a very unexpected result: sub-atomic particles traveling faster than the speed of light. If verified and independently replicated, the results would violate one of the universe’s fundamental properties described by Einstein in the Special Theory of Relativity. The speed of light, 186,282 miles per second (299,792 kilometers per second), has long been considered an absolute cosmic speed limit.

Stranger still, over the last couple of days news of this anomalous result has even been broadcast on many cable news shows.

The experiment known as OPERA is a collaboration between France’s National Institute for Nuclear and Particle Physics Research and Italy’s Gran Sasso National Laboratory. Over the course of three years scientists fired a neutrino beam 454 miles (730 kilometers) underground from Geneva to a receiver in Italy. Their measurements show that neutrinos arrived an average of 60 nanoseconds sooner than light would have done. This may not seem like much; after all, it is only 60 billionths of a second. However, the small difference could nonetheless undermine a hundred years of physics.
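To put those 60 nanoseconds in perspective (our arithmetic, not the collaboration’s): light covers the 730-kilometer baseline in about 2.4 milliseconds, so the claimed early arrival amounts to a fractional speed excess of just

\[ \frac{v - c}{c} \approx \frac{60 \times 10^{-9}\ \text{s}}{2.4 \times 10^{-3}\ \text{s}} \approx 2.5 \times 10^{-5}. \]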

Understandably most physicists remain skeptical of the result, until further independent experiments are used to confirm the measurements or not. However, all seem to agree that if the result is confirmed this would be a monumental finding and would likely reshape modern physics and our understanding of the universe.

[div class=attrib]More on this intriguing story here courtesy of Ars Technica, which also offers a detailed explanation of several possible sources of error that may have contributed to the faster-than-light measurements.[end-div]

The Sins of Isaac Newton

Aside from founding classical mechanics (think universal gravitation and the laws of motion), laying the building blocks of calculus, and inventing the reflecting telescope, Isaac Newton made time for spiritual pursuits. In fact, Newton was a highly religious individual (though a somewhat unorthodox Christian).

So, although Newton is best remembered for his monumental work, Philosophiæ Naturalis Principia Mathematica, he kept a lesser-known but no less detailed journal of his sins while a freshman at Cambridge. A list of Newton’s most “heinous” self-confessed moral failings follows below.

[div class=attrib]From io9:[end-div]

10. Making a feather while on Thy day.

Anyone remember the Little House series, where every day they worked their prairie-wind-chapped asses off and risked getting bitten by badgers and nearly lost eyes to exploding potatoes (all true), but never complained about anything until they hit Sunday and literally had to do nothing all day? That was hundreds of years after Newton. And Newton was even more bored than the Little House people, although he was sorry about it later. He confesses everything from making a mousetrap on Sunday, to playing chimes, to helping a roommate with a school project, to making pies, to ‘squirting water’ on the Sabbath.

9. Having uncleane thoughts words and actions and dreamese.

Well, to be fair, he was only a boy at this time. He may have had all the unclean thoughts in the world, but Newton, on his death bed, is well known for saying he is proudest of dying a virgin. And this is from the guy who invented the Laws of Motion.

8. Robbing my mothers box of plums and sugar.

Clearly he needed to compensate for lack of carnal pleasure with some other kind of physical comfort. It seems that Newton had a sweet tooth. There’s this ‘robbery.’ There’s the aforementioned pies, although they might be savory pies. And in another confession he talks about how he had ‘gluttony in his sickness.’ The guy needed to eat.

7. Using unlawful means to bring us out of distresses.

This is a strange sin because it’s so vague. Could it be that the ‘distresses’ were financial, leading to another confessed sin of ‘Striving to cheat with a brass halfe crowne’? Some biographers think that this is a sexual confession and that his ‘distresses’ were carnal. Newton isn’t just saying that he used immoral means, but unlawful ones. What law did he break?

6. Using Wilford’s towel to spare my own.

Whatever else Newton was, he was a terrible roommate. Although he was a decent student, he was reputed to be bad at personal relationships with anyone, at any time. This sin, using someone’s towel, was probably a bigger deal during a time when plague was running through the countryside. He also confesses to “Denying my chamberfellow of the knowledge of him that took him for a sot.”

And his sweet tooth still reigned. Any plums anyone left out would probably be gone by the time they got back. He confessed the sin of “Stealing cherry cobs from Eduard Storer.” Just to top it off, Newton confessed to ‘peevishness’ with people over and over in his journal. He was clearly a moody little guy. No word on whether he apologized to them about it, but he apologized to God, and surely that was enough.

[div class=attrib]More of the article here.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

The Teen Brain: Work In Progress or Adaptive Network?

[div class=attrib]From Wired:[end-div]

Ever since the late 1990s, when researchers discovered that the human brain takes until our mid-20s to fully develop — far longer than previously thought — the teen brain has been getting a bad rap. Teens, the emerging dominant narrative insisted, were “works in progress” whose “immature brains” left them in a state “akin to mental retardation” — all titles from prominent papers or articles about this long developmental arc.

In a National Geographic feature to be published next week, however, I highlight a different take: a growing view among researchers that this prolonged developmental arc is less a matter of delayed development than prolonged flexibility. This account of the adolescent brain — call it the “adaptive adolescent” meme rather than the “immature brain” meme — “casts the teen less as a rough work than as an exquisitely sensitive, highly adaptive creature wired almost perfectly for the job of moving from the safety of home into the complicated world outside.” The teen brain, in short, is not dysfunctional; it’s adaptive.

Carl Zimmer over at Discover gives us some further interesting insights into recent studies of teen behavior.

[div class=attrib]From Discover:[end-div]

Teenagers are a puzzle, and not just to their parents. When kids pass from childhood to adolescence their mortality rate doubles, despite the fact that teenagers are stronger and faster than children as well as more resistant to disease. Parents and scientists alike abound with explanations. It is tempting to put it down to plain stupidity: Teenagers have not yet learned how to make good choices. But that is simply not true. Psychologists have found that teenagers are about as adept as adults at recognizing the risks of dangerous behavior. Something else is at work.

Scientists are finally figuring out what that “something” is. Our brains have networks of neurons that weigh the costs and benefits of potential actions. Together these networks calculate how valuable things are and how far we’ll go to get them, making judgments in hundredths of a second, far from our conscious awareness. Recent research reveals that teen brains go awry because they weigh those consequences in peculiar ways.

… Neuroscientist B. J. Casey and her colleagues at the Sackler Institute of the Weill Cornell Medical College believe the unique way adolescents place value on things can be explained by a biological oddity. Within our reward circuitry we have two separate systems, one for calculating the value of rewards and another for assessing the risks involved in getting them. And they don’t always work together very well.

… The trouble with teens, Casey suspects, is that they fall into a neurological gap. The rush of hormones at puberty helps drive the reward-system network toward maturity, but those hormones do nothing to speed up the cognitive control network. Instead, cognitive control slowly matures through childhood, adolescence, and into early adulthood. Until it catches up, teenagers are stuck with strong responses to rewards without much of a compensating response to the associated risks.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Kitra Cahana, National Geographic.[end-div]