Category Archives: BigBang

Letting Go of Regrets

[div class=attrib]From Mind Matters over at Scientific American:[end-div]

The poem “Maud Muller” by John Greenleaf Whittier aptly ends with the line, “For of all sad words of tongue or pen, The saddest are these: ‘It might have been!’” What if you had gone for the risky investment that you later found out made someone else rich, or if you had had the guts to ask that certain someone to marry you? Certainly, we’ve all had instances in our lives where hindsight makes us regret not sticking our necks out a bit more.

But new research suggests that when we are older these kinds of ‘if only!’ thoughts about the choices we made may not be so good for our mental health. One of the most important determinants of our emotional well-being in our golden years might be whether we learn to stop worrying about what might have been.

In a new paper published in Science, researchers from the University Medical Center Hamburg-Eppendorf in Hamburg, Germany, report evidence from two experiments which suggest that one key to aging well might involve learning to let go of regrets about missed opportunities. Stefanie Brassen and her colleagues looked at how healthy young participants (mean age: 25.4 years), healthy older participants (65.8 years), and older participants who had developed depression for the first time later in life (65.6 years) dealt with regret, and found that the young and older depressed patients seemed to hold on to regrets about missed opportunities while the healthy older participants seemed to let them go.

To measure regret over missed opportunities, the researchers adapted an established risk-taking task into a clever game in which the participants looked at eight wooden boxes lined up in a row on a computer screen and could choose to reveal the contents of the boxes one at a time, from left to right. Seven of the boxes had gold in them, which the participants would earn if they chose to open them. One box, however, had a devil in it. Opening the devil’s box meant losing that round, along with any gold earned in it so far.

Importantly, the participants could choose to cash out early and keep any gold they earned up to that point. Doing this would reveal the location of the devil and, incidentally, all of the gold they missed out on. Sometimes this wouldn’t be a big deal, because the devil would be in the next box. No harm, no foul. But sometimes the devil might be several boxes away. In this case, the participants would have missed out on a lot of potential earnings, and this had the potential to induce feelings of regret.
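
The trade-off in the game is easy to make concrete. Here is a quick simulation (a sketch in Python; the one-gold-piece-per-box payoff and uniform devil placement are our simplifying assumptions, not details taken from the study) showing that stopping halfway through the eight boxes maximises the expected haul:

```python
import random

def play_round(stop_after, n_boxes=8):
    """One round of the 'devil game': open boxes left to right and cash
    out after `stop_after` boxes. One random box hides the devil;
    opening it forfeits the round's gold."""
    devil = random.randrange(n_boxes)   # devil's position, 0-based
    if devil < stop_after:              # opened the devil's box: bust
        return 0
    return stop_after                   # one gold piece per opened box

def expected_gold(stop_after, trials=200_000):
    """Monte Carlo estimate of the average winnings for a fixed strategy."""
    return sum(play_round(stop_after) for _ in range(trials)) / trials

# Analytically, stopping after k of 8 boxes pays k with probability
# (8 - k) / 8, so the expected haul is k * (8 - k) / 8 -- maximised
# at k = 4, where it equals 2.
```

Of course, the experiment was designed to probe regret rather than optimal play: it is the gap between where you stopped and where the devil sat that does the emotional work.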

In their first experiment, Brassen and colleagues had all of the participants play this ‘devil game’ during a functional magnetic resonance imaging (fMRI) brain scan. They wanted to test whether the young, older depressed, and healthy older participants responded differently to missed opportunities during the game, and whether these differences might also be reflected in activity in one area of the brain called the ventral striatum (an area known to be very active when we experience regret) and another area of the brain called the anterior cingulate (an area known to be active when controlling our emotions).

Brassen and her colleagues found that for healthy older participants, the area of the brain which is usually active during the experience of regret, the ventral striatum, was much less active during rounds of the game where they missed out on a lot of money, suggesting that the healthily aging brains were not processing regret in the same way the young and depressed older brains were. Also, when they looked at the emotion controlling center of the brain, the anterior cingulate, the researchers found that this area was much more active in the healthy older participants than the other two groups. Interestingly, Brassen and her colleagues found that the bigger the missed opportunity, the greater the activity in this area for healthy older participants, which suggests that their brains were actively mitigating their experience of regret.

[div class=attrib]Read the entire article after the jump.[end-div]

Growing Eyes in the Lab

[div class=attrib]From Nature:[end-div]

A stem-cell biologist has had an eye-opening success in his latest effort to mimic mammalian organ development in vitro. Yoshiki Sasai of the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, has grown the precursor of a human eye in the lab.

The structure, called an optic cup, is 550 micrometres in diameter and contains multiple layers of retinal cells including photoreceptors. The achievement has raised hopes that doctors may one day be able to repair damaged eyes in the clinic. But for researchers at the annual meeting of the International Society for Stem Cell Research in Yokohama, Japan, where Sasai presented the findings this week, the most exciting thing is that the optic cup developed its structure without guidance from Sasai and his team.

“The morphology is the truly extraordinary thing,” says Austin Smith, director of the Centre for Stem Cell Research at the University of Cambridge, UK.

Until recently, stem-cell biologists had been able to grow embryonic stem cells only into two-dimensional sheets. But over the past four years, Sasai has used mouse embryonic stem cells to grow well-organized, three-dimensional cerebral-cortex, pituitary-gland and optic-cup tissue. His latest result marks the first time that anyone has managed a similar feat using human cells.

Familiar patterns
The various parts of the human optic cup grew in mostly the same order as those in the mouse optic cup. This reconfirms a biological lesson: the cues for this complex formation come from inside the cell, rather than relying on external triggers.

In Sasai’s experiment, retinal precursor cells spontaneously formed a ball of epithelial tissue cells and then bulged outwards to form a bubble called an eye vesicle. That pliable structure then folded back on itself to form a pouch, creating the optic cup with an outer wall (the retinal epithelium) and an inner wall comprising layers of retinal cells including photoreceptors, bipolar cells and ganglion cells. “This resolves a long debate,” says Sasai, over whether the development of the optic cup is driven by internal or external cues.

There were some subtle differences in the timing of the developmental processes of the human and mouse optic cups. But the biggest difference was the size: the human optic cup had more than twice the diameter and ten times the volume of that of the mouse. “It’s large and thick,” says Sasai. The ratios, similar to those seen in development of the structure in vivo, are significant. “The fact that size is cell-intrinsic is tremendously interesting,” says Martin Pera, a stem-cell biologist at the University of Southern California, Los Angeles.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Discover Magazine.[end-div]

The 100 Million Year Collision

Four billion or so years from now, our very own Milky Way galaxy is expected to begin a slow but enormous collision with its galactic sibling, the Andromeda galaxy. Cosmologists predict the ensuing galactic smash will take around 100 million years to complete. It’s a shame we’ll not be around to witness the spectacle.

[div class=attrib]From Scientific American:[end-div]

The galactic theme in the context of planets and life is an interesting one. Take our own particular circumstances. As unappealingly non-Copernican as it is, there is no doubt that the Milky Way galaxy today is ‘special’. This should not be confused with any notion that special galaxy=special humans, since it’s really not clear yet that the astrophysical specialness of the galaxy has significant bearing on the likelihood of us sitting here picking our teeth. Nonetheless, the scientific method being what it is, we need to pay attention to any and all observations with as little bias as possible – so asking the question of what a ‘special’ galaxy might mean for life is OK, just don’t get too carried away.

First of all, the Milky Way galaxy is big. As spiral galaxies go it’s in the upper echelons of diameter and mass. In the relatively nearby universe, it and our nearest large galaxy, Andromeda, are the sumos in the room. This immediately makes it somewhat unusual: the great majority of galaxies in the observable universe are smaller. The relationship to Andromeda is also very particular. In effect the Milky Way and Andromeda are a binary pair, whose mutual distortion of spacetime is resulting in us barreling together at about 80 miles a second. In about 4 billion years these two galaxies will begin a ponderous collision lasting for perhaps 100 million years or so. It will be a soft type of collision – individual stars are so tiny compared to the distances between them that they themselves are unlikely to collide, but the great masses of gas and dust in the two galaxies will smack together – triggering the formation of new stars and planetary systems.
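
That 4-billion-year timescale is easy to sanity-check with round numbers. A back-of-envelope sketch in Python (the ~2.5-million-light-year distance to Andromeda is a commonly quoted figure, not from the article, and assuming a constant closing speed understates the acceleration as the two galaxies fall together):

```python
# Rough time until the Milky Way and Andromeda meet, at the article's
# quoted closing speed of about 80 miles per second.
LY_KM = 9.461e12               # kilometres in one light year
distance_km = 2.5e6 * LY_KM    # assumed Milky Way-Andromeda separation
speed_km_s = 80 * 1.609        # 80 miles/s converted to km/s
seconds = distance_km / speed_km_s
years = seconds / 3.156e7      # seconds in one year
print(f"~{years / 1e9:.1f} billion years")
```

This gives a figure of roughly 5–6 billion years, the same order as the dynamical models’ 4-billion-year estimate; gravity speeding up the approach closes the gap.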

Some dynamical models (including those in the most recent work based on Hubble telescope measurements) suggest that our solar system could be flung further away from the center of the merging galaxies; others indicate it could end up thrown towards the newly forming stellar core of a future Goliath galaxy (Milkomeda?). Does any of this matter for life? For us the answer may be moot. In only about 1 billion years the Sun will have grown luminous enough that the temperate climate we enjoy on the Earth may be long gone. In 3-4 billion years it may be luminous enough that Mars, if not utterly dried out and devoid of atmosphere by then, could sustain ‘habitable’ temperatures. Depending on where the vagaries of gravitational dynamics take the solar system as Andromeda comes lumbering through, we might end up surrounded by the pop and crackle of supernovae as the collision-induced formation of new massive stars gets underway. All in all it doesn’t look too good. But for other places, other solar systems that we see forming today, it could be a very different story.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Composition of Milky Way and Andromeda. Courtesy of NASA, ESA, Z. Levay and R. van der Marel (STScI), T. Hallas, and A. Mellinger).[end-div]

Zen and the Art of Meditation Messaging

Quite often you will be skimming a book or leafing through pages of your favorite magazine and you will recall having “seen” a specific word. However, you will not remember having read that page or section or having looked at that particular word. But, without fail, when you retrace your steps and look back you will find that specific word, that word that you did not consciously “see”. So, what’s going on?

[div class=attrib]From the New Scientist:[end-div]

MEDITATION increases our ability to tap into the hidden recesses of our brain that are usually outside the reach of our conscious awareness.

That’s according to Madelijn Strick of Utrecht University in the Netherlands and colleagues, who tested whether meditation has an effect on our ability to pick up subliminal messages.

The brain registers subliminal messages, but we are often unable to recall them consciously. To investigate, the team recruited 34 experienced practitioners of Zen meditation and randomly assigned them to either a meditation group or a control group. The meditation group was asked to meditate for 20 minutes in a session led by a professional Zen master. The control group was asked to merely relax for 20 minutes.

The volunteers were then asked 20 questions, each with three or four correct answers – for instance: “Name one of the four seasons”. Just before the subjects saw the question on a computer screen one potential answer – such as “spring” – flashed up for a subliminal 16 milliseconds.

The meditation group gave 6.8 answers, on average, that matched the subliminal words, whereas the control group gave just 4.9 (Consciousness and Cognition, DOI: 10.1016/j.concog.2012.02.010).

Strick thinks that the explanation lies in the difference between what the brain is paying attention to and what we are conscious of. Meditators are potentially accessing more of what the brain has paid attention to than non-meditators, she says.

“It is a truly exciting development that the second wave of rigorous, scientific meditation research is now yielding concrete results,” says Thomas Metzinger, at Johannes Gutenberg University in Mainz, Germany. “Meditation may be best seen as a process that literally expands the space of conscious experience.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Yoga.am.[end-div]

Mutant Gravity and Dark Magnetism

Scientific consensus states that our universe is not only expanding, but expanding at an ever-increasing rate. So, sometime in the very distant future (tens of billions of years) our Milky Way galaxy will be mostly alone, accompanied only by its close galactic neighbors, such as Andromeda. All else in the universe will have receded beyond the horizon of visible light. And, yet for all the experimental evidence, no one knows the precise cause(s) of this acceleration or even of the expansion itself. But, there is no shortage of bold new theories.

[div class=attrib]From New Scientist:[end-div]

WE WILL be lonely in the late days of the cosmos. Its glittering vastness will slowly fade as countless galaxies retreat beyond the horizon of our vision. Tens of billions of years from now, only a dense huddle of nearby galaxies will be left, gazing out into otherwise blank space.

That gloomy future comes about because space is expanding ever faster, allowing far-off regions to slip across the boundary from which light has time to reach us. We call the author of these woes dark energy, but we are no nearer to discovering its identity. Might the culprit be a repulsive force that emerges from the energy of empty space, or perhaps a modification of gravity at the largest scales? Each option has its charms, but also profound problems.

But what if that mysterious force making off with the light of the cosmos is an alien echo of light itself? Light is just an expression of the force of electromagnetism, and vast electromagnetic waves of a kind forbidden by conventional physics, with wavelengths trillions of times larger than the observable universe, might explain dark energy’s baleful presence. That is the bold notion of two cosmologists who think that such waves could also account for the mysterious magnetic fields that we see threading through even the emptiest parts of our universe. Smaller versions could be emanating from black holes within our galaxy.

It is almost two decades since we realised that the universe is running away with itself. The discovery came from observations of supernovae that were dimmer, and so further away, than was expected, and earned its discoverers the Nobel prize in physics in 2011.

Prime suspect in the dark-energy mystery is the cosmological constant, an unchanging energy which might emerge from the froth of short-lived, virtual particles that according to quantum theory are fizzing about constantly in otherwise empty space.

Mutant gravity

To cause the cosmic acceleration we see, dark energy would need to have an energy density of about half a joule per cubic kilometre of space. When physicists try to tot up the energy of all those virtual particles, however, the answer comes to either exactly zero (which is bad), or something so enormous that empty space would rip all matter to shreds (which is very bad). In this latter case the answer is a staggering 120 orders of magnitude out, making it a shoo-in for the least accurate prediction in all of physics.
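
The half-joule figure itself can be checked against standard cosmological numbers. A sketch in Python, using round-number assumptions on our part (a Hubble constant of about 70 km/s/Mpc and a dark-energy fraction of about 0.7):

```python
import math

# Dark-energy density from the critical density of the universe:
# rho_crit = 3 * H0^2 / (8 * pi * G), then take ~70% of it and
# convert the mass density to an energy density via E = m * c^2.
G = 6.674e-11                 # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8                   # speed of light, m/s
H0 = 70 * 1000 / 3.086e22     # 70 km/s/Mpc expressed in s^-1
rho_crit = 3 * H0**2 / (8 * math.pi * G)   # kg/m^3, ~9e-27
u_lambda = 0.7 * rho_crit * c**2           # J/m^3, ~6e-10
joules_per_km3 = u_lambda * 1e9            # 1 km^3 = 1e9 m^3
print(f"{joules_per_km3:.2f} J per cubic kilometre")
```

The result lands close to 0.6 joules per cubic kilometre, in line with the “about half a joule” quoted above; against that, a naive virtual-particle estimate overshoots by those infamous 120 orders of magnitude.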

This stumbling block has sent some researchers down another path. They argue that in dark energy we are seeing an entirely new side to gravity. At distances of many billions of light years, it might turn from an attractive to a repulsive force.

But it is dangerous to be so cavalier with gravity. Einstein’s general theory of relativity describes gravity as the bending of space and time, and predicts the motions of planets and spacecraft in our own solar system with cast-iron accuracy. Try bending the theory to make it fit acceleration on a cosmic scale, and it usually comes unstuck closer to home.

That hasn’t stopped many physicists persevering along this route. Until recently, Jose Beltrán and Antonio Maroto were among them. In 2008 at the Complutense University of Madrid, Spain, they were playing with a particular version of a mutant gravity model called a vector-tensor theory, which they had found could mimic dark energy. Then came a sudden realisation. The new theory was supposed to be describing a strange version of gravity, but its equations bore an uncanny resemblance to some of the mathematics underlying another force. “They looked like electromagnetism,” says Beltrán, now based at the University of Geneva in Switzerland. “We started to think there could be a connection.”

So they decided to see what would happen if their mathematics described not masses and space-time, but magnets and voltages. That meant taking a fresh look at electromagnetism. Like most of nature’s fundamental forces, electromagnetism is best understood as a phenomenon in which things come chopped into little pieces, or quanta. In this case the quanta are photons: massless, chargeless particles carrying fluctuating electric and magnetic fields that point at right angles to their direction of motion.

Alien photons

This description, called quantum electrodynamics or QED, can explain a vast range of phenomena, from the behaviour of light to the forces that bind molecules together. QED has arguably been tested more precisely than any other physical theory, but it has a dark secret. It wants to spit out not only photons, but also two other, alien entities.

The first kind is a wave in which the electric field points along the direction of motion, rather than at right angles as it does with ordinary photons. This longitudinal mode moves rather like a sound wave in air. The second kind, called a temporal mode, has no magnetic field. Instead, it is a wave of pure electric potential, or voltage. Like all quantum entities, these waves come in particle packets, forming two new kinds of photon.

As we have never actually seen either of these alien photons in reality, physicists found a way to hide them. They are spirited away using a mathematical fix called the Lorenz condition, which means that all their attributes are always equal and opposite, cancelling each other out exactly. “They are there, but you cannot see them,” says Beltrán.

Beltrán and Maroto’s theory looked like electromagnetism, but without the Lorenz condition. So they worked through their equations to see what cosmological implications that might have.

The strange waves normally banished by the Lorenz condition may come into being as brief quantum fluctuations – virtual waves in the vacuum – and then disappear again. In the early moments of the universe, however, there is thought to have been an episode of violent expansion called inflation, which was driven by very powerful repulsive gravity. The force of this expansion grabbed all kinds of quantum fluctuations and amplified them hugely. It created ripples in the density of matter, for example, which eventually seeded galaxies and other structures in the universe.

Crucially, inflation could also have boosted the new electromagnetic waves. Beltrán and Maroto found that this process would leave behind vast temporal modes: waves of electric potential with wavelengths many orders of magnitude larger than the observable universe. These waves contain some energy but because they are so vast we do not perceive them as waves at all. So their energy would be invisible, dark… perhaps, dark energy?

Beltrán and Maroto called their idea dark magnetism (arxiv.org/abs/1112.1106). Unlike the cosmological constant, it may be able to explain the actual quantity of dark energy in the universe. The energy in those temporal modes depends on the exact time inflation started. One plausible moment is about 10 trillionths of a second after the big bang, when the universe cooled below a critical temperature and electromagnetism split from the weak nuclear force to become a force in its own right. Physics would have suffered a sudden wrench, enough perhaps to provide the impetus for inflation.
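
That “10 trillionths of a second” pairs up with the electroweak temperature scale via a standard textbook relation for the radiation-dominated early universe, t ≈ 2.4 / √g* × (1 MeV / T)² seconds. A quick order-of-magnitude sketch (assuming g* ≈ 100 effective relativistic degrees of freedom near the electroweak scale, a conventional round number):

```python
import math

# Time after the big bang at which the universe cooled to the
# electroweak scale (~100 GeV), from the radiation-era relation
# t ~ 2.4 / sqrt(g_star) * (1 MeV / T)^2 seconds.
g_star = 100          # assumed effective degrees of freedom
T_mev = 1e5           # ~100 GeV electroweak scale, in MeV
t = 2.4 / math.sqrt(g_star) * (1.0 / T_mev) ** 2
print(f"{t:.1e} s")   # a few times 1e-11 s
```

That works out to a few tens of trillionths of a second, consistent with the timing quoted above for the electromagnetic and weak forces going their separate ways.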

If inflation did happen at this “electroweak transition”, Beltrán and Maroto calculate that it would have produced temporal modes with an energy density close to that of dark energy. The correspondence is only within an order of magnitude, which may not seem all that precise. In comparison with the cosmological constant, however, it is mildly miraculous.

The theory might also explain the mysterious existence of large-scale cosmic magnetic fields. Within galaxies we see the unmistakable mark of magnetic fields as they twist the polarisation of light. Although the turbulent formation and growth of galaxies could boost a pre-existing field, it is not clear where that seed field would have come from.

Even more strangely, magnetic fields seem to have infiltrated the emptiest deserts of the cosmos. Their influence was noticed in 2010 by Andrii Neronov and Ievgen Vovk at the Geneva Observatory. Some distant galaxies emit blistering gamma rays with energies in the teraelectronvolt range. These hugely energetic photons should smack into background starlight on their way to us, creating electrons and positrons that in turn will boost other photons up to gamma energies of around 100 gigaelectronvolts. The trouble is that astronomers see relatively little of this secondary radiation. Neronov and Vovk suggest that is because a diffuse magnetic field is randomly bending the path of electrons and positrons, making their emission more diffuse (Science, vol 328, p 73).

“It is difficult to explain cosmic magnetic fields on the largest scales by conventional mechanisms,” says astrophysicist Larry Widrow of Queen’s University in Kingston, Ontario, Canada. “Their existence in the voids might signal an exotic mechanism.” One suggestion is that giant flaws in space-time called cosmic strings are whipping them up.

With dark magnetism, such a stringy solution would be superfluous. As well as the gigantic temporal modes, dark magnetism should also lead to smaller longitudinal waves bouncing around the cosmos. These waves could generate magnetism on the largest scales and in the emptiest voids.

To begin with, Beltrán and Maroto had some qualms. “It is always dangerous to modify a well-established theory,” says Beltrán. Cosmologist Sean Carroll at the California Institute of Technology in Pasadena, echoes this concern. “They are doing extreme violence to electromagnetism. There are all sorts of dangers that things might go wrong,” he says. Such meddling could easily throw up absurdities, predicting that electromagnetic forces are different from what we actually see.

The duo soon reassured themselves, however. Although the theory means that temporal and longitudinal modes can make themselves felt, the only thing that can generate them is an ultra-strong gravitational field such as the repulsive field that sprang up in the era of inflation. So within the atom, in all our lab experiments, and out there among the planets, electromagnetism carries on in just the same way as QED predicts.

Carroll is not convinced. “It seems like a long shot,” he says. But others are being won over. Gonzalo Olmo, a cosmologist at the University of Valencia, Spain, was initially sceptical but is now keen. “The idea is fantastic. If we quantise electromagnetic fields in an expanding universe, the effect follows naturally.”

So how might we tell whether the idea is correct? Dark magnetism is not that easy to test. It is almost unchanging, and would stretch space in almost exactly the same way as a cosmological constant, so we can’t tell the two ideas apart simply by watching how cosmic acceleration has changed over time.

Ancient mark

Instead, the theory might be challenged by peering deep into the cosmic microwave background, a sea of radiation emitted when the universe was less than 400,000 years old. Imprinted on this radiation are the original ripples of matter density caused by inflation, and it may bear another ancient mark. The turmoil of inflation should have energised gravitational waves, travelling warps in space-time that stretch and squeeze everything they pass through. These waves should affect the polarisation of cosmic microwaves in a distinctive way, which could tell us about the timing and the violence of inflation. The European Space Agency’s Planck spacecraft might just spot this signature. If Planck or a future mission finds that inflation happened before the electroweak transition, at a higher energy scale, then that would rule out dark magnetism in its current form.

Olmo thinks that the theory might anyhow need some numerical tweaking, so that might not be fatal, although it would be a blow to lose the link between the electroweak transition and the correct amount of dark energy.

One day, we might even be able to see the twisted light of dark magnetism. In its present incarnation with inflation at the electroweak scale, the longitudinal waves would all have wavelengths greater than a few hundred million kilometres, longer than the distance from Earth to the sun. Detecting a light wave efficiently requires an instrument not much smaller than the wavelength, but in the distant future it might just be possible to pick up such waves using space-based radio telescopes linked up across the solar system. If inflation kicked in earlier at an even higher energy, as suggested by Olmo, some of the longitudinal waves could be much shorter. That would bring them within reach of Earth-based technology. Beltrán suggests that they might be detected with the Square Kilometre Array – a massive radio instrument due to come on stream within the next decade.

If these dark electromagnetic waves can be created by strong gravitational fields, then they could also be produced by the strongest fields in the cosmos today, those generated around black holes. Beltrán suggests that waves may be emitted by the black hole at the centre of the Milky Way. They might be short enough for us to see – but they could easily be invisibly faint. Beltrán and Maroto are planning to do the calculations to find out.

One thing they have calculated from their theory is the voltage of the universe. The voltage of the vast temporal waves of electric potential started at zero when they were first created at the time of inflation, and ramped up steadily. Today, it has reached a pretty lively 10^27 volts, or a billion billion gigavolts.

Just as well for us that it has nowhere to discharge. Unless, that is, some other strange quirk of cosmology brings a parallel universe nearby. The encounter would probably destroy the universe as we know it, but at least then our otherwise dark and lonely future would end with the mother of all lightning bolts.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Graphic courtesy of NASA / WMAP.[end-div]

Why Daydreaming is Good

Most of us, editor of theDiagonal included, have known this for a while. We’ve known that letting the mind wander aimlessly is crucial to creativity and problem-solving.

[div class=attrib]From Wired:[end-div]

It’s easy to underestimate boredom. The mental condition, after all, is defined by its lack of stimulation; it’s the mind at its most apathetic. This is why the poet Joseph Brodsky described boredom as a “psychological Sahara,” a cognitive desert “that starts right in your bedroom and spurns the horizon.” The hands of the clock seem to stop; the stream of consciousness slows to a drip. We want to be anywhere but here.

However, as Brodsky also noted, boredom and its synonyms can also become a crucial tool of creativity. “Boredom is your window,” the poet declared. “Once this window opens, don’t try to shut it; on the contrary, throw it wide open.”

Brodsky was right. The secret isn’t boredom per se: It’s how boredom makes us think. When people are immersed in monotony, they automatically lapse into a very special form of brain activity: mind-wandering. In a culture obsessed with efficiency, mind-wandering is often derided as a lazy habit, the kind of thinking we rely on when we don’t really want to think. (Freud regarded mind-wandering as an example of “infantile” thinking.) It’s a sign of procrastination, not productivity.

In recent years, however, neuroscience has dramatically revised our views of mind-wandering. For one thing, it turns out that the mind wanders a ridiculous amount. Last year, the Harvard psychologists Daniel Gilbert and Matthew A. Killingsworth published a fascinating paper in Science documenting our penchant for disappearing down the rabbit hole of our own mind. The scientists developed an iPhone app that contacted 2,250 volunteers at random intervals, asking them about their current activity and levels of happiness. It turns out that people were engaged in mind-wandering 46.9 percent of the time. In fact, the only activity in which their minds were not constantly wandering was love making. They were able to focus for that.

What’s happening inside the brain when the mind wanders? A lot. In 2009, a team led by Kalina Christoff of UBC and Jonathan Schooler of UCSB used “experience sampling” inside an fMRI machine to capture the brain in the midst of a daydream. (This condition is easy to induce: After subjects were given an extremely tedious task, they started to mind-wander within seconds.) Although it’s been known for nearly a decade that mind wandering is a metabolically intense process — your cortex consumes lots of energy when thinking to itself — this study further helped to clarify the sequence of mental events:

Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions—two brain systems that so far have been assumed to work in opposition—suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Two things worth noting here. The first is the reference to the default network. The name is literal: We daydream so easily and effortlessly that it appears to be our default mode of thought. The second is the simultaneous activation in executive and default regions, suggesting that mind wandering isn’t quite as mindless as we’d long imagined. (That’s why it seems to require so much executive activity.) Instead, a daydream seems to exist in the liminal space between sleep dreaming and focused attentiveness, in which we are still awake but not really present.

Last week, a team of Austrian scientists expanded on this result in PLoS ONE. By examining 17 patients with unresponsive wakefulness syndrome (UWS), 8 patients in a minimally conscious state (MCS), and 25 healthy controls, the researchers were able to detect the brain differences along this gradient of consciousness. The key difference was an inability among the most unresponsive patients to “deactivate” their default network. This suggests that these poor subjects were trapped within a daydreaming loop, unable to exercise their executive regions to pay attention to the world outside. (Problems with the deactivation of the default network have also been observed in patients with Alzheimer’s and schizophrenia.) The end result is that their mind’s eye is always focused inwards.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A daydreaming gentleman; from an original 1912 postcard published in Germany. Courtesy of Wikipedia.[end-div]

Something Out of Nothing

The debate on how the universe came to be rages on. Perhaps, however, we are a little closer to understanding why there is “something”, including us, rather than “nothing”.

[div class=attrib]From Scientific American:[end-div]

Why is there something rather than nothing? This is one of those profound questions that is easy to ask but difficult to answer. For millennia humans simply said, “God did it”: a creator existed before the universe and brought it into existence out of nothing. But this just begs the question of what created God—and if God does not need a creator, logic dictates that neither does the universe. Science deals with natural (not supernatural) causes and, as such, has several ways of exploring where the “something” came from.

Multiple universes. There are many multiverse hypotheses predicted from mathematics and physics that show how our universe may have been born from another universe. For example, our universe may be just one of many bubble universes with varying laws of nature. Those universes with laws similar to ours will produce stars, some of which collapse into black holes and singularities that give birth to new universes—in a manner similar to the singularity that physicists believe gave rise to the big bang.

M-theory. In his and Leonard Mlodinow’s 2010 book, The Grand Design, Stephen Hawking embraces “M-theory” (an extension of string theory that includes 11 dimensions) as “the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself.”

Quantum foam creation. The “nothing” of the vacuum of space actually consists of subatomic spacetime turbulence at extremely small distances measurable at the Planck scale—the length at which the structure of spacetime is dominated by quantum gravity. At this scale, the Heisenberg uncertainty principle allows energy to briefly decay into particles and antiparticles, thereby producing “something” from “nothing.”

Nothing is unstable. In his new book, A Universe from Nothing, cosmologist Lawrence M. Krauss attempts to link quantum physics to Einstein’s general theory of relativity to explain the origin of a universe from nothing: “In quantum gravity, universes can, and indeed always will, spontaneously appear from nothing. Such universes need not be empty, but can have matter and radiation in them, as long as the total energy, including the negative energy associated with gravity [balancing the positive energy of matter], is zero.” Furthermore, “for the closed universes that might be created through such mechanisms to last for longer than infinitesimal times, something like inflation is necessary.” Observations show that the universe is in fact flat (there is just enough matter to slow its expansion but not to halt it), has zero total energy and underwent rapid inflation, or expansion, soon after the big bang, as described by inflationary cosmology. Krauss concludes: “Quantum gravity not only appears to allow universes to be created from nothing—meaning … absence of space and time—it may require them. ‘Nothing’—in this case no space, no time, no anything!—is unstable.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: There’s Nothing Out There. Courtesy of Rolfe Kanefsky / Image Entertainment.[end-div]

The Illusion of Free Will

A plethora of recent articles and books from the neuroscience community adds weight to the position that human free will does not exist. Our exquisitely complex brains construct a rather compelling illusion; in reality, we are mere observers, captive to impulses driven entirely by our biology. And, for that matter, much of this biological determinism is unavailable to our conscious minds.

James Atlas provides a recent summary of current thinking.

[div class=attrib]From the New York Times:[end-div]

WHY are we thinking so much about thinking these days? Near the top of best-seller lists around the country, you’ll find Jonah Lehrer’s “Imagine: How Creativity Works,” followed by Charles Duhigg’s book “The Power of Habit: Why We Do What We Do in Life and Business,” and somewhere in the middle, where it’s held its ground for several months, Daniel Kahneman’s “Thinking, Fast and Slow.” Recently arrived is “Subliminal: How Your Unconscious Mind Rules Your Behavior,” by Leonard Mlodinow.

It’s the invasion of the Can’t-Help-Yourself books.

Unlike most pop self-help books, these are about life as we know it — the one you can change, but only a little, and with a ton of work. Professor Kahneman, who won the Nobel Prize in economic science a decade ago, has synthesized a lifetime’s research in neurobiology, economics and psychology. “Thinking, Fast and Slow” goes to the heart of the matter: How aware are we of the invisible forces of brain chemistry, social cues and temperament that determine how we think and act? Has the concept of free will gone out the window?

These books possess a unifying theme: The choices we make in day-to-day life are prompted by impulses lodged deep within the nervous system. Not only are we not masters of our fate; we are captives of biological determinism. Once we enter the portals of the strange neuronal world known as the brain, we discover that — to put the matter plainly — we have no idea what we’re doing.

Professor Kahneman breaks down the way we process information into two modes of thinking: System 1 is intuitive, System 2 is logical. System 1 “operates automatically and quickly, with little or no effort and no sense of voluntary control.” We react to faces that we perceive as angry faster than to “happy” faces because they contain a greater possibility of danger. System 2 “allocates attention to the effortful mental activities that demand it, including complex computations.” It makes decisions — or thinks it does. We don’t notice when a person dressed in a gorilla suit appears in a film of two teams passing basketballs if we’ve been assigned the job of counting how many times one team passes the ball. We “normalize” irrational data either by organizing it to fit a made-up narrative or by ignoring it altogether.

The effect of these “cognitive biases” can be unsettling: A study of judges in Israel revealed that 65 percent of requests for parole were granted after meals, dropping steadily to zero until the judges’ “next feeding.” “Thinking, Fast and Slow” isn’t prescriptive. Professor Kahneman shows us how our minds work, not how to fiddle with what Gilbert Ryle called the ghost in the machine.

“The Power of Habit” is more proactive. Mr. Duhigg’s thesis is that we can’t change our habits, we can only acquire new ones. Alcoholics can’t stop drinking through willpower alone: they need to alter behavior — going to A.A. meetings instead of bars, for instance — that triggers the impulse to drink. “You have to keep the same cues and rewards as before, and feed the craving by inserting a new routine.”

“The Power of Habit” and “Imagine” belong to a genre that has become increasingly conspicuous over the last few years: the hortatory book, armed with highly sophisticated science, that demonstrates how we can achieve our ambitions despite our sensory cluelessness.

[div class=attrib]Read the entire article following the jump.[end-div]

The Connectome: Slicing and Reconstructing the Brain

[tube]1nm1i4CJGwY[/tube]

[div class=attrib]From the Guardian:[end-div]

There is a macabre brilliance to the machine in Jeff Lichtman’s laboratory at Harvard University that is worthy of a Wallace and Gromit film. In one end goes brain. Out the other comes sliced brain, courtesy of an automated arm that wields a diamond knife. The slivers of tissue drop one after another on to a conveyor belt that zips along with the merry whirr of a cine projector.

Lichtman’s machine is an automated tape-collecting lathe ultramicrotome (Atlum), which, according to the neuroscientist, is the tool of choice for this line of work. It produces long strips of sticky tape with brain slices attached, all ready to be photographed through a powerful electron microscope.

When these pictures are combined into 3D images, they reveal the inner wiring of the organ, a tangled mass of nervous spaghetti. The research by Lichtman and his co-workers has a goal in mind that is so ambitious it is almost unthinkable.

If we are ever to understand the brain in full, they say, we must know how every neuron inside is wired up.

Fanciful though it may sound, the payoff could be profound. Map out our “connectome” – following other major “ome” projects such as the genome and transcriptome – and we will lay bare the biological code of our personalities, memories, skills and susceptibilities. Somewhere in our brains is who we are.

To use an understatement heard often from scientists, the job at hand is not trivial. Lichtman’s machine slices brain tissue into exquisitely thin wafers. To turn a 1mm thick slice of brain into neural salami takes six days in a process that yields about 30,000 slices.

But chopping up the brain is the easy part. When Lichtman began this work several years ago, he calculated how long it might take to image every slice of a 1cm mouse brain. The answer was 7,000 years. “When you hear numbers like that, it does make your pulse quicken,” Lichtman said.

The human brain is another story. There are 85bn neurons in the 1.4kg (3lbs) of flesh between our ears. Each has a cell body (grey matter) and long, thin extensions called dendrites and axons (white matter) that reach out and link to others. Most neurons have lots of dendrites that receive information from other nerve cells, and one axon that branches on to other cells and sends information out.

On average, each neuron forms 10,000 connections, through synapses with other nerve cells. Altogether, Lichtman estimates there are between 100tn and 1,000tn connections between neurons.

Unlike the lung, or the kidney, where the whole organ can be understood, more or less, by grasping the role of a handful of repeating physiological structures, the brain is made of thousands of specific types of brain cell that look and behave differently. Their names – Golgi, Betz, Renshaw, Purkinje – read like a roll call of the pioneers of neuroscience.

Lichtman, who is fond of calculations that expose the magnitude of the task he has taken on, once worked out how much computer memory would be needed to store a detailed human connectome.

“To map the human brain at the cellular level, we’re talking about 1m petabytes of information. Most people think that is more than the digital content of the world right now,” he said. “I’d settle for a mouse brain, but we’re not even ready to do that. We’re still working on how to do one cubic millimetre.”
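The figures quoted above invite a quick sanity check. Here is a minimal back-of-envelope sketch in Python; the inputs are the article’s numbers, while the arithmetic and variable names are our own illustration:

```python
# Back-of-envelope check on the connectome figures quoted above.
# Inputs are the article's numbers; the calculations are illustrative only.

slices_per_mm = 30_000            # ~30,000 slices from a 1 mm slab of tissue
slice_thickness_nm = 1e6 / slices_per_mm   # 1 mm = 1,000,000 nm
print(f"slice thickness: ~{slice_thickness_nm:.0f} nm")   # ~33 nm per slice

neurons = 85e9                    # 85 billion neurons in the human brain
synapses_per_neuron = 10_000      # average connections per neuron
connections = neurons * synapses_per_neuron
print(f"total connections: ~{connections:.1e}")
# ~8.5e14, i.e. 850 trillion -- comfortably inside the quoted
# range of 100tn (1e14) to 1,000tn (1e15) connections.
```

Run as written, the sketch confirms that the article’s ~33nm slice thickness and its 100tn–1,000tn connection range are mutually consistent with the quoted inputs.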

He says he is about to submit a paper on mapping a minuscule volume of the mouse connectome and is working with a German company on building a multibeam microscope to speed up imaging.

For some scientists, mapping the human connectome down to the level of individual cells is verging on overkill. “If you want to study the rainforest, you don’t need to look at every leaf and every twig and measure its position and orientation. It’s too much detail,” said Olaf Sporns, a neuroscientist at Indiana University, who coined the term “connectome” in 2005.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the Connectome Project / Guardian.[end-div]

Quantum Computer Leap

The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic-scale parallel processing.

[div class=attrib]From the Observer:[end-div]

The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.

Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through open switches and is blocked by closed switches. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.

Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the ability to build computer models they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, at well below -100 degrees celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role and that is the clue we need to identify a possible way to make progress.

It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.

Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.[end-div]

Spacetime as an Emergent Phenomenon

A small, but growing, idea in theoretical physics and cosmology is that spacetime may be emergent. That is, spacetime emerges from something much more fundamental, in much the same way that our perception of temperature emerges from the motion and characteristics of underlying particles.

[div class=attrib]More on this new front in our quest to answer the most basic of questions from FQXi:[end-div]

Imagine if nothing around you was real. And, no, not in a science-fiction Matrix sense, but in an actual science-fact way.

Technically, our perceived reality is a gigantic series of approximations: The tables, chairs, people, and cell phones that we interact with every day are actually made up of tiny particles—as all good schoolchildren learn. From the motion and characteristics of those particles emerge the properties that we see and feel, including color and temperature. Though we don’t see those particles, because they are so much smaller than the phenomena our bodies are built to sense, they govern our day-to-day existence.

Now, what if spacetime is emergent too? That’s the question that Joanna Karczmarek, a string theorist at the University of British Columbia, Vancouver, is attempting to answer. As a string theorist, Karczmarek is familiar with imagining invisible constituents of reality. String theorists posit that at a fundamental level, matter is made up of unthinkably tiny vibrating threads of energy that underlie subatomic particles, such as quarks and electrons. Most string theorists, however, assume that such strings dance across a pre-existing and fundamental stage set by spacetime. Karczmarek is pushing things a step further, by suggesting that spacetime itself is not fundamental, but made of more basic constituents.

Having carried out early research in atomic, molecular and optical physics, Karczmarek shifted into string theory because she “was more excited by areas where less was known”—and looking for the building blocks from which spacetime arises certainly fits that criterion. The project, funded by a $40,000 FQXi grant, is “high risk but high payoff,” Karczmarek says.

Although one of only a few string theorists to address the issue, Karczmarek is part of a growing movement in the wider physics community to create a theory that shows spacetime is emergent. (See, for instance, “Breaking the Universe’s Speed Limit.”) The problem really comes into focus for those attempting to combine quantum mechanics with Einstein’s theory of general relativity and thus is traditionally tackled directly by quantum gravity researchers, rather than by string theorists, Karczmarek notes.

That may change though. Nathan Seiberg, a string theorist at the Institute for Advanced Study (IAS) in Princeton, New Jersey, has found good reasons for his stringy colleagues to believe that at least space—if not spacetime—is emergent. “With space we can sort of imagine how it might work,” Seiberg says. To explain how, Seiberg uses an everyday example—the emergence of an apparently smooth surface of water in a bowl. “If you examine the water at the level of particles, there is no smooth surface. It looks like there is, but this is an approximation,” Seiberg says. Similarly, he has found examples in string theory where some spatial dimensions emerge when you take a step back from the picture (arXiv:hep-th/0601234v1). “At shorter distances it doesn’t look like these dimensions are there because they are quantum fluctuations that are very rapid,” Seiberg explains. “In fact, the notion of space ceases to make sense, and eventually if you go to shorter and shorter distances you don’t even need it for the formulation of the theory.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Nature.[end-div]

Vampire Wedding and the Moral Molecule

Attend a wedding. Gather the hundred or so guests, and take their blood. Take samples, that is. Then, measure the levels of a hormone called oxytocin. This is where neuroeconomist Paul Zak’s story begins — around a molecular messenger thought to be responsible for facilitating trust and empathy in all our intimate relationships.

[div class=attrib]From “The Moral Molecule” by Paul J. Zak, to be published May 10, courtesy of the Wall Street Journal:[end-div]

Could a single molecule—one chemical substance—lie at the very center of our moral lives?

Research that I have done over the past decade suggests that a chemical messenger called oxytocin accounts for why some people give freely of themselves and others are coldhearted louts, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and why women tend to be nicer and more generous than men. In our blood and in the brain, oxytocin appears to be the chemical elixir that creates bonds of trust not just in our intimate relationships but also in our business dealings, in politics and in society at large.

Known primarily as a female reproductive hormone, oxytocin controls contractions during labor, which is where many women encounter it as Pitocin, the synthetic version that doctors inject in expectant mothers to induce delivery. Oxytocin is also responsible for the calm, focused attention that mothers lavish on their babies while breast-feeding. And it is abundant, too, on wedding nights (we hope) because it helps to create the warm glow that both women and men feel during sex, a massage or even a hug.

Since 2001, my colleagues and I have conducted a number of experiments showing that when someone’s level of oxytocin goes up, he or she responds more generously and caringly, even with complete strangers. As a benchmark for measuring behavior, we relied on the willingness of our subjects to share real money with others in real time. To measure the increase in oxytocin, we took their blood and analyzed it. Money comes in conveniently measurable units, which meant that we were able to quantify the increase in generosity by the amount someone was willing to share. We were then able to correlate these numbers with the increase in oxytocin found in the blood.

Later, to be certain that what we were seeing was true cause and effect, we sprayed synthetic oxytocin into our subjects’ nasal passages—a way to get it directly into their brains. Our conclusion: We could turn the behavioral response on and off like a garden hose. (Don’t try this at home: Oxytocin inhalers aren’t available to consumers in the U.S.)

More strikingly, we found that you don’t need to shoot a chemical up someone’s nose, or have sex with them, or even give them a hug in order to create the surge in oxytocin that leads to more generous behavior. To trigger this “moral molecule,” all you have to do is give someone a sign of trust. When one person extends himself to another in a trusting way—by, say, giving money—the person being trusted experiences a surge in oxytocin that makes her less likely to hold back and less likely to cheat. Which is another way of saying that the feeling of being trusted makes a person more…trustworthy. Which, over time, makes other people more inclined to trust, which in turn…

If you detect the makings of an endless loop that can feed back onto itself, creating what might be called a virtuous circle—and ultimately a more virtuous society—you are getting the idea.

Obviously, there is more to it, because no one chemical in the body functions in isolation, and other factors from a person’s life experience play a role as well. Things can go awry. In our studies, we found that a small percentage of subjects never shared any money; analysis of their blood indicated that their oxytocin receptors were malfunctioning. But for everyone else, oxytocin orchestrates the kind of generous and caring behavior that every culture endorses as the right way to live—the cooperative, benign, pro-social way of living that every culture on the planet describes as “moral.” The Golden Rule is a lesson that the body already knows, and when we get it right, we feel the rewards immediately.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]CPK model of the Oxytocin molecule C43H66N12O12S2. Courtesy of Wikipedia.[end-div]

Your Brain Today

Progress in neuroscience continues to accelerate, and one of the principal catalysts of this progress is neuroscientist David Eagleman. We excerpt a recent article about Eagleman’s research into, amongst other things, synaesthesia, sensory substitution, time perception, the neurochemical basis for attraction, and consciousness.

[div class=attrib]From the Telegraph:[end-div]

It ought to be quite intimidating, talking to David Eagleman. He is one of the world’s leading neuroscientists, after all, known for his work on time perception, synaesthesia and the use of neurology in criminal justice. But as anyone who has read his best-selling books or listened to his TED talks online will know, he has a gift for communicating complicated ideas in an accessible and friendly way — Brian Cox with an American accent.

He lives in Houston, Texas, with his wife and their two-month-old baby. When we Skype each other, he is sitting in a book-lined study and he doesn’t look as if his nights are being too disturbed by mewling. No bags under his eyes. In fact, with his sideburns and black polo shirt he looks much younger than his 41 years, positively boyish. His enthusiasm for his subject is boyish, too, as he warns me, he “speaks fast”.

He sure does. And he waves his arms around. We are talking about the minute calibrations and almost instantaneous assessments the brain makes when members of the opposite sex meet, one of many brain-related subjects covered in his book Incognito: The Secret Lives of the Brain, which is about to be published in paperback.

“Men are consistently more attracted to women with dilated eyes,” he says. “Because that corresponds with sexual excitement.”

Still, I say, not exactly a romantic discovery, is it? How does this theory go down with his wife? “Well she’s a neuroscientist like me so we joke about it all the time, like when I grow a beard. Women will always say they don’t like beards, but when you do the study it turns out they do, and the reason is it’s a secondary sex characteristic that indicates sexual development, the thing that separates the men from the boys.”

Indeed, according to Eagleman, we mostly run on unconscious autopilot. Our neural systems have been carved by natural selection to solve problems that were faced by our ancestors. Which brings me to another of his books, Why The Net Matters. As the father of children who spend a great deal of their time on the internet, I want to know if he thinks it is changing their brains.

“It certainly is,” he says, “especially in the way we seek information. When we were growing up it was all about ‘just in case’ information, the Battle of Hastings and so on. Now it is ‘just in time’ learning, where a kid looks something up online if he needs to know about it. This means kids today are becoming less good at memorising, but in other ways their method of learning is superior to ours because it targets neurotransmitters in the brain, ones that are related to curiosity, emotional salience and interactivity. So I think there might be some real advantages to where this is going. Kids are becoming faster at searching for information. When you or I read, our eyes scan down the page, but for a Generation-Y kid, their eyes will have a different set of movements, top, then side, then bottom and that is the layout of webpages.”

In many ways Eagleman’s current status as “the poster boy of science’s most fashionable field” (as the neuroscientist was described in a recent New Yorker profile) seems entirely apt given his own upbringing. His mother was a biology teacher, his father a psychiatrist who was often called upon to evaluate insanity pleas. Yet Eagleman says he wasn’t drawn to any of this. “Growing up, I didn’t see my career path coming at all, because in tenth grade I always found biology gross, dissecting rats and frogs. But in college I started reading about the brain and then I found myself consuming anything I could on the subject. I became hooked.”

Eagleman’s mother has described him as an “unusual child”. He wrote his first words at two, and at 12 he was explaining Einstein’s theory of relativity to her. He also liked to ask for a list of 400 random objects then repeat them back from memory, in reverse order. At Rice University, Houston, he majored in electrical engineering, but then took a sabbatical, joined the Israeli army as a volunteer, spent a semester at Oxford studying political science and literature and finally moved to LA to try and become a stand-up comedian. It didn’t work out and so he returned to Rice, this time to study neurolinguistics. After this came his doctorate and his day job as a professor running a laboratory at Baylor College of Medicine, Houston (he does his book writing at night, doesn’t have hobbies and has never owned a television).

I ask if he has encountered any snobbery within the scientific community for being an academic who has “dumbed down” by writing popular science books that spend months on the New York Times bestseller list? “I have to tell you, that was one of my concerns, and I can definitely find evidence of that. Online, people will sometimes say terrible things about me, but they are the exceptions that illustrate a more benevolent rule. I give talks on university campuses and the students there tell me they read my books because they synthesise large swathes of data in a readable way.”

He actually thinks there is an advantage for scientists in making their work accessible to non-scientists. “I have many tens of thousands of neuroscience details in my head and the process of writing about them and trying to explain them to an eighth grader makes them become clearer in my own mind. It crystallises them.”

I tell him that my copy of Incognito is heavily annotated and there is one passage where I have simply written a large exclamation mark. It concerns Eric Weihenmayer who, in 2001, became the first blind person to climb Mount Everest. Today he climbs with a grid of more than six hundred tiny electrodes in his mouth. This device allows him to see with his tongue. Although the tongue is normally a taste organ, its moisture and chemical environment make it a good brain-machine interface when a tingly electrode grid is laid on its surface. The grid translates a video input into patterns of electrical pulses, allowing the tongue to discern qualities usually ascribed to vision such as distance, shape, direction of movement and size.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of ALAMY / Telegraph.[end-div]

Cocktail Party Science and Multitasking

The hit drama Mad Men shows us that cocktail parties can be fun — colorful drinks and colorful conversations with a host of very colorful characters. Yet cocktail parties also highlight one of our limitations: the inability to multitask. We are single-threaded animals, despite the constant and simultaneous demands on our attention from all directions and to all our senses.

Melinda Beck over at the WSJ Health Journal summarizes recent research that shows the deleterious effects of our attempts to multitask — why it’s so hard and why it’s probably not a good idea anyway, especially while driving.

[div class=attrib]From the Wall Street Journal:[end-div]

You’re at a party. Music is playing. Glasses are clinking. Dozens of conversations are driving up the decibel level. Yet amid all those distractions, you can zero in on the one conversation you want to hear.

This ability to hyper-focus on one stream of sound amid a cacophony of others is what researchers call the “cocktail-party effect.” Now, scientists at the University of California, San Francisco have pinpointed where that sound-editing process occurs in the brain—in the auditory cortex just behind the ear, not in areas of higher thought. The auditory cortex boosts some sounds and turns down others so that when the signal reaches the higher brain, “it’s as if only one person was speaking alone,” says principal investigator Edward Chang.

These findings, published in the journal Nature last week, underscore why people aren’t very good at multitasking—our brains are wired for “selective attention” and can focus on only one thing at a time. That innate ability has helped humans survive in a world buzzing with visual and auditory stimulation. But we keep trying to push the limits with multitasking, sometimes with tragic consequences. Drivers talking on cellphones, for example, are four times as likely to get into traffic accidents as those who aren’t.

Many of those accidents are due to “inattentional blindness,” in which people can, in effect, turn a blind eye to things they aren’t focusing on. Images land on our retinas and are either boosted or played down in the visual cortex before being passed to the brain, just as the auditory cortex filters sounds, as shown in the Nature study last week. “It’s a push-pull relationship—the more we focus on one thing, the less we can focus on others,” says Diane M. Beck, an associate professor of psychology at the University of Illinois.

That people can be completely oblivious to things in their field of vision was demonstrated famously in the “Invisible Gorilla experiment” devised at Harvard in the 1990s. Observers are shown a short video of youths tossing a basketball and asked to count how often the ball is passed by those wearing white. Afterward, the observers are asked several questions, including, “Did you see the gorilla?” Typically, about half the observers failed to notice that someone in a gorilla suit walked through the scene. They’re usually flabbergasted because they’re certain they would have noticed something like that.

“We largely see what we expect to see,” says Daniel Simons, one of the study’s creators and now a professor of psychology at the University of Illinois. As he notes in his subsequent book, “The Invisible Gorilla,” the more attention a task demands, the less attention we can pay to other things in our field of vision. That’s why pilots sometimes fail to notice obstacles on runways and radiologists may overlook anomalies on X-rays, especially in areas they aren’t scrutinizing.

And it isn’t just that sights and sounds compete for the brain’s attention. All the sensory inputs vie to become the mind’s top priority.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Getty Images / Wall Street Journal.[end-div]

Science and Politics

The tension between science, religion and politics that began several millennia ago continues unabated.

[div class=attrib]From ars technica:[end-div]

In the US, science has become a bit of a political punching bag, with a number of presidential candidates accusing climatologists of fraud, even as state legislators seek to inject phony controversies into science classrooms. It’s enough to make one long for the good old days when science was universally respected. But did those days ever actually exist?

A new look at decades of survey data suggests that there was never a time when science was universally respected, but one political group in particular—conservative voters—has seen its confidence in science decline dramatically over the last 30 years.

The researcher behind the new work, North Carolina’s Gordon Gauchat, figures there are three potential trajectories for the public’s view of science. One possibility is that the public, appreciating the benefits of the technological advances that science has helped to provide, would show a general increase in its affinity for science. An alternative prospect is that this process will inevitably peak, either because there are limits to how admired a field can be, or because a more general discomfort with modernity spills over to a field that helped bring it about.

The last prospect Gauchat considers is that there has been a change in views about science among a subset of the population. He cites previous research that suggests some view the role of science as having changed from one where it enhances productivity and living standards to one where it’s the primary justification for regulatory policies. “Science has always been politicized,” Gauchat writes. “What remains unclear is how political orientations shape public trust in science.”

To figure out which of these trends might apply, he turned to the General Social Survey, which has been gathering information on the US public’s views since 1972. During that time, the survey consistently contained a series of questions about confidence in US institutions, including the scientific community. The answers are divided pretty crudely—”a great deal,” “only some,” and “hardly any”—but they do provide a window into the public’s views on science. (In fact, “hardly any” was the choice of less than 7 percent of the respondents, so Gauchat simply lumped it in with “only some” for his analysis.)
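The recoding step described above is simple to sketch: because so few respondents chose “hardly any,” it is collapsed into “only some” before analysis. A minimal illustration (the response counts below are invented, not the survey's actual figures):

```python
# Collapse a sparse survey category into its neighbour, as Gauchat did
# with "hardly any" responses. Counts here are invented for illustration.

from collections import Counter

responses = (["a great deal"] * 40) + (["only some"] * 53) + (["hardly any"] * 7)

recoded = ["only some" if r == "hardly any" else r for r in responses]
counts = Counter(recoded)
```

The effect is to turn a three-way outcome into a binary one (high confidence vs. not), which is statistically far better behaved when one category holds under 7 percent of the sample.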

The data showed a few general trends. For much of the study period, moderates actually had the lowest levels of confidence in science, with liberals typically having the highest; the levels of trust for both these groups were fairly steady across the 34 years of data. Conservatives were the odd one out. At the very start of the survey in 1974, they actually had the highest confidence in scientific institutions. By the 1980s, however, they had dropped so that they had significantly less trust than liberals did; in recent years, they’ve become the least trusting of science of any political affiliation.

Examining other demographic trends, Gauchat noted that the only other group to see a significant decline over time is regular churchgoers. Crunching the data, he states, indicates that “The growing force of the religious right in the conservative movement is a chief factor contributing to conservatives’ distrust in science.” This decline in trust occurred even among those who had college or graduate degrees, despite the fact that advanced education typically correlated with enhanced trust in science.

[div class=attrib]Read the entire article after the jump:[end-div]

Runner’s High: How and Why

There is a small but mounting body of evidence that supports the notion of the so-called Runner’s High, a state of euphoria attained by athletes during and immediately following prolonged and vigorous exercise. But while the neurochemical basis for this may soon be understood, little is known as to why it happens. More on the how and the why from Scicurious Brain.

[div class=attrib]From the Scicurious over at Scientific American:[end-div]

I just came back from an 11 mile run. The wind wasn’t awful like it usually is, the sun was out, and I was at peace with the world, and right now, I still am. Later, I know my knees will be yelling at me and my body will want nothing more than to lie down. But right now? Right now I feel FANTASTIC.

What I am in the happy, zen-like, yet curiously energetic throes of is what is popularly known as the “runner’s high”. The runner’s high is a state of bliss achieved by athletes (not just runners) during and immediately following prolonged and intense exercise. It can be an extremely powerful, emotional experience. Many athletes will say they get it (and indeed, some would say we MUST get it, because otherwise why would we keep running 26.2 miles at a stretch?), but what IS it exactly? For some people it’s highly emotional, for some it’s peaceful, and for some it’s a burst of energy. And there are plenty of other people who don’t appear to get it at all. What causes it? Why do some people get it and others don’t?

Well, the short answer is that we don’t know. As I was coming back from my run, blissful and emotive enough that the sight of a small puppy could make me weepy with joy, I began to wonder myself…what is up with me? As I re-hydrated and began to sift through the literature, I found…well, not much. But what I did find suggests two competing hypotheses: the endogenous opioid hypothesis and the cannabinoid hypothesis.

The endogenous opioid hypothesis

This hypothesis of the runner’s high is based on a study showing that endorphins, endogenous opioids, are released during intense physical activity. When you think of the word “opioids”, you probably think of addictive drugs like opium or morphine. But your body also produces its own versions of these chemicals (called ‘endogenous’ or produced within an organism), usually in response to times of physical stress. Endogenous opioids can bind to the opioid receptors in your brain, which affect all sorts of systems. Opioid receptor activation can help to blunt pain, something that is surely present at the end of a long workout. Opioid receptors can also act in reward-related areas such as the striatum and nucleus accumbens. There, they can inhibit the release of inhibitory transmitters and increase the release of dopamine, making strenuous physical exercise more pleasurable. Endogenous opioid production has been shown to occur during the runner’s high in humans as well as after intense exercise in rats.

The cannabinoid hypothesis

Not only does the brain release its own forms of opioid chemicals, it also releases its own form of cannabinoids. When we usually talk about cannabinoids, we think about things like marijuana or the newer synthetic cannabinoids, which act upon cannabinoid receptors in the brain to produce their effects. But we also produce endogenous cannabinoids (called endocannabinoids), such as anandamide, which also act upon those same receptors. Studies have shown that deletion of cannabinoid receptor 1 decreases wheel running in mice, and that intense exercise causes increases in anandamide in humans.

Not only how, but why?

There isn’t a lot out there on HOW the runner’s high might occur, but there is even less on WHY. There are several hypotheses out there, but none of them, as far as I can tell, are yet supported by evidence. First there is the hypothesis of a placebo effect due to achieving goals. The idea is that you expect yourself to achieve a difficult goal, and then feel great when you do. While the runner’s high does have some things in common with goal achievement, it doesn’t really explain why people get it on training runs or regular runs, when they are not necessarily pushing themselves extremely hard.

[div class=attrib]Read the entire article after the jump, (no pun intended).[end-div]

[div class=attrib]Image courtesy of Cincinnati.com.[end-div]

So Where Is Everybody?

Astrobiologist Caleb Scharf brings us up to date on Fermi’s Paradox, which asks why, given that our galaxy is so old, other sentient interstellar travelers haven’t found us yet. The answer may come from a video game.

[div class=attrib]From Scientific American:[end-div]

Right now, all across the planet, millions of people are engaged in a struggle with enormous implications for the very nature of life itself. Making sophisticated tactical decisions and wrestling with chilling and complex moral puzzles, they are quite literally deciding the fate of our existence.

Or at least they are pretending to.

The video game Mass Effect has now reached its third and final installment; a huge planet-destroying, species-wrecking, epic finale to a story that takes humanity from its tentative steps into interstellar space to a critical role in a galactic, and even intergalactic, saga. It’s awfully good: even setting aside the fantastic visual design and gameplay, at its heart is a rip-roaring plot and countless backstories that tie the experience into one of the most carefully and completely imagined sci-fi universes out there.

As a scientist, and someone who will sheepishly admit to a love of videogames (from countless hours spent as a teenager coding my own rather inferior efforts, to an occasional consumer’s dip into the lushness of what a multi-billion dollar industry can produce), the Mass Effect series is fascinating for a number of reasons. The first of which is the relentless attention to plausible background detail. Take for example the task of finding mineral resources in Mass Effect 2. Flying your ship to different star systems presents you with a bird’s eye view of the planets, each of which has a fleshed out description – be it inhabited, or more often, uninhabitable. These have been torn from the annals of the real exoplanets, gussied up a little, but still recognizable. There are hot Jupiters, and icy Neptune-like worlds. There are gassy planets, rocky planets, and watery planets of great diversity in age, history and elemental composition. It’s a surprisingly good representation of what we now think is really out there.

But the biggest idea, the biggest piece of fiction-meets-genuine-scientific-hypothesis, is the overarching story of Mass Effect. It directly addresses one of the great questions of astrobiology – is there intelligent life elsewhere in our galaxy, and if so, why haven’t we intersected with it yet? The first serious thinking about this problem seems to have arisen during a lunchtime chat in the 1940s, when the famous physicist Enrico Fermi (for whom the fundamental particle type ‘fermion’ is named) is supposed to have asked “Where is Everybody?” The essence of the Fermi Paradox is that since our galaxy is very old, perhaps 10 billion years old, unless intelligent life is almost impossibly rare it will have arisen ages before we came along. Such life will have had time to essentially span the Milky Way: even spreading out at relatively slow sub-light speeds, it – or its artificial surrogates, machines – will have reached every nook and cranny. Thus we should have noticed it, or been noticed by it, unless we are truly the only example of intelligent life.
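The arithmetic behind the paradox is easy to check for yourself. A back-of-the-envelope sketch (the 5%-of-light-speed cruise figure is an assumption chosen purely for illustration):

```python
# Even at a small fraction of light speed, crossing the Milky Way takes
# far less time than the galaxy's age. The 5% of c figure is an
# illustrative assumption, not a claim about real starships.

GALAXY_DIAMETER_LY = 100_000       # rough diameter of the Milky Way
GALAXY_AGE_YR = 10e9               # ~10 billion years, as quoted above
speed_fraction_of_c = 0.05         # assumed cruise speed: 5% of light speed

# Light covers one light year per year, so travel time in years is simple.
crossing_time_yr = GALAXY_DIAMETER_LY / speed_fraction_of_c
fraction_of_age = crossing_time_yr / GALAXY_AGE_YR
```

Crossing the galaxy takes about two million years at that speed, roughly 0.02% of the galaxy's age: an ancient civilisation has had thousands of crossing times to reach every nook and cranny.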

The Fermi Paradox comes with a ton of caveats and variants. It’s not hard to think of all manner of reasons why intelligent life might be teeming out there, but still not have met us – from self-destructive behavior to the realistic hurdles of interstellar travel. But to my mind Mass Effect has what is perhaps one of the most interesting, if not entertaining, solutions. This will spoil the story; you have been warned.

Without going into all the colorful details, the central premise is that a hugely advanced and ancient race of artificially intelligent machines ‘harvests’ all sentient, space-faring life in the Milky Way every 50,000 years. These machines otherwise lie dormant out in the depths of intergalactic space. They have constructed and positioned an ingenious web of technological devices (including the Mass Effect relays, providing rapid interstellar travel) and habitats within the Galaxy that effectively sieve through the rising civilizations, helping the successful flourish and multiply, ripening them up for eventual culling. The reason for this? Well, the plot is complex and somewhat ambiguous, but one thing that these machines do is use the genetic slurry of millions, billions of individuals from a species to create new versions of themselves.

It’s a grand ol’ piece of sci-fi opera, but it also provides a neat solution to the Fermi Paradox via a number of ideas: a) The most truly advanced interstellar species spends most of its time out of the Galaxy in hibernation. b) Purging all other sentient (space-faring) life every 50,000 years puts a stop to any great spreading across the Galaxy. c) Sentient, space-faring species are inevitably drawn into the technological lures and habitats left for them, and so are less inclined to explore.

These make it very likely that, until a species is capable of at least proper interplanetary space travel (in the game, humans have to reach Mars to become aware of what’s going on at all), it will conclude that the Galaxy is a lonely place.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image: Intragalactic life. Courtesy of J. Schombert, U. Oregon.[end-div]

Your Molecular Ancestors

[div class=attrib]From Scientific American:[end-div]

Well, perhaps your great-to-the-hundred-millionth-grandmother was.

Understanding the origins of life and the mechanics of the earliest beginnings of life is as important for the quest to unravel the Earth’s biological history as it is for the quest to seek out other life in the universe. We’re pretty confident that single-celled organisms – bacteria and archaea – were the first ‘creatures’ to slither around on this planet, but what happened before that is a matter of intense and often controversial debate.

One possibility for a precursor to these organisms was a world without DNA, but with the bare bone molecular pieces that would eventually result in the evolutionary move to DNA and its associated machinery. This idea was put forward by an influential paper in the journal Nature in 1986 by Walter Gilbert (winner of a Nobel in Chemistry), who fleshed out an idea by Carl Woese – who had earlier identified the Archaea as a distinct branch of life. This ancient biomolecular system was called the RNA-world, since it consists of ribonucleic acid sequences (RNA) but lacks the permanent storage mechanisms of deoxyribonucleic acids (DNA).

A key part of the RNA-world hypothesis is that in addition to carrying reproducible information in their sequences, RNA molecules can also perform the duties of enzymes in catalyzing reactions – sustaining a busy, self-replicating, evolving ecosystem. In this picture RNA evolves away until eventually items like proteins come onto the scene, at which point things can really gear up towards more complex and familiar life. It’s an appealing picture for the stepping-stones to life as we know it.

In modern organisms a very complex molecular structure called the ribosome is the critical machine that reads the information in a piece of messenger-RNA (that has spawned off the original DNA) and then assembles proteins according to this blueprint by snatching amino acids out of a cell’s environment and putting them together. Ribosomes are amazing; they’re also composed of a mix of large numbers of RNA molecules and protein molecules.
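The ribosome's read-and-assemble loop can be caricatured in a few lines of code: walk the messenger-RNA three bases (one codon) at a time, and look up each codon's amino acid. This is a toy sketch with only a handful of entries from the standard genetic code; real translation involves tRNAs, initiation factors and much more:

```python
# Toy model of ribosomal translation: read mRNA codon by codon and
# collect amino acids until a stop codon appears. Only a few codons
# from the standard genetic code are included, for brevity.

CODON_TABLE = {
    "AUG": "Met",  # start codon
    "UUU": "Phe", "GGC": "Gly", "GCU": "Ala",
    "UAA": "STOP", "UAG": "STOP", "UGA": "STOP",
}

def translate(mrna):
    """Walk the mRNA in steps of three bases, collecting amino acids
    until a stop codon (or the end of the sequence) is reached."""
    protein = []
    for i in range(0, len(mrna) - 2, 3):
        residue = CODON_TABLE[mrna[i:i + 3]]
        if residue == "STOP":
            break
        protein.append(residue)
    return protein

peptide = translate("AUGUUUGGCGCUUAA")
```

The point of the sketch is the information flow: the sequence is the blueprint, and the lookup table plays the role that the ribosome and its pool of amino-acid-carrying molecules play in the cell.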

But there’s a possible catch to all this, and it relates to the idea of a protein-free RNA-world some 4 billion years ago.

[div class=attrib]Read more after the jump:[end-div]

[div class=attrib]Image: RNA molecule. Courtesy of Wired / Universitat Pampeu Fabra.[end-div]

Male Brain + Female = Jello

[div class=attrib]From Scientific American:[end-div]

In one experiment, just telling a man he would be observed by a female was enough to hurt his psychological performance.

Movies and television shows are full of scenes where a man tries unsuccessfully to interact with a pretty woman. In many cases, the potential suitor ends up acting foolishly despite his best attempts to impress. It seems like his brain isn’t working quite properly and according to new findings, it may not be.

Researchers have begun to explore the cognitive impairment that men experience before and after interacting with women. A 2009 study demonstrated that after a short interaction with an attractive woman, men experienced a decline in mental performance. A more recent study suggests that this cognitive impairment takes hold even when men simply anticipate interacting with a woman who they know very little about.

Sanne Nauts and her colleagues at Radboud University Nijmegen in the Netherlands ran two experiments using men and women university students as participants. They first collected a baseline measure of cognitive performance by having the students complete a Stroop test. Developed in 1935 by the psychologist John Ridley Stroop, the test is a common way of assessing our ability to process competing information. The test involves showing people a series of words describing different colors that are printed in different colored inks. For example, the word “blue” might be printed in green ink and the word “red” printed in blue ink. Participants are asked to name, as quickly as they can, the color of the ink that the words are written in. The test is cognitively demanding because our brains can’t help but process the meaning of the word along with the color of the ink. When people are mentally tired, they tend to complete the task at a slower rate.
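The structure of a Stroop task is easy to sketch: each trial pairs a colour word with an ink colour, and the participant must always name the ink. The incongruent pairings (word and ink disagree) are the cognitively demanding ones. A minimal illustration, with a simplified three-colour design assumed for brevity:

```python
# Build a toy Stroop trial list: every combination of colour word and
# ink colour. Real studies randomise order and measure response times;
# this just shows the trial structure.

import itertools

COLOURS = ["red", "blue", "green"]

trials = [{"word": word, "ink": ink, "congruent": word == ink}
          for word, ink in itertools.product(COLOURS, COLOURS)]

incongruent = [t for t in trials if not t["congruent"]]

def correct_answer(trial):
    # The task is always to name the ink colour, never to read the word.
    return trial["ink"]
```

Slower or less accurate responses on the incongruent trials, relative to a participant's own baseline, are what Nauts and colleagues used as their measure of cognitive depletion.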

After completing the Stroop Test, participants in Nauts’ study were asked to take part in another supposedly unrelated task. They were asked to read out loud a number of Dutch words while sitting in front of a webcam. The experimenters told them that during this “lip reading task” an observer would watch them over the webcam. The observer was given either a common male or female name. Participants were led to believe that this person would see them over the webcam, but they would not be able to interact with the person. No pictures or other identifying information were provided about the observer—all the participants knew was his or her name. After the lip reading task, the participants took another Stroop test. Women’s performance on the second test did not differ, regardless of the gender of their observer. However, men who thought a woman was observing them ended up performing worse on the second Stroop test. This cognitive impairment occurred even though the men had not interacted with the female observer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock/Iconogenic.[end-div]

There’s the Big Bang theory and then there’s The Big Bang Theory

Now in its fifth season on U.S. television, The Big Bang Theory has made serious geekiness fun and science cool. In fact, the show has risen in popularity to such an extent that a Google search for “big bang theory” ranks the show first, above all other more learned scientific entries.

Brad Hooker from Symmetry Breaking asks some deep questions of David Saltzberg, science advisor to The Big Bang Theory.

[div class=attrib]From Symmetry Breaking:[end-div]

For those who live, breathe and laugh physics, one show entangles them all: The Big Bang Theory. Now in its fifth season on CBS, the show follows a group of geeks, including a NASA engineer, an astrophysicist and two particle physicists.

Every episode has at least one particle physics joke. On faster-than-light neutrinos: “Is this observation another Swiss export full of more holes than their cheese?” On Saul Perlmutter clutching the Nobel Prize: “What’s the matter, Saul? You afraid somebody’s going to steal it, like you stole Einstein’s cosmological constant?”

To make these jokes timely and accurate, while sprinkling the sets with authentic scientific plots and posters, the show’s writers depend on one physicist, David Saltzberg. Since the first episode, Saltzberg’s dose of realism has made science chic again, and has even been credited with increasing admissions to physics programs. Symmetry writer Brad Hooker asked the LHC physicist, former Tevatron researcher and University of California, Los Angeles professor to explain how he walks the tightrope between science and sitcom.

Brad: How many of your suggestions are put into the show?

David: In general, when they ask for something, they use it. But it’s never anything that’s funny or moves the story along. It’s the part that you don’t need to understand. They explained to me in the beginning that you can watch an I Love Lucy rerun and not understand Spanish, but understand that Ricky Ricardo is angry. That’s all the level of science understanding needed for the show.

B: These references are current. Astrophysicist Saul Perlmutter of Lawrence Berkeley National Laboratory was mentioned on the show just weeks after winning the Nobel Prize for discovering the accelerating expansion of the universe.

D: Right. And you may wonder why they chose Saul Perlmutter, as opposed to the other two winners. It just comes down to that they liked the sound of his name better. Things like that matter. The writers think of the script in terms of music and the rhythm of the lines. I usually give them multiple choices because I don’t know if they want something short or long or something with odd sounds in it. They really think about that kind of thing.

B: Do the writers ever ask you to explain the science and it goes completely over their heads?

D: We respond by email so I don’t really know. But I don’t think it goes over their heads because you can Wikipedia anything.

One thing was a little difficult for me: they asked for a spoof of the Born-Oppenheimer approximation, which is harder than it sounds. But for the most part it’s just a matter of narrowing it down to a few choices. There are so many ways to go through it and I deliberately chose things that are current.

First of all, these guys live in our universe—they’re talking about the things we physicists are talking about. And also, there isn’t a whole lot of science journalism out there. It’s been cut back a lot. In getting the words out there, whether it’s “dark matter” or “topological insulators,” hopefully some fraction of the audience will Google it.

B: Are you working with any other science advisors? I know one character is a neurobiologist.

D: Luckily the actress who portrays her, Mayim Bialik, is also a neuroscientist. She has a PhD in neuroscience from UCLA. So that worked out really well because I don’t know all of physics, let alone all of science. What I’m able to do with the physics is say, “Well, we don’t really talk like that even though it’s technically correct.” And I can’t do that for biology, but she can.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of The Big Bang Theory, Warner Bros.[end-div]

Everything Comes in Threes

[div class=attrib]From the Guardian:[end-div]

Last week’s results from the Daya Bay neutrino experiment were the first real measurement of the third neutrino mixing angle, θ13 (theta one-three). There have been previous experiments which set limits on the angle, but this is the first time it has been shown to be significantly different from zero.

Since θ13 is a fundamental parameter in the Standard Model of particle physics, this would be an important measurement anyway. But there’s a bit more to it than that.

Neutrinos – whatever else they might be doing – mix up amongst themselves as they travel through space. This is a quantum mechanical effect, and comes from the fact that there are two ways of defining the three types of neutrino.

You can define them by the way they are produced. So a neutrino which is produced (or destroyed) in conjunction with an electron is an “electron neutrino”. If a muon is involved, it’s a “muon neutrino”. The third one is a “tau neutrino”. We call this the “flavour”.

Or you can define them by their masses. Usually we just call this definition neutrinos 1, 2 and 3.

The two definitions don’t line up, and there is a matrix which tells you how much of each “flavour” neutrino overlaps with each “mass” one. This is the neutrino mixing matrix. Inside this matrix in the standard model there are potentially four parameters describing how the neutrinos mix.

You could just have two-way mixing. For example, the flavour states might just mix up neutrinos 1 and 2, and neutrinos 2 and 3. This would be the case if the angle θ13 were zero. If it is bigger than zero (as Daya Bay have now shown) then neutrino 1 also mixes with neutrino 3. In this case, and only in this case, a fourth parameter is also allowed in the matrix. This fourth parameter (δ) is one we haven’t measured yet, but now we know it is there. And the really important thing is, if it is there, and also not zero, then it introduces an asymmetry between matter and antimatter.
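The link between the third mixing angle, the fourth parameter and matter-antimatter asymmetry is often summarised by the Jarlskog invariant, a single number that vanishes if either quantity is zero. A numerical sketch (the mixing-angle values below are illustrative, roughly in line with global fits, and the phase delta is a guess since it remains unmeasured):

```python
# The Jarlskog invariant J sets the scale of CP (matter-antimatter)
# violation in neutrino mixing. It is proportional to sin(theta13) and
# sin(delta), so both must be non-zero for any asymmetry to appear.
# Angle values here are illustrative, not measured results.

import math

def jarlskog(theta12, theta23, theta13, delta):
    """J = s12*c12 * s23*c23 * s13*c13^2 * sin(delta)."""
    s12, c12 = math.sin(theta12), math.cos(theta12)
    s23, c23 = math.sin(theta23), math.cos(theta23)
    s13, c13 = math.sin(theta13), math.cos(theta13)
    return s12 * c12 * s23 * c23 * s13 * c13 ** 2 * math.sin(delta)

t12, t23 = math.radians(34), math.radians(45)
t13 = math.radians(9)           # roughly the size Daya Bay measured
delta = math.radians(90)        # assumed: delta is still unmeasured

# If theta13 were zero, J would vanish and delta would drop out
# entirely, which is why a non-zero theta13 matters so much.
assert jarlskog(t12, t23, 0.0, delta) == 0.0
j = jarlskog(t12, t23, t13, delta)
```

With these illustrative values J comes out a few percent, far larger than the analogous quark-sector invariant, which is part of why neutrino CP violation is such an exciting prospect.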

This is important because currently we don’t know why there is more matter than antimatter around. We also don’t know why there are three copies of neutrinos (and indeed of each class of fundamental particle). But we know that three copies is the minimum number which allows some difference in the way matter and antimatter experience the weak nuclear force. This is the kind of clue which sets off big klaxons in the minds of physicists: New physics hiding somewhere here! It strongly suggests that these two not-understood facts are connected by some bigger, better theory than the one we have.

We’ve already measured a matter-antimatter difference for quarks; a non-zero θ13 means there can be a difference for neutrinos too. More clues.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: The first use of a hydrogen bubble chamber to detect neutrinos, on November 13, 1970. A neutrino hit a proton in a hydrogen atom. The collision occurred at the point where three tracks emanate on the right of the photograph. Courtesy of Wikipedia.[end-div]

Have Wormhole, Will Travel

Intergalactic travel just became a lot easier, well, if only theoretically at the moment.

[div class=attrib]From New Scientist:[end-div]

It is not every day that a piece of science fiction takes a step closer to nuts-and-bolts reality. But that is what seems to be happening to wormholes. Enter one of these tunnels through space-time, and a few short steps later you may emerge near Pluto or even in the Andromeda galaxy millions of light years away.

You probably won’t be surprised to learn that no one has yet come close to constructing such a wormhole. One reason is that they are notoriously unstable. Even on paper, they have a tendency to snap shut in the blink of an eye unless they are propped open by an exotic form of matter with negative energy, whose existence is itself in doubt.

Now, all that has changed. A team of physicists from Germany and Greece has shown that building wormholes may be possible without any input from negative energy at all. “You don’t even need normal matter with positive energy,” says Burkhard Kleihaus of the University of Oldenburg in Germany. “Wormholes can be propped open with nothing.”

The findings raise the tantalising possibility that we might finally be able to detect a wormhole in space. Civilisations far more advanced than ours may already be shuttling back and forth through a galactic-wide subway system constructed from wormholes. And eventually we might even be able to use them ourselves as portals to other universes.

Wormholes first emerged in Einstein’s general theory of relativity, which famously shows that gravity is nothing more than the hidden warping of space-time by energy, usually the mass-energy of stars and galaxies. Soon after Einstein published his equations in 1916, Austrian physicist Ludwig Flamm discovered that they also predicted conduits through space and time.

But it was Einstein himself who made detailed investigations of wormholes with Nathan Rosen. In 1935, they concocted one consisting of two black holes, connected by a tunnel through space-time. Travelling through their wormhole was only possible if the black holes at either end were of a special kind. A conventional black hole has such a powerful gravitational field that material sucked in can never escape once it has crossed what is called the event horizon. The black holes at the end of an Einstein-Rosen wormhole would be unencumbered by such points of no return.

Einstein and Rosen’s wormholes seemed a mere curiosity for another reason: their destination was inconceivable. The only connection the wormholes offered from our universe was to a region of space in a parallel universe, perhaps with its own stars, galaxies and planets. While today’s theorists are comfortable with the idea of our universe being just one of many, in Einstein and Rosen’s day such a multiverse was unthinkable.

Fortunately, it turned out that general relativity permitted the existence of another type of wormhole. In 1955, American physicist John Wheeler showed that it was possible to connect two regions of space in our universe, which would be far more useful for fast intergalactic travel. He coined the catchy name "wormhole", and the term "black hole" is also credited to him.

The trouble is that the wormholes of Wheeler and of Einstein and Rosen all share the same flaw: they are unstable. Send even a single photon of light zooming through and it instantly triggers the formation of an event horizon, which effectively snaps the wormhole shut.

Bizarrely, it is the American planetary astronomer Carl Sagan who is credited with moving the field on. In his science fiction novel, Contact, he needed a quick and scientifically sound method of galactic transport for his heroine – played by Jodie Foster in the movie. Sagan asked theorist Kip Thorne at the California Institute of Technology in Pasadena for help, and Thorne realised a wormhole would do the trick. In 1987, he and his graduate students Michael Morris and Ulvi Yurtsever worked out the recipe to create a traversable wormhole. It turned out that the mouths could be kept open by hypothetical material possessing a negative energy. Given enough negative energy, such a material has a repulsive form of gravity, which physically pushes open the wormhole mouth.

Negative energy is not such a ridiculous idea. Imagine two parallel metal plates sitting in a vacuum. If you place them close together the vacuum between them has negative energy – that is, less energy than the vacuum outside. This is because a normal vacuum is like a roiling sea of waves, and the waves that are too big to fit between the plates are naturally excluded. This leaves less energy inside the plates than outside.

Unfortunately, this kind of negative energy exists in quantities far too feeble to prop open a wormhole mouth. Not only that, but a Thorne-Morris-Yurtsever wormhole that is big enough for someone to crawl through requires a tremendous amount of energy – equivalent to the energy pumped out in a year by an appreciable fraction of the stars in the galaxy.

Back to the drawing board then? Not quite. There may be a way to bypass those difficulties. All the wormholes envisioned until recently assume that Einstein’s theory of gravity is correct. In fact, this is unlikely to be the case. For a start, the theory breaks down at the heart of a black hole, as well as at the beginning of time in the big bang. Also, quantum theory, which describes the microscopic world of atoms, is incompatible with general relativity. Since quantum theory is supremely successful – explaining everything from why the ground is solid to how the sun shines – many researchers believe that Einstein’s theory of gravity must be an approximation of a deeper theory.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image of a traversable wormhole which connects the place in front of the physical institutes of Tübingen University with the sand dunes near Boulogne sur Mer in the north of France. Courtesy of Wikipedia.[end-div]

Need Creative Inspiration? Take a New Route to Work

[div class=attrib]From Miller-McCune:[end-div]

Want to boost your creativity? Tomorrow morning, pour some milk into an empty bowl, and then add the cereal.

That may sound, well, flaky. But according to a newly published study, preparing a common meal in reverse order may stimulate innovative thinking.

Avoiding conventional behavior at the breakfast table “can help people break their cognitive patterns, and thus lead them to think more flexibly and creatively,” according to a research team led by psychologist Simone Ritter of Radboud University Nijmegen in the Netherlands.

She and her colleagues, including Rodica Ioana Damian of the University of California, Davis, argue that “active involvement in an unusual event” can trigger higher levels of creativity. They note this activity can take many forms, from studying abroad for a semester to coping with the unexpected death of a loved one.
But, writing in the Journal of Experimental Social Psychology, they provide evidence that something simpler will suffice.

The researchers describe an experiment in which Dutch university students were asked to prepare a breakfast sandwich popular in the Netherlands.

Half of them did so in the conventional manner: They put a slice of bread on a plate, buttered the bread and then placed chocolate chips on top. The others — prompted by a script on a computer screen — first put chocolate chips on a plate, then buttered a slice of bread and finally “placed the bread butter-side-down on the dish with the chocolate chips.”

After completing their culinary assignment, they turned their attention to the “Unusual Uses Task,” a widely used measure of creativity. They were given two minutes to generate uses for a brick and another two minutes to come up with as many answers as they could to the question: “What makes sound?”

“Cognitive flexibility” was scored not by counting how many answers they came up with, but rather by the number of categories those answers fell into. For the “What makes sound?” test, a participant whose answers were all animals or machines received a score of one, while someone whose list included “dog,” “car” and “ocean” received a three.

“A high cognitive flexibility score indicates an ability to switch between categories, overcome fixedness, and thus think more creatively,” Ritter and her colleagues write.
On both tests, those who made their breakfast treat backwards had higher scores. Breaking their normal sandwich-making pattern apparently opened them up; their minds wandered more freely, allowing for more innovative thought.

[div class=attrib]Read the entire article here.[end-div]

What’s in a Name?

Are you a Leszczynska or a Bob? And, do you wish to be liked? Well, sorry Leszczynska. It turns out that having an easily pronounceable name makes you more likable.

[div class=attrib]From Wired:[end-div]

Though it might seem impossible, and certainly inadvisable, to judge a person by their name, a new study suggests our brains try anyway.

The more pronounceable a person’s name is, the more likely people are to favor them.

“When we can process a piece of information more easily, when it’s easier to comprehend, we come to like it more,” said psychologist Adam Alter of New York University and co-author of a Journal of Experimental Social Psychology study published in December.

Fluency, the idea that the brain favors information that’s easy to use, dates back to the 1960s, when researchers found that people most liked images of Chinese characters if they’d seen them many times before.

Researchers since then have explored other roles that names play, how they affect our judgment and to what degree.

Studies have shown, for example, that people can partly predict a person’s income and education using only their first name. Childhood is perhaps the richest area for name research: Boys with girls’ names are more likely to be suspended from school. And the less popular a name is, the more likely a child is to be delinquent.

In 2005, Alter and his colleagues explored how pronounceability of company names affects their performance in the stock market. Stripped of all obvious influences, they found companies with simpler names and ticker symbols traded better than the stocks of more difficult-to-pronounce companies.

“The effect is often very, very hard to quantify because so much depends on context, but it’s there and measurable,” Alter said. “You can’t avoid it.”
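Fluency has no single formula, and the study asked people to rate pronounceability directly. But as a toy illustration only – my own crude proxy, not a measure the authors used – one might score a name's difficulty by its longest unbroken run of consonants:

```python
def fluency_penalty(name: str) -> int:
    """A crude disfluency proxy: the longest unbroken run of
    consonants in the name (treating 'y' as a vowel)."""
    vowels = set("aeiouy")
    longest = run = 0
    for ch in name.lower():
        if ch.isalpha() and ch not in vowels:
            run += 1
            longest = max(longest, run)
        else:
            run = 0
    return longest

print(fluency_penalty("Bob"))          # 1
print(fluency_penalty("Leszczynska"))  # 4 ("szcz")
```

Even this simple heuristic separates the article's Bob from its Leszczynska.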

But how much does pronunciation guide our perceptions of people? To find out, Alter and colleagues Simon Laham and Peter Koval of the University of Melbourne carried out five studies.

In the first, they asked 19 female and 16 male college students to rank 50 surnames according to their ease or difficulty of pronunciation, and according to how much they liked or disliked them. In the second, they had 17 female and 7 male students vote for hypothetical political candidates solely on the basis of their names. In the third, they asked 55 female and 19 male students to vote on candidates about whom they knew both names and some political positions.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Dave Mosher/Wired.[end-div]

Synaesthesia: Smell the Music

[div class=attrib]From the Economist:[end-div]

THAT some people make weird associations between the senses has been acknowledged for over a century. The condition has even been given a name: synaesthesia. Odd as it may seem to those not so gifted, synaesthetes insist that spoken sounds and the symbols which represent them give rise to specific colours or that individual musical notes have their own hues.

Yet there may be a little of this cross-modal association in everyone. Most people agree that loud sounds are “brighter” than soft ones. Likewise, low-pitched sounds are reminiscent of large objects and high-pitched ones evoke smallness. Anne-Sylvie Crisinel and Charles Spence of Oxford University think something similar is true between sound and smell.

Ms Crisinel and Dr Spence wanted to know whether an odour sniffed from a bottle could be linked to a specific pitch, and even a specific instrument. To find out, they asked 30 people to inhale 20 smells—ranging from apple to violet and wood smoke—which came from a teaching kit for wine-tasting. After giving each sample a good sniff, volunteers had to click their way through 52 sounds of varying pitches, played by piano, woodwind, string or brass, and identify which best matched the smell. The results of this study, to be published later this month in Chemical Senses, are intriguing.

The researchers’ first finding was that the volunteers did not think their request utterly ridiculous. It rather made sense, the volunteers told the researchers afterwards. The second was that there was significant agreement between volunteers. Sweet and sour smells were rated as higher-pitched, smoky and woody ones as lower-pitched. Blackberry and raspberry were very piano. Vanilla had elements of both piano and woodwind. Musk was strongly brass.

It is not immediately clear why people employ their musical senses in this way to help their assessment of a smell. But gone are the days when science assumed each sense worked in isolation. People live, say Dr Spence and Ms Crisinel, in a multisensory world and their brains tirelessly combine information from all sources to make sense, as it were, of what is going on around them. Nor is this response restricted to humans. Studies of the brains of mice show that regions involved in olfaction also react to sound.

Taste, too, seems linked to hearing. Ms Crisinel and Dr Spence have previously established that sweet and sour tastes, like smells, are linked to high pitch, while bitter tastes bring lower pitches to mind. Now they have gone further. In a study that will be published later this year they and their colleagues show how altering the pitch and instruments used in background music can alter the way food tastes.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of cerebromente.org.br.[end-div]

Spooky Action at a Distance Explained

[div class=attrib]From Scientific American:[end-div]

Quantum entanglement is such a mainstay of modern physics that it is worth reflecting on how long it took to emerge. What began as a perceptive but vague insight by Albert Einstein languished for decades before becoming a branch of experimental physics and, increasingly, modern technology.

Einstein’s two most memorable phrases perfectly capture the weirdness of quantum mechanics. “I cannot believe that God plays dice with the universe” expressed his disbelief that randomness in quantum physics was genuine and impervious to any causal explanation. “Spooky action at a distance” referred to the fact that quantum physics seems to allow influences to travel faster than the speed of light. This was, of course, disturbing to Einstein, whose theory of relativity prohibited any such superluminal propagation.

These arguments were qualitative. They were targeted at the worldview offered by quantum theory rather than its predictive power. Niels Bohr is commonly seen as the patron saint of quantum physics, defending it against Einstein’s repeated onslaughts. He is usually said to be the ultimate winner in this battle of wits. However, Bohr’s writing was terribly obscure. He was known for saying “never express yourself more clearly than you are able to think,” a motto which he adhered to very closely. His arguments, like Einstein’s, were qualitative, verging on highly philosophical. The Einstein-Bohr dispute, although historically important, could not be settled experimentally—and the experiment is the ultimate judge of validity of any theoretical ideas in physics. For decades, the phenomenon was all but ignored.

All that changed with John Bell. In 1964 he understood how to convert the complaints about “dice-playing” and “spooky action at a distance” into a simple inequality involving measurements on two particles. The inequality is satisfied in a world where God does not play dice and there is no spooky action. The inequality is violated if the fates of the two particles are intertwined, so that if we measure a property of one of them, we immediately know the same property of the other one—no matter how far apart the particles are from each other. This state where particles behave like twin brothers is said to be entangled, a term introduced by Erwin Schrödinger.

[div class=attrib]Read the whole article here.[end-div]

Women and Pain

New research suggests that women feel pain more intensely than men.

[div class=attrib]From Scientific American:[end-div]

When a woman falls ill, her pain may be more intense than a man’s, a new study suggests.

Across a number of different diseases, including diabetes, arthritis and certain respiratory infections, women in the study reported feeling more pain than men, the researchers said.

The study is one of the largest to examine sex differences in human pain perception. The results are in line with earlier findings, and reveal that sex differences in pain sensitivity may be present in many more diseases than previously thought.

Because pain is subjective, the researchers can’t know for sure whether women, in fact, experience more pain than men. A number of factors, including a person’s mood and whether they take pain medication, likely influence how much pain they say they’re in.

In all, the researchers assessed sex differences in reported pain for more than 250 diseases and conditions.

For almost every diagnosis, women reported higher average pain scores than men. Women’s scores were, on average, 20 percent higher than men’s scores, according to the study.

Women with lower back pain, and knee and leg strain consistently reported higher scores than men. Women also reported feeling more pain in the neck (for conditions such as torticollis, in which the neck muscles twist or spasm) and sinuses (during sinus infections) than did men, a result not found by previous research.

It could be that women assign different numbers to the level of pain they perceive compared with men, said Roger B. Fillingim, a pain researcher at the University of Florida College of Dentistry, who was not involved with the new study.

But the study was large, and the findings are backed up by previous work, Fillingim said.

“I think the most [simple] explanation is that women are indeed experiencing higher levels of pain than men,” Fillingim said.

The reason for this is not known, Fillingim said. Past research suggests a number of factors contribute to perceptions of pain level, including hormones, genetics and psychological factors, which may vary between men and women, Fillingim said. It’s also possible the pain systems work differently in men and women, or women experience more severe forms of disease than men, he said.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of CNN.[end-div]

The More Things Stay the Same, the More They Change?

[div class=attrib]From Scientific American:[end-div]

Some things never change. Physicists call them the constants of nature. Such quantities as the velocity of light, c, Newton’s constant of gravitation, G, and the mass of the electron, me, are assumed to be the same at all places and times in the universe. They form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe. Physics has progressed by making ever more accurate measurements of their values.

And yet, remarkably, no one has ever successfully predicted or explained any of the constants. Physicists have no idea why constants take the special numerical values that they do (given the choice of units). In SI units, c is 299,792,458; G is 6.673 × 10⁻¹¹; and me is 9.10938188 × 10⁻³¹—numbers that follow no discernible pattern. The only thread running through the values is that if many of them were even slightly different, complex atomic structures such as living beings would not be possible. The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or “theory of everything.” Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature.

In recent years, however, the status of the constants has grown more muddied, not less. Researchers have found that the best candidate for a theory of everything, the variant of string theory called M-theory, is self-consistent only if the universe has more than four dimensions of space and time—as many as seven more. One implication is that the constants we observe may not, in fact, be the truly fundamental ones. Those live in the full higher-dimensional space, and we see only their three-dimensional “shadows.”

Meanwhile physicists have also come to appreciate that the values of many of the constants may be the result of mere happenstance, acquired during random events and elementary particle processes early in the history of the universe. In fact, string theory allows for a vast number—10⁵⁰⁰—of possible “worlds” with different self-consistent sets of laws and constants. So far researchers have no idea why our combination was selected. Continued study may reduce the number of logically possible worlds to one, but we have to remain open to the unnerving possibility that our known universe is but one of many—a part of a multiverse—and that different parts of the multiverse exhibit different solutions to the theory, our observed laws of nature being merely one edition of many systems of local bylaws.

No further explanation would then be possible for many of our numerical constants other than that they constitute a rare combination that permits consciousness to evolve. Our observable universe could be one of many isolated oases surrounded by an infinity of lifeless space—a surreal place where different forces of nature hold sway and particles such as electrons or structures such as carbon atoms and DNA molecules could be impossibilities. If you tried to venture into that outside world, you would cease to be.

Thus, string theory gives with the right hand and takes with the left. It was devised in part to explain the seemingly arbitrary values of the physical constants, and the basic equations of the theory contain few arbitrary parameters. Yet so far string theory offers no explanation for the observed values of the constants.

[div class=attrib]Read the entire article here.[end-div]