Tag Archives: neuroscience

Single-tasking is Human

If you’re an office worker, you will relate. Recently you will have participated in a team meeting or conference call, only to have at least one person say, when asked a question, “Sorry, can you please repeat that? I was multitasking.”

Many of us believe, or have been tricked into believing, that doing multiple things at once makes us more productive. Business branded this phenomenon as multitasking. After all, if computers could do it, then why not humans? Yet experience shows that humans are woefully inadequate at performing multiple concurrent tasks that require dedicated attention. Of course, humans are experts at walking and chewing gum at the same time, but such activities require very little involvement from the brain’s higher functions. There is a growing body of anecdotal and experimental evidence showing poorer performance on multiple tasks done concurrently than on the same tasks performed sequentially. In fact, for quite some time researchers have shown that dealing with multiple streams of information at once is a real problem for our limited brains.

Yet most businesses seem to demand or reward multitasking behavior. More damagingly, the multitasking epidemic now seems to be the norm at home as well.

[div class=attrib]From the WSJ:[end-div]

In the few minutes it takes to read this article, chances are you’ll pause to check your phone, answer a text, switch to your desktop to read an email from the boss’s assistant, or glance at the Facebook or Twitter messages popping up in the corner of your screen. Off-screen, in your open-plan office, crosstalk about a colleague’s preschooler might lure you away, or a co-worker may stop by your desk for a quick question.

And bosses wonder why it is tough to get any work done.

Distraction at the office is hardly new, but as screens multiply and managers push frazzled workers to do more with less, companies say the problem is worsening and is affecting business.

While some firms make noises about workers wasting time on the Web, companies are realizing the problem is partly their own fault.

Even though digital technology has led to significant productivity increases, the modern workday seems custom-built to destroy individual focus. Open-plan offices and an emphasis on collaborative work leave workers with little insulation from colleagues’ chatter. A ceaseless tide of meetings and internal emails means that workers increasingly scramble to get their “real work” done on the margins, early in the morning or late in the evening. And the tempting lure of social-networking streams and status updates make it easy for workers to interrupt themselves.

“It is an epidemic,” says Lacy Roberson, a director of learning and organizational development at eBay Inc. At most companies, it’s a struggle “to get work done on a daily basis, with all these things coming at you,” she says.

Office workers are interrupted—or self-interrupt—roughly every three minutes, academic studies have found, with numerous distractions coming in both digital and human forms. Once thrown off track, it can take some 23 minutes for a worker to return to the original task, says Gloria Mark, a professor of informatics at the University of California, Irvine, who studies digital distraction.

Companies are experimenting with strategies to keep workers focused. Some are limiting internal emails—with one company moving to ban them entirely—while others are reducing the number of projects workers can tackle at a time.

Last year, Jamey Jacobs, a divisional vice president at Abbott Vascular, a unit of health-care company Abbott Laboratories, learned that his 200 employees had grown stressed trying to squeeze in more heads-down, focused work amid the daily thrum of email and meetings.

“It became personally frustrating that they were not getting the things they wanted to get done,” he says. At meetings, attendees were often checking email, trying to multitask and in the process obliterating their focus.

Part of the solution for Mr. Jacobs’s team was that oft-forgotten piece of office technology: the telephone.

Mr. Jacobs and productivity consultant Daniel Markovitz found that employees communicated almost entirely over email, whether the matter was mundane, such as cake in the break room, or urgent, like an equipment issue.

The pair instructed workers to let the importance and complexity of their message dictate whether to use cellphones, office phones or email. Truly urgent messages and complex issues merited phone calls or in-person conversations, while email was reserved for messages that could wait.

Workers now pick up the phone more, logging fewer internal emails and say they’ve got clarity on what’s urgent and what’s not, although Mr. Jacobs says staff still have to stay current with emails from clients or co-workers outside the group.

[div class=attrib]Read the entire article after the jump, and learn more in this insightful article on multitasking over at Big Think.[end-div]

[div class=attrib]Image courtesy of Big Think.[end-div]

Hearing and Listening

Auditory neuroscientist Seth Horowitz guides us through the science of hearing and listening in his new book, “The Universal Sense: How Hearing Shapes the Mind.” He clarifies the important distinction between attentive listening with the mind and the more passive act of hearing, and laments the many modern distractions that threaten our ability to listen effectively.

[div class=attrib]From the New York Times:[end-div]

HERE’S a trick question. What do you hear right now?

If your home is like mine, you hear the humming sound of a printer, the low throbbing of traffic from the nearby highway and the clatter of plastic followed by the muffled impact of paws landing on linoleum — meaning that the cat has once again tried to open the catnip container atop the fridge and succeeded only in knocking it to the kitchen floor.

The slight trick in the question is that, by asking you what you were hearing, I prompted your brain to take control of the sensory experience — and made you listen rather than just hear. That, in effect, is what happens when an event jumps out of the background enough to be perceived consciously rather than just being part of your auditory surroundings. The difference between the sense of hearing and the skill of listening is attention.

Hearing is a vastly underrated sense. We tend to think of the world as a place that we see, interacting with things and people based on how they look. Studies have shown that conscious thought takes place at about the same rate as visual recognition, requiring a significant fraction of a second per event. But hearing is a quantitatively faster sense. While it might take you a full second to notice something out of the corner of your eye, turn your head toward it, recognize it and respond to it, the same reaction to a new or sudden sound happens at least 10 times as fast.

This is because hearing has evolved as our alarm system — it operates out of line of sight and works even while you are asleep. And because there is no place in the universe that is totally silent, your auditory system has evolved a complex and automatic “volume control,” fine-tuned by development and experience, to keep most sounds off your cognitive radar unless they might be of use as a signal that something dangerous or wonderful is somewhere within the kilometer or so that your ears can detect.

This is where attention kicks in.

Attention is not some monolithic brain process. There are different types of attention, and they use different parts of the brain. The sudden loud noise that makes you jump activates the simplest type: the startle. A chain of five neurons from your ears to your spine takes that noise and converts it into a defensive response in a mere tenth of a second — elevating your heart rate, hunching your shoulders and making you cast around to see if whatever you heard is going to pounce and eat you. This simplest form of attention requires almost no brains at all and has been observed in every studied vertebrate.

More complex attention kicks in when you hear your name called from across a room or hear an unexpected birdcall from inside a subway station. This stimulus-directed attention is controlled by pathways through the temporoparietal and inferior frontal cortex regions, mostly in the right hemisphere — areas that process the raw, sensory input, but don’t concern themselves with what you should make of that sound. (Neuroscientists call this a “bottom-up” response.)

But when you actually pay attention to something you’re listening to, whether it is your favorite song or the cat meowing at dinnertime, a separate “top-down” pathway comes into play. Here, the signals are conveyed through a dorsal pathway in your cortex, part of the brain that does more computation, which lets you actively focus on what you’re hearing and tune out sights and sounds that aren’t as immediately important.

In this case, your brain works like a set of noise-suppressing headphones, with the bottom-up pathways acting as a switch to interrupt if something more urgent — say, an airplane engine dropping through your bathroom ceiling — grabs your attention.

Hearing, in short, is easy. You and every other vertebrate that hasn’t suffered some genetic, developmental or environmental accident have been doing it for hundreds of millions of years. It’s your life line, your alarm system, your way to escape danger and pass on your genes. But listening, really listening, is hard when potential distractions are leaping into your ears every fifty-thousandth of a second — and pathways in your brain are just waiting to interrupt your focus to warn you of any potential dangers.

Listening is a skill that we’re in danger of losing in a world of digital distraction and information overload.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Listener (TV series). Courtesy of Shaftsbury Films, CTV / Wikipedia.[end-div]

The Great Blue Monday Fallacy

A yearlong survey of moodiness shows that the so-called Monday Blues may be more a figment of the imagination than fact.

[div class=attrib]From the New York Times:[end-div]

DESPITE the beating that Mondays have taken in pop songs — Fats Domino crooned “Blue Monday, how I hate blue Monday” — the day does not deserve its gloomy reputation.

Two colleagues and I recently published an analysis of a remarkable yearlong survey by the Gallup Organization, which conducted 1,000 live interviews a day, asking people across the United States to recall their mood in the prior day. We scoured the data for evidence that Monday was bluer than Tuesday or Wednesday. We couldn’t find any.

Mood was evaluated with several adjectives measuring positive or negative feelings. Spanish-only speakers were queried in Spanish. Interviewers spoke to people in every state on cellphones and land lines. The data unequivocally showed that Mondays are as pleasant to Americans as the three days that follow, and only a trifle less joyful than Fridays. Perhaps no surprise, people generally felt good on the weekend — though for retirees, the distinction between weekend and weekdays was only modest.

Likewise, day-of-the-week mood was gender-blind. Over all, women assessed their daily moods more negatively than men did, but relative changes from day to day were similar for both sexes.

And yet still, the belief in blue Mondays persists.

Several years ago, in another study, I examined expectations about mood and day of the week: two-thirds of the sample nominated Monday as the “worst” day of the week. Other research has confirmed that this sentiment is widespread, despite the fact that, well, we don’t really feel any gloomier on that day.

The question is, why? Why do we believe something that our own immediate experience indicates simply isn’t true?

As it turns out, the blue Monday mystery highlights a phenomenon familiar to behavioral scientists: that beliefs or judgments about experience can be at odds with actual experience. Indeed, the disconnection between beliefs and experience is common.

Vacations, for example, are viewed more pleasantly after they are over compared with how they were experienced at the time. And motorists who drive fancy cars report having more fun driving than those who own more modest vehicles, though in-car monitoring shows this isn’t the case. The same is often true in reverse as well: we remember pain or symptoms of illness at higher levels than real-time experience suggests, in part because we ignore symptom-free periods in between our aches and pains.

HOW do we make sense of these findings? The human brain has vast, but limited, capacities to store, retrieve and process information. Yet we are often confronted with questions that challenge these capacities. And this is often when the disconnect between belief and experience occurs. When information isn’t available for answering a question — say, when it did not make it into our memories in the first place — we use whatever information is available, even if it isn’t particularly relevant to the question at hand.

When asked about pain for the last week, most people cannot completely remember all of its ups and downs over seven days. However, we are likely to remember it at its worst and may use that as a way of summarizing pain for the entire week. When asked about our current satisfaction with life, we may focus on the first things that come to mind — a recent spat with a spouse or maybe a compliment from the boss at work.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: “I Don’t Like Mondays” single cover. Courtesy of The Boomtown Rats / Ensign Records.[end-div]

The Rise of Neurobollocks

For readers of thediagonal in North America, “neurobollocks” would roughly translate to “neurobullshit”.

So what is this growing “neuro-trend”, why is there an explosion in “neuro-babble” and all things with a “neuro-” prefix, and is Malcolm Gladwell to blame?

[div class=attrib]From the New Statesman:[end-div]

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Amazon.[end-div]

Empathy and Touch

[div class=attrib]From Scientific American:[end-div]

When a friend hits her thumb with a hammer, you don’t have to put much effort into imagining how this feels. You know it immediately. You will probably tense up, your “Ouch!” may arise even quicker than your friend’s, and chances are that you will feel a little pain yourself. Of course, you will then thoughtfully offer consolation and bandages, but your initial reaction seems just about automatic. Why?

Neuroscience now offers you an answer: A recent line of research has demonstrated that seeing other people being touched activates primary sensory areas of your brain, much like experiencing the same touch yourself would do. What these findings suggest is beautiful in its simplicity—that you literally “feel with” others.

There is no denying that the exceptional interpersonal understanding we humans show is by and large a product of our emotional responsiveness. We are automatically affected by other people’s feelings, even without explicit communication. Our involvement is sometimes so powerful that we have to flee it, turning our heads away when we see someone get hurt in a movie. Researchers hold that this capacity emerged long before humans evolved. However, only quite recently has it been given a name: A mere hundred years ago, the word “Empathy”—a combination of the Greek “in” (em-) and “feeling” (pathos)—was coined by the British psychologist E. B. Titchener during his endeavor to translate the German Einfühlungsvermögen (“the ability to feel into”).

Despite the lack of a universally agreed-upon definition of empathy, the mechanisms of sharing and understanding another’s experience have always been of scientific and public interest—and particularly so since the introduction of “mirror neurons.” This important discovery was made two decades ago by  Giacomo Rizzolatti and his co-workers at the University of Parma, who were studying motor neuron properties in macaque monkeys. To compensate for the tedious electrophysiological recordings required, the monkey was occasionally given food rewards. During these incidental actions something unexpected happened: When the monkey, remaining perfectly still, saw the food being grasped by an experimenter in a specific way, some of its motor neurons discharged. Remarkably, these neurons normally fired when the monkey itself grasped the food in this way. It was as if the monkey’s brain was directly mirroring the actions it observed. This “neural resonance,” which was later also demonstrated in humans, suggested the existence of a special type of “mirror” neurons that help us understand other people’s actions.

Do you find yourself wondering, now, whether a similar mirror mechanism could have caused your pungent empathic reaction to your friend maltreating herself with a hammer? A group of scientists led by Christian Keysers believed so. The researchers had their participants watch short movie clips of people being touched, while using functional magnetic resonance imaging (fMRI) to record their brain activity. The brain scans revealed that the somatosensory cortex, a complex of brain regions processing touch information, was highly active during the movie presentations—although participants were not being touched at all. As was later confirmed by other studies, this activity strongly resembled the somatosensory response participants showed when they were actually touched in the same way. A recent study by Esther Kuehn and colleagues even found that, during the observation of a human hand being touched, parts of the somatosensory cortex were particularly active when (judging by perspective) the hand clearly belonged to another person.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Communicating with the Comatose

[div class=attrib]From Scientific American:[end-div]

Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions.

Incredibly, he provided answers. A change in blood flow to certain parts of the man’s injured brain convinced Owen that patient 23 was conscious and able to communicate. It was the first time that anyone had exchanged information with someone in a vegetative state.

Patients in these states have emerged from a coma and seem awake. Some parts of their brains function, and they may be able to grind their teeth, grimace or make random eye movements. They also have sleep–wake cycles. But they show no awareness of their surroundings, and doctors have assumed that the parts of the brain needed for cognition, perception, memory and intention are fundamentally damaged. They are usually written off as lost.

Owen’s discovery, reported in 2010, caused a media furore. Medical ethicist Joseph Fins and neurologist Nicholas Schiff, both at Weill Cornell Medical College in New York, called it a “potential game changer for clinical practice”. The University of Western Ontario in London, Canada, soon lured Owen away from Cambridge with Can$20 million (US$19.5 million) in funding to make the techniques more reliable, cheaper, more accurate and more portable — all of which Owen considers essential if he is to help some of the hundreds of thousands of people worldwide in vegetative states. “It’s hard to open up a channel of communication with a patient and then not be able to follow up immediately with a tool for them and their families to be able to do this routinely,” he says.

Many researchers disagree with Owen’s contention that these individuals are conscious. But Owen takes a practical approach to applying the technology, hoping that it will identify patients who might respond to rehabilitation, direct the dosing of analgesics and even explore some patients’ feelings and desires. “Eventually we will be able to provide something that will be beneficial to patients and their families,” he says.

Still, he shies away from asking patients the toughest question of all — whether they wish life support to be ended — saying that it is too early to think about such applications. “The consequences of asking are very complicated, and we need to be absolutely sure that we know what to do with the answers before we go down this road,” he warns.

Lost and found
With short, reddish hair and beard, Owen is a polished speaker who is not afraid of publicity. His home page is a billboard of links to his television and radio appearances. He lectures to scientific and lay audiences with confidence and a touch of defensiveness.

Owen traces the roots of his experiments to the late 1990s, when he was asked to write a review of clinical applications for technologies such as fMRI. He says that he had a “weird crisis of confidence”. Neuroimaging had confirmed a lot of what was known from brain mapping studies, he says, but it was not doing anything new. “We would just tweak a psych test and see what happens,” says Owen. As for real clinical applications: “I realized there weren’t any. We all realized that.”

Owen wanted to find one. He and his colleagues got their chance in 1997, with a 26-year-old patient named Kate Bainbridge. A viral infection had put her in a coma — a condition that generally persists for two to four weeks, after which patients die, recover fully or, in rare cases, slip into a vegetative or a minimally conscious state — a more recently defined category characterized by intermittent hints of conscious activity.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]fMRI axial brain image. Image courtesy of Wikipedia.[end-div]

Addiction: Choice or Disease or Victim of Hijacking?

The debate concerning human addictions of all colors and forms rages on. Some would have us believe that addiction is a simple choice shaped by our free will; others would argue that addiction is a chronic disease. Yet perhaps there is another, more nuanced explanation.

[div class=attrib]From the New York Times:[end-div]

Of all the philosophical discussions that surface in contemporary life, the question of free will — mainly, the debate over whether or not we have it — is certainly one of the most persistent.

That might seem odd, as the average person rarely seems to pause to reflect on whether their choices on, say, where they live, whom they marry, or what they eat for dinner, are their own or the inevitable outcome of a deterministic universe. Still, as James Atlas pointed out last month, the spate of “can’t help yourself” books would indicate that people are in fact deeply concerned with how much of their lives they can control. Perhaps that’s because, upon further reflection, we find that our understanding of free will lurks beneath many essential aspects of our existence.

One particularly interesting variation on this question appears in scientific, academic and therapeutic discussions about addiction. Many times, the question is framed as follows: “Is addiction a disease or a choice?”

The argument runs along these lines: If addiction is a disease, then in some ways it is out of our control and forecloses choices. A disease is a medical condition that develops outside of our control; it is, then, not a matter of choice. In the absence of choice, the addicted person is essentially relieved of responsibility. The addict has been overpowered by her addiction.

The counterargument describes addictive behavior as a choice. People whose use of drugs and alcohol leads to obvious problems but who continue to use them anyway are making choices to do so. Since those choices lead to addiction, blame and responsibility clearly rest on the addict’s shoulders. It then becomes more a matter of free will.

Recent scientific studies on the biochemical responses of the brain are currently tipping the scales toward the more deterministic view — of addiction as a disease. The structure of the brain’s reward system combined with certain biochemical responses and certain environments, they appear to show, cause people to become addicted.

In such studies, and in reports of them to news media, the term “the hijacked brain” often appears, along with other language that emphasizes the addict’s lack of choice in the matter. Sometimes the pleasure-reward system has been “commandeered.” Other times it “goes rogue.” These expressions are often accompanied by the conclusion that there are “addicted brains.”

The word “hijacked” is especially evocative; people often have a visceral reaction to it. I imagine that this is precisely why this term is becoming more commonly used in connection with addiction. But it is important to be aware of the effects of such language on our understanding.

When most people think of a hijacking, they picture a person, sometimes wearing a mask and always wielding some sort of weapon, who takes control of a car, plane or train. The hijacker may not himself drive or pilot the vehicle, but the violence involved leaves no doubt who is in charge. Someone can hijack a vehicle for a variety of reasons, but mostly it boils down to needing to escape or wanting to use the vehicle itself as a weapon in a greater plan. Hijacking is a means to an end; it is always and only oriented to the goals of the hijacker. Innocent victims are ripped from their normal lives by the violent intrusion of the hijacker.

In the “hijacked” view of addiction, the brain is the innocent victim of certain substances — alcohol, cocaine, nicotine or heroin, for example — as well as certain behaviors like eating, gambling or sexual activity. The drugs or the neurochemicals produced by the behaviors overpower and redirect the brain’s normal responses, and thus take control of (hijack) it. For addicted people, that martini or cigarette is the weapon-wielding hijacker who is going to compel certain behaviors.

To do this, drugs like alcohol and cocaine and behaviors like gambling light up the brain’s pleasure circuitry, often bringing a burst of euphoria. Other studies indicate that people who are addicted have lower dopamine and serotonin levels in their brains, which means that it takes more of a particular substance or behavior for them to experience pleasure or to reach a certain threshold of pleasure. People tend to want to maximize pleasure; we tend to do things that bring more of it. We also tend to chase it when it subsides, trying hard to recreate the same level of pleasure we have experienced in the past. It is not uncommon to hear addicts talking about wanting to experience the euphoria of a first high. Often they never reach it, but keep trying. All of this lends credence to the description of the brain as hijacked.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of CNN.[end-div]

Why Daydreaming is Good

Most of us, the editor of theDiagonal included, have known this for a while: letting the mind wander aimlessly is crucial to creativity and problem-solving.

[div class=attrib]From Wired:[end-div]

It’s easy to underestimate boredom. The mental condition, after all, is defined by its lack of stimulation; it’s the mind at its most apathetic. This is why the poet Joseph Brodsky described boredom as a “psychological Sahara,” a cognitive desert “that starts right in your bedroom and spurns the horizon.” The hands of the clock seem to stop; the stream of consciousness slows to a drip. We want to be anywhere but here.

However, as Brodsky also noted, boredom and its synonyms can also become a crucial tool of creativity. “Boredom is your window,” the poet declared. “Once this window opens, don’t try to shut it; on the contrary, throw it wide open.”

Brodsky was right. The secret isn’t boredom per se: It’s how boredom makes us think. When people are immersed in monotony, they automatically lapse into a very special form of brain activity: mind-wandering. In a culture obsessed with efficiency, mind-wandering is often derided as a lazy habit, the kind of thinking we rely on when we don’t really want to think. (Freud regarded mind-wandering as an example of “infantile” thinking.) It’s a sign of procrastination, not productivity.

In recent years, however, neuroscience has dramatically revised our views of mind-wandering. For one thing, it turns out that the mind wanders a ridiculous amount. Last year, the Harvard psychologists Daniel Gilbert and Matthew A. Killingsworth published a fascinating paper in Science documenting our penchant for disappearing down the rabbit hole of our own mind. The scientists developed an iPhone app that contacted 2,250 volunteers at random intervals, asking them about their current activity and levels of happiness. It turns out that people were engaged in mind-wandering 46.9 percent of the time. In fact, the only activity in which their minds were not constantly wandering was love making. They were able to focus for that.

What’s happening inside the brain when the mind wanders? A lot. In 2009, a team led by Kalina Christoff of UBC and Jonathan Schooler of UCSB used “experience sampling” inside an fMRI machine to capture the brain in the midst of a daydream. (This condition is easy to induce: After subjects were given an extremely tedious task, they started to mind-wander within seconds.) Although it’s been known for nearly a decade that mind wandering is a metabolically intense process — your cortex consumes lots of energy when thinking to itself — this study further helped to clarify the sequence of mental events:

Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions—two brain systems that so far have been assumed to work in opposition—suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Two things worth noting here. The first is the reference to the default network. The name is literal: We daydream so easily and effortlessly that it appears to be our default mode of thought. The second is the simultaneous activation in executive and default regions, suggesting that mind wandering isn’t quite as mindless as we’d long imagined. (That’s why it seems to require so much executive activity.) Instead, a daydream seems to exist in the liminal space between sleep dreaming and focused attentiveness, in which we are still awake but not really present.

Last week, a team of Austrian scientists expanded on this result in PLoS ONE. By examining 17 patients with unresponsive wakefulness syndrome (UWS), 8 patients in a minimally conscious state (MCS), and 25 healthy controls, the researchers were able to detect the brain differences along this gradient of consciousness. The key difference was an inability among the most unresponsive patients to “deactivate” their default network. This suggests that these poor subjects were trapped within a daydreaming loop, unable to exercise their executive regions to pay attention to the world outside. (Problems with the deactivation of the default network have also been observed in patients with Alzheimer’s and schizophrenia.) The end result is that their mind’s eye is always focused inwards.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A daydreaming gentleman; from an original 1912 postcard published in Germany. Courtesy of Wikipedia.[end-div]

The Illusion of Free Will

A plethora of recent articles and books from the neuroscience community adds weight to the position that human free will does not exist. Our exquisitely complex brains construct a rather compelling illusion; however, we are just observers, held captive to impulses driven entirely by our biology. And, for that matter, much of this biological determinism is unavailable to our conscious minds.

James Atlas provides a recent summary of current thinking.

[div class=attrib]From the New York Times:[end-div]

WHY are we thinking so much about thinking these days? Near the top of best-seller lists around the country, you’ll find Jonah Lehrer’s “Imagine: How Creativity Works,” followed by Charles Duhigg’s book “The Power of Habit: Why We Do What We Do in Life and Business,” and somewhere in the middle, where it’s held its ground for several months, Daniel Kahneman’s “Thinking, Fast and Slow.” Recently arrived is “Subliminal: How Your Unconscious Mind Rules Your Behavior,” by Leonard Mlodinow.

It’s the invasion of the Can’t-Help-Yourself books.

Unlike most pop self-help books, these are about life as we know it — the one you can change, but only a little, and with a ton of work. Professor Kahneman, who won the Nobel Prize in economic science a decade ago, has synthesized a lifetime’s research in neurobiology, economics and psychology. “Thinking, Fast and Slow” goes to the heart of the matter: How aware are we of the invisible forces of brain chemistry, social cues and temperament that determine how we think and act? Has the concept of free will gone out the window?

These books possess a unifying theme: The choices we make in day-to-day life are prompted by impulses lodged deep within the nervous system. Not only are we not masters of our fate; we are captives of biological determinism. Once we enter the portals of the strange neuronal world known as the brain, we discover that — to put the matter plainly — we have no idea what we’re doing.

Professor Kahneman breaks down the way we process information into two modes of thinking: System 1 is intuitive, System 2 is logical. System 1 “operates automatically and quickly, with little or no effort and no sense of voluntary control.” We react to faces that we perceive as angry faster than to “happy” faces because they contain a greater possibility of danger. System 2 “allocates attention to the effortful mental activities that demand it, including complex computations.” It makes decisions — or thinks it does. We don’t notice when a person dressed in a gorilla suit appears in a film of two teams passing basketballs if we’ve been assigned the job of counting how many times one team passes the ball. We “normalize” irrational data either by organizing it to fit a made-up narrative or by ignoring it altogether.

The effect of these “cognitive biases” can be unsettling: A study of judges in Israel revealed that 65 percent of requests for parole were granted after meals, dropping steadily to zero until the judges’ “next feeding.” “Thinking, Fast and Slow” isn’t prescriptive. Professor Kahneman shows us how our minds work, not how to fiddle with what Gilbert Ryle called the ghost in the machine.

“The Power of Habit” is more proactive. Mr. Duhigg’s thesis is that we can’t change our habits, we can only acquire new ones. Alcoholics can’t stop drinking through willpower alone: they need to alter behavior — going to A.A. meetings instead of bars, for instance — that triggers the impulse to drink. “You have to keep the same cues and rewards as before, and feed the craving by inserting a new routine.”

“The Power of Habit” and “Imagine” belong to a genre that has become increasingly conspicuous over the last few years: the hortatory book, armed with highly sophisticated science, that demonstrates how we can achieve our ambitions despite our sensory cluelessness.

[div class=attrib]Read the entire article following the jump.[end-div]

The Connectome: Slicing and Reconstructing the Brain

[tube]1nm1i4CJGwY[/tube]

[div class=attrib]From the Guardian:[end-div]

There is a macabre brilliance to the machine in Jeff Lichtman’s laboratory at Harvard University that is worthy of a Wallace and Gromit film. In one end goes brain. Out the other comes sliced brain, courtesy of an automated arm that wields a diamond knife. The slivers of tissue drop one after another on to a conveyor belt that zips along with the merry whirr of a cine projector.

Lichtman’s machine is an automated tape-collecting lathe ultramicrotome (Atlum), which, according to the neuroscientist, is the tool of choice for this line of work. It produces long strips of sticky tape with brain slices attached, all ready to be photographed through a powerful electron microscope.

When these pictures are combined into 3D images, they reveal the inner wiring of the organ, a tangled mass of nervous spaghetti. The research by Lichtman and his co-workers has a goal in mind that is so ambitious it is almost unthinkable.

If we are ever to understand the brain in full, they say, we must know how every neuron inside is wired up.

Though fanciful, the payoff could be profound. Map out our “connectome” – following other major “ome” projects such as the genome and transcriptome – and we will lay bare the biological code of our personalities, memories, skills and susceptibilities. Somewhere in our brains is who we are.

To use an understatement heard often from scientists, the job at hand is not trivial. Lichtman’s machine slices brain tissue into exquisitely thin wafers. To turn a 1mm thick slice of brain into neural salami takes six days in a process that yields about 30,000 slices.

But chopping up the brain is the easy part. When Lichtman began this work several years ago, he calculated how long it might take to image every slice of a 1cm mouse brain. The answer was 7,000 years. “When you hear numbers like that, it does make your pulse quicken,” Lichtman said.

The human brain is another story. There are 85bn neurons in the 1.4kg (3lbs) of flesh between our ears. Each has a cell body (grey matter) and long, thin extensions called dendrites and axons (white matter) that reach out and link to others. Most neurons have lots of dendrites that receive information from other nerve cells, and one axon that branches on to other cells and sends information out.

On average, each neuron forms 10,000 connections, through synapses with other nerve cells. Altogether, Lichtman estimates there are between 100tn and 1,000tn connections between neurons.

Unlike the lung, or the kidney, where the whole organ can be understood, more or less, by grasping the role of a handful of repeating physiological structures, the brain is made of thousands of specific types of brain cell that look and behave differently. Their names – Golgi, Betz, Renshaw, Purkinje – read like a roll call of the pioneers of neuroscience.

Lichtman, who is fond of calculations that expose the magnitude of the task he has taken on, once worked out how much computer memory would be needed to store a detailed human connectome.

“To map the human brain at the cellular level, we’re talking about 1m petabytes of information. Most people think that is more than the digital content of the world right now,” he said. “I’d settle for a mouse brain, but we’re not even ready to do that. We’re still working on how to do one cubic millimetre.”
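
The magnitudes Lichtman cites are easy to sanity-check with back-of-the-envelope arithmetic. Below is a minimal sketch in Python that uses only the figures quoted above (85bn neurons, roughly 10,000 synapses per neuron, 30,000 slices per millimetre of tissue, and the 1m-petabyte storage estimate); the numbers are the article’s, and the calculation is purely illustrative.

# Back-of-the-envelope check of the connectome figures quoted above.
# All inputs are taken from the article; the arithmetic is only illustrative.

neurons = 85e9                  # neurons in a human brain
synapses_per_neuron = 1e4       # average connections per neuron
total_connections = neurons * synapses_per_neuron
print(f"Connections: {total_connections:.1e}")           # ~8.5e14, within the quoted 100tn-1,000tn

slices_per_mm = 30_000          # a 1 mm slab of tissue yields ~30,000 slices
slice_thickness_nm = 1e6 / slices_per_mm                 # 1 mm = 1,000,000 nm
print(f"Slice thickness: ~{slice_thickness_nm:.0f} nm")  # roughly 33 nm per slice

storage_petabytes = 1e6         # Lichtman's cellular-level estimate for a human brain
storage_bytes = storage_petabytes * 1e15
print(f"Storage: ~{storage_bytes:.0e} bytes")            # ~1e21 bytes, about a zettabyte

Multiplying 85 billion neurons by roughly 10,000 synapses each gives about 850 trillion connections, comfortably inside the 100tn–1,000tn range Lichtman quotes.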

He says he is about to submit a paper on mapping a minuscule volume of the mouse connectome and is working with a German company on building a multibeam microscope to speed up imaging.

For some scientists, mapping the human connectome down to the level of individual cells is verging on overkill. “If you want to study the rainforest, you don’t need to look at every leaf and every twig and measure its position and orientation. It’s too much detail,” said Olaf Sporns, a neuroscientist at Indiana University, who coined the term “connectome” in 2005.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the Connectome Project / Guardian.[end-div]

Your Brain Today

Progress in neuroscience continues to accelerate, and one of the principal catalysts of this progress is neuroscientist David Eagleman. We excerpt a recent article about Eagleman’s research into, amongst other things, synaesthesia, sensory substitution, time perception, the neurochemical basis for attraction, and consciousness.

[div class=attrib]From the Telegraph:[end-div]

It ought to be quite intimidating, talking to David Eagleman. He is one of the world’s leading neuroscientists, after all, known for his work on time perception, synaesthesia and the use of neurology in criminal justice. But as anyone who has read his best-selling books or listened to his TED talks online will know, he has a gift for communicating complicated ideas in an accessible and friendly way — Brian Cox with an American accent.

He lives in Houston, Texas, with his wife and their two-month-old baby. When we Skype each other, he is sitting in a book-lined study and he doesn’t look as if his nights are being too disturbed by mewling. No bags under his eyes. In fact, with his sideburns and black polo shirt he looks much younger than his 41 years, positively boyish. His enthusiasm for his subject is boyish, too, as he warns me, he “speaks fast”.

He sure does. And he waves his arms around. We are talking about the minute calibrations and almost instantaneous assessments the brain makes when members of the opposite sex meet, one of many brain-related subjects covered in his book Incognito: The Secret Lives of the Brain, which is about to be published in paperback.

“Men are consistently more attracted to women with dilated eyes,” he says. “Because that corresponds with sexual excitement.”

Still, I say, not exactly a romantic discovery, is it? How does this theory go down with his wife? “Well she’s a neuroscientist like me so we joke about it all the time, like when I grow a beard. Women will always say they don’t like beards, but when you do the study it turns out they do, and the reason is it’s a secondary sex characteristic that indicates sexual development, the thing that separates the men from the boys.”

Indeed, according to Eagleman, we mostly run on unconscious autopilot. Our neural systems have been carved by natural selection to solve problems that were faced by our ancestors. Which brings me to another of his books, Why The Net Matters. As the father of children who spend a great deal of their time on the internet, I want to know if he thinks it is changing their brains.

“It certainly is,” he says, “especially in the way we seek information. When we were growing up it was all about ‘just in case’ information, the Battle of Hastings and so on. Now it is ‘just in time’ learning, where a kid looks something up online if he needs to know about it. This means kids today are becoming less good at memorising, but in other ways their method of learning is superior to ours because it targets neurotransmitters in the brain, ones that are related to curiosity, emotional salience and interactivity. So I think there might be some real advantages to where this is going. Kids are becoming faster at searching for information. When you or I read, our eyes scan down the page, but for a Generation-Y kid, their eyes will have a different set of movements, top, then side, then bottom and that is the layout of webpages.”

In many ways Eagleman’s current status as “the poster boy of science’s most fashionable field” (as the neuroscientist was described in a recent New Yorker profile) seems entirely apt given his own upbringing. His mother was a biology teacher, his father a psychiatrist who was often called upon to evaluate insanity pleas. Yet Eagleman says he wasn’t drawn to any of this. “Growing up, I didn’t see my career path coming at all, because in tenth grade I always found biology gross, dissecting rats and frogs. But in college I started reading about the brain and then I found myself consuming anything I could on the subject. I became hooked.”

Eagleman’s mother has described him as an “unusual child”. He wrote his first words at two, and at 12 he was explaining Einstein’s theory of relativity to her. He also liked to ask for a list of 400 random objects then repeat them back from memory, in reverse order. At Rice University, Houston, he majored in electrical engineering, but then took a sabbatical, joined the Israeli army as a volunteer, spent a semester at Oxford studying political science and literature and finally moved to LA to try and become a stand-up comedian. It didn’t work out and so he returned to Rice, this time to study neurolinguistics. After this came his doctorate and his day job as a professor running a laboratory at Baylor College of Medicine, Houston (he does his book writing at night, doesn’t have hobbies and has never owned a television).

I ask if he has encountered any snobbery within the scientific community for being an academic who has “dumbed down” by writing popular science books that spend months on the New York Times bestseller list? “I have to tell you, that was one of my concerns, and I can definitely find evidence of that. Online, people will sometimes say terrible things about me, but they are the exceptions that illustrate a more benevolent rule. I give talks on university campuses and the students there tell me they read my books because they synthesise large swathes of data in a readable way.”

He actually thinks there is an advantage for scientists in making their work accessible to non-scientists. “I have many tens of thousands of neuroscience details in my head and the process of writing about them and trying to explain them to an eighth grader makes them become clearer in my own mind. It crystallises them.”

I tell him that my copy of Incognito is heavily annotated and there is one passage where I have simply written a large exclamation mark. It concerns Eric Weihenmayer who, in 2001, became the first blind person to climb Mount Everest. Today he climbs with a grid of more than six hundred tiny electrodes in his mouth. This device allows him to see with his tongue. Although the tongue is normally a taste organ, its moisture and chemical environment make it a good brain-machine interface when a tingly electrode grid is laid on its surface. The grid translates a video input into patterns of electrical pulses, allowing the tongue to discern qualities usually ascribed to vision such as distance, shape, direction of movement and size.
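
As a rough illustration of the kind of mapping such a sensory-substitution device performs, here is a minimal sketch in Python (using NumPy) that downsamples a grayscale camera frame onto a small electrode grid and rescales each cell to a pulse intensity. The 20×30 grid, the frame size and the 0–255 intensity range are assumptions made purely for illustration; they are not the specifications of the device Weihenmayer uses.

import numpy as np

def frame_to_pulse_grid(frame, grid_rows=20, grid_cols=30, max_intensity=255):
    """Map a grayscale video frame onto a small electrode grid.

    Each grid cell takes the mean brightness of the image patch it covers,
    rescaled to a pulse intensity in [0, max_intensity]. The grid size and
    intensity range are illustrative assumptions, not device specifications.
    """
    height, width = frame.shape
    row_patches = np.array_split(np.arange(height), grid_rows)
    col_patches = np.array_split(np.arange(width), grid_cols)
    grid = np.empty((grid_rows, grid_cols))
    for i, rows in enumerate(row_patches):
        for j, cols in enumerate(col_patches):
            grid[i, j] = frame[np.ix_(rows, cols)].mean()
    return (grid / 255.0 * max_intensity).astype(np.uint8)

# Example: a synthetic 240x320 frame with a bright square toward the right.
frame = np.zeros((240, 320), dtype=np.uint8)
frame[80:160, 200:280] = 255
pulses = frame_to_pulse_grid(frame)
print(pulses.shape)                  # (20, 30) electrode grid
print(pulses.max(), pulses.min())    # strongest pulses where the bright object sits

The point of the exercise is only to show why distance, shape and direction of movement survive the translation: spatial structure in the image becomes spatial structure in the pulse pattern, which the brain can learn to read as vision.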

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of ALAMY / Telegraph.[end-div]

Cocktail Party Science and Multitasking


The hit drama Mad Men shows us that cocktail parties can be fun — colorful drinks and colorful conversations with a host of very colorful characters. Yet cocktail parties also highlight one of our limitations: the inability to multitask. We are single-threaded animals, despite the constant and simultaneous demands on our attention from all directions and on all our senses.

Melinda Beck over at the WSJ Health Journal summarizes recent research that shows the deleterious effects of our attempts to multitask — why it’s so hard and why it’s probably not a good idea anyway, especially while driving.

[div class=attrib]From the Wall Street Journal:[end-div]

You’re at a party. Music is playing. Glasses are clinking. Dozens of conversations are driving up the decibel level. Yet amid all those distractions, you can zero in on the one conversation you want to hear.

This ability to hyper-focus on one stream of sound amid a cacophony of others is what researchers call the “cocktail-party effect.” Now, scientists at the University of California in San Francisco have pinpointed where that sound-editing process occurs in the brain—in the auditory cortex just behind the ear, not in areas of higher thought. The auditory cortex boosts some sounds and turns down others so that when the signal reaches the higher brain, “it’s as if only one person was speaking alone,” says principal investigator Edward Chang.

These findings, published in the journal Nature last week, underscore why people aren’t very good at multitasking—our brains are wired for “selective attention” and can focus on only one thing at a time. That innate ability has helped humans survive in a world buzzing with visual and auditory stimulation. But we keep trying to push the limits with multitasking, sometimes with tragic consequences. Drivers talking on cellphones, for example, are four times as likely to get into traffic accidents as those who aren’t.

Many of those accidents are due to “inattentional blindness,” in which people can, in effect, turn a blind eye to things they aren’t focusing on. Images land on our retinas and are either boosted or played down in the visual cortex before being passed to the brain, just as the auditory cortex filters sounds, as shown in the Nature study last week. “It’s a push-pull relationship—the more we focus on one thing, the less we can focus on others,” says Diane M. Beck, an associate professor of psychology at the University of Illinois.

That people can be completely oblivious to things in their field of vision was demonstrated famously in the “Invisible Gorilla experiment” devised at Harvard in the 1990s. Observers are shown a short video of youths tossing a basketball and asked to count how often the ball is passed by those wearing white. Afterward, the observers are asked several questions, including, “Did you see the gorilla?” Typically, about half the observers failed to notice that someone in a gorilla suit walked through the scene. They’re usually flabbergasted because they’re certain they would have noticed something like that.

“We largely see what we expect to see,” says Daniel Simons, one of the study’s creators and now a professor of psychology at the University of Illinois. As he notes in his subsequent book, “The Invisible Gorilla,” the more attention a task demands, the less attention we can pay to other things in our field of vision. That’s why pilots sometimes fail to notice obstacles on runways and radiologists may overlook anomalies on X-rays, especially in areas they aren’t scrutinizing.

And it isn’t just that sights and sounds compete for the brain’s attention. All the sensory inputs vie to become the mind’s top priority.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Getty Images / Wall Street Journal.[end-div]

Runner’s High: How and Why

There is a small but mounting body of evidence that supports the notion of the so-called Runner’s High, a state of euphoria attained by athletes during and immediately following prolonged and vigorous exercise. But while the neurochemical basis for this may soon be understood, little is known about why it happens. More on the how and the why from Scicurious Brain.

[div class=attrib]From Scicurious over at Scientific American:[end-div]

I just came back from an 11 mile run. The wind wasn’t awful like it usually is, the sun was out, and I was at peace with the world, and right now, I still am. Later, I know my knees will be yelling at me and my body will want nothing more than to lie down. But right now? Right now I feel FANTASTIC.

What I am in the happy, zen-like, yet curiously energetic throes of is what is popularly known as the “runner’s high”. The runner’s high is a state of bliss achieved by athletes (not just runners) during and immediately following prolonged and intense exercise. It can be an extremely powerful, emotional experience. Many athletes will say they get it (and indeed, some would say we MUST get it, because otherwise why would we keep running 26.2 miles at a stretch?), but what IS it exactly? For some people it’s highly emotional, for some it’s peaceful, and for some it’s a burst of energy. And there are plenty of other people who don’t appear to get it at all. What causes it? Why do some people get it and others don’t?

Well, the short answer is that we don’t know. As I was coming back from my run, blissful and emotive enough that the sight of a small puppy could make me weepy with joy, I began to wonder myself…what is up with me? As I re-hydrated and began to sift through the literature, I found…well, not much. But what I did find suggests two competing hypotheses: the endogenous opioid hypothesis and the cannabinoid hypothesis.

The endogenous opioid hypothesis

This hypothesis of the runner’s high is based on a study showing that endorphins, endogenous opioids, are released during intense physical activity. When you think of the word “opioids”, you probably think of addictive drugs like opium or morphine. But your body also produces its own versions of these chemicals (called ‘endogenous’ because they are produced within the organism), usually in times of physical stress. Endogenous opioids can bind to the opioid receptors in your brain, which affect all sorts of systems. Opioid receptor activation can help to blunt pain, something that is surely present at the end of a long workout. Opioid receptors can also act in reward-related areas such as the striatum and nucleus accumbens. There, they can inhibit the release of inhibitory transmitters and increase the release of dopamine, making strenuous physical exercise more pleasurable. Endogenous opioid production has been shown to occur during the runner’s high in humans as well as after intense exercise in rats.

The cannabinoid hypothesis

Not only does the brain release its own forms of opioid chemicals, it also releases its own form of cannabinoids. When we usually talk about cannabinoids, we think about things like marijuana or the newer synthetic cannabinoids, which act upon cannabinoid receptors in the brain to produce their effects. But we also produce endogenous cannabinoids (called endocannabinoids), such as anandamide, which also act upon those same receptors. Studies have shown that deletion of cannabinoid receptor 1 decreases wheel running in mice, and that intense exercise causes increases in anandamide in humans.

Not only how, but why?

There isn’t a lot out there on HOW the runner’s high might occur, but there is even less on WHY. There are several hypotheses out there, but none of them, as far as I can tell, are yet supported by evidence. First, there is the hypothesis of a placebo effect due to achieving goals. The idea is that you expect yourself to achieve a difficult goal, and then feel great when you do. While the runner’s high does have some things in common with goal achievement, this doesn’t really explain why people get it on training runs or regular runs, when they are not necessarily pushing themselves extremely hard.

[div class=attrib]Read the entire article after the jump (no pun intended).[end-div]

[div class=attrib]Image courtesy of Cincinnati.com.[end-div]

Inward Attention and Outward Attention

New studies show that our brains use two fundamentally different neurological pathways when we focus on our external environment and when we attend to our internal world. Researchers believe this could have important consequences, from finding new methods of managing stress to treating some types of mental illness.

[div class=attrib]From Scientific American:[end-div]

What’s the difference between noticing the rapid beat of a popular song on the radio and noticing the rapid rate of your heart when you see your crush? Between noticing the smell of fresh baked bread and noticing that you’re out of breath? Both require attention. However, the direction of that attention differs: it is either turned outward, as in the case of noticing a stop sign or a tap on your shoulder, or turned inward, as in the case of feeling full or feeling love.

Scientists have long held that attention – regardless of its object – involves mostly the prefrontal cortex, that frontal region of the brain responsible for complex thought and unique to humans and advanced mammals. A recent study by Norman Farb from the University of Toronto published in Cerebral Cortex, however, suggests a radically new view: there are different ways of paying attention. While the prefrontal cortex may indeed be specialized for attending to external information, older and more buried parts of the brain, including the “insula” and “posterior cingulate cortex”, appear to be specialized in observing our internal landscape.

Most of us prioritize externally oriented attention. When we think of attention, we often think of focusing on something outside of ourselves. We “pay attention” to work, the TV, our partner, traffic, or anything that engages our senses. However, a whole other world exists that most of us are far less aware of: an internal world, with its varied landscape of emotions, feelings, and sensations. Yet it is often the internal world that determines whether we are having a good day or not, whether we are happy or unhappy. That’s why we can feel angry despite beautiful surroundings or feel perfectly happy despite being stuck in traffic. For this reason, perhaps, this newly discovered pathway of attention may hold the key to greater well-being.

Although this internal world of feelings and sensations dominates perception in babies, it becomes increasingly foreign and distant as we learn to prioritize the outside world. Because we don’t pay as much attention to our internal world, it often takes us by surprise. We often only tune into our body when it rings an alarm bell – that we’re extremely thirsty, hungry, exhausted or in pain. A flush of anger, a choked-up feeling of sadness, or the warmth of love in our chest often appears to come out of the blue.

In a collaboration with professors Zindel Segal and Adam Anderson at the University of Toronto, the study compared exteroceptive (externally focused) attention to interoceptive (internally focused) attention in the brain. Participants were instructed to either focus on the sensation of their breath (interoceptive attention) or to focus their attention on words on a screen (exteroceptive attention).  Contrary to the conventional assumption that all attention relies upon the frontal lobe of the brain, the researchers found that this was true of only exteroceptive attention; interoceptive attention used evolutionarily older parts of the brain more associated with sensation and integration of physical experience.

[div class=attrib]Read the entire article after the jump.[end-div]

The Benefits of Bilingualism

[div class=attrib]From the New York Times:[end-div]

SPEAKING two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.

This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development.

They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.

Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one marked with a blue square and the other marked with a red circle.

In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.

The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while driving.

Why does the tussle between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage stemmed primarily from an ability for inhibition that was honed by the exercise of suppressing one language system: this suppression, it was thought, would help train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.

The key difference between bilinguals and monolinguals may be more basic: a heightened ability to monitor the environment. “Bilinguals have to switch languages quite often — you may talk to your father in one language and to your mother in another language,” says Albert Costa, a researcher at the University of Pompeu Fabra in Spain. “It requires keeping track of changes around you in the same way that we monitor our surroundings when driving.” In a study comparing German-Italian bilinguals with Italian monolinguals on monitoring tasks, Mr. Costa and his colleagues found that the bilingual subjects not only performed better, but they also did so with less activity in parts of the brain involved in monitoring, indicating that they were more efficient at it.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image courtesy of Futurity.org.[end-div]

Synaesthesia: Smell the Music

[div class=attrib]From the Economist:[end-div]

THAT some people make weird associations between the senses has been acknowledged for over a century. The condition has even been given a name: synaesthesia. Odd as it may seem to those not so gifted, synaesthetes insist that spoken sounds and the symbols which represent them give rise to specific colours or that individual musical notes have their own hues.

Yet there may be a little of this cross-modal association in everyone. Most people agree that loud sounds are “brighter” than soft ones. Likewise, low-pitched sounds are reminiscent of large objects and high-pitched ones evoke smallness. Anne-Sylvie Crisinel and Charles Spence of Oxford University think something similar is true between sound and smell.

Ms Crisinel and Dr Spence wanted to know whether an odour sniffed from a bottle could be linked to a specific pitch, and even a specific instrument. To find out, they asked 30 people to inhale 20 smells—ranging from apple to violet and wood smoke—which came from a teaching kit for wine-tasting. After giving each sample a good sniff, volunteers had to click their way through 52 sounds of varying pitches, played by piano, woodwind, string or brass, and identify which best matched the smell. The results of this study, to be published later this month in Chemical Senses, are intriguing.

The researchers’ first finding was that the volunteers did not think their request utterly ridiculous. It rather made sense, they told the researchers afterwards. The second was that there was significant agreement between volunteers. Sweet and sour smells were rated as higher-pitched, smoky and woody ones as lower-pitched. Blackberry and raspberry were very piano. Vanilla had elements of both piano and woodwind. Musk was strongly brass.

It is not immediately clear why people employ their musical senses in this way to help their assessment of a smell. But gone are the days when science assumed each sense worked in isolation. People live, say Dr Spence and Ms Crisinel, in a multisensory world and their brains tirelessly combine information from all sources to make sense, as it were, of what is going on around them. Nor is this response restricted to humans. Studies of the brains of mice show that regions involved in olfaction also react to sound.

Taste, too, seems linked to hearing. Ms Crisinel and Dr Spence have previously established that sweet and sour tastes, like smells, are linked to high pitch, while bitter tastes bring lower pitches to mind. Now they have gone further. In a study that will be published later this year they and their colleagues show how altering the pitch and instruments used in background music can alter the way food tastes.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of cerebromente.org.br.[end-div]

Time for an Over-the-Counter Morality Pill?

Stories of people who risk life and limb to help a stranger, and of those who turn a blind eye, are as current as they are ancient. Almost daily, the 24-hour news cycle carries a heartwarming story of someone doing good for another; and seemingly just as often comes a story of indifference. Social and psychological researchers have studied this behavior in humans and animals for decades. However, only recently has progress been made in identifying some of the underlying factors. Peter Singer, a professor of bioethics at Princeton University, and researcher Agata Sagan recap some of the current understanding.

All of this leads to a conundrum: would it be ethical to market a “morality” pill that would make us do more good more often?

[div class=attrib]From the New York Times:[end-div]

Last October, in Foshan, China, a 2-year-old girl was run over by a van. The driver did not stop. Over the next seven minutes, more than a dozen people walked or bicycled past the injured child. A second truck ran over her. Eventually, a woman pulled her to the side, and her mother arrived. The child died in a hospital. The entire scene was captured on video and caused an uproar when it was shown by a television station and posted online. A similar event occurred in London in 2004, as have others, far from the lens of a video camera.

Yet people can, and often do, behave in very different ways.

A news search for the words “hero saves” will routinely turn up stories of bystanders braving oncoming trains, swift currents and raging fires to save strangers from harm. Acts of extreme kindness, responsibility and compassion are, like their opposites, nearly universal.

Why are some people prepared to risk their lives to help a stranger when others won’t even stop to dial an emergency number?

Scientists have been exploring questions like this for decades. In the 1960s and early ’70s, famous experiments by Stanley Milgram and Philip Zimbardo suggested that most of us would, under specific circumstances, voluntarily do great harm to innocent people. During the same period, John Darley and C. Daniel Batson showed that even some seminary students on their way to give a lecture about the parable of the Good Samaritan would, if told that they were running late, walk past a stranger lying moaning beside the path. More recent research has told us a lot about what happens in the brain when people make moral decisions. But are we getting any closer to understanding what drives our moral behavior?

Here’s what much of the discussion of all these experiments missed: Some people did the right thing. A recent experiment (about which we have some ethical reservations) at the University of Chicago seems to shed new light on why.

Researchers there took two rats who shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could eat up all of a quantity of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.

Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

[div class=attrib]Read more here.[end-div]

Inside the Weird Teenage Brain

[div class=attrib]From the Wall Street Journal:[end-div]

“What was he thinking?” It’s the familiar cry of bewildered parents trying to understand why their teenagers act the way they do.

How does the boy who can thoughtfully explain the reasons never to drink and drive end up in a drunken crash? Why does the girl who knows all about birth control find herself pregnant by a boy she doesn’t even like? What happened to the gifted, imaginative child who excelled through high school but then dropped out of college, drifted from job to job and now lives in his parents’ basement?

Adolescence has always been troubled, but for reasons that are somewhat mysterious, puberty is now kicking in at an earlier and earlier age. A leading theory points to changes in energy balance as children eat more and move less.

At the same time, first with the industrial revolution and then even more dramatically with the information revolution, children have come to take on adult roles later and later. Five hundred years ago, Shakespeare knew that the emotionally intense combination of teenage sexuality and peer-induced risk could be tragic—witness “Romeo and Juliet.” But, on the other hand, if not for fate, 13-year-old Juliet would have become a wife and mother within a year or two.

Our Juliets (as parents longing for grandchildren will recognize with a sigh) may experience the tumult of love for 20 years before they settle down into motherhood. And our Romeos may be poetic lunatics under the influence of Queen Mab until they are well into graduate school.

What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.

The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again.

The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.

Recent studies in the neuroscientist B.J. Casey’s lab at Cornell University suggest that adolescents aren’t reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

From an evolutionary point of view, this all makes perfect sense. One of the most distinctive evolutionary features of human beings is our unusually long, protected childhood. Human children depend on adults for much longer than those of any other primate. That long protected period also allows us to learn much more than any other animal. But eventually, we have to leave the safe bubble of family life, take what we learned as children and apply it to the real adult world.

Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.

[div class=attrib]Read more here.[end-div]

The Unconscious Mind Boosts Creativity

[div class=attrib]From Miller-McCune:[end-div]

New research finds we’re better able to identify genuinely creative ideas when they’ve emerged from the unconscious mind.

Truly creative ideas are both highly prized and, for most of us, maddeningly elusive. If our best efforts produce nothing brilliant, we’re often advised to put aside the issue at hand and give our unconscious minds a chance to work.

Newly published research suggests that is indeed a good idea — but not for the reason you might think.

A study from the Netherlands finds allowing ideas to incubate in the back of the mind is, in a narrow sense, overrated. People who let their unconscious minds take a crack at a problem were no more adept at coming up with innovative solutions than those who consciously deliberated over the dilemma.

But they did perform better on the vital second step of this process: determining which of their ideas was the most creative. That realization provides essential information; without it, how do you decide which solution you should actually try to implement?

Given the value of discerning truly fresh ideas, “we can conclude that the unconscious mind plays a vital role in creative performance,” a research team led by Simone Ritter of the Radboud University Behavioral Science Institute writes in the journal Thinking Skills and Creativity.

In the first of two experiments, 112 university students were given two minutes to come up with creative ideas to an everyday problem: how to make the time spent waiting in line at a cash register more bearable. Half the participants went at it immediately, while the others first spent two minutes performing a distracting task — clicking on circles that appeared on a computer screen. This allowed time for ideas to percolate outside their conscious awareness.

After writing down as many ideas as they could think of, they were asked to choose which of their notions was the most creative. Participants were scored by the number of ideas they came up with, the creativity level of those ideas (as measured by trained raters), and whether their perception of their most innovative idea coincided with that of the raters.

The two groups scored evenly on both the number of ideas generated and the average creativity of those ideas. But those who had been distracted, and thus had ideas spring from their unconscious minds, were better at selecting their most creative concept.

[div class=attrib]Read the entire article here.[end-div]

Crossword Puzzles and Cognition

[div class=attrib]From the New Scientist:[end-div]

TACKLING a crossword can crowd the tip of your tongue. You know that you know the answers to 3 down and 5 across, but the words just won’t come out. Then, when you’ve given up and moved on to another clue, comes blessed relief. The elusive answer suddenly occurs to you, crystal clear.

The processes leading to that flash of insight can illuminate many of the human mind’s curious characteristics. Crosswords can reflect the nature of intuition, hint at the way we retrieve words from our memory, and reveal a surprising connection between puzzle solving and our ability to recognise a human face.

“What’s fascinating about a crossword is that it involves many aspects of cognition that we normally study piecemeal, such as memory search and problem solving, all rolled into one ball,” says Raymond Nickerson, a psychologist at Tufts University in Medford, Massachusetts. In a paper published earlier this year, he brought profession and hobby together by analysing the mental processes of crossword solving (Psychonomic Bulletin and Review, vol 18, p 217).

1 across: “You stinker!” – audible cry that allegedly marked displacement activity (6)

Most of our mental machinations take place pre-consciously, with the results dropping into our conscious minds only after they have been decided elsewhere in the brain. Intuition plays a big role in solving a crossword, Nickerson observes. Indeed, sometimes your pre-conscious mind may be so quick that it produces the goods instantly.

At other times, you might need to take a more methodical approach and consider possible solutions one by one, perhaps listing synonyms of a word in the clue.

Even if your list doesn’t seem to make much sense, it might reflect the way your pre-conscious mind is homing in on the solution. Nickerson points to work in the 1990s by Peter Farvolden at the University of Toronto in Canada, who gave his subjects four-letter fragments of seven-letter target words (as may happen in some crossword layouts, especially in the US, where many words overlap). While his volunteers attempted to work out the target, they were asked to give any other word that occurred to them in the meantime. The words tended to be associated in meaning with the eventual answer, hinting that the pre-conscious mind solves a problem in steps.

Should your powers of deduction fail you, it may help to let your mind chew over the clue while your conscious attention is elsewhere. Studies back up our everyday experience that a period of incubation can lead you to the eventual “aha” moment. Don’t switch off entirely, though. For verbal problems, a break from the clue seems to be more fruitful if you occupy yourself with another task, such as drawing a picture or reading (Psychological Bulletin, vol 135, p 94).

So if 1 across has you flummoxed, you could leave it and take a nice bath, or better still read a novel. Or just move on to the next clue.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Newspaper crossword puzzle. Courtesy of Polytechnic West.[end-div]

Can Anyone Say “Neuroaesthetics”?

As in all other branches of science, there seem to be fascinating new theories, research and discoveries in neuroscience on a daily, if not hourly, basis. With this in mind, brain and cognitive researchers have recently turned their attention to the science of art, or more specifically to the question “how does the human brain appreciate art?” Yes, welcome to the world of “neuroaesthetics”.

[div class=attrib]From Scientific American:[end-div]

The notion of “the aesthetic” is a concept from the philosophy of art of the 18th century according to which the perception of beauty occurs by means of a special process distinct from the appraisal of ordinary objects. Hence, our appreciation of a sublime painting is presumed to be cognitively distinct from our appreciation of, say, an apple. The field of “neuroaesthetics” has adopted this distinction between art and non-art objects by seeking to identify brain areas that specifically mediate the aesthetic appreciation of artworks.

However, studies from neuroscience and evolutionary biology challenge this separation of art from non-art. Human neuroimaging studies have convincingly shown that the brain areas involved in aesthetic responses to artworks overlap with those that mediate the appraisal of objects of evolutionary importance, such as the desirability of foods or the attractiveness of potential mates. Hence, it is unlikely that there are brain systems specific to the appreciation of artworks; instead there are general aesthetic systems that determine how appealing an object is, be that a piece of cake or a piece of music.

We set out to understand which parts of the brain are involved in aesthetic appraisal. We gathered 93 neuroimaging studies of vision, hearing, taste and smell, and used statistical analyses to determine which brain areas were most consistently activated across these 93 studies. We focused on studies of positive aesthetic responses, and left out the sense of touch, because there were not enough studies to arrive at reliable conclusions.

The results showed that the most important part of the brain for aesthetic appraisal was the anterior insula, a part of the brain that sits within one of the deep folds of the cerebral cortex. This was a surprise. The anterior insula is typically associated with emotions of negative quality, such as disgust and pain, making it an unusual candidate for being the brain’s “aesthetic center.” Why would a part of the brain known to be important for the processing of pain and disgust turn out to be the most important area for the appreciation of art?

[div class=attrib]Read entire article here.[end-div]

[div class=attrib]Image: The Birth of Venus by Sandro Botticelli. Courtesy of Wikipedia.[end-div]

The Mystery of Anaesthesia

Contemporary medical and surgical procedures have been completely transformed through the use of patient anaesthesia. Prior to the first use of diethyl ether as an anaesthetic in the United States in 1842, surgery, even for minor ailments, was often a painful process of last resort.

Nowadays the efficacy of anaesthesia is without question. Yet despite the development of ever more sophisticated compounds and methods of administration, little is known about how anaesthesia actually works.

Linda Geddes over at New Scientist has a fascinating article reviewing recent advancements in our understanding of anaesthesia, and its relevance in furthering our knowledge of consciousness in general.

[div class=attrib]From the New Scientist:[end-div]

I have had two operations under general anaesthetic this year. On both occasions I awoke with no memory of what had passed between the feeling of mild wooziness and waking up in a different room. Both times I was told that the anaesthetic would make me feel drowsy, I would go to sleep, and when I woke up it would all be over.

What they didn’t tell me was how the drugs would send me into the realms of oblivion. They couldn’t. The truth is, no one knows.

The development of general anaesthesia has transformed surgery from a horrific ordeal into a gentle slumber. It is one of the commonest medical procedures in the world, yet we still don’t know how the drugs work. Perhaps this isn’t surprising: we still don’t understand consciousness, so how can we comprehend its disappearance?

That is starting to change, however, with the development of new techniques for imaging the brain or recording its electrical activity during anaesthesia. “In the past five years there has been an explosion of studies, both in terms of consciousness, but also how anaesthetics might interrupt consciousness and what they teach us about it,” says George Mashour, an anaesthetist at the University of Michigan in Ann Arbor. “We’re at the dawn of a golden era.”

Consciousness has long been one of the great mysteries of life, the universe and everything. It is something experienced by every one of us, yet we cannot even agree on how to define it. How does the small sac of jelly that is our brain take raw data about the world and transform it into the wondrous sensation of being alive? Even our increasingly sophisticated technology for peering inside the brain has, disappointingly, failed to reveal a structure that could be the seat of consciousness.

Altered consciousness doesn’t only happen under a general anaesthetic of course – it occurs whenever we drop off to sleep, or if we are unlucky enough to be whacked on the head. But anaesthetics do allow neuroscientists to manipulate our consciousness safely, reversibly and with exquisite precision.

It was a Japanese surgeon who performed the first known surgery under anaesthetic, in 1804, using a mixture of potent herbs. In the west, the first operation under general anaesthetic took place at Massachusetts General Hospital in 1846. A flask of sulphuric ether was held close to the patient’s face until he fell unconscious.

Since then a slew of chemicals have been co-opted to serve as anaesthetics, some inhaled, like ether, and some injected. The people who gained expertise in administering these agents developed into their own medical specialty. Although long overshadowed by the surgeons who patch you up, the humble “gas man” does just as important a job, holding you in the twilight between life and death.

Consciousness may often be thought of as an all-or-nothing quality – either you’re awake or you’re not – but as I experienced, there are different levels of anaesthesia (see diagram). “The process of going into and out of general anaesthesia isn’t like flipping a light switch,” says Mashour. “It’s more akin to a dimmer switch.”

A typical subject first experiences a state similar to drunkenness, which they may or may not be able to recall later, before falling unconscious, which is usually defined as failing to move in response to commands. As they progress deeper into the twilight zone, they now fail to respond to even the penetration of a scalpel – which is the point of the exercise, after all – and at the deepest levels may need artificial help with breathing.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Replica of the inhaler used by William T. G. Morton in 1846 in the first public demonstration of surgery using ether. Courtesy of Wikipedia. [end-div]

Book Review: Thinking, Fast and Slow by Daniel Kahneman

In his new book, Thinking, Fast and Slow, Daniel Kahneman brings together for the first time his decades of groundbreaking research and profound thinking in social psychology and cognitive science. He presents his current understanding of judgment and decision making and offers insight into how we make choices in our daily lives. Importantly, Kahneman describes how we can identify and overcome the cognitive biases that frequently lead us astray. This is an important work by one of our leading thinkers.

[div class=attrib]From Skeptic:[end-div]

The ideas of the Princeton University Psychologist Daniel Kahneman, recipient of the Nobel Prize in Economic Sciences for his seminal work that challenged the rational model of judgment and decision making, have had a profound and widely regarded impact on psychology, economics, business, law and philosophy. Until now, however, he has never brought together his many years of research and thinking in one book. In the highly anticipated Thinking, Fast and Slow, Kahneman introduces the “machinery of the mind.” Two systems drive the way we think and make choices: System One is fast, intuitive, and emotional; System Two is slower, more deliberative, and more logical. Examining how both systems function within the mind, Kahneman exposes the extraordinary capabilities and also the faults and biases of fast thinking, and the pervasive influence of intuitive impressions on our thoughts and our choices. Kahneman shows where we can trust our intuitions and how we can tap into the benefits of slow thinking. He offers practical and enlightening insights into how choices are made in both our business and personal lives, and how we can guard against the mental glitches that often get us into trouble. Kahneman will change the way you think about thinking.

[div class=attrib]Image: Thinking, Fast and Slow, Daniel Kahneman. Courtesy of Publishers Weekly.[end-div]

Movies in the Mind: A Great Leap in Brain Imaging

A common premise of science fiction movies featuring “mad scientists”: a computer reconstructs video images from someone’s thoughts via a brain-scanning device. Yet this is now no longer the realm of fantasy. Researchers from the University of California at Berkeley have successfully decoded and reconstructed people’s dynamic visual experiences – in this case, watching Hollywood movie trailers – using functional magnetic resonance imaging (fMRI) and computer simulation models.

Watch the stunning video clip below showing side-by-side movies of what a volunteer was actually watching and a computer reconstruction of fMRI data from the same volunteer.

[youtube]nsjDnYxJ0bo[/youtube]

The results are a rudimentary first step, with the technology requiring decades of refinement before the fiction of movies, such as Brainstorm, becomes a closer reality. However, this groundbreaking research nonetheless paves the way to a future of tremendous promise in brain science. Imagine the ability to reproduce and share images of our dreams and memories, or peering into the brain of a comatose patient.

[div class=attrib]More from the UC-Berkeley article here.[end-div]

The Teen Brain: Work In Progress or Adaptive Network?

[div class=attrib]From Wired:[end-div]

Ever since the late 1990s, when researchers discovered that the human brain takes until our mid-20s to fully develop — far longer than previously thought — the teen brain has been getting a bad rap. Teens, the emerging dominant narrative insisted, were “works in progress” whose “immature brains” left them in a state “akin to mental retardation” — all titles from prominent papers or articles about this long developmental arc.

In a National Geographic feature to be published next week, however, I highlight a different take: A growing view among researchers that this prolonged developmental arc is less a matter of delayed development than prolonged flexibility. This account of the adolescent brain — call it the “adaptive adolescent” meme rather than the “immature brain” meme — “casts the teen less as a rough work than as an exquisitely sensitive, highly adaptive creature wired almost perfectly for the job of moving from the safety of home into the complicated world outside.” The teen brain, in short, is not dysfunctional; it’s adaptive.

Carl Zimmer over at Discover gives us some further interesting insights into recent studies of teen behavior.

[div class=attrib]From Discover:[end-div]

Teenagers are a puzzle, and not just to their parents. When kids pass from childhood to adolescence their mortality rate doubles, despite the fact that teenagers are stronger and faster than children as well as more resistant to disease. Parents and scientists alike abound with explanations. It is tempting to put it down to plain stupidity: Teenagers have not yet learned how to make good choices. But that is simply not true. Psychologists have found that teenagers are about as adept as adults at recognizing the risks of dangerous behavior. Something else is at work.

Scientists are finally figuring out what that “something” is. Our brains have networks of neurons that weigh the costs and benefits of potential actions. Together these networks calculate how valuable things are and how far we’ll go to get them, making judgments in hundredths of a second, far from our conscious awareness. Recent research reveals that teen brains go awry because they weigh those consequences in peculiar ways.

… Neuroscientist B. J. Casey and her colleagues at the Sackler Institute of the Weill Cornell Medical College believe the unique way adolescents place value on things can be explained by a biological oddity. Within our reward circuitry we have two separate systems, one for calculating the value of rewards and another for assessing the risks involved in getting them. And they don’t always work together very well.

… The trouble with teens, Casey suspects, is that they fall into a neurological gap. The rush of hormones at puberty helps drive the reward-system network toward maturity, but those hormones do nothing to speed up the cognitive control network. Instead, cognitive control slowly matures through childhood, adolescence, and into early adulthood. Until it catches up, teenagers are stuck with strong responses to rewards without much of a compensating response to the associated risks.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Kitra Cahana, National Geographic.[end-div]

Free Will: An Illusion?

Neuroscientists continue to find interesting experimental evidence that we do not have free will. Many philosophers continue to dispute this notion and cite inconclusive results and lack of holistic understanding of decision-making on the part of brain scientists. An article by Kerri Smith over at Nature lays open this contentious and fascinating debate.

[div class=attrib]From Nature:[end-div]

The experiment helped to change John-Dylan Haynes’s outlook on life. In 2007, Haynes, a neuroscientist at the Bernstein Center for Computational Neuroscience in Berlin, put people into a brain scanner in which a display screen flashed a succession of random letters. He told them to press a button with either their right or left index fingers whenever they felt the urge, and to remember the letter that was showing on the screen when they made the decision. The experiment used functional magnetic resonance imaging (fMRI) to reveal brain activity in real time as the volunteers chose to use their right or left hands. The results were quite a surprise.

“The first thought we had was ‘we have to check if this is real’,” says Haynes. “We came up with more sanity checks than I’ve ever seen in any other study before.”

The conscious decision to push the button was made about a second before the actual act, but the team discovered that a pattern of brain activity seemed to predict that decision by as many as seven seconds. Long before the subjects were even aware of making a choice, it seems, their brains had already decided.

As humans, we like to think that our decisions are under our conscious control — that we have free will. Philosophers have debated that concept for centuries, and now Haynes and other experimental neuroscientists are raising a new challenge. They argue that consciousness of a decision may be a mere biochemical afterthought, with no influence whatsoever on a person’s actions. According to this logic, they say, free will is an illusion. “We feel we choose, but we don’t,” says Patrick Haggard, a neuroscientist at University College London.

You may have thought you decided whether to have tea or coffee this morning, for example, but the decision may have been made long before you were aware of it. For Haynes, this is unsettling. “I’ll be very honest, I find it very difficult to deal with this,” he says. “How can I call a will ‘mine’ if I don’t even know when it occurred and what it has decided to do?”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Nature.[end-div]

Sleep: Defragmenting the Brain

[div class=attrib]From Neuroskeptic:[end-div]

After a period of heavy use, hard disks tend to get ‘fragmented’. Data gets written all over random parts of the disk, and it gets inefficient to keep track of it all.

That’s why you need to run a defragmentation program occasionally. Ideally, you do this overnight, while you’re asleep, so it doesn’t stop you from using the computer.

A new paper from some Stanford neuroscientists argues that the function of sleep is to reorganize neural connections – a bit like a disk defrag for the brain – although it’s also a bit like compressing files to make more room, and a bit like a system reset: Synaptic plasticity in sleep: learning, homeostasis and disease

The basic idea is simple. While you’re awake, you’re having experiences, and your brain is forming memories. Memory formation involves a process called long-term potentiation (LTP) which is essentially the strengthening of synaptic connections between nerve cells.

Yet if LTP is strengthening synapses, and we’re learning all our lives, wouldn’t the synapses eventually hit a limit? Couldn’t they max out, so that they could never get any stronger?

Worse, the synapses that strengthen during memory are primarily glutamate synapses – and these are dangerous. Glutamate is a common neurotransmitter, and it’s even a flavouring, but it’s also a toxin.

Too much glutamate damages the very cells that receive the messages. Rather like how sound is useful for communication, but stand next to a pneumatic drill for an hour, and you’ll go deaf.

So, if our brains were constantly forming stronger glutamate synapses, we might eventually run into serious problems. This is why we sleep, according to the new paper. Indeed, sleep deprivation is harmful to health, and this theory would explain why.

[div class=attrib]More from theSource here.[end-div]

How Beauty? Why Beauty?

A recent study by Tomohiro Ishizu and Semir Zeki from University College London places the seat of our sense of beauty in the medial orbitofrontal cortex (mOFC). Not very romantic, of course, but it is thoroughly reasonable that this compound emotion would be found in an area of the brain linked with reward and pleasure.

[div class=attrib]The results are described over at Not Exactly Rocket Science / Discover:[end-div]

Tomohiro Ishizu and Semir Zeki from University College London watched the brains of 21 volunteers as they looked at 30 paintings and listened to 30 musical excerpts. All the while, they were lying inside an fMRI scanner, a machine that measures blood flow to different parts of the brain and shows which are most active. The recruits rated each piece as “beautiful”, “indifferent” or “ugly”.

The scans showed that one part of their brains lit up more strongly when they experienced beautiful images or music than when they experienced ugly or indifferent ones – the medial orbitofrontal cortex or mOFC.

Several studies have linked the mOFC to beauty, but this is a sizeable part of the brain with many roles. It’s also involved in our emotions, our feelings of reward and pleasure, and our ability to make decisions. Nonetheless, Ishizu and Zeki found that one specific area, which they call “field A1”, consistently lit up when people experienced beauty.

The images and music were accompanied by changes in other parts of the brain as well, but only the mOFC reacted to beauty in both forms. And the more beautiful the volunteers found their experiences, the more active their mOFCs were. That is not to say that the buzz of neurons in this area produces feelings of beauty; merely that the two go hand-in-hand.

Clearly, this is a great start, and as brain scientists get their hands on ever improving fMRI technology and other brain science tools our understanding will only get sharper. However, what still remains very much a puzzle is “why does our sense of beauty exist”?

The researchers go on to explain their results, albeit tentatively:

Our proposal shifts the definition of beauty very much in favour of the perceiving subject and away from the characteristics of the apprehended object. Our definition… is also indifferent to what is art and what is not art. Almost anything can be considered to be art, but only creations whose experience has, as a correlate, activity in mOFC would fall into the classification of beautiful art… A painting by Francis Bacon may be executed in a painterly style and have great artistic merit but may not qualify as beautiful to a subject, because the experience of viewing it does not correlate with activity in his or her mOFC.

In proposing this, the researchers certainly seem to have hit on the underlying “how” of beauty, and the finding appears reliably consistent, though the sample was not large enough to establish statistical significance. However, the researchers conclude that “A beautiful thing is met with the same neural changes in the brain of a wealthy cultured connoisseur as in the brain of a poor, uneducated novice, as long as both of them find it beautiful.”

But what of the “why” of beauty. Why is the perception of beauty socially and cognitively important and why did it evolve? After all, as Jonah Lehrer over at Wired questions:

But why does beauty exist? What’s the point of marveling at a Rembrandt self portrait or a Bach fugue? To paraphrase Auden, beauty makes nothing happen. Unlike our more primal indulgences, the pleasure of perceiving beauty doesn’t ensure that we consume calories or procreate. Rather, the only thing beauty guarantees is that we’ll stare for too long at some lovely looking thing. Museums are not exactly adaptive.

The answer to this question has stumped the research community for quite some time, and will undoubtedly continue to do so for some time to come. Several recent cognitive research studies hint at possible answers related to reinforcement for curious and inquisitive behavior, reward for and feedback from anticipation responses, and pattern seeking behavior.

[div class=attrib]More from Jonah Lehrer for Wired:[end-div]

What I like about this speculation is that it begins to explain why the feeling of beauty is useful. The aesthetic emotion might have begun as a cognitive signal telling us to keep on looking, because there is a pattern here that we can figure out. In other words, it’s a sort of a metacognitive hunch, a response to complexity that isn’t incomprehensible. Although we can’t quite decipher this sensation – and it doesn’t matter if the sensation is a painting or a symphony – the beauty keeps us from looking away, tickling those dopaminergic neurons and dorsal hairs. Like curiosity, beauty is a motivational force, an emotional reaction not to the perfect or the complete, but to the imperfect and incomplete. We know just enough to know that we want to know more; there is something here, we just don’t know what. That’s why we call it beautiful.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Claude Monet, Water-Lily Pond and Weeping Willow. Image courtesy of Wikipedia / Creative Commons.[end-div]

[div class=attrib]First page of the manuscript of Bach’s lute suite in G Minor. Image courtesy of Wikipedia / Creative Commons.[end-div]

A Reason for Reason

[div class=attrib]From Wilson Quarterly:[end-div]

For all its stellar achievements, human reason seems particularly ill suited to, well, reasoning. Study after study demonstrates reason’s deficiencies, such as the oft-noted confirmation bias (the tendency to recall, select, or interpret evidence in a way that supports one’s preexisting beliefs) and people’s poor performance on straightforward logic puzzles. Why is reason so defective?

To the contrary, reason isn’t defective in the least, argue cognitive scientists Hugo Mercier of the University of Pennsylvania and Dan Sperber of the Jean Nicod Institute in Paris. The problem is that we’ve misunderstood why reason exists and measured its strengths and weaknesses against the wrong standards.

Mercier and Sperber argue that reason did not evolve to allow individuals to think through problems and make brilliant decisions on their own. Rather, it serves a fundamentally social purpose: It promotes argument. Research shows that people solve problems more effectively when they debate them in groups—and the interchange also allows people to hone essential social skills. Supposed defects such as the confirmation bias are well fitted to this purpose because they enable people to efficiently marshal the evidence they need in arguing with others.

[div class=attrib]More from theSource here.[end-div]

The Science Behind Dreaming

[div class=attrib]From Scientific American:[end-div]

For centuries people have pondered the meaning of dreams. Early civilizations thought of dreams as a medium between our earthly world and that of the gods. In fact, the Greeks and Romans were convinced that dreams had certain prophetic powers. While there has always been a great interest in the interpretation of human dreams, it wasn’t until the end of the nineteenth century that Sigmund Freud and Carl Jung put forth some of the most widely-known modern theories of dreaming. Freud’s theory centred around the notion of repressed longing — the idea that dreaming allows us to sort through unresolved, repressed wishes. Carl Jung (who studied under Freud) also believed that dreams had psychological importance, but proposed different theories about their meaning.

Since then, technological advancements have allowed for the development of other theories. One prominent neurobiological theory of dreaming is the “activation-synthesis hypothesis,” which states that dreams don’t actually mean anything: they are merely electrical brain impulses that pull random thoughts and imagery from our memories. Humans, the theory goes, construct dream stories after they wake up, in a natural attempt to make sense of it all. Yet, given the vast documentation of realistic aspects to human dreaming as well as indirect experimental evidence that other mammals such as cats also dream, evolutionary psychologists have theorized that dreaming really does serve a purpose. In particular, the “threat simulation theory” suggests that dreaming should be seen as an ancient biological defence mechanism that provided an evolutionary advantage because of  its capacity to repeatedly simulate potential threatening events – enhancing the neuro-cognitive mechanisms required for efficient threat perception and avoidance.

So, over the years, numerous theories have been put forth in an attempt to illuminate the mystery behind human dreams, but, until recently, strong tangible evidence has remained largely elusive.

Yet, new research published in the Journal of Neuroscience provides compelling insights into the mechanisms that underlie dreaming and the strong relationship our dreams have with our memories. Cristina Marzano and her colleagues at the University of Rome have succeeded, for the first time, in explaining how humans remember their dreams. The scientists predicted the likelihood of successful dream recall based on a signature pattern of brain waves. In order to do this, the Italian research team invited 65 students to spend two consecutive nights in their research laboratory.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image: The Knight’s Dream by Antonio de Pereda. Courtesy of Wikipedia / Creative Commons.[end-div]