Tag Archives: neuroscience

A Common Language

Researchers at Cornell’s Cognitive Neuroscience Lab suggest that all humans may share one common ancestral language, regardless of our geographic diversity and seemingly independent linguistic family trees.

Having studied linguistics, I can attest that one of its fundamental tenets holds that the relationship between a word’s sound and its meaning tends to be arbitrary. Recently, a number of fascinating studies have shown that this linkage may not be as arbitrary as first thought.

For instance, words for small, prickly things — across numerous languages — are likely to be made up of high-pitched, “spiky” sounds, known as “kiki”. On the other hand, words for smoother, round objects are likely to contain “ooo” or “ou” sounds, known as “bouba”.

A great overview of the current thinking comes courtesy of Scientific American’s recent article “‘R’ Is For Red: Common Words Share Similar Sounds in Many Languages”.

From Scientific American:

In English, the word for the sniffing appendage on our face is nose. Japanese also happens to use the consonant n in this word (hana) and so does Turkish (burun). Since the 1900s, linguists have argued that these associations between speech sounds and meanings are purely arbitrary. Yet a new study calls this into question.

Together with his colleagues, Damián Blasi of the University of Zurich analyzed lists of words from 4,298 different languages. In doing so, they discovered that unrelated languages often use the same sounds to refer to the same meaning. For example, the consonant r is often used in words for red—think of French rouge, Spanish rojo, and German rot, but also Turkish kırmızı, Hungarian piros, and Maori kura.

The idea is not new. Previous studies have suggested that sound-meaning associations may not be entirely arbitrary, but these studies were limited by small sample sizes (200 languages or fewer) and highly restricted lists of words (such as animals only). Blasi’s study, published this month in Proceedings of the National Academy of Sciences USA, is notable because it included almost two thirds of the world’s languages and used lists of diverse words, including pronouns, body parts, verbs, natural phenomena, and adjectives—such as we, tongue, drink, star and small, respectively.

The scope of the study is unprecedented, says Stanka Fitneva, associate professor of psychology at Queen’s University in Canada, who was not involved in the research. And Gary Lupyan, associate professor of psychology at the University of Wisconsin, adds, “Only through this type of large-scale analysis can worldwide patterns be discovered.”

Read the entire article here.


Thoughts As Shapes

Jonathan Jackson has a very rare form of an already rare neurological condition: synesthesia, a cross-connection of two (or more) unrelated senses in which a perception in one sense causes an automatic experience in another. Some synesthetes, for instance, see various sounds or musical notes as distinct colors (chromesthesia); others perceive different words as distinct tastes (lexical-gustatory synesthesia).

Jackson, on the other hand, experiences his thoughts as shapes in a visual mindmap. This is so fascinating I’ve excerpted a short piece of his story below.

Also, if you are further intrigued, I recommend three great reads on the subject: Wednesday Is Indigo Blue: Discovering the Brain of Synesthesia by Richard Cytowic and David M. Eagleman; Musicophilia: Tales of Music and the Brain by Oliver Sacks; and The Man Who Tasted Shapes by Richard Cytowic.

From the Atlantic:

One spring evening in the mid 2000s, Jonathan Jackson and Andy Linscott sat on some seaside rocks near their college campus, smoking the kind of cigarettes reserved for heartbreak. Linscott was, by his own admission, “emotionally spewing” over a girl, and Jackson was consoling him.

Jackson had always been a particularly good listener. But in the middle of their talk, he did something Linscott found deeply odd.

“He got up and jumped over to this much higher rock,” Linscott says. “He was like, ‘Andy, I’m listening, I just want to get a different angle. I want to see what you’re saying and the shape of your words from a different perspective.’ I was baffled.”

For Jackson, moving physically to think differently about an idea seemed totally natural. “People say, ‘Okay, we need to think about this from a new angle’ all the time!” he says. “But for me that’s literal.”

Jackson has synesthesia, a neurological phenomenon that has long been defined as the co-activation of two or more conventionally unrelated senses. Some synesthetes see music (known as auditory-visual synesthesia) or read letters and numbers in specific hues (grapheme-color synesthesia). But recent research has complicated that definition, exploring where in the sensory process those overlaps start and opening up the term to include types of synesthesia in which senses interact in a much more complex manner.

Read the entire story here.

Image: Wednesday Is Indigo Blue book cover. Courtesy: Richard E. Cytowic and David M. Eagleman, MIT Press.


Towards an Understanding of Consciousness


The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite John Locke’s intricate philosophical essays of 1690, which laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But ideas such as these inch us towards a deeper grasp.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.


Your Brain on LSD


For the first time, researchers have peered inside the brain to study the real-time effects of the psychedelic drug LSD (lysergic acid diethylamide). Yes, neuroscientists scanned the brains of subjects who volunteered to take a trip inside an MRI scanner, all in the name of science.

While the researchers did not seem to document the detailed subjective experiences of their volunteers, the findings suggest that they were experiencing intense dreamlike visions, effectively “seeing with their eyes shut”. Under the influence of LSD many areas of the brain that are usually compartmentalized showed far greater interconnection and intense activity.

LSD was first synthesized in 1938. Its profound psychological properties were studied from the mid-1940s to the early sixties. The substance was later banned — worldwide — after its adoption as a recreational drug.

This new study was conducted by researchers from Imperial College London and The Beckley Foundation, which researches psychoactive substances.

From the Guardian:

The profound impact of LSD on the brain has been laid bare by the first modern scans of people high on the drug.

The images, taken from volunteers who agreed to take a trip in the name of science, have given researchers an unprecedented insight into the neural basis for effects produced by one of the most powerful drugs ever created.

A dose of the psychedelic substance – injected rather than dropped – unleashed a wave of changes that altered activity and connectivity across the brain. This has led scientists to new theories of visual hallucinations and the sense of oneness with the universe some users report.

The brain scans revealed that trippers experienced images through information drawn from many parts of their brains, and not just the visual cortex at the back of the head that normally processes visual information. Under the drug, regions once segregated spoke to one another.

Further images showed that other brain regions that usually form a network became more separated in a change that accompanied users’ feelings of oneness with the world, a loss of personal identity called “ego dissolution”.

David Nutt, the government’s former drugs advisor, professor of neuropsychopharmacology at Imperial College London, and senior researcher on the study, said neuroscientists had waited 50 years for this moment. “This is to neuroscience what the Higgs boson was to particle physics,” he said. “We didn’t know how these profound effects were produced. It was too difficult to do. Scientists were either scared or couldn’t be bothered to overcome the enormous hurdles to get this done.”

Read the entire story here.

Image: Different sections of the brain, either on placebo, or under the influence of LSD (lots of orange). Courtesy: Imperial College/Beckley Foundation.


Streaming is So 2015


Fellow music enthusiasts and technology early adopters, ditch the streaming sounds right now. And if you still have an iPod, or worse, an MP3 or CD player, trash it; trash them all.

The future of music is coming, and it’s beamed and implanted directly into your grey matter. I’m not sure if I like the idea of Taylor Swift inside my head — I’m more of a Pink Floyd and Led Zeppelin person — nor the idea of not having a filter for certain genres (e.g., country music). However, some might like the notion of a digital-DJ brain implant that lays down tracks based on your mood, derived from monitoring your neurochemical mix. It’s only a matter of time.

Thanks, but I’ll stick to vinyl, crackles and all.

From WSJ:

The year is 2040, and as you wait for a drone to deliver your pizza, you decide to throw on some tunes. Once a commodity bought and sold in stores, music is now an omnipresent utility invoked via spoken-word commands. In response to a simple “play,” an algorithmic DJ opens a blended set of songs, incorporating information about your location, your recent activities and your historical preferences—complemented by biofeedback from your implanted SmartChip. A calming set of lo-fi indie hits streams forth, while the algorithm adjusts the beats per minute and acoustic profile to the rain outside and the fact that you haven’t eaten for six hours.

The rise of such dynamically generated music is the story of the age. The album, that relic of the 20th century, is long dead. Even the concept of a “song” is starting to blur. Instead there are hooks, choruses, catchphrases and beats—a palette of musical elements that are mixed and matched on the fly by the computer, with occasional human assistance. Your life is scored like a movie, with swelling crescendos for the good parts, plaintive, atonal plunks for the bad, and fuzz-pedal guitar for the erotic. The DJ’s ability to read your emotional state approaches clairvoyance. But the developers discourage the name “artificial intelligence” to describe such technology. They prefer the term “mood-affiliated procedural remixing.”

Right now, the mood is hunger. You’ve put on weight lately, as your refrigerator keeps reminding you. With its assistance—and the collaboration of your DJ—you’ve come up with a comprehensive plan for diet and exercise, along with the attendant soundtrack. Already, you’ve lost six pounds. Although you sometimes worry that the machines are running your life, it’s not exactly a dystopian experience—the other day, after a fast-paced dubstep remix spurred you to a personal best on your daily run through the park, you burst into tears of joy.

Cultural production was long thought to be an impregnable stronghold of human intelligence, the one thing the machines could never do better than humans. But a few maverick researchers persisted, and—aided by startling, asymptotic advances in other areas of machine learning—suddenly, one day, they could. To be a musician now is to be an arranger. To be a songwriter is to code. Atlanta, the birthplace of “trap” music, is now a locus of brogrammer culture. Nashville is a leading technology incubator. The Capitol Records tower was converted to condos after the label uploaded its executive suite to the cloud.

Read the entire story here.

Image: Led Zeppelin IV album cover. Courtesy of the author.


The Illness Known As Evil

What turns a seemingly ordinary person (usually male) into a brutal killer or mass-murderer? How does a quiet computer engineer end up as a cold-blooded executioner of innocents on a terrorist video in 2015? How does a single concentration camp guard lead hundreds of thousands to their deaths during the Second World War? Why do we humans perform acts of such unspeakable brutality and horror?

For as long as the social sciences have existed, researchers have weighed these questions. Is it possible that those who commit such acts of evil are host to a disease of the brain? Some have dubbed this Syndrome E, where E stands for evil. Others are not convinced that evil is a neurological condition with biochemical underpinnings. And so the debate, and the violence, rages on.

From the New Scientist:

The idea that a civilised human being might be capable of barbaric acts is so alien that we often blame our animal instincts – the older, “primitive” areas of the brain taking over and subverting their more rational counterparts. But fresh thinking turns this long-standing explanation on its head. It suggests that people perform brutal acts because the “higher”, more evolved, brain overreaches. The set of brain changes involved has been dubbed Syndrome E – with E standing for evil.

In a world where ideological killings are rife, new insights into this problem are sorely needed. But reframing evil as a disease is controversial. Some believe it could provide justification for heinous acts or hand extreme organisations a recipe for radicalising more young people. Others argue that it denies the reality that we all have the potential for evil within us. Proponents, however, say that if evil really is a pathology, then society ought to try to diagnose susceptible individuals and reduce contagion. And if we can do that, perhaps we can put radicalisation into reverse, too.

Following the second world war, the behaviour of guards in Nazi concentration camps became the subject of study, with some researchers seeing them as willing, ideologically driven executioners, others as mindlessly obeying orders. The debate was reignited in the mid-1990s in the wake of the Rwandan genocide and the Srebrenica massacre in Bosnia. In 1996, The Lancet carried an editorial pointing out that no one was addressing evil from a biological point of view. Neurosurgeon Itzhak Fried, at the University of California, Los Angeles, decided to rise to the challenge.

In a paper published in 1997, he argued that the transformation of non-violent individuals into repetitive killers is characterised by a set of symptoms that suggests a common condition, which he called Syndrome E (see “Seven symptoms of evil“). He suggested that this is the result of “cognitive fracture”, which occurs when a higher brain region, the prefrontal cortex (PFC) – involved in rational thought and decision-making – stops paying attention to signals from more primitive brain regions and goes into overdrive.

The idea captured people’s imaginations, says Fried, because it suggested that you could start to define and describe this basic flaw in the human condition. “Just as a constellation of symptoms such as fever and a cough may signify pneumonia, defining the constellation of symptoms that signify this syndrome may mean that you could recognise it in the early stages.” But it was a theory in search of evidence. Neuroscience has come a long way since then, so Fried organised a conference in Paris earlier this year to revisit the concept.

At the most fundamental level, understanding why people kill is about understanding decision-making, and neuroscientists at the conference homed in on this. Fried’s theory starts with the assumption that people normally have a natural aversion to harming others. If he is correct, the higher brain overrides this instinct in people with Syndrome E. How might that occur?

Etienne Koechlin at the École Normale Supérieure in Paris was able to throw some empirical light on the matter by looking at people obeying rules that conflict with their own preferences. He put volunteers inside a brain scanner and let them choose between two simple tasks, guided by their past experience of which would be the more financially rewarding (paying 6 euros versus 4). After a while he randomly inserted rule-based trials: now there was a colour code indicating which of the two tasks to choose, and volunteers were told that if they disobeyed they would get no money.

Not surprisingly, they followed the rule, even when it meant that choosing the task they had learned would earn them a lower pay-off in the free-choice trials. But something unexpected happened. Although rule-following should have led to a simpler decision, they took longer over it, as if conflicted. In the brain scans, both the lateral and the medial regions of the PFC lit up. The former is known to be sensitive to rules; the latter receives information from the limbic system, an ancient part of the brain that processes emotional states, so is sensitive to our innate preferences. In other words, when following the rule, people still considered their personal preference, but activity in the lateral PFC overrode it.

Of course, playing for a few euros is far removed from choosing to kill fellow humans. However, Koechlin believes his results show that our instinctive values endure even when the game changes. “Rules do not change values, just behaviours,” he says. He interprets this as showing that it is normal, not pathological, for the higher brain to override signals coming from the primitive brain. If Fried’s idea is correct, this process goes into overdrive in Syndrome E, helping to explain how an ordinary person overcomes their squeamishness to kill. The same neuroscience may underlie famous experiments conducted by the psychologist Stanley Milgram at Yale University in the 1960s, which revealed the extraordinary lengths to which people would go out of obedience to an authority figure – even administering what they thought were lethal electric shocks to strangers.

Fried suggests that people experience a visceral reaction when they kill for the first time, but some rapidly become desensitised. And the primary instinct not to harm may be more easily overcome when people are “just following orders”. In unpublished work, Patrick Haggard at University College London has used brain scans to show that this is enough to make us feel less responsible for our actions. “There is something about being coerced that produces a different experience of agency,” he says, “as if people are subjectively able to distance themselves from this unpleasant event they are causing.”

However, what is striking about many accounts of mass killing, both contemporary and historical, is that the perpetrators often choose to kill even when not under orders to do so. In his book Ordinary Men, the historian Christopher Browning recounts the case of a Nazi unit called reserve police battalion 101. No member of this unit was forced to kill. A small minority did so eagerly from the start, but they may have had psychopathic or sadistic tendencies. However, the vast majority of those who were reluctant to kill soon underwent a transformation, becoming just as ruthless. Browning calls them “routinised” killers: it was as if, once they had decided to kill, it quickly became a habit.

Habits have long been considered unthinking, semi-automatic behaviours in which the higher brain is not involved. That seems to support the idea that the primitive brain is in control when seemingly normal people become killers. But this interpretation is challenged by new research by neuroscientist Ann Graybiel at the Massachusetts Institute of Technology. She studies people with common psychiatric disorders, such as addiction and depression, that lead them to habitually make bad decisions. In high-risk, high-stakes situations, they tend to downplay the cost with respect to the benefit and accept an unhealthy level of risk. Graybiel’s work suggests the higher brain is to blame.

In one set of experiments, her group trained rats to acquire habits – following certain runs through mazes. The researchers then suppressed the activity of neurons in an area of the PFC that blocks signals coming from a primitive part of the brain called the amygdala. The rats immediately changed their running behaviour – the habit had been broken. “The old idea that the cognitive brain doesn’t have evaluative access to that habitual behaviour, that it’s beyond its reach, is false,” says Graybiel. “It has moment-to-moment evaluative control.” That’s exciting, she says, because it suggests a way to treat people with maladaptive habits such as obsessive-compulsive disorder, or even, potentially, Syndrome E.

What made the experiment possible was a technique known as optogenetics, which allows light to regulate the activity of genetically engineered neurons in the rat PFC. That wouldn’t be permissible in humans, but cognitive or behavioural therapies, or drugs, could achieve the same effect. Graybiel believes it might even be possible to stop people deciding to kill in the first place by steering them away from the kind of cost-benefit analysis that led them to, say, blow themselves up on a crowded bus. In separate experiments with risk-taking rats, her team found that optogenetically decreasing activity in another part of the limbic system that communicates with the PFC, the striatum, made the rats more risk-averse: “We can just turn a knob and radically alter their behaviour,” she says.

Read the entire article here.


Time For a New Body, Literally


Let me be clear. I’m not referring to a hair transplant, but a head transplant.

A disturbing story has been making the media rounds recently. Dr. Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy suggests that the time is right to attempt the transplantation of a human head onto a different body. Canavero believes that advances in surgical techniques and immunotherapy are such that a transplantation could be attempted by 2017. Interestingly enough, several people have already volunteered for a new body.

Ethics aside, it certainly doesn’t stretch the imagination to believe Hollywood’s elite would clamor for this treatment. Now, I wonder if some people, liking their own body, would want a new head?

From New Scientist:

It’s heady stuff. The world’s first attempt to transplant a human head will be launched this year at a surgical conference in the US. The move is a call to arms to get interested parties together to work towards the surgery.

The idea was first proposed in 2013 by Sergio Canavero of the Turin Advanced Neuromodulation Group in Italy. He wants to use the surgery to extend the lives of people whose muscles and nerves have degenerated or whose organs are riddled with cancer. Now he claims the major hurdles, such as fusing the spinal cord and preventing the body’s immune system from rejecting the head, are surmountable, and the surgery could be ready as early as 2017.

Canavero plans to announce the project at the annual conference of the American Academy of Neurological and Orthopaedic Surgeons (AANOS) in Annapolis, Maryland, in June. Is society ready for such momentous surgery? And does the science even stand up?

The first attempt at a head transplant was carried out on a dog by Soviet surgeon Vladimir Demikhov in 1954. A puppy’s head and forelegs were transplanted onto the back of a larger dog. Demikhov conducted several further attempts but the dogs only survived between two and six days.

The first successful head transplant, in which one head was replaced by another, was carried out in 1970. A team led by Robert White at Case Western Reserve University School of Medicine in Cleveland, Ohio, transplanted the head of one monkey onto the body of another. They didn’t attempt to join the spinal cords, though, so the monkey couldn’t move its body, but it was able to breathe with artificial assistance. The monkey lived for nine days until its immune system rejected the head. Although few head transplants have been carried out since, many of the surgical procedures involved have progressed. “I think we are now at a point when the technical aspects are all feasible,” says Canavero.

This month, he published a summary of the technique he believes will allow doctors to transplant a head onto a new body (Surgical Neurology International, doi.org/2c7). It involves cooling the recipient’s head and the donor body to extend the time their cells can survive without oxygen. The tissue around the neck is dissected and the major blood vessels are linked using tiny tubes, before the spinal cords of each person are cut. Cleanly severing the cords is key, says Canavero.

The recipient’s head is then moved onto the donor body and the two ends of the spinal cord – which resemble two densely packed bundles of spaghetti – are fused together. To achieve this, Canavero intends to flush the area with a chemical called polyethylene glycol, and follow up with several hours of injections of the same stuff. Just like hot water makes dry spaghetti stick together, polyethylene glycol encourages the fat in cell membranes to mesh.

Next, the muscles and blood supply would be sutured and the recipient kept in a coma for three or four weeks to prevent movement. Implanted electrodes would provide regular electrical stimulation to the spinal cord, because research suggests this can strengthen new nerve connections.

When the recipient wakes up, Canavero predicts they would be able to move and feel their face and would speak with the same voice. He says that physiotherapy would enable the person to walk within a year. Several people have already volunteered to get a new body, he says.

The trickiest part will be getting the spinal cords to fuse. Polyethylene glycol has been shown to prompt the growth of spinal cord nerves in animals, and Canavero intends to use brain-dead organ donors to test the technique. However, others are sceptical that this would be enough. “There is no evidence that the connectivity of cord and brain would lead to useful sentient or motor function following head transplantation,” says Richard Borgens, director of the Center for Paralysis Research at Purdue University in West Lafayette, Indiana.

Read the entire article here.

Image: Theatrical poster for the movie The Brain That Wouldn’t Die (1962). Courtesy of Wikipedia.


The Great Unknown: Consciousness


Much has been written in the humanities and scientific journals about consciousness. Scholars continue to probe and pontificate and theorize. And yet we seem to know more of the ocean depths and our cosmos than we do of that interminable, self-aware inner voice that sits behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.


Slow Reading is Catching on Fast (Again)

Pursuing a cherished activity, uninterrupted and free of distraction, is one of life’s pleasures. Many who multi-task and brag about it have long forgotten the benefits of deep focus and immersion in a single, prolonged task. Reading can be such a process — and over the last several years researchers have found that distraction-free, thoughtful reading — slow reading — is beneficial.

So, please put down your tablet, laptop, smartphone and TV remote after you read this post, go find an unread book, shut out your daily distractions — kids, news, Facebook, boss, grocery lists, plumber — and immerse yourself in the words on a page, and nothing else. It will relieve you of stress and benefit your brain.

From WSJ:

Once a week, members of a Wellington, New Zealand, book club arrive at a cafe, grab a drink and shut off their cellphones. Then they sink into cozy chairs and read in silence for an hour.

The point of the club isn’t to talk about literature, but to get away from pinging electronic devices and read, uninterrupted. The group calls itself the Slow Reading Club, and it is at the forefront of a movement populated by frazzled book lovers who miss old-school reading.

Slow reading advocates seek a return to the focused reading habits of years gone by, before Google, smartphones and social media started fracturing our time and attention spans. Many of its advocates say they embraced the concept after realizing they couldn’t make it through a book anymore.

“I wasn’t reading fiction the way I used to,” said Meg Williams, a 31-year-old marketing manager for an annual arts festival who started the club. “I was really sad I’d lost the thing I used to really, really enjoy.”

Slow readers list numerous benefits to a regular reading habit, saying it improves their ability to concentrate, reduces stress levels and deepens their ability to think, listen and empathize. The movement echoes a resurgence in other old-fashioned, time-consuming pursuits that offset the ever-faster pace of life, such as cooking the “slow-food” way or knitting by hand.

The benefits of reading from an early age through late adulthood have been documented by researchers. A study of 300 elderly people published by the journal Neurology last year showed that regular engagement in mentally challenging activities, including reading, slowed rates of memory loss in participants’ later years.

A study published last year in Science showed that reading literary fiction helps people understand others’ mental states and beliefs, a crucial skill in building relationships. A piece of research published in Developmental Psychology in 1997 showed first-grade reading ability was closely linked to 11th grade academic achievements.

Yet reading habits have declined in recent years. In a survey this year, about 76% of Americans 18 and older said they read at least one book in the past year, down from 79% in 2011, according to the Pew Research Center.

Attempts to revive reading are cropping up in many places. Groups in Seattle, Brooklyn, Boston and Minneapolis have hosted so-called silent reading parties, with comfortable chairs, wine and classical music.

Diana La Counte of Orange County, Calif., set up what she called a virtual slow-reading group a few years ago, with members discussing the group’s book selection online, mostly on Facebook. “When I realized I read Twitter more than a book, I knew it was time for action,” she says.

Read the entire story here.


You Are a Neural Computation

Since the days of Aristotle, and later Descartes, thinkers have sought to explain consciousness and free will. Several thousand years on, we are still pondering these notions; science has made great strides, and yet fundamentally we still have little idea.

Many neuroscientists, now armed with new and very precise research tools, are aiming to change this. Yet, increasingly, it seems that free will may indeed be a cognitive illusion. Evidence suggests that our subconscious decides and initiates action for us long before we are aware of making a conscious decision. There seems to be no god or ghost in the machine.

From Technology Review:

It was an expedition seeking something never caught before: a single human neuron lighting up to create an urge, albeit for the minor task of moving an index finger, before the subject was even aware of feeling anything. Four years ago, Itzhak Fried, a neurosurgeon at the University of California, Los Angeles, slipped several probes, each with eight hairlike electrodes able to record from single neurons, into the brains of epilepsy patients. (The patients were undergoing surgery to diagnose the source of severe seizures and had agreed to participate in experiments during the process.) Probes in place, the patients—who were conscious—were given instructions to press a button at any time of their choosing, but also to report when they’d first felt the urge to do so.

Later, Gabriel Kreiman, a neuroscientist at Harvard Medical School and Children’s Hospital in Boston, captured the quarry. Poring over data after surgeries in 12 patients, he found telltale flashes of individual neurons in the pre-supplementary motor area (associated with movement) and the anterior cingulate (associated with motivation and attention), preceding the reported urges by anywhere from hundreds of milliseconds to several seconds. It was a direct neural measurement of the unconscious brain at work—caught in the act of formulating a volitional, or freely willed, decision. Now Kreiman and his colleagues are planning to repeat the feat, but this time they aim to detect pre-urge signatures in real time and stop the subject from performing the action—or see if that’s even possible.

A variety of imaging studies in humans have revealed that brain activity related to decision-making tends to precede conscious action. Implants in macaques and other animals have examined brain circuits involved in perception and action. But Kreiman broke ground by directly measuring a preconscious decision in humans at the level of single neurons. To be sure, the readouts came from an average of just 20 neurons in each patient. (The human brain has about 86 billion of them, each with thousands of connections.) And ultimately, those neurons fired only in response to a chain of even earlier events. But as more such experiments peer deeper into the labyrinth of neural activity behind decisions—whether they involve moving a finger or opting to buy, eat, or kill something—science could eventually tease out the full circuitry of decision-making and perhaps point to behavioral therapies or treatments. “We need to understand the neuronal basis of voluntary decision-making—or ‘freely willed’ decision-making—and its pathological counterparts if we want to help people such as drug, sex, food, and gambling addicts, or patients with obsessive-compulsive disorder,” says Christof Koch, chief scientist at the Allen Institute for Brain Science in Seattle (see “Cracking the Brain’s Codes”). “Many of these people perfectly well know that what they are doing is dysfunctional but feel powerless to prevent themselves from engaging in these behaviors.”

Kreiman, 42, believes his work challenges important Western philosophical ideas about free will. The Argentine-born neuroscientist, an associate professor at Harvard Medical School, specializes in visual object recognition and memory formation, which draw partly on unconscious processes. He has a thick mop of black hair and a tendency to pause and think a long moment before reframing a question and replying to it expansively. At the wheel of his Jeep as we drove down Broadway in Cambridge, Massachusetts, Kreiman leaned over to adjust the MP3 player—toggling between Vivaldi, Lady Gaga, and Bach. As he did so, his left hand, the one on the steering wheel, slipped to let the Jeep drift a bit over the double yellow lines. Kreiman’s view is that his neurons made him do it, and they also made him correct his small error an instant later; in short, all actions are the result of neural computations and nothing more. “I am interested in a basic age-old question,” he says. “Are decisions really free? I have a somewhat extreme view of this—that there is nothing really free about free will. Ultimately, there are neurons that obey the laws of physics and mathematics. It’s fine if you say ‘I decided’—that’s the language we use. But there is no god in the machine—only neurons that are firing.”

Our philosophical ideas about free will date back to Aristotle and were systematized by René Descartes, who argued that humans possess a God-given “mind,” separate from our material bodies, that endows us with the capacity to freely choose one thing rather than another. Kreiman takes this as his departure point. But he’s not arguing that we lack any control over ourselves. He doesn’t say that our decisions aren’t influenced by evolution, experiences, societal norms, sensations, and perceived consequences. “All of these external influences are fundamental to the way we decide what we do,” he says. “We do have experiences, we do learn, we can change our behavior.”

But the firing of a neuron that guides us one way or another is ultimately like the toss of a coin, Kreiman insists. “The rules that govern our decisions are similar to the rules that govern whether a coin will land one way or the other. Ultimately there is physics; it is chaotic in both cases, but at the end of the day, nobody will argue the coin ‘wanted’ to land heads or tails. There is no real volition to the coin.”

Testing Free Will

It’s only in the past three to four decades that imaging tools and probes have been able to measure what actually happens in the brain. A key research milestone was reached in the early 1980s, when Benjamin Libet, a researcher in the physiology department at the University of California, San Francisco, conducted a remarkable study that tested the idea of conscious free will against actual data.

Libet fitted subjects with EEGs—gadgets that measure aggregate electrical brain activity through the scalp—and had them look at a clock dial that spun around every 2.8 seconds. The subjects were asked to press a button whenever they chose to do so—but told they should also take note of where the time hand was when they first felt the “wish or urge.” It turns out that the actual brain activity involved in the action began 300 milliseconds, on average, before the subject was conscious of wanting to press the button. While some scientists criticized the methods—questioning, among other things, the accuracy of the subjects’ self-reporting—the study set others thinking about how to investigate the same questions. Since then, functional magnetic resonance imaging (fMRI) has been used to map brain activity by measuring blood flow, and other studies have also measured brain activity processes that take place before decisions are made. But while fMRI transformed brain science, it was still only an indirect tool, providing very low spatial resolution and averaging data from millions of neurons. Kreiman’s own study design was the same as Libet’s, with the important addition of the direct single-neuron measurement.
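Libet’s protocol boils down to a per-trial comparison of two timestamps: when the measured brain activity begins and when the subject reports first feeling the urge. A toy simulation can make that arithmetic concrete. The 300-millisecond average lead is the figure quoted above; every other number, name, and distribution here is invented purely for illustration and has nothing to do with Libet’s actual data.

```python
# Illustrative sketch of the Libet-style timing comparison.
# Per trial: synthetic brain activity starts some random lead time
# before the (synthetic) reported urge; we then average the leads.
# All values are made up for illustration -- not Libet's data.
import random

random.seed(42)  # deterministic for the example

def simulate_trial():
    urge_report_ms = random.uniform(2000, 2500)   # when the subject "reports" the urge
    lead_ms = random.gauss(300, 80)               # how far activity precedes the report
    activity_onset_ms = urge_report_ms - lead_ms  # when activity "begins"
    return activity_onset_ms, urge_report_ms

trials = [simulate_trial() for _ in range(1000)]
mean_lead = sum(urge - onset for onset, urge in trials) / len(trials)
# On average, the simulated activity begins roughly 300 ms before the
# reported urge, mirroring the pattern Libet described.
```

The point of the exercise is only that the “300 ms” claim is a statement about the average of many per-trial differences, which is why critics focused so heavily on the accuracy of each trial’s self-reported timestamp.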

When Libet was in his prime, Kreiman was a boy. As a student of physical chemistry at the University of Buenos Aires, he was interested in neurons and brains. When he went for his PhD at Caltech, his passion solidified under his advisor, Koch. Koch was deep in collaboration with Francis Crick, co-discoverer of DNA’s structure, to look for evidence of how consciousness was represented by neurons. For the star-struck kid from Argentina, “it was really life-changing,” he recalls. “Several decades ago, people said this was not a question serious scientists should be thinking about; they either had to be smoking something or have a Nobel Prize”—and Crick, of course, was a Nobelist. Crick hypothesized that studying how the brain processed visual information was one way to study consciousness (we tap unconscious processes to quickly decipher scenes and objects), and he collaborated with Koch on a number of important studies. Kreiman was inspired by the work. “I was very excited about the possibility of asking what seems to be the most fundamental aspect of cognition, consciousness, and free will in a reductionist way—in terms of neurons and circuits of neurons,” he says.

One thing was in short supply: humans willing to have scientists cut open their skulls and poke at their brains. One day in the late 1990s, Kreiman attended a journal club—a kind of book club for scientists reviewing the latest literature—and came across a paper by Fried on how to do brain science in people getting electrodes implanted in their brains to identify the source of severe epileptic seizures. Before he’d heard of Fried, “I thought examining the activity of neurons was the domain of monkeys and rats and cats, not humans,” Kreiman says. Crick introduced Koch to Fried, and soon Koch, Fried, and Kreiman were collaborating on studies that investigated human neural activity, including the experiment that made the direct neural measurement of the urge to move a finger. “This was the opening shot in a new phase of the investigation of questions of voluntary action and free will,” Koch says.

Read the entire article here.


Neuromorphic Chips

Neuromorphic chips are here. But don’t worry: these are not the brain implants you might expect to find in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function, learning or mimicking certain human processes such as sensory perception, image processing and object recognition. The field is making tremendous advances, with companies like Qualcomm — better known for its mobile and wireless chips — leading the charge. Until recently, such complex sensory and mimetic processing had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
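The “neurons and synapses” picture above can be caricatured in a few lines of code. The sketch below is a toy leaky integrate-and-fire neuron paired with a crude Hebbian weight update (“cells that fire together wire together”). It illustrates only the general principle the article describes; it is not Qualcomm’s Zeroth design or any real neuromorphic chip, and every parameter is invented.

```python
# Toy spiking neuron: integrates weighted input, leaks charge over time,
# and fires when its membrane potential crosses a threshold. Synapses
# (weights) that were active during a spike get strengthened (Hebbian).

class LIFNeuron:
    def __init__(self, threshold=1.0, leak=0.9):
        self.threshold = threshold   # potential needed to fire
        self.leak = leak             # fraction of potential kept per step
        self.potential = 0.0

    def step(self, weighted_input):
        # Integrate the input; a little potential leaks away each step.
        self.potential = self.potential * self.leak + weighted_input
        if self.potential >= self.threshold:
            self.potential = 0.0     # reset after a spike
            return 1                 # spike
        return 0

def run(patterns, weights, epochs=3, rate=0.05):
    """Drive one neuron with binary input patterns; when it spikes,
    strengthen the weights of the inputs that were active."""
    neuron = LIFNeuron()
    for _ in range(epochs):
        for pattern in patterns:
            drive = sum(w * x for w, x in zip(weights, pattern))
            if neuron.step(drive):
                weights = [w + rate * x for w, x in zip(weights, pattern)]
    return weights

# Inputs 0 and 1 co-occur often enough to make the neuron fire;
# input 2 alone never does, so its weight is never reinforced.
patterns = [[1, 1, 0], [1, 1, 0], [0, 0, 1]]
final = run(patterns, weights=[0.5, 0.5, 0.5])
```

After training, the weights for the frequently co-active inputs have grown while the third stays at its starting value: the connection changes are driven by the data, not by explicit programming, which is the behavior the Zeroth demonstration is trading on at vastly greater scale.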

Read the entire article here.


Now Where Did I Put Those Keys?


We all lose our car keys and misplace our cell phones. We leave umbrellas on public transport. We forget things at the office. We all do it — some more frequently than others. And it’s not merely a symptom of aging: many younger people seem increasingly prone to losing their personal items, perhaps a characteristic of their fragmented, distracted and shrinking attention spans.

From the WSJ:

You’ve put your keys somewhere and now they appear to be nowhere, certainly not in the basket by the door they’re supposed to go in and now you’re 20 minutes late for work. Kitchen counter, night stand, book shelf, work bag: Wait, finally, there they are under the mail you brought in last night.

Losing things is irritating and yet we are a forgetful people. The average person misplaces up to nine items a day, and one-third of respondents in a poll said they spend an average of 15 minutes each day searching for items—cellphones, keys and paperwork top the list, according to an online survey of 3,000 people published in 2012 by a British insurance company.

Everyday forgetfulness isn’t a sign of a more serious medical condition like Alzheimer’s or dementia. And while it can worsen with age, minor memory lapses are the norm for all ages, researchers say.

Our genes are at least partially to blame, experts say. Stress, fatigue, and multitasking can exacerbate our propensity to make such errors. Such lapses can also be linked to more serious conditions like depression and attention-deficit hyperactivity disorders.

“It’s the breakdown at the interface of attention and memory,” says Daniel L. Schacter, a psychology professor at Harvard University and author of “The Seven Sins of Memory.”

That breakdown can occur in two spots: when we fail to activate our memory and encode what we’re doing—where we put down our keys or glasses—or when we try to retrieve the memory. When you encode a memory, the hippocampus, a central part of the brain involved in memory function, takes a snapshot which is preserved in a set of neurons, says Kenneth Norman, a psychology professor at Princeton University. Those neurons can be activated later with a reminder or cue.
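This snapshot-and-cue account of retrieval has a classic computational caricature: a Hopfield-style associative memory, which stores a pattern in its connection weights and settles back onto it when given a partial or corrupted cue. The toy below is illustrative only; it is not a model of the actual hippocampus, and the patterns and parameters are invented.

```python
# Tiny Hopfield-style associative memory: store one binary "snapshot"
# in the weights, then recover it from a degraded cue -- a cartoon of
# cue-based retrieval, not a model of real hippocampal circuitry.

def train(patterns, n):
    # Hebbian outer-product rule over +/-1 patterns (no self-connections)
    w = [[0.0] * n for _ in range(n)]
    for p in patterns:
        for i in range(n):
            for j in range(n):
                if i != j:
                    w[i][j] += p[i] * p[j] / len(patterns)
    return w

def recall(w, cue, steps=5):
    # Repeatedly update each unit toward the sign of its weighted input;
    # the state settles into the nearest stored pattern.
    state = list(cue)
    for _ in range(steps):
        for i in range(len(state)):
            s = sum(w[i][j] * state[j] for j in range(len(state)))
            state[i] = 1 if s >= 0 else -1
    return state

snapshot = [1, -1, 1, 1, -1, -1, 1, -1]   # the stored "memory"
w = train([snapshot], n=8)

cue = list(snapshot)
cue[0] = -cue[0]                           # degrade the cue: flip two bits
cue[3] = -cue[3]
recovered = recall(w, cue)
# With a single stored pattern, the degraded cue settles back onto the
# original snapshot.
```

The analogy to Dr. Norman’s description is loose but useful: retrieval succeeds when the cue lands close enough to the stored trace for the network to complete the pattern, and fails when it doesn’t.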

It is important to pay attention when you put down an item, or during encoding. If your state of mind at retrieval is different than it was during encoding, that could pose a problem. Case in point: You were starving when you walked into the house and deposited your keys. When you then go to look for them later, you’re no longer hungry so the memory may be harder to access.

The act of physically and mentally retracing your steps when looking for lost objects can work. Think back to your state of mind when you walked into the house (Were you hungry?). “The more you can make your brain at retrieval like the way it was when you lay down that original memory trace,” the more successful you will be, Dr. Norman says.

In a recent study, researchers in Germany found that the majority of people surveyed about forgetfulness and distraction had a variation in the so-called dopamine D2 receptor gene (DRD2), leading to a higher incidence of forgetfulness. According to the study, 75% of people carry a variation that makes them more prone to forgetfulness.

“Forgetfulness is quite common,” says Sebastian Markett, a researcher in psychology and neuroscience at the University of Bonn in Germany and lead author of the study, currently in the online version of the journal Neuroscience Letters, where it is expected to be published soon.

The study was based on a survey filled out by 500 people who were asked questions about memory lapses, perceptual failures (failing to notice a stop sign) and psychomotor failures (bumping into people on the street). The individuals also provided a saliva sample for molecular genetic testing.

About half of the total variation of forgetfulness can be explained by genetic effects, likely involving dozens of gene variations, Dr. Markett says.

The buildup of what psychologists call proactive interference helps explain how we can forget where we parked the car when we park in the same lot but different spaces every day. Memory may be impaired by the buildup of interference from previous experiences so it becomes harder to retrieve the specifics, like which parking space, Dr. Schacter says.

A study conducted by researchers at the Salk Institute for Biological Studies in California found that the brain keeps track of similar but distinct memories (where you parked your car today, for example) in the dentate gyrus, part of the hippocampus. There the brain stores separate recordings of each environment, and different groups of neurons are activated when similar but nonidentical memories are encoded and later retrieved. The findings appeared last year in the online journal eLife.

The best way to remember where you put something may be the most obvious: Find a regular spot for it and somewhere that makes sense, experts say. If it’s reading glasses, leave them by the bedside. Charge your phone in the same place. Keep a container near the door for keys or a specific pocket in your purse.

Read the entire article here.

Image: Leather key chain. Courtesy of Wikipedia / The Egyptian.

Is Your City Killing You?

The stresses of modern-day living take a toll on your mind and body, and more so if you happen to live in a concrete jungle. The effects are most pronounced for those living in large urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple prescription for a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tightfitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that city dwellers’ brains, compared with people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed city dwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group, moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.

Send to Kindle

Doctor Lobotomy

Image: Walter Freeman.

Read the following article once and you could be forgiven for assuming that it’s a fictional screenplay for Hollywood’s next R-rated Halloween flick or perhaps the depraved tale of an associate of Nazi SS officer and physician Josef Mengele.

Read the following article twice and you’ll see that the story of neurologist Dr. Walter Freeman is true: the victims — patients — were military veterans numbering in the thousands, and it took place in the United States following WWII.

This awful story is made all the more incomprehensible by the cadre of assistants, surgeons, psychiatrists, do-gooders and government bureaucrats who actively aided Freeman or did nothing to stop his foolish, amateurish experiments. Unbelievable!

From WSJ:

As World War II raged, two Veterans Administration doctors reported witnessing something extraordinary: An eminent neurologist, Walter J. Freeman, and his partner treating a mentally ill patient by cutting open the skull and slicing through neural fibers in the brain.

It was an operation Dr. Freeman called a lobotomy.

Their report landed on the desk of VA chief Frank Hines on July 26, 1943, in the form of a memo recommending lobotomies for veterans with intractable mental illnesses. The operation “may be done, in suitable cases, under local anesthesia,” the memo said. It “does not demand a high degree of surgical skill.”

The next day Mr. Hines stamped the memo in purple ink: APPROVED.

Over the next dozen or so years, the U.S. government would lobotomize roughly 2,000 American veterans, according to a cache of forgotten VA documents unearthed by The Wall Street Journal, including the memo approved by Mr. Hines. It was a decision made “in accord with our desire to keep abreast of all advances in treatment,” the memo said.

The 1943 decision gave birth to an alliance between the VA and lobotomy’s most dogged salesman, Dr. Freeman, a man famous in his day and notorious in retrospect. His prolific—some critics say reckless—use of brain surgery to treat mental illness places him today among the most controversial figures in American medical history.

At the VA, Dr. Freeman pushed the frontiers of ethically acceptable medicine. He said VA psychiatrists, untrained in surgery, should be allowed to perform lobotomies by hammering ice-pick-like tools through patients’ eye sockets. And he argued that, while their patients’ skulls were open anyway, VA surgeons should be permitted to remove samples of living brain for research purposes.

The documents reveal the degree to which the VA was swayed by his pitch. The Journal this week is reporting the first detailed account of the VA’s psychosurgery program based on records in the National Archives, Dr. Freeman’s own papers at George Washington University, military documents and medical records, as well as interviews with doctors from the era, families of lobotomized vets and one surviving patient, 90-year-old Roman Tritz.

The agency’s use of lobotomy tailed off when the first major antipsychotic drug, Thorazine, came on the market in the mid-1950s, and public opinion of Dr. Freeman and his signature surgery pivoted from admiration to horror.

During and immediately after World War II, lobotomies weren’t greeted with the dismay they prompt today. Still, Dr. Freeman’s views sparked a heated debate inside the agency about the wisdom and ethics of an operation Dr. Freeman himself described as “a surgically induced childhood.”

In 1948, one senior VA psychiatrist wrote a memo mocking Dr. Freeman for using lobotomies to treat “practically everything from delinquency to a pain in the neck.” Other doctors urged more research before forging ahead with such a dramatic medical intervention. A number objected in particular to the Freeman ice-pick technique.

Yet Dr. Freeman’s influence proved decisive. The agency brought Dr. Freeman and his junior partner, neurosurgeon James Watts, aboard as consultants, speakers and inspirations, and its doctors performed lobotomies on veterans at some 50 hospitals from Massachusetts to Oregon.

Born in 1895 to a family of Philadelphia doctors, Yale-educated Dr. Freeman was drawn to psychosurgery by his work in the wards of St. Elizabeth’s Hospital, where Washington’s mentally ill, including World War I veterans, were housed but rarely cured. The treatments of the day—psychotherapy, electroshock, high-pressure water sprays and insulin injections to induce temporary comas—wouldn’t successfully cure serious mental illnesses that resulted from physical defects in the brain, Dr. Freeman believed. His suggestion was to sever faulty neural pathways between the prefrontal area and the rest of the brain, channels believed by lobotomy practitioners to promote excessive emotions.

It was an approach pioneered by Egas Moniz, a Portuguese physician who in 1935 performed the first lobotomy (then called a leucotomy). Fourteen years later, he was rewarded with the Nobel Prize in medicine.

In 1936, Drs. Freeman and Watts performed their first lobotomy, on a 63-year-old woman suffering from depression, anxiety and insomnia. “I knew as soon as I operated on a mental patient and cut into a physically normal brain, I’d be considered radical by some people,” Dr. Watts said in a 1979 interview transcribed in the George Washington University archives.

By his own count, Dr. Freeman would eventually participate in 3,500 lobotomies, some, according to records in the university archives, on children as young as four years old.

“In my father’s hands, the operation worked,” says his son, Walter Freeman III, a retired professor of neurobiology. “This was an explanation for his zeal.”

Drs. Freeman and Watts considered about one-third of their operations successes in which the patient was able to lead a “productive life,” Dr. Freeman’s son says. Another third were able to return home but not support themselves. The final third were “failures,” according to Dr. Watts.

Later in life, Dr. Watts, who died in 1994, offered a blunt assessment of lobotomy’s heyday. “It’s a brain-damaging operation. It changes the personality,” he said in the 1979 interview. “We could predict relief, and we could fairly accurately predict relief of certain symptoms like suicidal ideas, attempts to kill oneself. We could predict there would be relief of anxiety and emotional tension. But we could not nearly as accurately predict what kind of person this was going to be.”

Other possible side-effects included seizures, incontinence, emotional outbursts and, on occasion, death.

Read the entire article here.

 

Send to Kindle

Left Brain, Right Brain or Top Brain, Bottom Brain?

Are you analytical and logical? If so, you are likely to be labeled as being “left-brained”. On the other hand, if you are emotional and creative, you are more likely to be labeled “right-brained”. And so the popular narrative of brain function continues. But this generalized distinction is a myth. Our brains’ hemispheres do specialize, but not in such an overarching way. Recent research points to another distinction: top brain and bottom brain.

From WSJ:

Who hasn’t heard that people are either left-brained or right-brained—either analytical and logical or artistic and intuitive, based on the relative “strengths” of the brain’s two hemispheres? How often do we hear someone remark about thinking with one side or the other?

A flourishing industry of books, videos and self-help programs has been built on this dichotomy. You can purportedly “diagnose” your brain, “motivate” one or both sides, indulge in “essence therapy” to “restore balance” and much more. Everyone from babies to elders supposedly can benefit. The left brain/right brain difference seems to be a natural law.

Except that it isn’t. The popular left/right story has no solid basis in science. The brain doesn’t work one part at a time, but rather as a single interactive system, with all parts contributing in concert, as neuroscientists have long known. The left brain/right brain story may be the mother of all urban legends: It sounds good and seems to make sense—but just isn’t true.

The origins of this myth lie in experimental surgery on some very sick epileptics a half-century ago, conducted under the direction of Roger Sperry, a renowned neuroscientist at the California Institute of Technology. Seeking relief for their intractable epilepsy, and encouraged by Sperry’s experimental work with animals, 16 patients allowed the Caltech team to cut the corpus callosum, the massive bundle of nerve fibers that connects the two sides of the brain. The patients’ suffering was alleviated, and Sperry’s postoperative studies of these volunteers confirmed that the two halves do, indeed, have distinct cognitive capabilities.

But these capabilities are not the stuff of popular narrative: They reflect very specific differences in function—such as attending to overall shape versus details during perception—not sweeping distinctions such as being “logical” versus “intuitive.” This important fine print got buried in the vast mainstream publicity that Sperry’s research generated.

There is a better way to understand the functioning of the brain, based on another, ordinarily overlooked anatomical division—between its top and bottom parts. We call this approach “the theory of cognitive modes.” Built on decades of unimpeachable research that has largely remained inside scientific circles, it offers a new way of viewing thought and behavior that may help us understand the actions of people as diverse as Oprah Winfrey, the Dalai Lama, Tiger Woods and Elizabeth Taylor.

Our theory has emerged from the field of neuropsychology, the study of higher cognitive functioning—thoughts, wishes, hopes, desires and all other aspects of mental life. Higher cognitive functioning is seated in the cerebral cortex, the rind-like outer layer of the brain that consists of four lobes. Illustrations of this wrinkled outer brain regularly show a top-down view of the two hemispheres, which are connected by thick bundles of neuronal tissue, notably the corpus callosum, an impressive structure consisting of some 250 million nerve fibers.

If you move the view to the side, however, you can see the top and bottom parts of the brain, demarcated largely by the Sylvian fissure, the crease-like structure named for the 17th-century Dutch physician who first described it. The top brain comprises the entire parietal lobe and the top (and larger) portion of the frontal lobe. The bottom comprises the smaller remainder of the frontal lobe and all of the occipital and temporal lobes.

Our theory’s roots lie in a landmark report published in 1982 by Mortimer Mishkin and Leslie G. Ungerleider of the National Institute of Mental Health. Their trailblazing research examined rhesus monkeys, which have brains that process visual information in much the same way as the human brain. Hundreds of subsequent studies in several fields have helped to shape our theory, by researchers such as Gregoire Borst of Paris Descartes University, Martha Farah of the University of Pennsylvania, Patricia Goldman-Rakic of Yale University, Melvin Goodale of the University of Western Ontario and Maria Kozhevnikov of the National University of Singapore.

This research reveals that the top-brain system uses information about the surrounding environment (in combination with other sorts of information, such as emotional reactions and the need for food or drink) to figure out which goals to try to achieve. It actively formulates plans, generates expectations about what should happen when a plan is executed and then, as the plan is being carried out, compares what is happening with what was expected, adjusting the plan accordingly.

The bottom-brain system organizes signals from the senses, simultaneously comparing what is being perceived with all the information previously stored in memory. It then uses the results of such comparisons to classify and interpret the object or event, allowing us to confer meaning on the world.

The top- and bottom-brain systems always work together, just as the hemispheres always do. Our brains are not engaged in some sort of constant cerebral tug of war, with one part seeking dominance over another. (What a poor evolutionary strategy that would have been!) Rather, they can be likened roughly to the parts of a bicycle: the frame, seat, wheels, handlebars, pedals, gears, brakes and chain that work together to provide transportation.

But here’s the key to our theory: Although the top and bottom parts of the brain are always used during all of our waking lives, people do not rely on them to an equal degree. To extend the bicycle analogy, not everyone rides a bike the same way. Some may meander, others may race.

Read the entire article here.

Image: Left-brain, right-brain cartoon. Courtesy of HuffingtonPost.

Send to Kindle

Why Sleep?

There are more theories on why we sleep than there are cable channels in the U.S. But that hasn’t prevented researchers from proposing yet another one — it’s all about flushing waste.

From the Guardian:

Scientists in the US claim to have a new explanation for why we sleep: in the hours spent slumbering, a rubbish disposal service swings into action that cleans up waste in the brain.

Through a series of experiments on mice, the researchers showed that during sleep, cerebral spinal fluid is pumped around the brain, and flushes out waste products like a biological dishwasher.

The process helps to remove the molecular detritus that brain cells churn out as part of their natural activity, along with toxic proteins that can lead to dementia when they build up in the brain, the researchers say.

Maiken Nedergaard, who led the study at the University of Rochester, said the discovery might explain why sleep is crucial for all living organisms. “I think we have discovered why we sleep,” Nedergaard said. “We sleep to clean our brains.”

Writing in the journal Science, Nedergaard describes how brain cells in mice shrank when they slept, making the space between them on average 60% greater. This made the cerebral spinal fluid in the animals’ brains flow ten times faster than when the mice were awake.

The scientists then checked how well mice cleared toxins from their brains by injecting traces of proteins that are implicated in Alzheimer’s disease. These amyloid beta proteins were removed faster from the brains of sleeping mice, they found.

Nedergaard believes the clean-up process is more active during sleep because it takes too much energy to pump fluid around the brain when awake. “You can think of it like having a house party. You can either entertain the guests or clean up the house, but you can’t really do both at the same time,” she said in a statement.

According to the scientist, the cerebral spinal fluid flushes the brain’s waste products into what she calls the “glymphatic system” which carries it down through the body and ultimately to the liver where it is broken down.

Other researchers were sceptical of the study, and said it was too early to know if the process goes to work in humans, and how to gauge the importance of the mechanism. “It’s very attractive, but I don’t think it’s the main function of sleep,” said Raphaelle Winsky-Sommerer, a specialist on sleep and circadian rhythms at Surrey University. “Sleep is related to everything: your metabolism, your physiology, your digestion, everything.” She said she would like to see other experiments that show a build up of waste in the brains of sleep-deprived people, and a reduction of that waste when they catch up on sleep.

Vladyslav Vyazovskiy, another sleep expert at Surrey University, was also sceptical. “I’m not fully convinced. Some of the effects are so striking they are hard to believe. I would like to see this work replicated independently before it can be taken seriously,” he said.

Jim Horne, professor emeritus and director of the sleep research centre at Loughborough University, cautioned that what happened in the fairly simple mouse brain might be very different to what happened in the more complex human brain. “Sleep in humans has evolved far more sophisticated functions for our cortex than that for the mouse, even though the present findings may well be true for us,” he said.

But Nedergaard believes she will find the same waste disposal system at work in humans. The work, she claims, could pave the way for medicines that slow the onset of dementias caused by the build-up of waste in the brain, and even help those who go without enough sleep. “It may be that we can reduce the need at least, because it’s so annoying to waste so much time sleeping,” she said.

Read the entire article here.

Image courtesy of Telegraph.

Send to Kindle

Night Owls, Beware!

A new batch of research points to a higher incidence of depression in night owls than in early risers. Further studies will be required to determine a true causal link, but initial evidence seems to suggest that those who stay up late have structural differences in the brain leading to a form of chronic jet lag.

From Washington Post:

They say the early bird catches the worm, but night owls may be missing far more than just a tasty snack. Researchers have discovered evidence of structural brain differences that distinguish early risers from people who like to stay up late. The differences might help explain why night owls seem to be at greater risk of depression.

About 10 percent of people are morning people, or larks, and 20 percent are night owls, with the rest falling in between. Your status is called your chronotype.

Previous studies have suggested that night owls experience worse sleep, feel more tiredness during the day and consume greater amounts of tobacco and alcohol. This has prompted some to suggest that they are suffering from a form of chronic jet lag.

Jessica Rosenberg at RWTH Aachen University in Germany and colleagues used a technique called diffusion tensor imaging to scan the brains of 16 larks, 23 night owls and 20 people with intermediate chronotypes. They found a reduction in the integrity of night owls’ white matter — brain tissue largely made up of fatty insulating material that speeds up the transmission of nerve signals — in areas associated with depression.

“We think this could be caused by the fact that late chronotypes suffer from this permanent jet lag,” Rosenberg says, although she cautions that further studies are needed to confirm cause and effect.

Read the entire article here.

Image courtesy of Google search.

Send to Kindle

Growing a Brain Outside of the Body

‘Tis the stuff of science fiction. And, it’s also quite real and happening in a lab near you.

From Technology Review:

Scientists at the Institute of Molecular Biotechnology in Vienna, Austria, have grown three-dimensional human brain tissues from stem cells. The tissues form discrete structures that are seen in the developing brain.

The Vienna researchers found that immature brain cells derived from stem cells self-organize into brain-like tissues in the right culture conditions. The “cerebral organoids,” as the researchers call them, grew to about four millimeters in size and could survive as long as 10 months. For decades, scientists have been able to take cells from animals including humans and grow them in a petri dish, but for the most part this has been done in two dimensions, with the cells grown in a thin layer in petri dishes. But in recent years, researchers have advanced tissue culture techniques so that three-dimensional brain tissue can grow in the lab. The new report from the Austrian team demonstrates that allowing immature brain cells to self-organize yields some of the largest and most complex lab-grown brain tissue, with distinct subregions and signs of functional neurons.

The work, published in Nature on Wednesday, is the latest advance in a field focused on creating more lifelike tissue cultures of neurons and related cells for studying brain function, disease, and repair. With a cultured cell model system that mimics the brain’s natural architecture, researchers would be able to look at how certain diseases occur and screen potential medications for toxicity and efficacy in a more natural setting, says Anja Kunze, a neuroengineer at the University of California, Los Angeles, who has developed three-dimensional brain tissue cultures to study Alzheimer’s disease.

The Austrian researchers coaxed cultured neurons to take on a three-dimensional organization using cell-friendly scaffolding materials in the cultures. The team also let the neuron progenitors control their own fate. “Stem cells have an amazing ability to self-organize,” said study first author Madeline Lancaster at a press briefing on Tuesday. Other groups have also recently seen success in allowing progenitor cells to self-organize, leading to reports of primitive eye structures, liver buds, and more (see “Growing Eyeballs” and “A Rudimentary Liver Is Grown from Stem Cells”).

The brain tissue formed discrete regions found in the early developing human brain, including regions that resemble parts of the cortex, the retina, and structures that produce cerebrospinal fluid. At the press briefing, senior author Juergen Knoblich said that while there have been numerous attempts to model human brain tissue in a culture using human cells, the complex human organ has proved difficult to replicate. Knoblich says the proto-brain resembles the developmental stage of a nine-week-old fetus’s brain.

While Knoblich’s group is focused on developmental questions, other groups are developing three-dimensional brain tissue cultures with the hopes of treating degenerative diseases or brain injury. A group at Georgia Institute of Technology has developed a three-dimensional neural culture to study brain injury, with the goal of identifying biomarkers that could be used to diagnose brain injury and potential drug targets for medications that can repair injured neurons. “It’s important to mimic the cellular architecture of the brain as much as possible because the mechanical response of that tissue is very dependent on its 3-D structure,” says biomedical engineer Michelle LaPlaca of Georgia Tech. Physical insults on cells in a three-dimensional culture will put stress on connections between cells and supporting material known as the extracellular matrix, she says.

Read the entire article here.

Image: Cerebral organoid derived from stem cells containing different brain regions. Courtesy of Japan Times.

Send to Kindle

Overcoming Right-handedness

When asked about handedness, Nick Moran over at The Millions says, “everybody’s born right-handed, but the best overcome it.” Funny. And now, perhaps, backed by more than a ring of truth.

Several meta-studies on the issue of handedness suggest that lefties may indeed have an advantage over their right-handed cousins in a specific kind of creative thinking known as divergent thinking: the ability to generate new ideas from a single principle quickly and effectively.

At last, left-handers can emerge from the shadow that once branded them as sinister degenerates and criminals. (We recommend you check the etymology of the word “sinister” for yourself.)

From the New Yorker:

Cesare Lombroso, the father of modern criminology, owes his career to a human skull. In 1871, as a young doctor at a mental asylum in Pavia, Italy, he autopsied the brain of Giuseppe Villela, a Calabrese peasant turned criminal, who has been described as an Italian Jack the Ripper. “At the sight of that skull,” Lombroso said, “I seemed to see all at once, standing out clearly illuminated as in a vast plain under a flaming sky, the problem of the nature of the criminal, who reproduces in civilised times characteristics, not only of primitive savages, but of still lower types as far back as the carnivora.”

Lombroso would go on to argue that the key to understanding the essence of criminality lay in organic, physical, and constitutional features—each defect being a throwback to a more primitive and bestial psyche. And while his original insight had come from a skull, certain telltale signs, he believed, could be discerned long before an autopsy. Chief among these was left-handedness.

In 1903, Lombroso summarized his views on the left-handed of the world. “What is sure,” he wrote, “is, that criminals are more often left-handed than honest men, and lunatics are more sensitively left-sided than either of the other two.” Left-handers were more than three times as common in criminal populations as they were in everyday life, he found. The prevalence among swindlers was even higher: up to thirty-three per cent were left-handed—in contrast to the four per cent Lombroso found within the normal population. He ended on a conciliatory note. “I do not dream at all of saying that all left-handed people are wicked, but that left-handedness, united to many other traits, may contribute to form one of the worst characters among the human species.”

Though Lombroso’s science may seem suspect to a modern eye, less-than-favorable views of the left-handed have persisted. In 1977, the psychologist Theodore Blau argued that left-handed children were over-represented among the academically and behaviorally challenged, and were more vulnerable to mental diseases like schizophrenia. “Sinister children,” he called them. The psychologist Stanley Coren, throughout the eighties and nineties, presented evidence that the left-handed lived shorter, more impoverished lives, and that they were more likely to experience delays in mental and physical maturity, among other signs of “neurological insult or physical malfunctioning.” Toward the end of his career, the Harvard University neurologist Norman Geschwind implicated left-handedness in a range of problematic conditions, including migraines, diseases of the immune system, and learning disorders. He attributed the phenomenon, and the related susceptibilities, to higher levels of testosterone in utero, which, he argued, slowed down the development of the brain’s left hemisphere (the one responsible for the right side of the body).

But over the past two decades, the data that seemed compelling have largely been discredited. In 1993, the psychologist Marian Annett, who has spent half a century researching “handedness,” as it is known, challenged the basic foundation of Coren’s findings. The data, she argued, were fundamentally flawed: it wasn’t the case that left-handers led shorter lives. Rather, the older you were, the more likely it was that you had been forced to use your right hand as a young child. The mental-health data have also withered: a 2010 analysis of close to fifteen hundred individuals that included schizophrenic patients and their non-affected siblings found that being left-handed neither increased the risk of developing schizophrenia nor predicted any other cognitive or neural disadvantage. And when a group of neurologists scanned the brains of four hundred and sixty-five adults, they found no effect of handedness on either grey or white matter volume or concentration, either globally or regionally.

Left-handers may, in fact, even derive certain cognitive benefits from their preference. This spring, a group of psychiatrists from the University of Athens invited a hundred university students and graduates—half left-handed and half right—to complete two tests of cognitive ability. In the Trail Making Test, participants had to find a path through a batch of circles as quickly as possible. In the hard version of the test, the circles contain numbers and letters, and participants must move in ascending order while alternating between the two as fast as possible. In the second test, Letter-Number Sequencing, participants hear a group of numbers and letters and must then repeat the whole group, but with numbers in ascending order and letters organized alphabetically. Lefties performed better on both the complex version of the T.M.T.—demonstrating faster and more accurate spatial skills, along with strong executive control and mental flexibility—and on the L.N.S., demonstrating enhanced working memory. And the more intensely they preferred their left hand for tasks, the stronger the effect.

The Athens study points to a specific kind of cognitive benefit, since both the T.M.T. and the L.N.S. are thought to engage, to a large extent, the right hemisphere of the brain. But a growing body of research suggests another, broader benefit: a boost in a specific kind of creativity—namely, divergent thinking, or the ability to generate new ideas from a single principle quickly and effectively. In one demonstration, researchers found that the more marked the left-handed preference in a group of males, the better they were at tests of divergent thought. (The demonstration was led by the very Coren who had originally argued for the left-handers’ increased susceptibility to mental illness.) Left-handers were more adept, for instance, at combining two common objects in novel ways to form a third—for example, using a pole and a tin can to make a birdhouse. They also excelled at grouping lists of words into as many alternate categories as possible. Another recent study has demonstrated an increased cognitive flexibility among the ambidextrous and the left-handed—and lefties have been found to be over-represented among architects, musicians, and art and music students (as compared to those studying science).

Part of the explanation for this creative edge may lie in the greater connectivity of the left-handed brain. In a meta-analysis of forty-three studies, the neurologist Naomi Driesen and the cognitive neuroscientist Naftali Raz concluded that the corpus callosum—the bundle of fibers that connects the brain’s hemispheres—was slightly but significantly larger in left-handers than in right-handers. The explanation could also be a much more prosaic one: in 1989, a group of Connecticut College psychologists suggested that the creativity boost was a result of the environment, since left-handers had to constantly improvise to deal with a world designed for right-handers. In a 2013 review of research into handedness and cognition, a group of psychologists found that the main predictor of cognitive performance wasn’t whether an individual was left-handed or right-handed, but rather how strongly they preferred one hand over another. Strongly handed individuals, both right and left, were at a slight disadvantage compared to those who occupied the middle ground—both the ambidextrous and the left-handed who, through years of practice, had been forced to develop their non-dominant right hand. In those less clear-cut cases, the brain’s hemispheres interacted more and overall performance improved, suggesting there may be something to the idea that left-handed brains are pushed in a way that right-handed ones never are.

Whatever the ultimate explanation may be, the advantage appears to extend to other types of thinking, too. In a 1986 study of students who had scored in the top of their age group on either the math or the verbal sections of the S.A.T., the prevalence of left-handers among the high achievers—over fifteen per cent, as compared to the roughly ten per cent found in the general population—was higher than in any comparison groups, which included their siblings and parents. Among those who had scored in the top in both the verbal and math sections, the percentage of left-handers jumped to nearly seventeen per cent, for males, and twenty per cent, for females. That advantage echoes an earlier sample of elementary-school children, which found increased left-handedness among children with I.Q. scores above a hundred and thirty-one.

Read the entire article here.

Image: Book cover – David Wolman’s new book, A Left Hand Turn Around the World, explores the scientific factors that lead to 10 percent of the human race being left-handed. Courtesy of NPR.

Send to Kindle

Dopamine on the Mind

Dopamine is one of the brain’s key signalling chemicals. And, because of its central role in the brain’s risk-and-reward circuitry, it gets much attention — both in neuroscience research and in the public consciousness.

From Slate:

In a brain that people love to describe as “awash with chemicals,” one chemical always seems to stand out. Dopamine: the molecule behind all our most sinful behaviors and secret cravings. Dopamine is love. Dopamine is lust. Dopamine is adultery. Dopamine is motivation. Dopamine is attention. Dopamine is feminism. Dopamine is addiction.

My, dopamine’s been busy.

Dopamine is the one neurotransmitter that everyone seems to know about. Vaughan Bell once called it the Kim Kardashian of molecules, but I don’t think that’s fair to dopamine. Suffice it to say, dopamine’s big. And every week or so, you’ll see a new article come out all about dopamine.

So is dopamine your cupcake addiction? Your gambling? Your alcoholism? Your sex life? The reality is dopamine has something to do with all of these. But it is none of them. Dopamine is a chemical in your body. That’s all. But that doesn’t make it simple.

What is dopamine? Dopamine is one of the chemical signals that pass information from one neuron to the next in the tiny spaces between them. When it is released from the first neuron, it floats into the space (the synapse) between the two neurons, and it bumps against receptors for it on the other side that then send a signal down the receiving neuron. That sounds very simple, but when you scale it up from a single pair of neurons to the vast networks in your brain, it quickly becomes complex. The effects of dopamine release depend on where it’s coming from, where the receiving neurons are going and what type of neurons they are, what receptors are binding the dopamine (there are five known types), and what role both the releasing and receiving neurons are playing.

And dopamine is busy! It’s involved in many different important pathways. But when most people talk about dopamine, particularly when they talk about motivation, addiction, attention, or lust, they are talking about the dopamine pathway known as the mesolimbic pathway, which starts with cells in the ventral tegmental area, buried deep in the middle of the brain, which send their projections out to places like the nucleus accumbens and the cortex. Increases in dopamine release in the nucleus accumbens occur in response to sex, drugs, and rock and roll. And dopamine signaling in this area is changed during the course of drug addiction.

All abused drugs, from alcohol to cocaine to heroin, increase dopamine in this area in one way or another, and many people like to describe a spike in dopamine as “motivation” or “pleasure.” But that’s not quite it. Really, dopamine is signaling feedback for predicted rewards. If you, say, have learned to associate a cue (like a crack pipe) with a hit of crack, you will start getting increases in dopamine in the nucleus accumbens in response to the sight of the pipe, as your brain predicts the reward. But if you then don’t get your hit, well, then dopamine can decrease, and that’s not a good feeling.

So you’d think that maybe dopamine predicts reward. But again, it gets more complex. For example, dopamine can increase in the nucleus accumbens in people with post-traumatic stress disorder when they are experiencing heightened vigilance and paranoia. So you might say, in this brain area at least, dopamine isn’t addiction or reward or fear. Instead, it’s what we call salience. Salience is more than attention: It’s a sign of something that needs to be paid attention to, something that stands out. This may be part of the mesolimbic role in attention deficit hyperactivity disorder and also a part of its role in addiction.
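The “feedback for predicted rewards” idea above is the reward-prediction-error hypothesis, and it can be sketched in a few lines of toy code. Everything here is illustrative (the function name, the learning rate, the numbers are my assumptions, not from the article): the “dopamine-like” signal tracks the gap between the reward received and the reward predicted, not the reward itself.

```python
def prediction_error_updates(rewards, alpha=0.5):
    """Toy Rescorla-Wagner-style learner: return the prediction-error
    signal on each trial as the prediction drifts toward the reward."""
    prediction = 0.0
    errors = []
    for r in rewards:
        delta = r - prediction        # the "dopamine-like" signal
        errors.append(delta)
        prediction += alpha * delta   # learning: prediction moves toward reward
    return errors

# Cue reliably followed by reward: the error (the "burst") shrinks each trial.
print(prediction_error_updates([1, 1, 1, 1]))
# Expected reward then omitted: the error goes negative - "not a good feeling."
print(prediction_error_updates([1, 1, 1, 0]))
```

The point of the sketch is the sign of the signal: a fully predicted reward produces almost no response, while an omitted one produces a dip below baseline, which matches the crack-pipe example in the quoted passage.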

But dopamine itself? It’s not salience. It has far more roles in the brain to play. For example, dopamine plays a big role in starting movement, and the destruction of dopamine neurons in an area of the brain called the substantia nigra is what produces the symptoms of Parkinson’s disease. Dopamine also plays an important role as a hormone, inhibiting prolactin to stop the release of breast milk. Back in the mesolimbic pathway, dopamine can play a role in psychosis, and many antipsychotics for treatment of schizophrenia target dopamine. Dopamine is involved in the frontal cortex in executive functions like attention. In the rest of the body, dopamine is involved in nausea, in kidney function, and in heart function.

With all of these wonderful, interesting things that dopamine does, it gets my goat to see dopamine simplified to things like “attention” or “addiction.” After all, it’s so easy to say “dopamine is X” and call it a day. It’s comforting. You feel like you know the truth at some fundamental biological level, and that’s that. And there are always enough studies out there showing the role of dopamine in X to leave you convinced. But simplifying dopamine, or any chemical in the brain, down to a single action or result gives people a false picture of what it is and what it does. If you think that dopamine is motivation, then more must be better, right? Not necessarily! Because if dopamine is also “pleasure” or “high,” then too much is far too much of a good thing. If you think of dopamine as only being about pleasure or only being about attention, you’ll end up with a false idea of some of the problems involving dopamine, like drug addiction or attention deficit hyperactivity disorder, and you’ll end up with false ideas of how to fix them.

Read the entire article here.

Image: 3D model of dopamine. Courtesy of Wikipedia.

Send to Kindle

Rewriting Memories

Important new research suggests that traumatic memories can be rewritten. Timing is critical.

From Technology Review:

It was a Saturday night at the New York Psychoanalytic Institute, and the second-floor auditorium held an odd mix of gray-haired, cerebral Upper East Side types and young, scruffy downtown grad students in black denim. Up on the stage, neuroscientist Daniela Schiller, a riveting figure with her long, straight hair and impossibly erect posture, paused briefly from what she was doing to deliver a mini-lecture about memory.

She explained how recent research, including her own, has shown that memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.

And then Schiller went back to what she had been doing, which was providing a slamming, rhythmic beat on drums and backup vocals for the Amygdaloids, a rock band composed of New York City neuroscientists. During their performance at the institute’s second annual “Heavy Mental Variety Show,” the band blasted out a selection of its greatest hits, including songs about cognition (“Theory of My Mind”), memory (“A Trace”), and psychopathology (“Brainstorm”).

“Just give me a pill,” Schiller crooned at one point, during the chorus of a song called “Memory Pill.” “Wash away my memories …”

The irony is that if research by Schiller and others holds up, you may not even need a pill to strip a memory of its power to frighten or oppress you.

Schiller, 40, has been in the vanguard of a dramatic reassessment of how human memory works at the most fundamental level. Her current lab group at Mount Sinai School of Medicine, her former colleagues at New York University, and a growing army of like-minded researchers have marshaled a pile of data to argue that we can alter the emotional impact of a memory by adding new information to it or recalling it in a different context. This hypothesis challenges 100 years of neuroscience and overturns cultural touchstones from Marcel Proust to best-selling memoirs. It changes how we think about the permanence of memory and identity, and it suggests radical nonpharmacological approaches to treating pathologies like post-traumatic stress disorder, other fear-based anxiety disorders, and even addictive behaviors.

In a landmark 2010 paper in Nature, Schiller (then a postdoc at New York University) and her NYU colleagues, including Joseph E. LeDoux and Elizabeth A. Phelps, published the results of human experiments indicating that memories are reshaped and rewritten every time we recall an event. And, the research suggested, if mitigating information about a traumatic or unhappy event is introduced within a narrow window of opportunity after its recall—during the few hours it takes for the brain to rebuild the memory in the biological brick and mortar of molecules—the emotional experience of the memory can essentially be rewritten.

“When you affect emotional memory, you don’t affect the content,” Schiller explains. “You still remember perfectly. You just don’t have the emotional memory.”

Fear training

The idea that memories are constantly being rewritten is not entirely new. Experimental evidence to this effect dates back at least to the 1960s. But mainstream researchers tended to ignore the findings for decades because they contradicted the prevailing scientific theory about how memory works.

That view began to dominate the science of memory at the beginning of the 20th century. In 1900, two German scientists, Georg Elias Müller and Alfons Pilzecker, conducted a series of human experiments at the University of Göttingen. Their results suggested that memories were fragile at the moment of formation but were strengthened, or consolidated, over time; once consolidated, these memories remained essentially static, permanently stored in the brain like a file in a cabinet from which they could be retrieved when the urge arose.

It took decades of painstaking research for neuroscientists to tease apart a basic mechanism of memory to explain how consolidation occurred at the level of neurons and proteins: an experience entered the neural landscape of the brain through the senses, was initially “encoded” in a central brain apparatus known as the hippocampus, and then migrated—by means of biochemical and electrical signals—to other precincts of the brain for storage. A famous chapter in this story was the case of “H.M.,” a young man whose hippocampus was removed during surgery in 1953 to treat debilitating epileptic seizures; although physiologically healthy for the remainder of his life (he died in 2008), H.M. was never again able to create new long-term memories, other than to learn new motor skills.

Subsequent research also made clear that there is no single thing called memory but, rather, different types of memory that achieve different biological purposes using different neural pathways. “Episodic” memory refers to the recollection of specific past events; “procedural” memory refers to the ability to remember specific motor skills like riding a bicycle or throwing a ball; fear memory, a particularly powerful form of emotional memory, refers to the immediate sense of distress that comes from recalling a physically or emotionally dangerous experience. Whatever the memory, however, the theory of consolidation argued that it was an unchanging neural trace of an earlier event, fixed in long-term storage. Whenever you retrieved the memory, whether it was triggered by an unpleasant emotional association or by the seductive taste of a madeleine, you essentially fetched a timeless narrative of an earlier event. Humans, in this view, were the sum total of their fixed memories. As recently as 2000 in Science, in a review article titled “Memory—A Century of Consolidation,” James L. McGaugh, a leading neuroscientist at the University of California, Irvine, celebrated the consolidation hypothesis for the way that it “still guides” fundamental research into the biological process of long-term memory.

As it turns out, Proust wasn’t much of a neuroscientist, and consolidation theory couldn’t explain everything about memory. This became apparent during decades of research into what is known as fear training.

Schiller gave me a crash course in fear training one afternoon in her Mount Sinai lab. One of her postdocs, Dorothee Bentz, strapped an electrode onto my right wrist in order to deliver a mild but annoying shock. She also attached sensors to several fingers on my left hand to record my galvanic skin response, a measure of physiological arousal and fear. Then I watched a series of images—blue and purple cylinders—flash by on a computer screen. It quickly became apparent that the blue cylinders often (but not always) preceded a shock, and my skin conductivity readings reflected what I’d learned. Every time I saw a blue cylinder, I became anxious in anticipation of a shock. The “learning” took no more than a couple of minutes, and Schiller pronounced my little bumps of anticipatory anxiety, charted in real time on a nearby monitor, a classic response of fear training. “It’s exactly the same as in the rats,” she said.

In the 1960s and 1970s, several research groups used this kind of fear memory in rats to detect cracks in the theory of memory consolidation. In 1968, for example, Donald J. Lewis of Rutgers University led a study showing that you could make the rats lose the fear associated with a memory if you gave them a strong electroconvulsive shock right after they were induced to retrieve that memory; the shock produced an amnesia about the previously learned fear. Giving a shock to animals that had not retrieved the memory, in contrast, did not cause amnesia. In other words, a strong shock timed to occur immediately after a memory was retrieved seemed to have a unique capacity to disrupt the memory itself and allow it to be reconsolidated in a new way. Follow-up work in the 1980s confirmed some of these observations, but they lay so far outside mainstream thinking that they barely received notice.

Moment of silence

At the time, Schiller was oblivious to these developments. A self-described skateboarding “science geek,” she grew up in Rishon LeZion, Israel’s fourth-largest city, on the coastal plain a few miles southeast of Tel Aviv. She was the youngest of four children of a mother from Morocco and a “culturally Polish” father from Ukraine—“a typical Israeli melting pot,” she says. As a tall, fair-skinned teenager with European features, she recalls feeling estranged from other neighborhood kids because she looked so German.

Schiller remembers exactly when her curiosity about the nature of human memory began. She was in the sixth grade, and it was the annual Holocaust Memorial Day in Israel. For a school project, she asked her father about his memories as a Holocaust survivor, and he shrugged off her questions. She was especially puzzled by her father’s behavior at 11 a.m., when a simultaneous eruption of sirens throughout Israel signals the start of a national moment of silence. While everyone else in the country stood up to honor the victims of genocide, he stubbornly remained seated at the kitchen table as the sirens blared, drinking his coffee and reading the newspaper.

“The Germans did something to my dad, but I don’t know what because he never talks about it,” Schiller told a packed audience in 2010 at The Moth, a storytelling event.

During her compulsory service in the Israeli army, she organized scientific and educational conferences, which led to studies in psychology and philosophy at Tel Aviv University; during that same period, she procured a set of drums and formed her own Hebrew rock band, the Rebellion Movement. Schiller went on to receive a PhD in psychobiology from Tel Aviv University in 2004. That same year, she recalls, she saw the movie Eternal Sunshine of the Spotless Mind, in which a young man undergoes treatment with a drug that erases all memories of a former girlfriend and their painful breakup. Schiller heard (mistakenly, it turns out) that the premise of the movie had been based on research conducted by Joe LeDoux, and she eventually applied to NYU for a postdoctoral fellowship.

In science as in memory, timing is everything. Schiller arrived in New York just in time for the second coming of memory reconsolidation in neuroscience.

Altering the story

The table had been set for Schiller’s work on memory modification in 2000, when Karim Nader, a postdoc in LeDoux’s lab, suggested an experiment testing the effect of a drug on the formation of fear memories in rats. LeDoux told Nader in no uncertain terms that he thought the idea was a waste of time and money. Nader did the experiment anyway. It ended up getting published in Nature and sparked a burst of renewed scientific interest in memory reconsolidation (see “Manipulating Memory,” May/June 2009).

The rats had undergone classic fear training—in an unpleasant twist on Pavlovian conditioning, they had learned to associate an auditory tone with an electric shock. But right after the animals retrieved the fearsome memory (the researchers knew they had done so because they froze when they heard the tone), Nader injected a drug that blocked protein synthesis directly into their amygdala, the part of the brain where fear memories are believed to be stored. Surprisingly, that appeared to pave over the fearful association. The rats no longer froze in fear of the shock when they heard the sound cue.

Decades of research had established that long-term memory consolidation requires the synthesis of proteins in the brain’s memory pathways, but no one knew that protein synthesis was required after the retrieval of a memory as well—which implied that the memory was being consolidated then, too. Nader’s experiments also showed that blocking protein synthesis prevented the animals from recalling the fearsome memory only if they received the drug at the right time, shortly after they were reminded of the fearsome event. If Nader waited six hours before giving the drug, it had no effect and the original memory remained intact. This was a big biochemical clue that at least some forms of memories essentially had to be neurally rewritten every time they were recalled.

When Schiller arrived at NYU in 2005, she was asked by Elizabeth Phelps, who was spearheading memory research in humans, to extend Nader’s findings and test the potential of a drug to block fear memories. The drug used in the rodent experiment was much too toxic for human use, but a class of antianxiety drugs known as beta-adrenergic antagonists (or, in common parlance, “beta blockers”) had potential; among these drugs was propranolol, which had previously been approved by the FDA for the treatment of panic attacks and stage fright. Schiller immediately set out to test the effect of propranolol on memory in humans, but she never actually performed the experiment because of prolonged delays in getting institutional approval for what was then a pioneering form of human experimentation. “It took four years to get approval,” she recalls, “and then two months later, they took away the approval again. My entire postdoc was spent waiting for this experiment to be approved.” (“It still hasn’t been approved!” she adds.)

While waiting for the approval that never came, Schiller began to work on a side project that turned out to be even more interesting. It grew out of an offhand conversation with a colleague about some anomalous data described at a meeting of LeDoux’s lab: a group of rats “didn’t behave as they were supposed to” in a fear experiment, Schiller says.

The data suggested that a fear memory could be disrupted in animals even without the use of a drug that blocked protein synthesis. Schiller used the kernel of this idea to design a set of fear experiments in humans, while Marie-H. Monfils, a member of the LeDoux lab, simultaneously pursued a parallel line of experimentation in rats. In the human experiments, volunteers were shown a blue square on a computer screen and then given a shock. Once the blue square was associated with an impending shock, the fear memory was in place. Schiller went on to show that if she repeated the sequence that produced the fear memory the following day but broke the association within a narrow window of time—that is, showed the blue square without delivering the shock—this new information was incorporated into the memory.

Here, too, the timing was crucial. If the blue square that wasn’t followed by a shock was shown within 10 minutes of the initial memory recall, the human subjects reconsolidated the memory without fear. If it happened six hours later, the initial fear memory persisted. Put another way, intervening during the brief window when the brain was rewriting its memory offered a chance to revise the initial memory itself while diminishing the emotion (fear) that came with it. By mastering the timing, the NYU group had essentially created a scenario in which humans could rewrite a fearsome memory and give it an unfrightening ending. And this new ending was robust: when Schiller and her colleagues called their subjects back into the lab a year later, they were able to show that the fear associated with the memory was still blocked.
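The timing logic described above can be sketched as a toy rule. The hard six-hour cutoff and the function name are my illustrative assumptions: the article only contrasts an extinction trial at ten minutes (fear blocked) with one at six hours (fear persists), and the real window presumably closes gradually rather than at a sharp boundary.

```python
from datetime import timedelta

# Illustrative assumption: treat the reconsolidation window as a hard
# six-hour cutoff after memory recall, per the contrast in the article.
RECONSOLIDATION_WINDOW = timedelta(hours=6)

def memory_after_extinction(time_since_recall):
    """Toy model of the NYU result: showing the cue *without* the shock
    rewrites the fear memory only while the recalled memory is still
    labile, i.e. within the reconsolidation window."""
    if time_since_recall < RECONSOLIDATION_WINDOW:
        return "fear blocked"   # new information folded into the memory
    return "fear persists"      # memory already re-stored unchanged

print(memory_after_extinction(timedelta(minutes=10)))
print(memory_after_extinction(timedelta(hours=6)))
```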

The study, published in Nature in 2010, made clear that reconsolidation of memory didn’t occur only in rats.

Read the entire article here.

Send to Kindle

Dead Man Talking

Graham is very much alive. But his mind has convinced him that his brain is dead and that he killed it.

From the New Scientist:

Name: Graham
Condition: Cotard’s syndrome

“When I was in hospital I kept on telling them that the tablets weren’t going to do me any good ’cause my brain was dead. I lost my sense of smell and taste. I didn’t need to eat, or speak, or do anything. I ended up spending time in the graveyard because that was the closest I could get to death.”

Nine years ago, Graham woke up and discovered he was dead.

He was in the grip of Cotard’s syndrome. People with this rare condition believe that they, or parts of their body, no longer exist.

For Graham, it was his brain that was dead, and he believed that he had killed it. Suffering from severe depression, he had tried to commit suicide by taking an electrical appliance with him into the bath.

Eight months later, he told his doctor his brain had died or was, at best, missing. “It’s really hard to explain,” he says. “I just felt like my brain didn’t exist any more. I kept on telling the doctors that the tablets weren’t going to do me any good because I didn’t have a brain. I’d fried it in the bath.”

Doctors found trying to rationalise with Graham was impossible. Even as he sat there talking, breathing – living – he could not accept that his brain was alive. “I just got annoyed. I didn’t know how I could speak or do anything with no brain, but as far as I was concerned I hadn’t got one.”

Baffled, they eventually put him in touch with neurologists Adam Zeman at the University of Exeter, UK, and Steven Laureys at the University of Liège in Belgium.

“It’s the first and only time my secretary has said to me: ‘It’s really important for you to come and speak to this patient because he’s telling me he’s dead,'” says Laureys.

Limbo state

“He was a really unusual patient,” says Zeman. Graham’s belief “was a metaphor for how he felt about the world – his experiences no longer moved him. He felt he was in a limbo state caught between life and death”.

No one knows how common Cotard’s syndrome may be. A study published in 1995 of 349 elderly psychiatric patients in Hong Kong found two with symptoms resembling Cotard’s (General Hospital Psychiatry, DOI: 10.1016/0163-8343(94)00066-M). But with successful and quick treatments for mental states such as depression – the condition from which Cotard’s appears to arise most often – readily available, researchers suspect the syndrome is exceptionally rare today. Most academic work on the syndrome is limited to single case studies like Graham.

Some people with Cotard’s have reportedly died of starvation, believing they no longer needed to eat. Others have attempted to get rid of their body using acid, which they saw as the only way they could free themselves of being the “walking dead”.

Graham’s brother and carers made sure he ate, and looked after him. But it was a joyless existence. “I didn’t want to face people. There was no point,” he says, “I didn’t feel pleasure in anything. I used to idolise my car, but I didn’t go near it. All the things I was interested in went away.”

Even the cigarettes he used to relish no longer gave him a hit. “I lost my sense of smell and my sense of taste. There was no point in eating because I was dead. It was a waste of time speaking as I never had anything to say. I didn’t even really have any thoughts. Everything was meaningless.”

Low metabolism

A peek inside Graham’s brain provided Zeman and Laureys with some explanation. They used positron emission tomography to monitor metabolism across his brain. It was the first PET scan ever taken of a person with Cotard’s. What they found was shocking: metabolic activity across large areas of the frontal and parietal brain regions was so low that it resembled that of someone in a vegetative state.

Graham says he didn’t really have any thoughts about his future during that time. “I had no other option other than to accept the fact that I had no way to actually die. It was a nightmare.”

Graveyard haunt

This feeling prompted him on occasion to visit the local graveyard. “I just felt I might as well stay there. It was the closest I could get to death. The police would come and get me, though, and take me back home.”

There were some unexplained consequences of the disorder. Graham says he used to have “nice hairy legs”. But after he got Cotard’s, all the hairs fell out. “I looked like a plucked chicken! Saves shaving them I suppose…”

It’s nice to hear him joke. Over time, and with a lot of psychotherapy and drug treatment, Graham has gradually improved and is no longer in the grip of the disorder. He is now able to live independently. “His Cotard’s has ebbed away and his capacity to take pleasure in life has returned,” says Zeman.

“I couldn’t say I’m really back to normal, but I feel a lot better now and go out and do things around the house,” says Graham. “I don’t feel that brain-dead any more. Things just feel a bit bizarre sometimes.” And has the experience changed his feeling about death? “I’m not afraid of death,” he says. “But that’s not to do with what happened – we’re all going to die sometime. I’m just lucky to be alive now.”

Read the entire article here.

Image courtesy of Wikimedia / Public domain.

Send to Kindle

Age is All in the Mind (Hypothalamus)

Researchers are continuing to make great progress in unraveling the complexities of aging. While some fingers point to the shortening of telomeres — end caps — in our chromosomal DNA as a contributing factor, other research points to the hypothalamus. This small sub-region of the brain has been found to play a major role in aging and death (though, at the moment only in mice).

From the New Scientist:

The brain’s mechanism for controlling ageing has been discovered – and manipulated to shorten and extend the lives of mice. Drugs to slow ageing could follow.

Tick tock, tick tock… A mechanism that controls ageing, counting down to inevitable death, has been identified in the hypothalamus – a part of the brain that controls most of the basic functions of life.

By manipulating this mechanism, researchers have both shortened and lengthened the lifespan of mice. The discovery reveals several new drug targets that, if not quite an elixir of youth, may at least delay the onset of age-related disease.

The hypothalamus is an almond-sized puppetmaster in the brain. “It has a global effect,” says Dongsheng Cai at the Albert Einstein College of Medicine in New York. Sitting on top of the brain stem, it is the interface between the brain and the rest of the body, and is involved in, among other things, controlling our automatic response to the world around us, our hormone levels, sleep-wake cycles, immunity and reproduction.

While investigating ageing processes in the brain, Cai and his colleagues noticed that ageing mice produce increasing levels of nuclear factor kB (NF-kB) – a protein complex that plays a major role in regulating immune responses. NF-kB is barely active in the hypothalamus of 3- to 4-month-old mice but becomes very active in old mice, aged 22 to 24 months.

To see whether it was possible to affect ageing by manipulating levels of this protein complex, Cai’s team tested three groups of middle-aged mice. One group was given gene therapy that inhibits NF-kB, the second had gene therapy to activate NF-kB, while the third was left to age naturally.

This last group lived, as expected, between 600 and 1000 days. Mice with activated NF-kB all died within 900 days, while the animals with NF-kB inhibition lived for up to 1100 days.

Crucially, the mice that lived the longest not only increased their lifespan but also remained mentally and physically fit for longer. Six months after receiving gene therapy, all the mice were given a series of tests involving cognitive and physical ability.

In all of the tests, the mice that subsequently lived the longest outperformed the controls, while the short-lived mice performed the worst.

Post-mortem examinations of muscle and bone in the longest-living rodents also showed that they had many chemical and physical qualities of younger mice.

Further investigation revealed that NF-kB reduces the level of a chemical produced by the hypothalamus called gonadotropin-releasing hormone (GnRH) – better known for its involvement in the regulation of puberty and fertility, and the production of eggs and sperm.

To see if they could control lifespan using this hormone, the team gave another group of mice – 20 to 24 months old – daily subcutaneous injections of GnRH for five to eight weeks. These mice lived longer too, by a length of time similar to that of mice with inhibited NF-kB.

GnRH injections also resulted in new neurons in the brain. What’s more, when injected directly into the hypothalamus, GnRH influenced other brain regions, reversing widespread age-related decline and further supporting the idea that the hypothalamus could be a master controller for many ageing processes.

GnRH injections even delayed ageing in the mice that had been given gene therapy to activate NF-kB and would otherwise have aged more quickly than usual. None of the mice in the study showed serious side effects.

So could regular doses of GnRH keep death at bay? Cai hopes to find out how different doses affect lifespan, but says the hormone is unlikely to prolong life indefinitely since GnRH is only one of many factors at play. “Ageing is the most complicated biological process,” he says.
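Taking the mouse-lifespan figures quoted above at face value, the relative effect sizes are easy to make explicit. The day counts come from the article; the percentage comparisons are our own back-of-the-envelope arithmetic, not study results:

```python
# Lifespan figures quoted from the New Scientist excerpt above.
control_max = 1000    # natural ageing: mice lived 600-1000 days
activated_max = 900   # NF-kB activated: all died within 900 days
inhibited_max = 1100  # NF-kB inhibited: lived for up to 1100 days

def pct_change(new, old):
    """Percent change of `new` relative to `old`."""
    return 100.0 * (new - old) / old

print(pct_change(inhibited_max, control_max))  # 10.0
print(pct_change(activated_max, control_max))  # -10.0
```

So, at the top of each range, inhibiting NF-kB bought the mice roughly a 10 per cent extension, while activating it cost at least as much.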

Read the entire article after the jump.

Image: Location of Hypothalamus. Courtesy of Colorado State University / Wikipedia.

Send to Kindle

Criminology and Brain Science

Pathological criminals and the non-criminals who seek to understand them have no doubt co-existed since humans first learned to steal from and murder one another.

So while we may be no clearer in fully understanding the underlying causes of anti-social, destructive and violent behavior, many researchers continue their quests. In one camp are those who maintain that such behavior is learned, or comes as a consequence of poor choices, life events (usually traumatic), or exposure to an acute psychological or physiological stressor. In the other camp are those who argue that genes and their subsequent expression, especially those controlling brain function, are a principal cause.

Some recent neurological studies of criminals and psychopaths show fascinating, though not unequivocal, results.

From the Wall Street Journal:

The scientific study of crime got its start on a cold, gray November morning in 1871, on the east coast of Italy. Cesare Lombroso, a psychiatrist and prison doctor at an asylum for the criminally insane, was performing a routine autopsy on an infamous Calabrian brigand named Giuseppe Villella. Lombroso found an unusual indentation at the base of Villella’s skull. From this singular observation, he would go on to become the founding father of modern criminology.

Lombroso’s controversial theory had two key points: that crime originated in large measure from deformities of the brain and that criminals were an evolutionary throwback to more primitive species. Criminals, he believed, could be identified on the basis of physical characteristics, such as a large jaw and a sloping forehead. Based on his measurements of such traits, Lombroso created an evolutionary hierarchy, with Northern Italians and Jews at the top and Southern Italians (like Villella), along with Bolivians and Peruvians, at the bottom.

These beliefs, based partly on pseudoscientific phrenological theories about the shape and size of the human head, flourished throughout Europe in the late 19th and early 20th centuries. Lombroso was Jewish and a celebrated intellectual in his day, but the theory he spawned turned out to be socially and scientifically disastrous, not least by encouraging early-20th-century ideas about which human beings were and were not fit to reproduce—or to live at all.

The racial side of Lombroso’s theory fell into justifiable disrepute after the horrors of World War II, but his emphasis on physiology and brain traits has proved to be prescient. Modern-day scientists have now developed a far more compelling argument for the genetic and neurological components of criminal behavior. They have uncovered, quite literally, the anatomy of violence, at a time when many of us are preoccupied by the persistence of violent outrages in our midst.

The field of neurocriminology—using neuroscience to understand and prevent crime—is revolutionizing our understanding of what drives “bad” behavior. More than 100 studies of twins and adopted children have confirmed that about half of the variance in aggressive and antisocial behavior can be attributed to genetics. Other research has begun to pinpoint which specific genes promote such behavior.

Brain-imaging techniques are identifying physical deformations and functional abnormalities that predispose some individuals to violence. In one recent study, brain scans correctly predicted which inmates in a New Mexico prison were most likely to commit another crime after release. Nor is the story exclusively genetic: A poor environment can change the early brain and make for antisocial behavior later in life.

Most people are still deeply uncomfortable with the implications of neurocriminology. Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. Both sides fear any seeming effort to erode the idea of human agency and free will.

It is growing harder and harder, however, to avoid the mounting evidence. With each passing year, neurocriminology is winning new adherents, researchers and practitioners who understand its potential to transform our approach to both crime prevention and criminal justice.

The genetic basis of criminal behavior is now well established. Numerous studies have found that identical twins, who have all of their genes in common, are much more similar to each other in terms of crime and aggression than are fraternal twins, who share only 50% of their genes.

In a landmark 1984 study, my colleague Sarnoff Mednick found that children in Denmark who had been adopted from parents with a criminal record were more likely to become criminals in adulthood than were other adopted kids. The more offenses the biological parents had, the more likely it was that their offspring would be convicted of a crime. For biological parents who had no offenses, 13% of their sons had been convicted; for biological parents with three or more offenses, 25% of their sons had been convicted.

As for environmental factors that affect the young brain, lead is neurotoxic and particularly damages the prefrontal region, which regulates behavior. Measured lead levels in our bodies tend to peak at 21 months—an age when toddlers are apt to put their fingers into their mouths. Children generally pick up lead in soil that has been contaminated by air pollution and dumping.

Rising lead levels in the U.S. from 1950 through the 1970s neatly track increases in violence 20 years later, from the ’70s through the ’90s. (Violence peaks when individuals are in their late teens and early 20s.) As lead in the environment fell in the ’70s and ’80s—thanks in large part to the regulation of gasoline—violence fell correspondingly. No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.

Lead isn’t the only culprit. Other factors linked to higher aggression and violence in adulthood include smoking and drinking by the mother before birth, complications during birth and poor nutrition early in life.

Genetics and environment may work together to encourage violent behavior. One pioneering study in 2002 by Avshalom Caspi and Terrie Moffitt of Duke University genotyped over 1,000 individuals in a community in New Zealand and assessed their levels of antisocial behavior in adulthood. They found that a genotype conferring low levels of the enzyme monoamine oxidase A (MAOA), when combined with early child abuse, predisposed the individual to later antisocial behavior. Low MAOA has been linked to reduced volume in the amygdala—the emotional center of the brain—while physical child abuse can damage the frontal part of the brain, resulting in a double hit.

Brain-imaging studies have also documented impairments in offenders. Murderers, for instance, tend to have poorer functioning in the prefrontal cortex—the “guardian angel” that keeps the brakes on impulsive, disinhibited behavior and volatile emotions.
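The dose-response pattern in Mednick's adoption data can be condensed into a single relative-risk figure. The two conviction rates are from the excerpt above; the ratio is our own calculation:

```python
# Conviction rates quoted from the 1984 Danish adoption study above.
rate_no_offenses = 0.13  # sons of biological parents with no offenses
rate_three_plus = 0.25   # sons of biological parents with 3+ offenses

# Relative risk: how many times likelier conviction was for the second group.
relative_risk = rate_three_plus / rate_no_offenses
print(round(relative_risk, 2))  # 1.92
```

In other words, having biological parents with three or more offenses roughly doubled the odds of a son being convicted, even when raised in an adoptive family.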

Read the entire article following the jump.

Image: The Psychopath Test by Jon Ronson, book cover. Courtesy of Goodreads.

Send to Kindle

Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.

Send to Kindle

Ray Kurzweil and Living a Googol Years

By all accounts serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and the music synthesizer. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’ “

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.
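Kurzweil's "linear vs. exponential" point from the excerpt above is easy to see numerically. A minimal sketch, assuming a Moore's-law-style doubling of computation per dollar every 18 months — the doubling period is our assumption for illustration, not a figure from the interview:

```python
def fold_improvement(years, doubling_years=1.5):
    """Fold-increase in computation per dollar after `years`,
    assuming a fixed doubling period (our assumption)."""
    return 2 ** (years / doubling_years)

# Linear intuition says 40 years of progress means roughly 40x improvement;
# a fixed doubling period instead compounds to about a hundred-million-fold.
print(f"{fold_improvement(40):.2e}")
```

It is that gap, between the roughly 40x our intuition expects and the hundred-million-fold that compounding delivers, that makes his forecasts sound implausible to a brain wired for linear change.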

Read the entire article after the jump.

Send to Kindle

The Benefits of Human Stupidity

Human intelligence is a wonderful thing. At both the individual and collective level it drives our complex communication, our fundamental discoveries and inventions, and our impressive and accelerating progress. Intelligence allows us to innovate, to design, to build; and it underlies our superior capacity, relative to other animals, for empathy, altruism, art, and social and cultural evolution. Yet, despite our intellectual abilities and seemingly limitless potential, we humans still do lots of stupid things. Why is this?

From New Scientist:

“EARTH has its boundaries, but human stupidity is limitless,” wrote Gustave Flaubert. He was almost unhinged by the fact. Colourful fulminations about his fatuous peers filled his many letters to Louise Colet, the French poet who inspired his novel Madame Bovary. He saw stupidity everywhere, from the gossip of middle-class busybodies to the lectures of academics. Not even Voltaire escaped his critical eye. Consumed by this obsession, he devoted his final years to collecting thousands of examples for a kind of encyclopedia of stupidity. He died before his magnum opus was complete, and some attribute his sudden death, aged 58, to the frustration of researching the book.

Documenting the extent of human stupidity may itself seem a fool’s errand, which could explain why studies of human intellect have tended to focus on the high end of the intelligence spectrum. And yet, the sheer breadth of that spectrum raises many intriguing questions. If being smart is such an overwhelming advantage, for instance, why aren’t we all uniformly intelligent? Or are there drawbacks to being clever that sometimes give slower thinkers the upper hand? And why are even the smartest people prone to – well, stupidity?

It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid. Understanding the factors that lead clever people to make bad decisions is beginning to shed light on many of society’s biggest catastrophes, including the recent economic crisis. More intriguingly, the latest research may suggest ways to evade a condition that can plague us all.

The idea that intelligence and stupidity are simply opposing ends of a single spectrum is a surprisingly modern one. The Renaissance theologian Erasmus painted Folly – or Stultitia in Latin – as a distinct entity in her own right, descended from the god of wealth and the nymph of youth; others saw it as a combination of vanity, stubbornness and imitation. It was only in the middle of the 18th century that stupidity became conflated with mediocre intelligence, says Matthijs van Boxsel, a Dutch historian who has written many books about stupidity. “Around that time, the bourgeoisie rose to power, and reason became a new norm with the Enlightenment,” he says. “That put every man in charge of his own fate.”

Modern attempts to study variations in human ability tended to focus on IQ tests that put a single number on someone’s mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett at the University of Michigan in Ann Arbor. “If you have an IQ of 120, calculus is easy. If it’s 100, you can learn it but you’ll have to be motivated to put in a lot of work. If your IQ is 70, you have no chance of grasping calculus.” The measure seems to predict academic and professional success.

Various factors will determine where you lie on the IQ scale. Possibly a third of the variation in our intelligence is down to the environment in which we grow up – nutrition and education, for example. Genes, meanwhile, contribute more than 40 per cent of the differences between two people.

These differences may manifest themselves in our brain’s wiring. Smarter brains seem to have more efficient networks of connections between neurons. That may determine how well someone is able to use their short-term “working” memory to link disparate ideas and quickly access problem-solving strategies, says Jennie Ferrell, a psychologist at the University of the West of England in Bristol. “Those neural connections are the biological basis for making efficient mental connections.”

This variation in intelligence has led some to wonder whether superior brain power comes at a cost – otherwise, why haven’t we all evolved to be geniuses? Unfortunately, evidence is in short supply. For instance, some proposed that depression may be more common among more intelligent people, leading to higher suicide rates, but no studies have managed to support the idea. One of the only studies to report a downside to intelligence found that soldiers with higher IQs were more likely to die during the second world war. The effect was slight, however, and other factors might have skewed the data.

Intellectual wasteland

Alternatively, the variation in our intelligence may have arisen from a process called “genetic drift”, after human civilisation eased the challenges driving the evolution of our brains. Gerald Crabtree at Stanford University in California is one of the leading proponents of this idea. He points out that our intelligence depends on around 2000 to 5000 constantly mutating genes. In the distant past, people whose mutations had slowed their intellect would not have survived to pass on their genes; but Crabtree suggests that as human societies became more collaborative, slower thinkers were able to piggyback on the success of those with higher intellect. In fact, he says, someone plucked from 1000 BC and placed in modern society would be “among the brightest and most intellectually alive of our colleagues and companions” (Trends in Genetics, vol 29, p 1).

This theory is often called the “idiocracy” hypothesis, after the eponymous film, which imagines a future in which the social safety net has created an intellectual wasteland. Although it has some supporters, the evidence is shaky. We can’t easily estimate the intelligence of our distant ancestors, and the average IQ has in fact risen slightly in the immediate past. At the very least, “this disproves the fear that less intelligent people have more children and therefore the national intelligence will fall”, says psychologist Alan Baddeley at the University of York, UK.

In any case, such theories on the evolution of intelligence may need a radical rethink in the light of recent developments, which have led many to speculate that there are more dimensions to human thinking than IQ measures. Critics have long pointed out that IQ scores can easily be skewed by factors such as dyslexia, education and culture. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux Indian,” says Nisbett. Additionally, people with scores as low as 80 can still speak multiple languages and even, in the case of one British man, engage in complex financial fraud. Conversely, high IQ is no guarantee that a person will act rationally – think of the brilliant physicists who insist that climate change is a hoax.

It was this inability to weigh up evidence and make sound decisions that so infuriated Flaubert. Unlike the French writer, however, many scientists avoid talking about stupidity per se – “the term is unscientific”, says Baddeley. However, Flaubert’s understanding that profound lapses in logic can plague the brightest minds is now getting attention. “There are intelligent people who are stupid,” says Dylan Evans, a psychologist and author who studies emotion and intelligence.
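The "genetic drift" mechanism Crabtree invokes in the excerpt above is a standard population-genetics idea, and a toy Wright-Fisher simulation (a textbook model, not anything from the article) illustrates it: once selection on a variant is relaxed, its frequency wanders at random and can fix or vanish by chance alone.

```python
import random

def wright_fisher(pop_size, freq, generations, seed=1):
    """Neutral Wright-Fisher drift: each generation is a binomial
    resample of the current allele frequency, with no selection."""
    rng = random.Random(seed)
    for _ in range(generations):
        carriers = sum(rng.random() < freq for _ in range(pop_size))
        freq = carriers / pop_size
        if freq in (0.0, 1.0):  # variant lost or fixed; drift stops
            break
    return freq

# With selection switched off, a variant starting at 50% frequency
# wanders randomly and often fixes or disappears by chance alone.
print(wright_fisher(pop_size=200, freq=0.5, generations=2000))
```

The smaller the effective population and the weaker the selection, the faster such random wandering dominates, which is the sense in which easier living conditions could let intelligence-related variants drift.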

Read the entire article after the jump.

Send to Kindle

Chocolate for the Soul and Mind (But Not Body)

Hot on the heels of the recent research finding that the Mediterranean diet improves heart health comes news that choc-a-holics the world over have been anxiously awaiting — chocolate improves brain function.

Researchers have found that chocolate rich in compounds known as flavanols can improve cognitive function. Now, before you rush out the door to the local grocery store to purchase a mountain of Mars bars (perhaps not coincidentally, Mars, Inc., partly funded the research study), Godiva pralines, Cadbury flakes or a slab of Dove, take note that not all chocolate is created equal. Flavanols tend to be found in highest concentrations in raw cocoa. In fact, during the making of most chocolate, including the dark kind, the flavanols tend to be removed or destroyed. Perhaps the silver lining here is that to replicate the dose of flavanols found to have a positive effect on brain function, you would have to eat around 20 bars of chocolate per day for several months. This may be good news for your brain, but not your waistline!

From Scientific American:

It’s news chocolate lovers have been craving: raw cocoa may be packed with brain-boosting compounds. Researchers at the University of L’Aquila in Italy, with scientists from Mars, Inc., and their colleagues published findings last September that suggest cognitive function in the elderly is improved by ingesting high levels of natural compounds found in cocoa called flavanols. The study included 90 individuals with mild cognitive impairment, a precursor to Alzheimer’s disease. Subjects who drank a cocoa beverage containing either moderate or high levels of flavanols daily for eight weeks demonstrated greater cognitive function than those who consumed low levels of flavanols on three separate tests that measured factors that included verbal fluency, visual searching and attention.

Exactly how cocoa causes these changes is still unknown, but emerging research points to one flavanol in particular: (-)-epicatechin, pronounced “minus epicatechin.” Its name signifies its structure, differentiating it from other catechins, organic compounds highly abundant in cocoa and present in apples, wine and tea. The graph below shows how (-)-epicatechin fits into the world of brain-altering food molecules. Other studies suggest that the compound supports increased circulation and the growth of blood vessels, which could explain improvements in cognition, because better blood flow would bring the brain more oxygen and improve its function.

Animal research has already demonstrated how pure (-)-epicatechin enhances memory. Findings published last October in the Journal of Experimental Biology note that snails can remember a trained task—such as holding their breath in deoxygenated water—for more than a day when given (-)-epicatechin but for less than three hours without the flavanol. Salk Institute neuroscientist Fred Gage and his colleagues found previously that (-)-epicatechin improves spatial memory and increases vasculature in mice. “It’s amazing that a single dietary change could have such profound effects on behavior,” Gage says. If further research confirms the compound’s cognitive effects, flavanol supplements—or raw cocoa beans—could be just what the doctor ordered.

So, Can We Binge on Chocolate Now?

Nope, sorry. A food’s origin, processing, storage and preparation can each alter its chemical composition. As a result, it is nearly impossible to predict which flavanols—and how many—remain in your bonbon or cup of tea. Tragically for chocoholics, most methods of processing cocoa remove many of the flavanols found in the raw plant. Even dark chocolate, touted as the “healthy” option, can be treated such that the cocoa darkens while flavanols are stripped.

Researchers are only beginning to establish standards for measuring flavanol content in chocolate. A typical one-and-a-half-ounce chocolate bar might contain about 50 milligrams of flavanols, which means you would need to consume 10 to 20 bars daily to approach the flavanol levels used in the University of L’Aquila study. At that point, the sugars and fats in these sweet confections would probably outweigh any possible brain benefits. Mars Botanical nutritionist and toxicologist Catherine Kwik-Uribe, an author on the University of L’Aquila study, says, “There’s now even more reasons to enjoy tea, apples and chocolate. But diversity and variety in your diet remain key.”
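The bar count quoted above follows from simple arithmetic. As a minimal sketch (assuming, per the 10-to-20-bars figure, that the study doses were on the order of 500 to 1,000 milligrams of flavanols per day; the exact doses are not given here):

```python
# Back-of-envelope check: bars needed per day to match an assumed study dose.
FLAVANOLS_PER_BAR_MG = 50  # typical 1.5 oz chocolate bar, per the article

# Assumed daily doses implied by "10 to 20 bars" -- illustrative only.
for daily_dose_mg in (500, 1000):
    bars = daily_dose_mg / FLAVANOLS_PER_BAR_MG
    print(f"{daily_dose_mg} mg/day is roughly {bars:.0f} bars")
```

At 10 to 20 bars a day, the sugar and fat add up quickly, which is the article’s point.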

Read the entire article after the jump.

Image courtesy of Google Search.


Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future, construct explanations for the past, and concoct an entirely fictitious notion of the present. On the surface this makes our lives seem like nothing more than a construction taken right out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain, it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

From the New Scientist:

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

Read the entire article after the jump.


Your Brain and Politics

New research out of the University of Exeter in Britain and the University of California, San Diego, shows that liberals and conservatives really do have different brains. In fact, activity in specific areas of the brain can be used to predict whether a person leans to the left or to the right with an accuracy of just under 83 percent. This means that a brain scan could predict your politics more accurately than your parents’ political persuasions can (a predictor that is accurate around 70 percent of the time).

From Smithsonian:

If you want to know people’s politics, tradition said to study their parents. In fact, the party affiliation of someone’s parents can predict the child’s political leanings around 70 percent of the time.

But new research, published yesterday in the journal PLOS ONE, suggests what mom and dad think isn’t the endgame when it comes to shaping a person’s political identity. Ideological differences between partisans may reflect distinct neural processes, and they can predict who’s right and who’s left of center with 82.9 percent accuracy, outperforming the “your parents pick your party” model. It also out-predicts another neural model based on differences in brain structure, which distinguishes liberals from conservatives with 71.6 percent accuracy.

The study matched publicly available party registration records with the names of 82 American participants whose risk-taking behavior during a gambling experiment was monitored by brain scans. The researchers found that liberals and conservatives don’t differ in the risks they do or don’t take, but their brain activity does vary while they’re making decisions.

The idea that the brains of Democrats and Republicans may be hard-wired to their beliefs is not new. Previous research has shown that during MRI scans, areas linked to broad social connectedness, which involves friends and the world at large, light up in Democrats’ brains. Republicans, on the other hand, show more neural activity in parts of the brain associated with tight social connectedness, which focuses on family and country.

Other scans have shown that brain regions associated with risk and uncertainty, such as the fear-processing amygdala, differ in structure in liberals and conservatives. And different architecture means different behavior. Liberals tend to seek out novelty and uncertainty, while conservatives exhibit strong changes in attitude to threatening situations. The former are more willing to accept risk, while the latter tend to have more intense physical reactions to threatening stimuli.

Building on this, the new research shows that Democrats exhibited significantly greater activity in the left insula, a region associated with social and self-awareness, during the task. Republicans, however, showed significantly greater activity in the right amygdala, a region involved in our fight-or-flight response system.

“If you went to Vegas, you won’t be able to tell who’s a Democrat or who’s a Republican, but the fact that being a Republican changes how your brain processes risk and gambling is really fascinating,” says lead researcher Darren Schreiber, a University of Exeter professor who’s currently teaching at Central European University in Budapest. “It suggests that politics alters our worldview and alters the way our brains process.”

Read the entire article following the jump.

Image: Sagittal brain MRI. Courtesy of Wikipedia.
