Tag Archives: brain

Are You Smarter Than My Octopus?


My pet octopus has moods. It can change the color of its skin on demand. It watches me with its huge eyes. It’s inquisitive and can manipulate objects. Importantly, my octopus has around half a billion neurons in its brain, compared with around 100 billion in mine, and around 50 million in your pet gerbil.

Ok, let me stop for a moment. I don’t actually have a pet octopus. But the rest is true — about the octopus’ remarkable abilities. So, does it have a mind and is it sentient?

From the Atlantic:

Drawing on the work of other researchers, from primatologists to fellow octopologists and philosophers, Godfrey-Smith suggests two reasons for the large nervous system of the octopus. One has to do with its body. For an animal like a cat or a human, details of the skeleton dictate many of the motions the animal can make. You can’t roll your arm into a neat spiral from wrist to shoulder— your bones and joints get in the way. An octopus, having no skeleton, has no such constraint. It can, and frequently does, roll up some of its arms; or it can choose to make one (or several) of them stiff, creating an elbow. Surely the animal needs a huge number of neurons merely to be well coordinated when roaming about the reef.

At the same time, octopuses are versatile predators, eating a wide variety of food, from lobsters and shrimps to clams and fish. Octopuses that live in tide pools will occasionally leap out of the water to catch passing crabs; some even prey on incautious birds, grabbing them by the legs, pulling them underwater, and drowning them. Animals that evolve to tackle diverse kinds of food may tend to evolve larger brains than animals that always handle food in the same way (think of a frog catching insects).

Like humans, octopuses learn new skills. In some species, individuals inhabit a den for only a week or so before moving on, so they are constantly learning routes through new environments. Similarly, the first time an octopus tackles a clam, say, it has to figure out how to open it—can it pull it apart, or would it be more effective to drill a hole? If consciousness is necessary for such tasks, then perhaps the octopus does have an awareness that in some ways resembles our own.

Perhaps, indeed, we should take the “mammalian” behaviors of octopuses at face value. If evolution can produce similar eyes through different routes, why not similar minds? Or perhaps, in wishing to find these animals like ourselves, what we are really revealing is our deep desire not to be alone.

Read the entire article here.

Image: Common octopus. Courtesy: Wikipedia. CC BY-SA 3.0.


Thoughts As Shapes

Jonathan Jackson has a very rare form of a rare neurological condition. He has synesthesia, a cross-connection of two (or more) unrelated senses in which a perception in one sense causes an automatic experience in another. Some synesthetes, for instance, see various sounds or musical notes as distinct colors (chromesthesia); others perceive different words as distinct tastes (lexical-gustatory synesthesia).

Jackson, on the other hand, experiences his thoughts as shapes in a visual mindmap. This is so fascinating I’ve excerpted a short piece of his story below.

Also, if you are further intrigued, I recommend three great reads on the subject: Wednesday Is Indigo Blue: Discovering the Brain of Synesthesia by Richard Cytowic and David M. Eagleman; Musicophilia: Tales of Music and the Brain by Oliver Sacks; and The Man Who Tasted Shapes by Richard Cytowic.

From the Atlantic:

One spring evening in the mid 2000s, Jonathan Jackson and Andy Linscott sat on some seaside rocks near their college campus, smoking the kind of cigarettes reserved for heartbreak. Linscott was, by his own admission, “emotionally spewing” over a girl, and Jackson was consoling him.

Jackson had always been a particularly good listener. But in the middle of their talk, he did something Linscott found deeply odd.

“He got up and jumped over to this much higher rock,” Linscott says. “He was like, ‘Andy, I’m listening, I just want to get a different angle. I want to see what you’re saying and the shape of your words from a different perspective.’ I was baffled.”

For Jackson, moving physically to think differently about an idea seemed totally natural. “People say, ‘Okay, we need to think about this from a new angle’ all the time!” he says. “But for me that’s literal.”

Jackson has synesthesia, a neurological phenomenon that has long been defined as the co-activation of two or more conventionally unrelated senses. Some synesthetes see music (known as auditory-visual synesthesia) or read letters and numbers in specific hues (grapheme-color synesthesia). But recent research has complicated that definition, exploring where in the sensory process those overlaps start and opening up the term to include types of synesthesia in which senses interact in a much more complex manner.

Read the entire story here.

Image: Wednesday Is Indigo Blue, book cover. Courtesy: Richard E. Cytowic and David M. Eagleman, MIT Press.


Towards an Understanding of Consciousness


The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite the intricate philosophical essays of John Locke in 1690 that laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.


Your Brain on LSD


For the first time, researchers have peered inside the brain to study the real-time effect of the psychedelic drug LSD (lysergic acid diethylamide). Yes, neuroscientists scanned the brains of subjects who volunteered to take a trip inside an MRI scanner, all in the name of science.

While the researchers did not seem to document the detailed subjective experiences of their volunteers, the findings suggest that they were experiencing intense dreamlike visions, effectively “seeing with their eyes shut”. Under the influence of LSD, many areas of the brain that are usually compartmentalized showed far greater interconnection and intense activity.

LSD was first synthesized in 1938. Its profound psychological properties were studied from the mid-1940s to the early sixties. The substance was later banned — worldwide — after its adoption as a recreational drug.

This new study was conducted by researchers from Imperial College London and The Beckley Foundation, which researches psychoactive substances.

From the Guardian:

The profound impact of LSD on the brain has been laid bare by the first modern scans of people high on the drug.

The images, taken from volunteers who agreed to take a trip in the name of science, have given researchers an unprecedented insight into the neural basis for effects produced by one of the most powerful drugs ever created.

A dose of the psychedelic substance – injected rather than dropped – unleashed a wave of changes that altered activity and connectivity across the brain. This has led scientists to new theories of visual hallucinations and the sense of oneness with the universe some users report.

The brain scans revealed that trippers experienced images through information drawn from many parts of their brains, and not just the visual cortex at the back of the head that normally processes visual information. Under the drug, regions once segregated spoke to one another.

Further images showed that other brain regions that usually form a network became more separated in a change that accompanied users’ feelings of oneness with the world, a loss of personal identity called “ego dissolution”.

David Nutt, the government’s former drugs advisor, professor of neuropsychopharmacology at Imperial College London, and senior researcher on the study, said neuroscientists had waited 50 years for this moment. “This is to neuroscience what the Higgs boson was to particle physics,” he said. “We didn’t know how these profound effects were produced. It was too difficult to do. Scientists were either scared or couldn’t be bothered to overcome the enormous hurdles to get this done.”

Read the entire story here.

Image: Different sections of the brain, either on placebo, or under the influence of LSD (lots of orange). Courtesy: Imperial College/Beckley Foundation.


Multitasking: A Powerful and Diabolical Illusion

Our increasingly ubiquitous technology makes possible all manner of things that would have been insurmountable just decades ago. We carry smartphones that pack more computational power than the mainframes of a generation ago. Yet for all this power at our fingertips, we seem to forget that we are still very much human animals with limitations. One such “shortcoming” [your friendly editor believes it’s a boon] is our inability to multitask like our phones. I’ve written about this before, and am compelled to do so again after reading this thoughtful essay by Daniel J. Levitin, extracted from his book The Organized Mind: Thinking Straight in the Age of Information Overload. I even had to use his phrasing for the title of this post.

From the Guardian:

Our brains are busier than ever before. We’re assaulted with facts, pseudo facts, jibber-jabber, and rumour, all posing as information. Trying to figure out what you need to know and what you can ignore is exhausting. At the same time, we are all doing more. Thirty years ago, travel agents made our airline and rail reservations, salespeople helped us find what we were looking for in shops, and professional typists or secretaries helped busy people with their correspondence. Now we do most of those things ourselves. We are doing the jobs of 10 different people while still trying to keep up with our lives, our children and parents, our friends, our careers, our hobbies, and our favourite TV shows.

Our smartphones have become Swiss army knife–like appliances that include a dictionary, calculator, web browser, email, Game Boy, appointment calendar, voice recorder, guitar tuner, weather forecaster, GPS, texter, tweeter, Facebook updater, and flashlight. They’re more powerful and do more things than the most advanced computer at IBM corporate headquarters 30 years ago. And we use them all the time, part of a 21st-century mania for cramming everything we do into every single spare moment of downtime. We text while we’re walking across the street, catch up on email while standing in a queue – and while having lunch with friends, we surreptitiously check to see what our other friends are doing. At the kitchen counter, cosy and secure in our domicile, we write our shopping lists on smartphones while we are listening to that wonderfully informative podcast on urban beekeeping.

But there’s a fly in the ointment. Although we think we’re doing several things at once, multitasking, this is a powerful and diabolical illusion. Earl Miller, a neuroscientist at MIT and one of the world experts on divided attention, says that our brains are “not wired to multitask well… When people think they’re multitasking, they’re actually just switching from one task to another very rapidly. And every time they do, there’s a cognitive cost in doing so.” So we’re not actually keeping a lot of balls in the air like an expert juggler; we’re more like a bad amateur plate spinner, frantically switching from one task to another, ignoring the one that is not right in front of us but worried it will come crashing down any minute. Even though we think we’re getting a lot done, ironically, multitasking makes us demonstrably less efficient.

Multitasking has been found to increase the production of the stress hormone cortisol as well as the fight-or-flight hormone adrenaline, which can overstimulate your brain and cause mental fog or scrambled thinking. Multitasking creates a dopamine-addiction feedback loop, effectively rewarding the brain for losing focus and for constantly searching for external stimulation. To make matters worse, the prefrontal cortex has a novelty bias, meaning that its attention can be easily hijacked by something new – the proverbial shiny objects we use to entice infants, puppies, and kittens. The irony here for those of us who are trying to focus amid competing activities is clear: the very brain region we need to rely on for staying on task is easily distracted. We answer the phone, look up something on the internet, check our email, send an SMS, and each of these things tweaks the novelty-seeking, reward-seeking centres of the brain, causing a burst of endogenous opioids (no wonder it feels so good!), all to the detriment of our staying on task. It is the ultimate empty-caloried brain candy. Instead of reaping the big rewards that come from sustained, focused effort, we instead reap empty rewards from completing a thousand little sugar-coated tasks.

In the old days, if the phone rang and we were busy, we either didn’t answer or we turned the ringer off. When all phones were wired to a wall, there was no expectation of being able to reach us at all times – one might have gone out for a walk or been between places – and so if someone couldn’t reach you (or you didn’t feel like being reached), it was considered normal. Now more people have mobile phones than have toilets. This has created an implicit expectation that you should be able to reach someone when it is convenient for you, regardless of whether it is convenient for them. This expectation is so ingrained that people in meetings routinely answer their mobile phones to say, “I’m sorry, I can’t talk now, I’m in a meeting.” Just a decade or two ago, those same people would have let a landline on their desk go unanswered during a meeting, so different were the expectations for reachability.

Just having the opportunity to multitask is detrimental to cognitive performance. Glenn Wilson, former visiting professor of psychology at Gresham College, London, calls it info-mania. His research found that being in a situation where you are trying to concentrate on a task, and an email is sitting unread in your inbox, can reduce your effective IQ by 10 points. And although people ascribe many benefits to marijuana, including enhanced creativity and reduced pain and stress, it is well documented that its chief ingredient, cannabinol, activates dedicated cannabinol receptors in the brain and interferes profoundly with memory and with our ability to concentrate on several things at once. Wilson showed that the cognitive losses from multitasking are even greater than the cognitive losses from pot-smoking.

Russ Poldrack, a neuroscientist at Stanford, found that learning information while multitasking causes the new information to go to the wrong part of the brain. If students study and watch TV at the same time, for example, the information from their schoolwork goes into the striatum, a region specialised for storing new procedures and skills, not facts and ideas. Without the distraction of TV, the information goes into the hippocampus, where it is organised and categorised in a variety of ways, making it easier to retrieve. MIT’s Earl Miller adds, “People can’t do [multitasking] very well, and when they say they can, they’re deluding themselves.” And it turns out the brain is very good at this deluding business.

Then there are the metabolic costs that I wrote about earlier. Asking the brain to shift attention from one activity to another causes the prefrontal cortex and striatum to burn up oxygenated glucose, the same fuel they need to stay on task. And the kind of rapid, continual shifting we do with multitasking causes the brain to burn through fuel so quickly that we feel exhausted and disoriented after even a short time. We’ve literally depleted the nutrients in our brain. This leads to compromises in both cognitive and physical performance. Among other things, repeated task switching leads to anxiety, which raises levels of the stress hormone cortisol in the brain, which in turn can lead to aggressive and impulsive behaviour. By contrast, staying on task is controlled by the anterior cingulate and the striatum, and once we engage the central executive mode, staying in that state uses less energy than multitasking and actually reduces the brain’s need for glucose.

To make matters worse, lots of multitasking requires decision-making: Do I answer this text message or ignore it? How do I respond to this? How do I file this email? Do I continue what I’m working on now or take a break? It turns out that decision-making is also very hard on your neural resources and that little decisions appear to take up as much energy as big ones. One of the first things we lose is impulse control. This rapidly spirals into a depleted state in which, after making lots of insignificant decisions, we can end up making truly bad decisions about something important. Why would anyone want to add to their daily weight of information processing by trying to multitask?

Read the entire article here.


The Great Unknown: Consciousness


Much has been written in the humanities and scientific journals about consciousness. Scholars continue to probe and pontificate and theorize. And yet we seem to know more of the ocean depths and our cosmos than we do of that interminable, self-aware inner voice that sits behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.


Isolation Fractures the Mind

Through the lens of extreme isolation, Michael Bond shows us in this fascinating article that we really are social animals. Remove a person from all meaningful social contact — even for a short while — and her mind will begin to play tricks and eventually break. Michael Bond is the author of The Power of Others.

From the BBC:

When people are isolated from human contact, their mind can do some truly bizarre things, says Michael Bond. Why does this happen?

Sarah Shourd’s mind began to slip about two months into her incarceration. She heard phantom footsteps, saw flashing lights, and spent most of her day crouched on all fours, listening through a gap in the door.

That summer, the 32-year-old had been hiking with two friends in the mountains of Iraqi Kurdistan when they were arrested by Iranian troops after straying onto the border with Iran. Accused of spying, they were kept in solitary confinement in Evin prison in Tehran, each in their own tiny cell. She endured almost 10,000 hours with little human contact before she was freed. One of the most disturbing effects was the hallucinations.

“In the periphery of my vision, I began to see flashing lights, only to jerk my head around to find that nothing was there,” she wrote in the New York Times in 2011. “At one point, I heard someone screaming, and it wasn’t until I felt the hands of one of the friendlier guards on my face, trying to revive me, that I realised the screams were my own.”

We all want to be alone from time to time, to escape the demands of our colleagues or the hassle of crowds. But not alone alone. For most people, prolonged social isolation is all bad, particularly mentally. We know this not only from reports by people like Shourd who have experienced it first-hand, but also from psychological experiments on the effects of isolation and sensory deprivation, some of which had to be called off due to the extreme and bizarre reactions of those involved. Why does the mind unravel so spectacularly when we’re truly on our own, and is there any way to stop it?

We’ve known for a while that isolation is physically bad for us. Chronically lonely people have higher blood pressure, are more vulnerable to infection, and are also more likely to develop Alzheimer’s disease and dementia. Loneliness also interferes with a whole range of everyday functioning, such as sleep patterns, attention and logical and verbal reasoning. The mechanisms behind these effects are still unclear, though what is known is that social isolation unleashes an extreme immune response – a cascade of stress hormones and inflammation. This may have been appropriate in our early ancestors, when being isolated from the group carried big physical risks, but for us the outcome is mostly harmful.

Yet some of the most profound effects of loneliness are on the mind. For starters, isolation messes with our sense of time. One of the strangest effects is the ‘time-shifting’ reported by those who have spent long periods living underground without daylight. In 1961, French geologist Michel Siffre led a two-week expedition to study an underground glacier beneath the French Alps and ended up staying two months, fascinated by how the darkness affected human biology. He decided to abandon his watch and “live like an animal”. While conducting tests with his team on the surface, they discovered it took him five minutes to count to what he thought was 120 seconds.

A similar pattern of ‘slowing time’ was reported by Maurizio Montalbini, a sociologist and caving enthusiast. In 1993, Montalbini spent 366 days in an underground cavern near Pesaro in Italy that had been designed with Nasa to simulate space missions, breaking his own world record for time spent underground. When he emerged, he was convinced only 219 days had passed. His sleep-wake cycles had almost doubled in length. Since then, researchers have found that in darkness most people eventually adjust to a 48-hour cycle: 36 hours of activity followed by 12 hours of sleep. The reasons are still unclear.

As well as their time-shifts, Siffre and Montalbini reported periods of mental instability too. But these experiences were nothing compared with the extreme reactions seen in notorious sensory deprivation experiments in the mid-20th Century.

In the 1950s and 1960s, China was rumoured to be using solitary confinement to “brainwash” American prisoners captured during the Korean War, and the US and Canadian governments were all too keen to try it out. Their defence departments funded a series of research programmes that might be considered ethically dubious today.

The most extensive took place at McGill University Medical Center in Montreal, led by the psychologist Donald Hebb. The McGill researchers invited paid volunteers – mainly college students – to spend days or weeks by themselves in sound-proof cubicles, deprived of meaningful human contact. Their aim was to reduce perceptual stimulation to a minimum, to see how their subjects would behave when almost nothing was happening. They minimised what they could feel, see, hear and touch, fitting them with translucent visors, cotton gloves and cardboard cuffs extending beyond the fingertips. As Scientific American magazine reported at the time, they had them lie on U-shaped foam pillows to restrict noise, and set up a continuous hum of air-conditioning units to mask small sounds.

After only a few hours, the students became acutely restless. They started to crave stimulation, talking, singing or reciting poetry to themselves to break the monotony. Later, many of them became anxious or highly emotional. Their mental performance suffered too: they struggled with arithmetic and word-association tests.

But the most alarming effects were the hallucinations. They would start with points of light, lines or shapes, eventually evolving into bizarre scenes, such as squirrels marching with sacks over their shoulders or processions of eyeglasses filing down a street. They had no control over what they saw: one man saw only dogs; another, babies.

Some of them experienced sound hallucinations as well: a music box or a choir, for instance. Others imagined sensations of touch: one man had the sense he had been hit in the arm by pellets fired from guns. Another, reaching out to touch a doorknob, felt an electric shock.

When they emerged from the experiment they found it hard to shake this altered sense of reality, convinced that the whole room was in motion, or that objects were constantly changing shape and size.

Read the entire article here.


Send to Kindle

You Are a Neural Computation

Since the days of Aristotle, and later Descartes, thinkers have sought to explain consciousness and free will. Several thousand years on, we are still pondering the notion; science has made great strides, and yet fundamentally we still have little idea.

Many neuroscientists, now armed with new and very precise research tools, are aiming to change this. Yet, increasingly, it seems that free will may indeed be a cognitive illusion. Evidence suggests that our subconscious decides and initiates action for us long before we are aware of making a conscious decision. There seems to be no god or ghost in the machine.

From Technology Review:

It was an expedition seeking something never caught before: a single human neuron lighting up to create an urge, albeit for the minor task of moving an index finger, before the subject was even aware of feeling anything. Four years ago, Itzhak Fried, a neurosurgeon at the University of California, Los Angeles, slipped several probes, each with eight hairlike electrodes able to record from single neurons, into the brains of epilepsy patients. (The patients were undergoing surgery to diagnose the source of severe seizures and had agreed to participate in experiments during the process.) Probes in place, the patients—who were conscious—were given instructions to press a button at any time of their choosing, but also to report when they’d first felt the urge to do so.

Later, Gabriel Kreiman, a neuroscientist at Harvard Medical School and Children’s Hospital in Boston, captured the quarry. Poring over data after surgeries in 12 patients, he found telltale flashes of individual neurons in the pre-supplementary motor area (associated with movement) and the anterior cingulate (associated with motivation and attention), preceding the reported urges by anywhere from hundreds of milliseconds to several seconds. It was a direct neural measurement of the unconscious brain at work—caught in the act of formulating a volitional, or freely willed, decision. Now Kreiman and his colleagues are planning to repeat the feat, but this time they aim to detect pre-urge signatures in real time and stop the subject from performing the action—or see if that’s even possible.

A variety of imaging studies in humans have revealed that brain activity related to decision-making tends to precede conscious action. Implants in macaques and other animals have examined brain circuits involved in perception and action. But Kreiman broke ground by directly measuring a preconscious decision in humans at the level of single neurons. To be sure, the readouts came from an average of just 20 neurons in each patient. (The human brain has about 86 billion of them, each with thousands of connections.) And ultimately, those neurons fired only in response to a chain of even earlier events. But as more such experiments peer deeper into the labyrinth of neural activity behind decisions—whether they involve moving a finger or opting to buy, eat, or kill something—science could eventually tease out the full circuitry of decision-making and perhaps point to behavioral therapies or treatments. “We need to understand the neuronal basis of voluntary decision-making—or ‘freely willed’ decision-making—and its pathological counterparts if we want to help people such as drug, sex, food, and gambling addicts, or patients with obsessive-compulsive disorder,” says Christof Koch, chief scientist at the Allen Institute for Brain Science in Seattle (see “Cracking the Brain’s Codes”). “Many of these people perfectly well know that what they are doing is dysfunctional but feel powerless to prevent themselves from engaging in these behaviors.”

Kreiman, 42, believes his work challenges important Western philosophical ideas about free will. The Argentine-born neuroscientist, an associate professor at Harvard Medical School, specializes in visual object recognition and memory formation, which draw partly on unconscious processes. He has a thick mop of black hair and a tendency to pause and think a long moment before reframing a question and replying to it expansively. At the wheel of his Jeep as we drove down Broadway in Cambridge, Massachusetts, Kreiman leaned over to adjust the MP3 player—toggling between Vivaldi, Lady Gaga, and Bach. As he did so, his left hand, the one on the steering wheel, slipped to let the Jeep drift a bit over the double yellow lines. Kreiman’s view is that his neurons made him do it, and they also made him correct his small error an instant later; in short, all actions are the result of neural computations and nothing more. “I am interested in a basic age-old question,” he says. “Are decisions really free? I have a somewhat extreme view of this—that there is nothing really free about free will. Ultimately, there are neurons that obey the laws of physics and mathematics. It’s fine if you say ‘I decided’—that’s the language we use. But there is no god in the machine—only neurons that are firing.”

Our philosophical ideas about free will date back to Aristotle and were systematized by René Descartes, who argued that humans possess a God-given “mind,” separate from our material bodies, that endows us with the capacity to freely choose one thing rather than another. Kreiman takes this as his departure point. But he’s not arguing that we lack any control over ourselves. He doesn’t say that our decisions aren’t influenced by evolution, experiences, societal norms, sensations, and perceived consequences. “All of these external influences are fundamental to the way we decide what we do,” he says. “We do have experiences, we do learn, we can change our behavior.”

But the firing of a neuron that guides us one way or another is ultimately like the toss of a coin, Kreiman insists. “The rules that govern our decisions are similar to the rules that govern whether a coin will land one way or the other. Ultimately there is physics; it is chaotic in both cases, but at the end of the day, nobody will argue the coin ‘wanted’ to land heads or tails. There is no real volition to the coin.”

Testing Free Will

It’s only in the past three to four decades that imaging tools and probes have been able to measure what actually happens in the brain. A key research milestone was reached in the early 1980s when Benjamin Libet, a researcher in the physiology department at the University of California, San Francisco, conducted a remarkable study that tested the idea of conscious free will with actual data.

Libet fitted subjects with EEGs—gadgets that measure aggregate electrical brain activity through the scalp—and had them look at a clock dial that spun around every 2.8 seconds. The subjects were asked to press a button whenever they chose to do so—but told they should also take note of where the time hand was when they first felt the “wish or urge.” It turns out that the actual brain activity involved in the action began 300 milliseconds, on average, before the subject was conscious of wanting to press the button. While some scientists criticized the methods—questioning, among other things, the accuracy of the subjects’ self-reporting—the study set others thinking about how to investigate the same questions. Since then, functional magnetic resonance imaging (fMRI) has been used to map brain activity by measuring blood flow, and other studies have also measured brain activity processes that take place before decisions are made. But while fMRI transformed brain science, it was still only an indirect tool, providing very low spatial resolution and averaging data from millions of neurons. Kreiman’s own study design was the same as Libet’s, with the important addition of the direct single-neuron measurement.

When Libet was in his prime, Kreiman was a boy. As a student of physical chemistry at the University of Buenos Aires, he was interested in neurons and brains. When he went for his PhD at Caltech, his passion solidified under his advisor, Koch. Koch was deep in collaboration with Francis Crick, co-discoverer of DNA’s structure, to look for evidence of how consciousness was represented by neurons. For the star-struck kid from Argentina, “it was really life-changing,” he recalls. “Several decades ago, people said this was not a question serious scientists should be thinking about; they either had to be smoking something or have a Nobel Prize”—and Crick, of course, was a Nobelist. Crick hypothesized that studying how the brain processed visual information was one way to study consciousness (we tap unconscious processes to quickly decipher scenes and objects), and he collaborated with Koch on a number of important studies. Kreiman was inspired by the work. “I was very excited about the possibility of asking what seems to be the most fundamental aspect of cognition, consciousness, and free will in a reductionist way—in terms of neurons and circuits of neurons,” he says.

One thing was in short supply: humans willing to have scientists cut open their skulls and poke at their brains. One day in the late 1990s, Kreiman attended a journal club—a kind of book club for scientists reviewing the latest literature—and came across a paper by Fried on how to do brain science in people getting electrodes implanted in their brains to identify the source of severe epileptic seizures. Before he’d heard of Fried, “I thought examining the activity of neurons was the domain of monkeys and rats and cats, not humans,” Kreiman says. Crick introduced Koch to Fried, and soon Koch, Fried, and Kreiman were collaborating on studies that investigated human neural activity, including the experiment that made the direct neural measurement of the urge to move a finger. “This was the opening shot in a new phase of the investigation of questions of voluntary action and free will,” Koch says.

Read the entire article here.

Send to Kindle

Neuromorphic Chips

Neuromorphic chips are here. But don’t worry: these are not the brain implants you might expect to see in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function, and to learn or mimic certain types of human processes, such as sensory perception, image processing and object recognition. The field is making tremendous advances, with companies like Qualcomm — better known for its mobile and wireless chips — leading the charge. Until recently, complex sensory and mimetic processes had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
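Stripped to its bare essentials, the mechanism that paragraph describes can be sketched in a few lines of code. What follows is this blog's own illustrative toy, not Qualcomm's actual Zeroth design: a single "spiking" neuron that integrates weighted inputs over time and, via a crude Hebbian rule, strengthens whichever synapses were active whenever it fires. All names and parameter values here are invented for illustration.

```python
import random

class SpikingNeuron:
    """A leaky integrate-and-fire neuron with a simple Hebbian learning rule."""

    def __init__(self, n_inputs, threshold=1.0, leak=0.9, rate=0.05):
        self.w = [random.uniform(0.1, 0.3) for _ in range(n_inputs)]  # synaptic weights
        self.v = 0.0              # membrane potential
        self.threshold = threshold
        self.leak = leak          # fraction of potential retained each time step
        self.rate = rate          # learning rate

    def step(self, spikes):
        """Advance one time step. `spikes` is a list of 0/1 inputs; returns 1 on a spike."""
        # Leaky integration: old potential decays, weighted inputs accumulate.
        self.v = self.v * self.leak + sum(w * s for w, s in zip(self.w, spikes))
        fired = self.v >= self.threshold
        if fired:
            self.v = 0.0          # reset after a spike
            # Hebbian update: strengthen the synapses whose inputs were active.
            self.w = [w + self.rate * s for w, s in zip(self.w, spikes)]
        return int(fired)

random.seed(0)
neuron = SpikingNeuron(n_inputs=4)
pattern = [1, 1, 0, 0]            # the same stimulus, presented repeatedly
for _ in range(50):
    neuron.step(pattern)

print(neuron.w)                   # weights on the active inputs have grown
```

Run on a fixed input pattern, the weights on the active synapses grow while the idle ones stay where they started. That is the minimal sense in which such a circuit "learns" from exposure rather than from explicit programming, and it is the connection-strength plasticity that neuromorphic hardware tries to capture in silicon.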

Read the entire article here.

Send to Kindle

Now Where Did I Put Those Keys?


We all lose our car keys and misplace our cell phones. We leave umbrellas on public transport. We forget things at the office. We all do it — some more frequently than others. And, it’s not merely a symptom of aging. Many younger people seem increasingly prone to losing their personal items, perhaps a characteristic of their fragmented, distracted and limited attention spans.

From the WSJ:

You’ve put your keys somewhere and now they appear to be nowhere, certainly not in the basket by the door they’re supposed to go in and now you’re 20 minutes late for work. Kitchen counter, night stand, book shelf, work bag: Wait, finally, there they are under the mail you brought in last night.

Losing things is irritating and yet we are a forgetful people. The average person misplaces up to nine items a day, and one-third of respondents in a poll said they spend an average of 15 minutes each day searching for items—cellphones, keys and paperwork top the list, according to an online survey of 3,000 people published in 2012 by a British insurance company.

Everyday forgetfulness isn’t a sign of a more serious medical condition like Alzheimer’s or dementia. And while it can worsen with age, minor memory lapses are the norm for all ages, researchers say.

Our genes are at least partially to blame, experts say. Stress, fatigue, and multitasking can exacerbate our propensity to make such errors. Such lapses can also be linked to more serious conditions like depression and attention-deficit hyperactivity disorders.

“It’s the breakdown at the interface of attention and memory,” says Daniel L. Schacter, a psychology professor at Harvard University and author of “The Seven Sins of Memory.”

That breakdown can occur in two spots: when we fail to activate our memory and encode what we’re doing—where we put down our keys or glasses—or when we try to retrieve the memory. When you encode a memory, the hippocampus, a central part of the brain involved in memory function, takes a snapshot which is preserved in a set of neurons, says Kenneth Norman, a psychology professor at Princeton University. Those neurons can be activated later with a reminder or cue.

It is important to pay attention when you put down an item, or during encoding. If your state of mind at retrieval is different than it was during encoding, that could pose a problem. Case in point: You were starving when you walked into the house and deposited your keys. When you then go to look for them later, you’re no longer hungry so the memory may be harder to access.

The act of physically and mentally retracing your steps when looking for lost objects can work. Think back to your state of mind when you walked into the house (Were you hungry?). “The more you can make your brain at retrieval like the way it was when you lay down that original memory trace,” the more successful you will be, Dr. Norman says.

In a recent study, researchers in Germany found that the majority of people surveyed about forgetfulness and distraction had a variation in the so-called dopamine D2 receptor gene (DRD2), leading to a higher incidence of forgetfulness. According to the study, 75% of people carry a variation that makes them more prone to forgetfulness.

“Forgetfulness is quite common,” says Sebastian Markett, a researcher in psychology and neuroscience at the University of Bonn in Germany and lead author of the study, currently in the online version of the journal Neuroscience Letters, where it is expected to be published soon.

The study was based on a survey filled out by 500 people who were asked questions about memory lapses, perceptual failures (failing to notice a stop sign) and psychomotor failures (bumping into people on the street). The individuals also provided a saliva sample for molecular genetic testing.

About half of the total variation of forgetfulness can be explained by genetic effects, likely involving dozens of gene variations, Dr. Markett says.

The buildup of what psychologists call proactive interference helps explain how we can forget where we parked the car when we park in the same lot but different spaces every day. Memory may be impaired by the buildup of interference from previous experiences so it becomes harder to retrieve the specifics, like which parking space, Dr. Schacter says.

A study conducted by researchers at the Salk Institute for Biological Studies in California found that the brain keeps track of similar but distinct memories (where you parked your car today, for example) in the dentate gyrus, part of the hippocampus. There the brain stores separate recordings of each environment, and different groups of neurons are activated when similar but nonidentical memories are encoded and later retrieved. The findings appeared last year in the online journal eLife.

The best way to remember where you put something may be the most obvious: Find a regular spot for it and somewhere that makes sense, experts say. If it’s reading glasses, leave them by the bedside. Charge your phone in the same place. Keep a container near the door for keys or a specific pocket in your purse.

Read the entire article here.

Image: Leather key chain. Courtesy of Wikipedia / The Egyptian.


Send to Kindle

Need Some Exercise? Laugh

Your sense of humor and wit will keep your brain active and nimble. It will endear you to friends (often), family (usually) and bosses (sometimes). In addition, there is growing evidence that being an amateur (or professional) comedian, or just a connoisseur of good jokes, will help you physically as well.

From WSJ:

“I just shot an elephant in my pajamas,” goes the old Groucho Marx joke. “How he got in my pajamas I don’t know.”

You’ve probably heard that one before, or something similar. For example, while viewing polling data for the 2008 presidential election on Comedy Central, Stephen Colbert deadpanned, “If I’m reading this graph correctly…I’d be very surprised.”

Zingers like these aren’t just good lines. They reveal something unusual about how the mind operates—and they show us how humor works. Simply put, the brain likes to jump the gun. We are always guessing where things are going, and we often get it wrong. But this isn’t necessarily bad. It’s why we laugh.

Humor is a form of exercise—a way of keeping the brain engaged. Mr. Colbert’s line is a fine example of this kind of mental calisthenics. If he had simply observed that polling data are hard to interpret, you would have heard crickets chirping. Instead, he misdirected his listeners, leading them to expect ponderous analysis and then bolting in the other direction to declare his own ignorance. He got a laugh as his audience’s minds caught up with him and enjoyed the experience of being fooled.

We benefit from taxing our brains with the mental exercise of humor, much as we benefit from the physical exercise of a long run or a tough tennis match. Comedy extends our mental stamina and improves our mental flexibility. A 1976 study by Avner Ziv of Tel Aviv University found that those who listened to a comedy album before taking a creativity test performed 20% better than those who weren’t exposed to the routine beforehand. In 1987, researchers at the University of Maryland found that watching comedy more than doubles our ability to solve brain teasers, like the so-called Duncker candle problem, which challenges people to attach a candle to a wall using only a book of matches and a box of thumbtacks. Research published in 1998 by psychologist Heather Belanger of the College of William and Mary even suggests that humor improves our ability to mentally rotate imaginary objects in our heads—a key test of spatial thinking ability.

The benefits of humor don’t stop with increased intelligence and creativity. Consider the “cold pressor test,” in which scientists ask subjects to submerge their hands in water cooled to just above the freezing mark.

This isn’t dangerous, but it does allow researchers to measure pain tolerance—which varies, it turns out, depending on what we’ve been doing before dunking our hands. How long could you hold your hand in 35-degree water after watching 10 minutes of Bill Cosby telling jokes? The answer depends on your own pain tolerance, but I can promise that it is longer than it would be if you had instead watched a nature documentary.

Like exercise, humor helps to prepare the mind for stressful events. A study done in 2000 by Arnold Cann, a psychologist at the University of North Carolina, had subjects watch 16 minutes of stand-up comedy before viewing “Faces of Death”—the notorious 1978 shock film depicting scene after scene of gruesome deaths. Those who watched the comedy routine before the grisly film reported significantly less psychological distress than those who watched a travel show instead. The degree to which humor can inoculate us from stress is quite amazing (though perhaps not as amazing as the fact that Dr. Cann got his experiment approved by his university’s ethical review board).

This doesn’t mean that every sort of humor is helpful. Taking a dark, sardonic attitude toward life can be unhealthy, especially when it relies on constant self-punishment. (Rodney Dangerfield: “My wife and I were happy for 20 years. Then we met.”) According to Nicholas Kuiper of the University of Western Ontario, people who resort to this kind of humor experience higher rates of depression than their peers, along with higher anxiety and lower self-esteem. Enjoying a good laugh is healthy, so long as you yourself aren’t always the target.

Having an active sense of humor helps us to get more from life, both cognitively and emotionally. It allows us to exercise our brains regularly, looking for unexpected and pleasing connections even in the face of difficulties or hardship. The physicist Richard Feynman called this “the kick of the discovery,” claiming that the greatest joy of his life wasn’t winning the Nobel Prize—it was the pleasure of discovering new things.

Read the entire story here.

Image: Duck Soup, promotional movie poster (1933). Courtesy of Wikipedia.


Send to Kindle

Is Your City Killing You?

The stresses of modern-day living are taking a toll on your mind and body. And, more so if you happen to live in a concrete jungle: the effects are most pronounced for those of us in large urban centers. That’s the finding of some fascinating new brain research out of Germany. The researchers’ simple answer to a lower-stress life: move to the countryside.

From The Guardian:

You are lying down with your head in a noisy and tight-fitting fMRI brain scanner, which is unnerving in itself. You agreed to take part in this experiment, and at first the psychologists in charge seemed nice.

They set you some rather confusing maths problems to solve against the clock, and you are doing your best, but they aren’t happy. “Can you please concentrate a little better?” they keep saying into your headphones. Or, “You are among the worst performing individuals to have been studied in this laboratory.” Helpful things like that. It is a relief when time runs out.

Few people would enjoy this experience, and indeed the volunteers who underwent it were monitored to make sure they had a stressful time. Their minor suffering, however, provided data for what became a major study, and a global news story. The researchers, led by Dr Andreas Meyer-Lindenberg of the Central Institute of Mental Health in Mannheim, Germany, were trying to find out more about how the brains of different people handle stress. They discovered that the brains of city dwellers, compared with those of people who live in the countryside, seem not to handle it so well.

To be specific, while Meyer-Lindenberg and his accomplices were stressing out their subjects, they were looking at two brain regions: the amygdalas and the perigenual anterior cingulate cortex (pACC). The amygdalas are known to be involved in assessing threats and generating fear, while the pACC in turn helps to regulate the amygdalas. In stressed city dwellers, the amygdalas appeared more active on the scanner; in people who lived in small towns, less so; in people who lived in the countryside, least of all.

And something even more intriguing was happening in the pACC. Here the important relationship was not with where the subjects lived at the time, but where they grew up. Again, those with rural childhoods showed the least active pACCs, those with urban ones the most. In the urban group, moreover, there seemed not to be the same smooth connection between the behaviour of the two brain regions that was observed in the others. An erratic link between the pACC and the amygdalas is often seen in those with schizophrenia too. And schizophrenic people are much more likely to live in cities.

When the results were published in Nature, in 2011, media all over the world hailed the study as proof that cities send us mad. Of course it proved no such thing – but it did suggest it. Even allowing for all the usual caveats about the limitations of fMRI imaging, the small size of the study group and the huge holes that still remained in our understanding, the results offered a tempting glimpse at the kind of urban warping of our minds that some people, at least, have linked to city life since the days of Sodom and Gomorrah.

The year before the Meyer-Lindenberg study was published, the existence of that link had been established still more firmly by a group of Dutch researchers led by Dr Jaap Peen. In their meta-analysis (essentially a pooling together of many other pieces of research) they found that living in a city roughly doubles the risk of schizophrenia – around the same level of danger that is added by smoking a lot of cannabis as a teenager.

At the same time urban living was found to raise the risk of anxiety disorders and mood disorders by 21% and 39% respectively. Interestingly, however, a person’s risk of addiction disorders seemed not to be affected by where they live. At one time it was considered that those at risk of mental illness were just more likely to move to cities, but other research has now more or less ruled that out.

So why is it that the larger the settlement you live in, the more likely you are to become mentally ill? Another German researcher and clinician, Dr Mazda Adli, is a keen advocate of one theory, which implicates that most paradoxical urban mixture: loneliness in crowds. “Obviously our brains are not perfectly shaped for living in urban environments,” Adli says. “In my view, if social density and social isolation come at the same time and hit high-risk individuals … then city-stress related mental illness can be the consequence.”

Read the entire story here.


Left Brain, Right Brain or Top Brain, Bottom Brain?

Are you analytical and logical? If so, you are likely to be labeled “left-brained”. On the other hand, if you are emotional and creative, you are more likely to be labeled “right-brained”. And so the popular narrative of brain function continues. But this generalized distinction is a myth. Our brains’ hemispheres do specialize, but not in such an overarching way. Recent research points to another distinction: top brain and bottom brain.

From WSJ:

Who hasn’t heard that people are either left-brained or right-brained—either analytical and logical or artistic and intuitive, based on the relative “strengths” of the brain’s two hemispheres? How often do we hear someone remark about thinking with one side or the other?

A flourishing industry of books, videos and self-help programs has been built on this dichotomy. You can purportedly “diagnose” your brain, “motivate” one or both sides, indulge in “essence therapy” to “restore balance” and much more. Everyone from babies to elders supposedly can benefit. The left brain/right brain difference seems to be a natural law.

Except that it isn’t. The popular left/right story has no solid basis in science. The brain doesn’t work one part at a time, but rather as a single interactive system, with all parts contributing in concert, as neuroscientists have long known. The left brain/right brain story may be the mother of all urban legends: It sounds good and seems to make sense—but just isn’t true.

The origins of this myth lie in experimental surgery on some very sick epileptics a half-century ago, conducted under the direction of Roger Sperry, a renowned neuroscientist at the California Institute of Technology. Seeking relief for their intractable epilepsy, and encouraged by Sperry’s experimental work with animals, 16 patients allowed the Caltech team to cut the corpus callosum, the massive bundle of nerve fibers that connects the two sides of the brain. The patients’ suffering was alleviated, and Sperry’s postoperative studies of these volunteers confirmed that the two halves do, indeed, have distinct cognitive capabilities.

But these capabilities are not the stuff of popular narrative: They reflect very specific differences in function—such as attending to overall shape versus details during perception—not sweeping distinctions such as being “logical” versus “intuitive.” This important fine print got buried in the vast mainstream publicity that Sperry’s research generated.

There is a better way to understand the functioning of the brain, based on another, ordinarily overlooked anatomical division—between its top and bottom parts. We call this approach “the theory of cognitive modes.” Built on decades of unimpeachable research that has largely remained inside scientific circles, it offers a new way of viewing thought and behavior that may help us understand the actions of people as diverse as Oprah Winfrey, the Dalai Lama, Tiger Woods and Elizabeth Taylor.

Our theory has emerged from the field of neuropsychology, the study of higher cognitive functioning—thoughts, wishes, hopes, desires and all other aspects of mental life. Higher cognitive functioning is seated in the cerebral cortex, the rind-like outer layer of the brain that consists of four lobes. Illustrations of this wrinkled outer brain regularly show a top-down view of the two hemispheres, which are connected by thick bundles of neuronal tissue, notably the corpus callosum, an impressive structure consisting of some 250 million nerve fibers.

If you move the view to the side, however, you can see the top and bottom parts of the brain, demarcated largely by the Sylvian fissure, the crease-like structure named for the 17th-century Dutch physician who first described it. The top brain comprises the entire parietal lobe and the top (and larger) portion of the frontal lobe. The bottom comprises the smaller remainder of the frontal lobe and all of the occipital and temporal lobes.

Our theory’s roots lie in a landmark report published in 1982 by Mortimer Mishkin and Leslie G. Ungerleider of the National Institute of Mental Health. Their trailblazing research examined rhesus monkeys, which have brains that process visual information in much the same way as the human brain. Hundreds of subsequent studies in several fields have helped to shape our theory, by researchers such as Gregoire Borst of Paris Descartes University, Martha Farah of the University of Pennsylvania, Patricia Goldman-Rakic of Yale University, Melvin Goodale of the University of Western Ontario and Maria Kozhevnikov of the National University of Singapore.

This research reveals that the top-brain system uses information about the surrounding environment (in combination with other sorts of information, such as emotional reactions and the need for food or drink) to figure out which goals to try to achieve. It actively formulates plans, generates expectations about what should happen when a plan is executed and then, as the plan is being carried out, compares what is happening with what was expected, adjusting the plan accordingly.

The bottom-brain system organizes signals from the senses, simultaneously comparing what is being perceived with all the information previously stored in memory. It then uses the results of such comparisons to classify and interpret the object or event, allowing us to confer meaning on the world.

The top- and bottom-brain systems always work together, just as the hemispheres always do. Our brains are not engaged in some sort of constant cerebral tug of war, with one part seeking dominance over another. (What a poor evolutionary strategy that would have been!) Rather, they can be likened roughly to the parts of a bicycle: the frame, seat, wheels, handlebars, pedals, gears, brakes and chain that work together to provide transportation.

But here’s the key to our theory: Although the top and bottom parts of the brain are always used during all of our waking lives, people do not rely on them to an equal degree. To extend the bicycle analogy, not everyone rides a bike the same way. Some may meander, others may race.

Read the entire article here.

Image: Left-brain, right-brain cartoon. Courtesy of HuffingtonPost.


Why Sleep?

There are more theories on why we sleep than there are cable channels in the U.S. But that hasn’t prevented researchers from proposing yet another one — it’s all about flushing waste.

From the Guardian:

Scientists in the US claim to have a new explanation for why we sleep: in the hours spent slumbering, a rubbish disposal service swings into action that cleans up waste in the brain.

Through a series of experiments on mice, the researchers showed that during sleep, cerebral spinal fluid is pumped around the brain, and flushes out waste products like a biological dishwasher.

The process helps to remove the molecular detritus that brain cells churn out as part of their natural activity, along with toxic proteins that can lead to dementia when they build up in the brain, the researchers say.

Maiken Nedergaard, who led the study at the University of Rochester, said the discovery might explain why sleep is crucial for all living organisms. “I think we have discovered why we sleep,” Nedergaard said. “We sleep to clean our brains.”

Writing in the journal Science, Nedergaard describes how brain cells in mice shrank when they slept, making the space between them on average 60% greater. This made the cerebral spinal fluid in the animals’ brains flow ten times faster than when the mice were awake.

The scientists then checked how well mice cleared toxins from their brains by injecting traces of proteins that are implicated in Alzheimer’s disease. These amyloid beta proteins were removed faster from the brains of sleeping mice, they found.

Nedergaard believes the clean-up process is more active during sleep because it takes too much energy to pump fluid around the brain when awake. “You can think of it like having a house party. You can either entertain the guests or clean up the house, but you can’t really do both at the same time,” she said in a statement.

According to the scientist, the cerebral spinal fluid flushes the brain’s waste products into what she calls the “glymphatic system” which carries it down through the body and ultimately to the liver where it is broken down.

Other researchers were sceptical of the study, and said it was too early to know if the process goes to work in humans, and how to gauge the importance of the mechanism. “It’s very attractive, but I don’t think it’s the main function of sleep,” said Raphaelle Winsky-Sommerer, a specialist on sleep and circadian rhythms at Surrey University. “Sleep is related to everything: your metabolism, your physiology, your digestion, everything.” She said she would like to see other experiments that show a build up of waste in the brains of sleep-deprived people, and a reduction of that waste when they catch up on sleep.

Vladyslav Vyazovskiy, another sleep expert at Surrey University, was also sceptical. “I’m not fully convinced. Some of the effects are so striking they are hard to believe. I would like to see this work replicated independently before it can be taken seriously,” he said.

Jim Horne, professor emeritus and director of the sleep research centre at Loughborough University, cautioned that what happened in the fairly simple mouse brain might be very different to what happened in the more complex human brain. “Sleep in humans has evolved far more sophisticated functions for our cortex than that for the mouse, even though the present findings may well be true for us,” he said.

But Nedergaard believes she will find the same waste disposal system at work in humans. The work, she claims, could pave the way for medicines that slow the onset of dementias caused by the build-up of waste in the brain, and even help those who go without enough sleep. “It may be that we can reduce the need at least, because it’s so annoying to waste so much time sleeping,” she said.

Read the entire article here.

Image courtesy of Telegraph.


Night Owls, Beware!

A new batch of research points to a higher incidence of depression in night owls than in early risers. Further studies will be required to determine a true causal link, but initial evidence seems to suggest that those who stay up late have structural differences in the brain leading to a form of chronic jet lag.

From the Washington Post:

They say the early bird catches the worm, but night owls may be missing far more than just a tasty snack. Researchers have discovered evidence of structural brain differences that distinguish early risers from people who like to stay up late. The differences might help explain why night owls seem to be at greater risk of depression.

About 10 percent of people are morning people, or larks, and 20 percent are night owls, with the rest falling in between. Your status is called your chronotype.

Previous studies have suggested that night owls experience worse sleep, feel more tiredness during the day and consume greater amounts of tobacco and alcohol. This has prompted some to suggest that they are suffering from a form of chronic jet lag.

Jessica Rosenberg at RWTH Aachen University in Germany and colleagues used a technique called diffusion tensor imaging to scan the brains of 16 larks, 23 night owls and 20 people with intermediate chronotypes. They found a reduction in the integrity of night owls’ white matter — brain tissue largely made up of fatty insulating material that speeds up the transmission of nerve signals — in areas associated with depression.

“We think this could be caused by the fact that late chronotypes suffer from this permanent jet lag,” Rosenberg says, although she cautions that further studies are needed to confirm cause and effect.

Read the entire article here.

Image courtesy of Google search.


Growing a Brain Outside of the Body

‘Tis the stuff of science fiction. And it’s also quite real, and happening in a lab near you.

From Technology Review:

Scientists at the Institute of Molecular Biotechnology in Vienna, Austria, have grown three-dimensional human brain tissues from stem cells. The tissues form discrete structures that are seen in the developing brain.

The Vienna researchers found that immature brain cells derived from stem cells self-organize into brain-like tissues in the right culture conditions. The “cerebral organoids,” as the researchers call them, grew to about four millimeters in size and could survive as long as 10 months. For decades, scientists have been able to take cells from animals including humans and grow them in a petri dish, but for the most part this has been done in two dimensions, with the cells grown in a thin layer in petri dishes. But in recent years, researchers have advanced tissue culture techniques so that three-dimensional brain tissue can grow in the lab. The new report from the Austrian team demonstrates that allowing immature brain cells to self-organize yields some of the largest and most complex lab-grown brain tissue, with distinct subregions and signs of functional neurons.

The work, published in Nature on Wednesday, is the latest advance in a field focused on creating more lifelike tissue cultures of neurons and related cells for studying brain function, disease, and repair. With a cultured cell model system that mimics the brain’s natural architecture, researchers would be able to look at how certain diseases occur and screen potential medications for toxicity and efficacy in a more natural setting, says Anja Kunze, a neuroengineer at the University of California, Los Angeles, who has developed three-dimensional brain tissue cultures to study Alzheimer’s disease.

The Austrian researchers coaxed cultured neurons to take on a three-dimensional organization using cell-friendly scaffolding materials in the cultures. The team also let the neuron progenitors control their own fate. “Stem cells have an amazing ability to self-organize,” said study first author Madeline Lancaster at a press briefing on Tuesday. Other groups have also recently seen success in allowing progenitor cells to self-organize, leading to reports of primitive eye structures, liver buds, and more (see “Growing Eyeballs” and “A Rudimentary Liver Is Grown from Stem Cells”).

The brain tissue formed discrete regions found in the early developing human brain, including regions that resemble parts of the cortex, the retina, and structures that produce cerebrospinal fluid. At the press briefing, senior author Juergen Knoblich said that while there have been numerous attempts to model human brain tissue in a culture using human cells, the complex human organ has proved difficult to replicate. Knoblich says the proto-brain resembles the developmental stage of a nine-week-old fetus’s brain.

While Knoblich’s group is focused on developmental questions, other groups are developing three-dimensional brain tissue cultures with the hopes of treating degenerative diseases or brain injury. A group at Georgia Institute of Technology has developed a three-dimensional neural culture to study brain injury, with the goal of identifying biomarkers that could be used to diagnose brain injury and potential drug targets for medications that can repair injured neurons. “It’s important to mimic the cellular architecture of the brain as much as possible because the mechanical response of that tissue is very dependent on its 3-D structure,” says biomedical engineer Michelle LaPlaca of Georgia Tech. Physical insults on cells in a three-dimensional culture will put stress on connections between cells and supporting material known as the extracellular matrix, she says.

Read the entire article here.

Image: Cerebral organoid derived from stem cells containing different brain regions. Courtesy of Japan Times.


Overcoming Right-handedness

When asked about handedness, Nick Moran over at The Millions says, “everybody’s born right-handed, but the best overcome it.” Funny. And perhaps now carrying a ring of truth.

Several meta-studies on the issue of handedness suggest that lefties may indeed have an advantage over their right-handed cousins in a specific kind of creative thinking known as divergent thinking. Divergent thinking is the ability to generate new ideas from a single principle quickly and effectively.

At last, left-handers can emerge from the shadow that once branded them as sinister degenerates and criminals. (We recommend you check the etymology of the word “sinister” for yourself.)

From the New Yorker:

Cesare Lombroso, the father of modern criminology, owes his career to a human skull. In 1871, as a young doctor at a mental asylum in Pavia, Italy, he autopsied the brain of Giuseppe Villela, a Calabrese peasant turned criminal, who has been described as an Italian Jack the Ripper. “At the sight of that skull,” Lombroso said, “I seemed to see all at once, standing out clearly illuminated as in a vast plain under a flaming sky, the problem of the nature of the criminal, who reproduces in civilised times characteristics, not only of primitive savages, but of still lower types as far back as the carnivora.”

Lombroso would go on to argue that the key to understanding the essence of criminality lay in organic, physical, and constitutional features—each defect being a throwback to a more primitive and bestial psyche. And while his original insight had come from a skull, certain telltale signs, he believed, could be discerned long before an autopsy. Chief among these was left-handedness.

In 1903, Lombroso summarized his views on the left-handed of the world. “What is sure,” he wrote, “is, that criminals are more often left-handed than honest men, and lunatics are more sensitively left-sided than either of the other two.” Left-handers were more than three times as common in criminal populations as they were in everyday life, he found. The prevalence among swindlers was even higher: up to thirty-three per cent were left-handed—in contrast to the four per cent Lombroso found within the normal population. He ended on a conciliatory note. “I do not dream at all of saying that all left-handed people are wicked, but that left-handedness, united to many other traits, may contribute to form one of the worst characters among the human species.”

Though Lombroso’s science may seem suspect to a modern eye, less-than-favorable views of the left-handed have persisted. In 1977, the psychologist Theodore Blau argued that left-handed children were over-represented among the academically and behaviorally challenged, and were more vulnerable to mental diseases like schizophrenia. “Sinister children,” he called them. The psychologist Stanley Coren, throughout the eighties and nineties, presented evidence that the left-handed lived shorter, more impoverished lives, and that they were more likely to experience delays in mental and physical maturity, among other signs of “neurological insult or physical malfunctioning.” Toward the end of his career, the Harvard University neurologist Norman Geschwind implicated left-handedness in a range of problematic conditions, including migraines, diseases of the immune system, and learning disorders. He attributed the phenomenon, and the related susceptibilities, to higher levels of testosterone in utero, which, he argued, slowed down the development of the brain’s left hemisphere (the one responsible for the right side of the body).

But over the past two decades, the data that seemed compelling have largely been discredited. In 1993, the psychologist Marian Annett, who has spent half a century researching “handedness,” as it is known, challenged the basic foundation of Coren’s findings. The data, she argued, were fundamentally flawed: it wasn’t the case that left-handers led shorter lives. Rather, the older you were, the more likely it was that you had been forced to use your right hand as a young child. The mental-health data have also withered: a 2010 analysis of close to fifteen hundred individuals that included schizophrenic patients and their non-affected siblings found that being left-handed neither increased the risk of developing schizophrenia nor predicted any other cognitive or neural disadvantage. And when a group of neurologists scanned the brains of four hundred and sixty-five adults, they found no effect of handedness on either grey or white matter volume or concentration, either globally or regionally.

Left-handers may, in fact, even derive certain cognitive benefits from their preference. This spring, a group of psychiatrists from the University of Athens invited a hundred university students and graduates—half left-handed and half right—to complete two tests of cognitive ability. In the Trail Making Test, participants had to find a path through a batch of circles as quickly as possible. In the hard version of the test, the circles contain numbers and letters, and participants must move in ascending order while alternating between the two as fast as possible. In the second test, Letter-Number Sequencing, participants hear a group of numbers and letters and must then repeat the whole group, but with numbers in ascending order and letters organized alphabetically. Lefties performed better on both the complex version of the T.M.T.—demonstrating faster and more accurate spatial skills, along with strong executive control and mental flexibility—and on the L.N.S., demonstrating enhanced working memory. And the more intensely they preferred their left hand for tasks, the stronger the effect.
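(An aside from this blog, not the New Yorker piece: the reordering that the Letter-Number Sequencing task demands of participants is simple enough to sketch in a few lines of Python. The digits-before-letters ordering below follows the usual convention for this test; the quoted passage doesn’t spell it out, so treat that detail as an assumption.)

```python
def letter_number_sequence(items):
    """Reorder a mixed run of digits and letters the way the
    Letter-Number Sequencing task requires: digits in ascending
    order first (assumed convention), then letters alphabetically."""
    digits = sorted(ch for ch in items if ch.isdigit())
    letters = sorted(ch for ch in items if ch.isalpha())
    return "".join(digits + letters)

# A participant who hears "T9A3L5" should answer "359ALT".
print(letter_number_sequence("T9A3L5"))  # -> 359ALT
```

The hard part for a human, of course, is doing this sort in working memory while the items are only heard, not seen — which is exactly why the test is taken to probe working-memory capacity.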

The Athens study points to a specific kind of cognitive benefit, since both the T.M.T. and the L.N.S. are thought to engage, to a large extent, the right hemisphere of the brain. But a growing body of research suggests another, broader benefit: a boost in a specific kind of creativity—namely, divergent thinking, or the ability to generate new ideas from a single principle quickly and effectively. In one demonstration, researchers found that the more marked the left-handed preference in a group of males, the better they were at tests of divergent thought. (The demonstration was led by the very Coren who had originally argued for the left-handers’ increased susceptibility to mental illness.) Left-handers were more adept, for instance, at combining two common objects in novel ways to form a third—for example, using a pole and a tin can to make a birdhouse. They also excelled at grouping lists of words into as many alternate categories as possible. Another recent study has demonstrated an increased cognitive flexibility among the ambidextrous and the left-handed—and lefties have been found to be over-represented among architects, musicians, and art and music students (as compared to those studying science).

Part of the explanation for this creative edge may lie in the greater connectivity of the left-handed brain. In a meta-analysis of forty-three studies, the neurologist Naomi Driesen and the cognitive neuroscientist Naftali Raz concluded that the corpus callosum—the bundle of fibers that connects the brain’s hemispheres—was slightly but significantly larger in left-handers than in right-handers. The explanation could also be a much more prosaic one: in 1989, a group of Connecticut College psychologists suggested that the creativity boost was a result of the environment, since left-handers had to constantly improvise to deal with a world designed for right-handers. In a 2013 review of research into handedness and cognition, a group of psychologists found that the main predictor of cognitive performance wasn’t whether an individual was left-handed or right-handed, but rather how strongly they preferred one hand over another. Strongly handed individuals, both right and left, were at a slight disadvantage compared to those who occupied the middle ground—both the ambidextrous and the left-handed who, through years of practice, had been forced to develop their non-dominant right hand. In those less clear-cut cases, the brain’s hemispheres interacted more and overall performance improved, indicating there may be something to left-handed brains being pushed in a way that a right-handed one never is.

Whatever the ultimate explanation may be, the advantage appears to extend to other types of thinking, too. In a 1986 study of students who had scored in the top of their age group on either the math or the verbal sections of the S.A.T., the prevalence of left-handers among the high achievers—over fifteen per cent, as compared to the roughly ten percent found in the general population—was higher than in any comparison groups, which included their siblings and parents. Among those who had scored in the top in both the verbal and math sections, the percentage of left-handers jumped to nearly seventeen per cent, for males, and twenty per cent, for females. That advantage echoes an earlier sample of elementary-school children, which found increased left-handedness among children with I.Q. scores above a hundred and thirty-one.

Read the entire article here.

Image: Book cover – David Wolman’s new book, A Left Hand Turn Around the World, explores the scientific factors that lead to 10 percent of the human race being left-handed. Courtesy of NPR.


Dopamine on the Mind

Dopamine is one of the brain’s key signalling chemicals. And, because of its central role in the risk-reward structures of the brain, it often gets much attention — both in neuroscience research and in the public consciousness.

From Slate:

In a brain that people love to describe as “awash with chemicals,” one chemical always seems to stand out. Dopamine: the molecule behind all our most sinful behaviors and secret cravings. Dopamine is love. Dopamine is lust. Dopamine is adultery. Dopamine is motivation. Dopamine is attention. Dopamine is feminism. Dopamine is addiction.

My, dopamine’s been busy.

Dopamine is the one neurotransmitter that everyone seems to know about. Vaughn Bell once called it the Kim Kardashian of molecules, but I don’t think that’s fair to dopamine. Suffice it to say, dopamine’s big. And every week or so, you’ll see a new article come out all about dopamine.

So is dopamine your cupcake addiction? Your gambling? Your alcoholism? Your sex life? The reality is dopamine has something to do with all of these. But it is none of them. Dopamine is a chemical in your body. That’s all. But that doesn’t make it simple.

What is dopamine? Dopamine is one of the chemical signals that pass information from one neuron to the next in the tiny spaces between them. When it is released from the first neuron, it floats into the space (the synapse) between the two neurons, and it bumps against receptors for it on the other side that then send a signal down the receiving neuron. That sounds very simple, but when you scale it up from a single pair of neurons to the vast networks in your brain, it quickly becomes complex. The effects of dopamine release depend on where it’s coming from, where the receiving neurons are going and what type of neurons they are, what receptors are binding the dopamine (there are five known types), and what role both the releasing and receiving neurons are playing.

And dopamine is busy! It’s involved in many different important pathways. But when most people talk about dopamine, particularly when they talk about motivation, addiction, attention, or lust, they are talking about the dopamine pathway known as the mesolimbic pathway, which starts with cells in the ventral tegmental area, buried deep in the middle of the brain, which send their projections out to places like the nucleus accumbens and the cortex. Increases in dopamine release in the nucleus accumbens occur in response to sex, drugs, and rock and roll. And dopamine signaling in this area is changed during the course of drug addiction. All abused drugs, from alcohol to cocaine to heroin, increase dopamine in this area in one way or another, and many people like to describe a spike in dopamine as “motivation” or “pleasure.” But that’s not quite it.

Really, dopamine is signaling feedback for predicted rewards. If you, say, have learned to associate a cue (like a crack pipe) with a hit of crack, you will start getting increases in dopamine in the nucleus accumbens in response to the sight of the pipe, as your brain predicts the reward. But if you then don’t get your hit, well, then dopamine can decrease, and that’s not a good feeling. So you’d think that maybe dopamine predicts reward. But again, it gets more complex. For example, dopamine can increase in the nucleus accumbens in people with post-traumatic stress disorder when they are experiencing heightened vigilance and paranoia. So you might say, in this brain area at least, dopamine isn’t addiction or reward or fear. Instead, it’s what we call salience. Salience is more than attention: It’s a sign of something that needs to be paid attention to, something that stands out. This may be part of the mesolimbic role in attention deficit hyperactivity disorder and also a part of its role in addiction.

But dopamine itself? It’s not salience. It has far more roles in the brain to play. For example, dopamine plays a big role in starting movement, and the destruction of dopamine neurons in an area of the brain called the substantia nigra is what produces the symptoms of Parkinson’s disease. Dopamine also plays an important role as a hormone, inhibiting prolactin to stop the release of breast milk. Back in the mesolimbic pathway, dopamine can play a role in psychosis, and many antipsychotics for treatment of schizophrenia target dopamine. Dopamine is involved in the frontal cortex in executive functions like attention. In the rest of the body, dopamine is involved in nausea, in kidney function, and in heart function.

With all of these wonderful, interesting things that dopamine does, it gets my goat to see dopamine simplified to things like “attention” or “addiction.” After all, it’s so easy to say “dopamine is X” and call it a day. It’s comforting. You feel like you know the truth at some fundamental biological level, and that’s that. And there are always enough studies out there showing the role of dopamine in X to leave you convinced. But simplifying dopamine, or any chemical in the brain, down to a single action or result gives people a false picture of what it is and what it does. If you think that dopamine is motivation, then more must be better, right? Not necessarily! Because if dopamine is also “pleasure” or “high,” then too much is far too much of a good thing. If you think of dopamine as only being about pleasure or only being about attention, you’ll end up with a false idea of some of the problems involving dopamine, like drug addiction or attention deficit hyperactivity disorder, and you’ll end up with false ideas of how to fix them.

Read the entire article here.

Image: 3D model of dopamine. Courtesy of Wikipedia.


Dead Man Talking

Graham is a man very much alive. But his mind has convinced him that his brain is dead and that he killed it.

From the New Scientist:

Name: Graham
Condition: Cotard’s syndrome

“When I was in hospital I kept on telling them that the tablets weren’t going to do me any good ’cause my brain was dead. I lost my sense of smell and taste. I didn’t need to eat, or speak, or do anything. I ended up spending time in the graveyard because that was the closest I could get to death.”

Nine years ago, Graham woke up and discovered he was dead.

He was in the grip of Cotard’s syndrome. People with this rare condition believe that they, or parts of their body, no longer exist.

For Graham, it was his brain that was dead, and he believed that he had killed it. Suffering from severe depression, he had tried to commit suicide by taking an electrical appliance with him into the bath.

Eight months later, he told his doctor his brain had died or was, at best, missing. “It’s really hard to explain,” he says. “I just felt like my brain didn’t exist any more. I kept on telling the doctors that the tablets weren’t going to do me any good because I didn’t have a brain. I’d fried it in the bath.”

Doctors found trying to rationalise with Graham was impossible. Even as he sat there talking, breathing – living – he could not accept that his brain was alive. “I just got annoyed. I didn’t know how I could speak or do anything with no brain, but as far as I was concerned I hadn’t got one.”

Baffled, they eventually put him in touch with neurologists Adam Zeman at the University of Exeter, UK, and Steven Laureys at the University of Liège in Belgium.

“It’s the first and only time my secretary has said to me: ‘It’s really important for you to come and speak to this patient because he’s telling me he’s dead,'” says Laureys.

Limbo state

“He was a really unusual patient,” says Zeman. Graham’s belief “was a metaphor for how he felt about the world – his experiences no longer moved him. He felt he was in a limbo state caught between life and death”.

No one knows how common Cotard’s syndrome may be. A study published in 1995 of 349 elderly psychiatric patients in Hong Kong found two with symptoms resembling Cotard’s (General Hospital Psychiatry, DOI: 10.1016/0163-8343(94)00066-M). But with successful and quick treatments for mental states such as depression – the condition from which Cotard’s appears to arise most often – readily available, researchers suspect the syndrome is exceptionally rare today. Most academic work on the syndrome is limited to single case studies like Graham’s.

Some people with Cotard’s have reportedly died of starvation, believing they no longer needed to eat. Others have attempted to get rid of their body using acid, which they saw as the only way they could free themselves of being the “walking dead”.

Graham’s brother and carers made sure he ate, and looked after him. But it was a joyless existence. “I didn’t want to face people. There was no point,” he says, “I didn’t feel pleasure in anything. I used to idolise my car, but I didn’t go near it. All the things I was interested in went away.”

Even the cigarettes he used to relish no longer gave him a hit. “I lost my sense of smell and my sense of taste. There was no point in eating because I was dead. It was a waste of time speaking as I never had anything to say. I didn’t even really have any thoughts. Everything was meaningless.”

Low metabolism

A peek inside Graham’s brain provided Zeman and Laureys with some explanation. They used positron emission tomography to monitor metabolism across his brain. It was the first PET scan ever taken of a person with Cotard’s. What they found was shocking: metabolic activity across large areas of the frontal and parietal brain regions was so low that it resembled that of someone in a vegetative state.

Graham says he didn’t really have any thoughts about his future during that time. “I had no other option other than to accept the fact that I had no way to actually die. It was a nightmare.”

Graveyard haunt

This feeling prompted him on occasion to visit the local graveyard. “I just felt I might as well stay there. It was the closest I could get to death. The police would come and get me, though, and take me back home.”

There were some unexplained consequences of the disorder. Graham says he used to have “nice hairy legs”. But after he got Cotard’s, all the hairs fell out. “I looked like a plucked chicken! Saves shaving them I suppose…”

It’s nice to hear him joke. Over time, and with a lot of psychotherapy and drug treatment, Graham has gradually improved and is no longer in the grip of the disorder. He is now able to live independently. “His Cotard’s has ebbed away and his capacity to take pleasure in life has returned,” says Zeman.

“I couldn’t say I’m really back to normal, but I feel a lot better now and go out and do things around the house,” says Graham. “I don’t feel that brain-dead any more. Things just feel a bit bizarre sometimes.” And has the experience changed his feeling about death? “I’m not afraid of death,” he says. “But that’s not to do with what happened – we’re all going to die sometime. I’m just lucky to be alive now.”

Read the entire article here.

Image courtesy of Wikimedia / Public domain.


Age is All in the Mind (Hypothalamus)

Researchers are continuing to make great progress in unraveling the complexities of aging. While some fingers point to the shortening of telomeres — end caps — in our chromosomal DNA as a contributing factor, other research points to the hypothalamus. This small sub-region of the brain has been found to play a major role in aging and death (though, at the moment, only in mice).

From the New Scientist:

The brain’s mechanism for controlling ageing has been discovered – and manipulated to shorten and extend the lives of mice. Drugs to slow ageing could follow

Tick tock, tick tock… A mechanism that controls ageing, counting down to inevitable death, has been identified in the hypothalamus – a part of the brain that controls most of the basic functions of life.

By manipulating this mechanism, researchers have both shortened and lengthened the lifespan of mice. The discovery reveals several new drug targets that, if not quite an elixir of youth, may at least delay the onset of age-related disease.

The hypothalamus is an almond-sized puppetmaster in the brain. “It has a global effect,” says Dongsheng Cai at the Albert Einstein College of Medicine in New York. Sitting on top of the brain stem, it is the interface between the brain and the rest of the body, and is involved in, among other things, controlling our automatic response to the world around us, our hormone levels, sleep-wake cycles, immunity and reproduction.

While investigating ageing processes in the brain, Cai and his colleagues noticed that ageing mice produce increasing levels of nuclear factor kB (NF-kB) – a protein complex that plays a major role in regulating immune responses. NF-kB is barely active in the hypothalamus of 3 to 4-month-old mice but becomes very active in old mice, aged 22 to 24 months.

To see whether it was possible to affect ageing by manipulating levels of this protein complex, Cai’s team tested three groups of middle-aged mice. One group was given gene therapy that inhibits NF-kB, the second had gene therapy to activate NF-kB, while the third was left to age naturally.

This last group lived, as expected, between 600 and 1000 days. Mice with activated NF-kB all died within 900 days, while the animals with NF-kB inhibition lived for up to 1100 days.

Crucially, the mice that lived the longest not only increased their lifespan but also remained mentally and physically fit for longer. Six months after receiving gene therapy, all the mice were given a series of tests involving cognitive and physical ability.

In all of the tests, the mice that subsequently lived the longest outperformed the controls, while the short-lived mice performed the worst.

Post-mortem examinations of muscle and bone in the longest-living rodents also showed that they had many chemical and physical qualities of younger mice.

Further investigation revealed that NF-kB reduces the level of a chemical produced by the hypothalamus called gonadotropin-releasing hormone (GnRH) – better known for its involvement in the regulation of puberty and fertility, and the production of eggs and sperm.

To see if they could control lifespan using this hormone, the team gave another group of mice – 20 to 24 months old – daily subcutaneous injections of GnRH for five to eight weeks. These mice lived longer too, by a length of time similar to that of mice with inhibited NF-kB.

GnRH injections also resulted in new neurons in the brain. What’s more, when injected directly into the hypothalamus, GnRH influenced other brain regions, reversing widespread age-related decline and further supporting the idea that the hypothalamus could be a master controller for many ageing processes.

GnRH injections even delayed ageing in the mice that had been given gene therapy to activate NF-kB and would otherwise have aged more quickly than usual. None of the mice in the study showed serious side effects.

So could regular doses of GnRH keep death at bay? Cai hopes to find out how different doses affect lifespan, but says the hormone is unlikely to prolong life indefinitely since GnRH is only one of many factors at play. “Ageing is the most complicated biological process,” he says.

Read the entire article after the jump.

Image: Location of Hypothalamus. Courtesy of Colorado State University / Wikipedia.


Criminology and Brain Science

Pathological criminals and the non-criminals who seek to understand them have no doubt co-existed since humans first learned to steal from and murder one another.

So while we may be no clearer in fully understanding the underlying causes of antisocial, destructive and violent behavior, many researchers continue their quests. In one camp are those who maintain that such behavior is learned, or comes as a consequence of poor choices or life events, usually traumatic, or through exposure to an acute psychological or physiological stressor. In the other camp are those who argue that genes and their subsequent expression, especially those controlling brain function, are a principal cause.

Some recent neurological studies of criminals and psychopaths show fascinating, though not unequivocal, results.

From the Wall Street Journal:

The scientific study of crime got its start on a cold, gray November morning in 1871, on the east coast of Italy. Cesare Lombroso, a psychiatrist and prison doctor at an asylum for the criminally insane, was performing a routine autopsy on an infamous Calabrian brigand named Giuseppe Villella. Lombroso found an unusual indentation at the base of Villella’s skull. From this singular observation, he would go on to become the founding father of modern criminology.

Lombroso’s controversial theory had two key points: that crime originated in large measure from deformities of the brain and that criminals were an evolutionary throwback to more primitive species. Criminals, he believed, could be identified on the basis of physical characteristics, such as a large jaw and a sloping forehead. Based on his measurements of such traits, Lombroso created an evolutionary hierarchy, with Northern Italians and Jews at the top and Southern Italians (like Villella), along with Bolivians and Peruvians, at the bottom.

These beliefs, based partly on pseudoscientific phrenological theories about the shape and size of the human head, flourished throughout Europe in the late 19th and early 20th centuries. Lombroso was Jewish and a celebrated intellectual in his day, but the theory he spawned turned out to be socially and scientifically disastrous, not least by encouraging early-20th-century ideas about which human beings were and were not fit to reproduce—or to live at all.

The racial side of Lombroso’s theory fell into justifiable disrepute after the horrors of World War II, but his emphasis on physiology and brain traits has proved to be prescient. Modern-day scientists have now developed a far more compelling argument for the genetic and neurological components of criminal behavior. They have uncovered, quite literally, the anatomy of violence, at a time when many of us are preoccupied by the persistence of violent outrages in our midst.

The field of neurocriminology—using neuroscience to understand and prevent crime—is revolutionizing our understanding of what drives “bad” behavior. More than 100 studies of twins and adopted children have confirmed that about half of the variance in aggressive and antisocial behavior can be attributed to genetics. Other research has begun to pinpoint which specific genes promote such behavior.

Brain-imaging techniques are identifying physical deformations and functional abnormalities that predispose some individuals to violence. In one recent study, brain scans correctly predicted which inmates in a New Mexico prison were most likely to commit another crime after release. Nor is the story exclusively genetic: A poor environment can change the early brain and make for antisocial behavior later in life.

Most people are still deeply uncomfortable with the implications of neurocriminology. Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. Both sides fear any seeming effort to erode the idea of human agency and free will.

It is growing harder and harder, however, to avoid the mounting evidence. With each passing year, neurocriminology is winning new adherents, researchers and practitioners who understand its potential to transform our approach to both crime prevention and criminal justice.

The genetic basis of criminal behavior is now well established. Numerous studies have found that identical twins, who have all of their genes in common, are much more similar to each other in terms of crime and aggression than are fraternal twins, who share only 50% of their genes.

In a landmark 1984 study, my colleague Sarnoff Mednick found that children in Denmark who had been adopted from parents with a criminal record were more likely to become criminals in adulthood than were other adopted kids. The more offenses the biological parents had, the more likely it was that their offspring would be convicted of a crime. For biological parents who had no offenses, 13% of their sons had been convicted; for biological parents with three or more offenses, 25% of their sons had been convicted.

As for environmental factors that affect the young brain, lead is neurotoxic and particularly damages the prefrontal region, which regulates behavior. Measured lead levels in our bodies tend to peak at 21 months—an age when toddlers are apt to put their fingers into their mouths. Children generally pick up lead in soil that has been contaminated by air pollution and dumping.

Rising lead levels in the U.S. from 1950 through the 1970s neatly track increases in violence 20 years later, from the ’70s through the ’90s. (Violence peaks when individuals are in their late teens and early 20s.) As lead in the environment fell in the ’70s and ’80s—thanks in large part to the regulation of gasoline—violence fell correspondingly. No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.

Lead isn’t the only culprit. Other factors linked to higher aggression and violence in adulthood include smoking and drinking by the mother before birth, complications during birth and poor nutrition early in life.

Genetics and environment may work together to encourage violent behavior. One pioneering study in 2002 by Avshalom Caspi and Terrie Moffitt of Duke University genotyped over 1,000 individuals in a community in New Zealand and assessed their levels of antisocial behavior in adulthood. They found that a genotype conferring low levels of the enzyme monoamine oxidase A (MAOA), when combined with early child abuse, predisposed the individual to later antisocial behavior. Low MAOA has been linked to reduced volume in the amygdala—the emotional center of the brain—while physical child abuse can damage the frontal part of the brain, resulting in a double hit.

Brain-imaging studies have also documented impairments in offenders. Murderers, for instance, tend to have poorer functioning in the prefrontal cortex—the “guardian angel” that keeps the brakes on impulsive, disinhibited behavior and volatile emotions.

Read the entire article following the jump.

Image: The Psychopath Test by Jon Ronson, book cover. Courtesy of Goodreads.


Google’s AI

The collective IQ of Google, the company, inched up a few notches in January 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to their work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade ­Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
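Hinton’s layer-by-layer procedure can be caricatured in a few lines: train one layer on its own to compress and reconstruct its input, then feed that layer’s outputs to the next layer as its training data. Everything concrete here (the tiny linear autoencoder, the toy data, the fixed starting weights) is an illustrative assumption, not the actual method’s code:

```python
# Miniature of greedy layer-wise pretraining: each "layer" is a tiny linear
# autoencoder trained alone to squeeze its input through a one-number code
# and reconstruct it; its codes then become the next layer's training data.
# The toy data and fixed starting weights are illustrative assumptions.

def train_layer(data, steps=3000, lr=0.01):
    """Train code = w_in . x and reconstruction = code * w_out; return weights."""
    n = len(data[0])
    w_in = [0.3] * n   # small fixed starting weights keep the sketch deterministic
    w_out = [0.3] * n
    for _ in range(steps):
        for x in data:
            code = sum(wi * xi for wi, xi in zip(w_in, x))
            err = [xi - code * wo for xi, wo in zip(x, w_out)]
            grad_code = sum(e * wo for e, wo in zip(err, w_out))
            for i in range(n):
                w_out[i] += lr * err[i] * code   # improve the reconstruction
                w_in[i] += lr * x[i] * grad_code  # improve the code
    return w_in, w_out

def encode(w_in, x):
    return [sum(wi * xi for wi, xi in zip(w_in, x))]

# Toy inputs lying along a line, so one number per point really is enough.
inputs = [(-1.0, -2.0), (-0.5, -1.0), (0.5, 1.0), (1.0, 2.0)]

w1_in, w1_out = train_layer(inputs)        # first layer learns on its own...
codes = [encode(w1_in, x) for x in inputs]
w2_in, w2_out = train_layer(codes)         # ...then the next trains on its codes
```

The point is the control flow, not the model: each layer is trained in isolation on the previous layer’s output, which is what made deep stacks trainable before end-to-end methods caught up.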

Read the entire fascinating article following the jump.

Image courtesy of Wired.


Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.


Ray Kurzweil and Living a Googol Years

By all accounts, serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and the music synthesizer. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’ “

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.

Send to Kindle

Technology: Mind Exp(a/e)nder

Rattling off esoteric facts to friends and colleagues at a party or in the office is often seen as a simple way to impress. You may have tried this at some point — to impress a prospective boyfriend or girlfriend, a group of peers or even your boss. Not surprisingly, your facts will impress if they are relevant to the discussion at hand. However, your audience will be even more agog at your uncanny intellectual prowess if the facts and figures relate to some wildly obscure domain — quotes from authors, local bird species, gold prices through the years, land-speed records through the ages, how electrolysis works, the etymology of polysyllabic words, and so it goes.

So, it comes as no surprise that many technology companies fall over themselves to promote their products as a way to make you, the smart user, even smarter. But does having constant, real-time access to a powerful computer, smartphone or pair of spectacles linked to an immense library of interconnected content make you smarter? Some would argue that it does; that having access to a vast, virtual disk drive of information will improve your cognitive abilities. There is no doubt that our technology puts an unparalleled repository of information within instant and constant reach: we can read all the classic literature — for that matter we can read the entire contents of the Library of Congress; we can find an answer to almost any question — it’s just a Google search away; we can find fresh research and rich reference material on every subject imaginable.

Yet, all this information will not directly make us any smarter; it is not applied knowledge nor is it experiential wisdom. It will not make us more creative or insightful. However, it is more likely to influence our cognition indirectly — freed from our need to carry volumes of often useless facts and figures in our heads, we will be able to turn our minds to more consequential and noble pursuits — to think, rather than to memorize. That is a good thing.

From Slate:

Quick, what’s the square root of 2,130? How many Roadmaster convertibles did Buick build in 1949? What airline has never lost a jet plane in a crash?

If you answered “46.1519,” “8,000,” and “Qantas,” there are two possibilities. One is that you’re Rain Man. The other is that you’re using the most powerful brain-enhancement technology of the 21st century so far: Internet search.
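The first of those quiz answers is the only one that is computation rather than trivia, and it checks out in a couple of lines of Python:

```python
import math

# Check the quiz's first answer: the square root of 2,130,
# rounded to four decimal places as quoted in the article.
root = math.sqrt(2130)
print(round(root, 4))  # 46.1519
```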

True, the Web isn’t actually part of your brain. And Dustin Hoffman rattled off those bits of trivia a few seconds faster in the movie than you could with the aid of Google. But functionally, the distinctions between encyclopedic knowledge and reliable mobile Internet access are less significant than you might think. Math and trivia are just the beginning. Memory, communication, data analysis—Internet-connected devices can give us superhuman powers in all of these realms. A growing chorus of critics warns that the Internet is making us lazy, stupid, lonely, or crazy. Yet tools like Google, Facebook, and Evernote hold at least as much potential to make us not only more knowledgeable and more productive but literally smarter than we’ve ever been before.

The idea that we could invent tools that change our cognitive abilities might sound outlandish, but it’s actually a defining feature of human evolution. When our ancestors developed language, it altered not only how they could communicate but how they could think. Mathematics, the printing press, and science further extended the reach of the human mind, and by the 20th century, tools such as telephones, calculators, and Encyclopedia Britannica gave people easy access to more knowledge about the world than they could absorb in a lifetime.

Yet it would be a stretch to say that this information was part of people’s minds. There remained a real distinction between what we knew and what we could find out if we cared to.

The Internet and mobile technology have begun to change that. Many of us now carry our smartphones with us everywhere, and high-speed data networks blanket the developed world. If I asked you the capital of Angola, it would hardly matter anymore whether you knew it off the top of your head. Pull out your phone and repeat the question using Google Voice Search, and a mechanized voice will shoot back, “Luanda.” When it comes to trivia, the difference between a world-class savant and your average modern technophile is perhaps five seconds. And Watson’s Jeopardy! triumph over Ken Jennings suggests even that time lag might soon be erased—especially as wearable technology like Google Glass begins to collapse the distance between our minds and the cloud.

So is the Internet now essentially an external hard drive for our brains? That’s the essence of an idea called “the extended mind,” first propounded by philosophers Andy Clark and David Chalmers in 1998. The theory was a novel response to philosophy’s long-standing “mind-brain problem,” which asks whether our minds are reducible to the biology of our brains. Clark and Chalmers proposed that the modern human mind is a system that transcends the brain to encompass aspects of the outside environment. They argued that certain technological tools—computer modeling, navigation by slide rule, long division via pencil and paper—can be every bit as integral to our mental operations as the internal workings of our brains. They wrote: “If, as we confront some task, a part of the world functions as a process which, were it done in the head, we would have no hesitation in recognizing as part of the cognitive process, then that part of the world is (so we claim) part of the cognitive process.”

Fifteen years on and well into the age of Google, the idea of the extended mind feels more relevant than ever. “Ned Block [an NYU professor] likes to say, ‘Your thesis was false when you wrote the article—since then it has come true,’ ” Chalmers says with a laugh.

The basic Google search, which has become our central means of retrieving published information about the world, is only the most obvious example. Personal-assistant tools like Apple’s Siri instantly retrieve information such as phone numbers and directions that we once had to memorize or commit to paper. Potentially even more powerful as memory aids are cloud-based note-taking apps like Evernote, whose slogan is, “Remember everything.”

So here’s a second pop quiz. Where were you on the night of Feb. 8, 2010? What are the names and email addresses of all the people you know who currently live in New York City? What’s the exact recipe for your favorite homemade pastry?

Read the entire article after the jump.

Image: Google Glass. Courtesy of Google.

Send to Kindle

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, money aside, government-funded projects do raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvement in medical science and innovation in materials science. And our collective lives are enriched as we learn ever more about the how and the why of our own existence and that of our universe.

So, some of our dollars have gone towards big science at the Large Hadron Collider (LHC) beneath the Franco-Swiss border looking for constituents of matter, the wild laser experiment at the National Ignition Facility designed to enable controlled fusion reactions, and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high-speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome” following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So, this is good news for science and the explorer who lurks within us all.

From ars technica:

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donoghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donoghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donoghue called “the big question”—what emerges in the brain at these various scales?

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donoghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donoghue hoped that the work would ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donoghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.

Read the entire article following the jump.

Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.

Send to Kindle

Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future and construct explanations for the past, and that our notion of the present is an entirely fictitious concoction. On the surface this makes our lives seem like nothing more than a construction taken straight out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain, it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

From the New Scientist:

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

Read the entire article after the jump.

Send to Kindle

The Connectome: Slicing and Reconstructing the Brain

From the Guardian:

There is a macabre brilliance to the machine in Jeff Lichtman’s laboratory at Harvard University that is worthy of a Wallace and Gromit film. In one end goes brain. Out the other comes sliced brain, courtesy of an automated arm that wields a diamond knife. The slivers of tissue drop one after another on to a conveyor belt that zips along with the merry whirr of a cine projector.

Lichtman’s machine is an automated tape-collecting lathe ultramicrotome (Atlum), which, according to the neuroscientist, is the tool of choice for this line of work. It produces long strips of sticky tape with brain slices attached, all ready to be photographed through a powerful electron microscope.

When these pictures are combined into 3D images, they reveal the inner wiring of the organ, a tangled mass of nervous spaghetti. The research by Lichtman and his co-workers has a goal in mind that is so ambitious it is almost unthinkable.

If we are ever to understand the brain in full, they say, we must know how every neuron inside is wired up.

Though the goal sounds fanciful, the payoff could be profound. Map out our “connectome” – following other major “ome” projects such as the genome and transcriptome – and we will lay bare the biological code of our personalities, memories, skills and susceptibilities. Somewhere in our brains is who we are.

To use an understatement heard often from scientists, the job at hand is not trivial. Lichtman’s machine slices brain tissue into exquisitely thin wafers. To turn a 1mm thick slice of brain into neural salami takes six days in a process that yields about 30,000 slices.

But chopping up the brain is the easy part. When Lichtman began this work several years ago, he calculated how long it might take to image every slice of a 1cm mouse brain. The answer was 7,000 years. “When you hear numbers like that, it does make your pulse quicken,” Lichtman said.
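The slicing figures above imply a remarkable slice thickness, which a quick back-of-envelope calculation makes concrete (a sketch based only on the article's numbers; the 7,000-year imaging estimate depends on assumptions the article doesn't spell out):

```python
# A 1 mm thick block of brain tissue yields about 30,000 slices,
# per the article. What thickness does that imply per slice?
block_mm = 1.0
num_slices = 30_000
thickness_nm = block_mm * 1_000_000 / num_slices  # 1 mm = 1,000,000 nm
print(f"{thickness_nm:.1f} nm per slice")  # 33.3 nm per slice
```

That is roughly a thousandth the thickness of a human hair, which is why the diamond knife and automated handling are necessary.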

The human brain is another story. There are 85bn neurons in the 1.4kg (3lbs) of flesh between our ears. Each has a cell body (grey matter) and long, thin extensions called dendrites and axons (white matter) that reach out and link to others. Most neurons have lots of dendrites that receive information from other nerve cells, and one axon that branches on to other cells and sends information out.

On average, each neuron forms 10,000 connections, through synapses with other nerve cells. Altogether, Lichtman estimates there are between 100tn and 1,000tn connections between neurons.
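Those figures hang together: multiplying the article's neuron count by the average connections per neuron lands inside Lichtman's stated range.

```python
neurons = 85e9            # 85 billion neurons (the article's figure)
synapses_per_neuron = 10_000  # average connections per neuron

total_connections = neurons * synapses_per_neuron
print(f"{total_connections:,.0f} connections")  # 850,000,000,000,000 connections

# 850 trillion, within the quoted range of 100tn to 1,000tn.
assert 100e12 <= total_connections <= 1000e12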

Unlike the lung, or the kidney, where the whole organ can be understood, more or less, by grasping the role of a handful of repeating physiological structures, the brain is made of thousands of specific types of brain cell that look and behave differently. Their names – Golgi, Betz, Renshaw, Purkinje – read like a roll call of the pioneers of neuroscience.

Lichtman, who is fond of calculations that expose the magnitude of the task he has taken on, once worked out how much computer memory would be needed to store a detailed human connectome.

“To map the human brain at the cellular level, we’re talking about 1m petabytes of information. Most people think that is more than the digital content of the world right now,” he said. “I’d settle for a mouse brain, but we’re not even ready to do that. We’re still working on how to do one cubic millimetre.”

He says he is about to submit a paper on mapping a minuscule volume of the mouse connectome and is working with a German company on building a multibeam microscope to speed up imaging.

For some scientists, mapping the human connectome down to the level of individual cells is verging on overkill. “If you want to study the rainforest, you don’t need to look at every leaf and every twig and measure its position and orientation. It’s too much detail,” said Olaf Sporns, a neuroscientist at Indiana University, who coined the term “connectome” in 2005.

Read the entire article after the jump.

Video courtesy of the Connectome Project / Guardian.

Send to Kindle

Synaesthesia: Smell the Music

From the Economist:

That some people make weird associations between the senses has been acknowledged for over a century. The condition has even been given a name: synaesthesia. Odd as it may seem to those not so gifted, synaesthetes insist that spoken sounds and the symbols which represent them give rise to specific colours or that individual musical notes have their own hues.

Yet there may be a little of this cross-modal association in everyone. Most people agree that loud sounds are “brighter” than soft ones. Likewise, low-pitched sounds are reminiscent of large objects and high-pitched ones evoke smallness. Anne-Sylvie Crisinel and Charles Spence of Oxford University think something similar is true between sound and smell.

Ms Crisinel and Dr Spence wanted to know whether an odour sniffed from a bottle could be linked to a specific pitch, and even a specific instrument. To find out, they asked 30 people to inhale 20 smells—ranging from apple to violet and wood smoke—which came from a teaching kit for wine-tasting. After giving each sample a good sniff, volunteers had to click their way through 52 sounds of varying pitches, played by piano, woodwind, string or brass, and identify which best matched the smell. The results of this study, to be published later this month in Chemical Senses, are intriguing.

The researchers’ first finding was that the volunteers did not think their request utterly ridiculous. It rather made sense, they told the researchers afterwards. The second was that there was significant agreement between volunteers. Sweet and sour smells were rated as higher-pitched, smoky and woody ones as lower-pitched. Blackberry and raspberry were very piano. Vanilla had elements of both piano and woodwind. Musk was strongly brass.

It is not immediately clear why people employ their musical senses in this way to help their assessment of a smell. But gone are the days when science assumed each sense worked in isolation. People live, say Dr Spence and Ms Crisinel, in a multisensory world and their brains tirelessly combine information from all sources to make sense, as it were, of what is going on around them. Nor is this response restricted to humans. Studies of the brains of mice show that regions involved in olfaction also react to sound.

Taste, too, seems linked to hearing. Ms Crisinel and Dr Spence have previously established that sweet and sour tastes, like smells, are linked to high pitch, while bitter tastes bring lower pitches to mind. Now they have gone further. In a study that will be published later this year they and their colleagues show how altering the pitch and instruments used in background music can alter the way food tastes.

Read the entire article here.

Image courtesy of cerebromente.org.br.

Send to Kindle

Inside the Weird Teenage Brain

From the Wall Street Journal:

“What was he thinking?” It’s the familiar cry of bewildered parents trying to understand why their teenagers act the way they do.

How does the boy who can thoughtfully explain the reasons never to drink and drive end up in a drunken crash? Why does the girl who knows all about birth control find herself pregnant by a boy she doesn’t even like? What happened to the gifted, imaginative child who excelled through high school but then dropped out of college, drifted from job to job and now lives in his parents’ basement?

Adolescence has always been troubled, but for reasons that are somewhat mysterious, puberty is now kicking in at an earlier and earlier age. A leading theory points to changes in energy balance as children eat more and move less.

At the same time, first with the industrial revolution and then even more dramatically with the information revolution, children have come to take on adult roles later and later. Five hundred years ago, Shakespeare knew that the emotionally intense combination of teenage sexuality and peer-induced risk could be tragic—witness “Romeo and Juliet.” But, on the other hand, if not for fate, 13-year-old Juliet would have become a wife and mother within a year or two.

Our Juliets (as parents longing for grandchildren will recognize with a sigh) may experience the tumult of love for 20 years before they settle down into motherhood. And our Romeos may be poetic lunatics under the influence of Queen Mab until they are well into graduate school.

What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.

The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again.

The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.

Recent studies in the neuroscientist B.J. Casey’s lab at Cornell University suggest that adolescents aren’t reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

From an evolutionary point of view, this all makes perfect sense. One of the most distinctive evolutionary features of human beings is our unusually long, protected childhood. Human children depend on adults for much longer than those of any other primate. That long protected period also allows us to learn much more than any other animal. But eventually, we have to leave the safe bubble of family life, take what we learned as children and apply it to the real adult world.

Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.

Read more here.