Tag Archives: neuroscience

The science behind disgust

[div class=attrib]From Salon:[end-div]

We all have things that disgust us irrationally, whether it be cockroaches or chitterlings or cotton balls. For me, it’s fruit soda. It started when I was 3; my mom offered me a can of Sunkist after inner ear surgery. Still woozy from the anesthesia, I gulped it down, and by the time we made it to the cashier, all of it managed to come back up. Although it is nearly 30 years later, just the smell of this “fun, sun and the beach” drink is enough to turn my stomach.

But what, exactly, happens when we feel disgust? As Daniel Kelly, an assistant professor of philosophy at Purdue University, explains in his new book, “Yuck!: The Nature and Moral Significance of Disgust,” it’s not just a physical sensation; it’s a powerful emotional warning sign. Although disgust initially helped keep us away from rotting food and contagious disease, the defense mechanism changed over time to affect the distance we keep from one another. When allowed to play a role in the creation of social policy, Kelly argues, disgust might actually cause more harm than good.

Salon spoke with Kelly about the science behind disgust, why we’re captivated by things we find revolting, and how it can be a very dangerous thing.

What exactly is disgust?

Simply speaking, disgust is the response we have to things we find repulsive. Some of the things that trigger disgust are innate, like the smell of sewage on a hot summer day. No one has to teach you to feel disgusted by garbage; you just are. Other things that are automatically disgusting are rotting food and visible cues of infection or illness. We have this base layer of core disgusting things, and a lot of them don’t seem like they’re learned.

[div class=attrib]More from theSource here.[end-div]

Why Does Time Fly?

[div class=attrib]From Scientific American:[end-div]

Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick into what seems like a lifetime. Yet we do not know how the brain “constructs” the experience of subjective time. Would it not be important to know, so we could find ways to make moments last longer, or pass by more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental setup to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for a shorter, longer or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual — i.e., the dot is “looming” — something strange happens. People overestimate the time that the dot lasted on the screen. This overestimation does not happen when the dot seems to move away. Thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Read more from the original study here.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Sawayasu Tsuji.[end-div]

Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

Test-tube truths

[div class=attrib]From Eurozine:[end-div]

In his new book, American atheist Sam Harris argues that science can replace theology as the ultimate moral authority. Kenan Malik is sceptical of any such yearning for moral certainty, be it scientific or divine.

“If God does not exist, everything is permitted.” Dostoevsky never actually wrote that line, though so often is it attributed to him that he may as well have. It has become the almost reflexive response of believers when faced with an argument for a godless world. Without religious faith, runs the argument, we cannot anchor our moral truths or truly know right from wrong. Without belief in God we will be lost in a miasma of moral nihilism. In recent years, the riposte of many to this challenge has been to argue that moral codes are not revealed by God but instantiated in nature, and in particular in the brain. Ethics is not a theological matter but a scientific one. Science is not simply a means of making sense of facts about the world, but also about values, because values are in essence facts in another form.

Few people have expressed this argument more forcefully than the neuroscientist Sam Harris. Over the past few years, through books such as The End of Faith and Letter to a Christian Nation, Harris has gained a considerable reputation as a no-holds-barred critic of religion, in particular of Islam, and as an acerbic champion of science. In his new book, The Moral Landscape: How Science Can Determine Human Values, he sets out to demolish the traditional philosophical distinction between is and ought, between the way the world is and the way that it should be, a distinction we most associate with David Hume.

What Hume failed to understand, Harris argues, is that science can bridge the gap between ought and is, by turning moral claims into empirical facts. Values, he argues, are facts about the “states of the world” and “states of the human brain”. We need to think of morality, therefore, as “an undeveloped branch of science”: “Questions about values are really questions about the wellbeing of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, the effects of specific laws on human relationships, the neurophysiology of happiness and suffering, etc.” Science, and neuroscience in particular, does not simply explain why we might respond in particular ways to equality or to torture but also whether equality is a good, and torture morally acceptable. Where there are disagreements over moral questions, Harris believes, science will decide which view is right “because the discrepant answers people give to them translate into differences in our brains, in the brains of others and in the world at large.”

Harris is nothing if not self-confident. There is a voluminous philosophical literature that stretches back almost to the origins of the discipline on the relationship between facts and values. Harris chooses to ignore most of it. He does not wish to engage “more directly with the academic literature on moral philosophy”, he explains in a footnote, because he did not develop his arguments “by reading the work of moral philosophers” and because he is “convinced that every appearance of terms like ‘metaethics’, ‘deontology’, ‘noncognitivism’, ‘antirealism’, ‘emotivism’, etc. directly increases the amount of boredom in the universe.”

[div class=attrib]More from theSource here.[end-div]

How Self-Control Works

[div class=attrib]From Scientific American:[end-div]

The scientific community is increasingly coming to realize how central self-control is to many important life outcomes. We have always known about the impact of socioeconomic status and IQ, but these are factors that are highly resistant to interventions. In contrast, self-control may be something that we can tap into to make sweeping improvements in life outcomes.

If you think about the environment we live in, you will notice how it is essentially designed to challenge every grain of our self-control. Businesses have the means and motivation to get us to do things NOW, not later. Krispy Kreme wants us to buy a dozen doughnuts while they are hot; Best Buy wants us to buy a television before we leave the store today; even our physicians want us to hurry up and schedule our annual checkup.

There is not much room for waiting in today’s marketplace. In fact, you can think of the whole capitalist system as being designed to get us to take action and spend money now – and those businesses that are most successful at this do better and prosper (at least in the short term). And this, of course, continuously tests our ability to resist temptation and exercise self-control.

It is in this very environment that it’s particularly important to understand what’s going on behind the mysterious force of self-control.

[div class=attrib]More from theSource here.[end-div]

How Free Is Your Will?

[div class=attrib]From Scientific American:[end-div]

Think about the last time you got bored with the TV channel you were watching and decided to change it with the remote control. Or a time you grabbed a magazine off a newsstand, or raised a hand to hail a taxi. As we go about our daily lives, we constantly make choices to act in certain ways. We all believe we exercise free will in such actions – we decide what to do and when to do it. Free will, however, becomes more complicated when you try to think how it can arise from brain activity.

Do we control our neurons or do they control us? If everything we do starts in the brain, what kind of neural activity would reflect free choice? And how would you feel about your free will if we were to tell you that neuroscientists can look at your brain activity, and tell that you are about to make a decision to move – and that they could do this a whole second and a half before you yourself became aware of your own choice?

Scientists from UCLA and Harvard — Itzhak Fried, Roy Mukamel and Gabriel Kreiman — have taken an audacious step in the search for free will, reported in a new article in the journal Neuron. They used a powerful tool – intracranial recording – to find neurons in the human brain whose activity predicts decisions to make a movement, challenging conventional notions of free will.

Fried is one of a handful of neurosurgeons in the world who perform the delicate procedure of inserting electrodes into a living human brain, and using them to record activity from individual neurons. He does this to pin down the source of debilitating seizures in the brains of epileptic patients. Once he locates the part of the patients’ brains that sparks off the seizures, he can remove it, pulling the plug on their neuronal electrical storms.

[div class=attrib]More from theSource here.[end-div]

A New Tool for Creative Thinking: Mind-Body Dissonance

[div class=attrib]From Scientific American:[end-div]

Did you ever get the giggles during a religious service or some other serious occasion?  Did you ever have to smile politely when you felt like screaming?  In these situations, the emotions that we are required to express differ from the ones we are feeling inside.  That can be stressful, unpleasant, and exhausting.  Normally our minds and our bodies are in harmony.  When facial expressions or posture depart from how we feel, we experience what two psychologists at Northwestern University, Li Huang and Adam Galinsky, call mind–body dissonance.  And in a fascinating new paper, they show that such awkward clashes between mind and body can actually be useful: they help us think more expansively.

Ask yourself: would you say that a camel is a vehicle? Would you describe a handbag as an item of clothing? Your default answer might be negative, but there’s a way in which camels can be regarded as forms of transport, and handbags can certainly be said to dress up an outfit. When we think expansively, we think about categories more inclusively, we stop privileging the average cases, and we extend our horizons to the atypical or exotic. Expansive thought can be regarded as a kind of creativity, and an opportunity for new insights.

Huang and Galinsky have shown that mind–body dissonance can make us think expansively. In a clever series of studies, they developed a way to get people’s facial expressions to depart from their emotional experiences. Participants were asked either to hold a pen between their teeth, forcing an unwitting smile, or to affix two golf tees in a particular position on their foreheads, unwittingly forcing an expression of sadness. While in these facial configurations, subjects were asked to recall happy and sad events or listen to happy and sad music.

[div class=attrib]More from theSource here.[end-div]

Why Athletes Are Geniuses

[div class=attrib]From Discover:[end-div]

The qualities that set a great athlete apart from the rest of us lie not just in the muscles and the lungs but also between the ears. That’s because athletes need to make complicated decisions in a flash. One of the most spectacular examples of the athletic brain operating at top speed came in 2001, when the Yankees were in an American League playoff game with the Oakland Athletics. Shortstop Derek Jeter managed to grab an errant throw coming in from right field and then gently tossed the ball to catcher Jorge Posada, who tagged the base runner at home plate. Jeter’s quick decision saved the game—and the series—for the Yankees. To make the play, Jeter had to master both conscious decisions, such as whether to intercept the throw, and unconscious ones. These are the kinds of unthinking thoughts he must make in every second of every game: how much weight to put on a foot, how fast to rotate his wrist as he releases a ball, and so on.

In recent years neuroscientists have begun to catalog some fascinating differences between average brains and the brains of great athletes. By understanding what goes on in athletic heads, researchers hope to understand more about the workings of all brains—those of sports legends and couch potatoes alike.

As Jeter’s example shows, an athlete’s actions are much more than a set of automatic responses; they are part of a dynamic strategy to deal with an ever-changing mix of intricate challenges. Even a sport as seemingly straightforward as pistol shooting is surprisingly complex. A marksman just points his weapon and fires, and yet each shot calls for many rapid decisions, such as how much to bend the elbow and how tightly to contract the shoulder muscles. Since the shooter doesn’t have perfect control over his body, a slight wobble in one part of the arm may require many quick adjustments in other parts. Each time he raises his gun, he has to make a new calculation of what movements are required for an accurate shot, combining previous experience with whatever variations he is experiencing at the moment.

To explain how brains make these on-the-fly decisions, Reza Shadmehr of Johns Hopkins University and John Krakauer of Columbia University two years ago reviewed studies in which the brains of healthy people and of brain-damaged patients who have trouble controlling their movements were scanned. They found that several regions of the brain collaborate to make the computations needed for detailed motor actions. The brain begins by setting a goal—pick up the fork, say, or deliver the tennis serve—and calculates the best course of action to reach it. As the brain starts issuing commands, it also begins to make predictions about what sort of sensations should come back from the body if it achieves the goal. If those predictions don’t match the actual sensations, the brain then revises its plan to reduce error. Shadmehr and Krakauer’s work demonstrates that the brain does not merely issue rigid commands; it also continually updates its solution to the problem of how to move the body. Athletes may perform better than the rest of us because their brains can find better solutions than ours do.
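The cycle Shadmehr and Krakauer describe (set a goal, issue commands, predict the resulting sensations, compare the prediction with the feedback, revise the plan) is essentially a predictive control loop. Below is a minimal Python sketch of that general idea; the one-dimensional “arm”, the constants and the update rule are illustrative assumptions, not the researchers’ actual model.

import numpy as np

# Toy predict-compare-correct loop, in the spirit of the forward-model account above.
# All names, constants and the 1-D dynamics are illustrative assumptions.
def reach(goal, steps=50, gain=0.4, blend=0.5, noise=0.05, seed=0):
    rng = np.random.default_rng(seed)
    estimate = 0.0    # where the brain believes the hand is
    true_pos = 0.0    # where the hand actually is
    for _ in range(steps):
        command = gain * (goal - estimate)           # plan a command toward the goal
        predicted = estimate + command               # forward model: expected sensation
        true_pos += command + rng.normal(0, noise)   # the body moves, imperfectly
        sensed = true_pos + rng.normal(0, noise)     # noisy sensory feedback
        error = sensed - predicted                   # prediction error
        estimate = predicted + blend * error         # revise the estimate using the error
    return true_pos, estimate

print(reach(goal=1.0))   # both values end up near 1.0 despite the noise

Because the estimate is corrected on every step, the hand converges on the goal even though each individual command is executed imperfectly, which is the sense in which the brain “continually updates its solution” rather than issuing rigid commands.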

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (teraflops being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
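Taking the paragraph’s own figures at face value gives a sense of scale. The short calculation below uses nothing beyond the numbers quoted above; the “more than 16,000 processors” is simply treated as a lower bound.

# Back-of-envelope arithmetic from the figures quoted above; nothing here is new data.
racks = 4
teraflops_per_rack = 56
processors = 16_000            # "more than 16,000", treated as a lower bound
neurons_per_processor = 1_000  # "approximately one thousand virtual neurons"
cost_per_rack = 2_000_000      # USD

print(racks * teraflops_per_rack, "teraflops peak in total")   # 224
print(processors * neurons_per_processor, "virtual neurons")   # about 16 million
print(racks * cost_per_rack, "USD for the four racks")         # about 8 million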

[div class=attrib]More from theSource here.[end-div]

I Didn’t Sin—It Was My Brain

[div class=attrib]From Discover:[end-div]

Why does being bad feel so good? Pride, envy, greed, wrath, lust, gluttony, and sloth: It might sound like just one more episode of The Real Housewives of New Jersey, but this enduring formulation of the worst of human failures has inspired great art for thousands of years. In the 14th century Dante depicted ghoulish evildoers suffering for eternity in his masterpiece, The Divine Comedy. Medieval muralists put the fear of God into churchgoers with lurid scenarios of demons and devils. More recently George Balanchine choreographed their dance.

Today these transgressions are inspiring great science, too. New research is explaining where these behaviors come from and helping us understand why we continue to engage in them—and often celebrate them—even as we declare them to be evil. Techniques such as functional magnetic resonance imaging (fMRI), which highlights metabolically active areas of the brain, now allow neuroscientists to probe the biology behind bad intentions.

The most enjoyable sins engage the brain’s reward circuitry, including evolutionarily ancient regions such as the nucleus accumbens and hypothalamus; located deep in the brain, they provide us such fundamental feelings as pain, pleasure, reward, and punishment. More disagreeable forms of sin such as wrath and envy enlist the dorsal anterior cingulate cortex (dACC). This area, buried in the front of the brain, is often called the brain’s “conflict detector,” coming online when you are confronted with contradictory information, or even simply when you feel pain. The more social sins (pride, envy, lust, wrath) recruit the medial prefrontal cortex (mPFC), brain terrain just behind the forehead, which helps shape the awareness of self.

No understanding of temptation is complete without considering restraint, and neuroscience has begun to illuminate this process as well. As we struggle to resist, inhibitory cognitive control networks involving the front of the brain activate to squelch the impulse by tempering its appeal. Meanwhile, research suggests that regions such as the caudate—partly responsible for body movement and coordination—suppress the physical impulse. It seems to be the same whether you feel a spark of lechery, a surge of jealousy, or the sudden desire to pop somebody in the mouth: The two sides battle it out, the devilish reward system versus the angelic brain regions that hold us in check.

It might be too strong to claim that evolution has wired us for sin, but excessive indulgence in lust or greed could certainly put you ahead of your competitors. “Many of these sins you could think of as virtues taken to the extreme,” says Adam Safron, a research consultant at Northwestern University whose neuroimaging studies focus on sexual behavior. “From the perspective of natural selection, you want the organism to eat, to procreate, so you make them rewarding. But there’s a potential for that process to go beyond the bounds.”

[div class=attrib]More from theSource here.[end-div]

How Much of Your Memory Is True?

[div class=attrib]From Discover:[end-div]

Rita Magil was driving down a Montreal boulevard one sunny morning in 2002 when a car came blasting through a red light straight toward her. “I slammed the brakes, but I knew it was too late,” she says. “I thought I was going to die.” The oncoming car smashed into hers, pushing her off the road and into a building with large cement pillars in front. A pillar tore through the car, stopping only about a foot from her face. She was trapped in the crumpled vehicle, but to her shock, she was still alive.

The accident left Magil with two broken ribs and a broken collarbone. It also left her with post-traumatic stress disorder (PTSD) and a desperate wish to forget. Long after her bones healed, Magil was plagued by the memory of the cement barriers looming toward her. “I would be doing regular things—cooking something, shopping, whatever—and the image would just come into my mind from nowhere,” she says. Her heart would pound; she would start to sweat and feel jumpy all over. It felt visceral and real, like something that was happening at that very moment.

Most people who survive accidents or attacks never develop PTSD. But for some, the event forges a memory that is pathologically potent, erupting into consciousness again and again. “PTSD really can be characterized as a disorder of memory,” says McGill University psychologist Alain Brunet, who studies and treats psychological trauma. “It’s about what you wish to forget and what you cannot forget.” This kind of memory is not misty and watercolored. It is relentless.

More than a year after her accident, Magil saw Brunet’s ad for an experimental treatment for PTSD, and she volunteered. She took a low dose of a common blood-pressure drug, propranolol, that reduces activity in the amygdala, a part of the brain that processes emotions. Then she listened to a taped re-creation of her car accident. She had relived that day in her mind a thousand times. The difference this time was that the drug broke the link between her factual memory and her emotional memory. Propranolol blocks the action of adrenaline, so it prevented her from tensing up and getting anxious. By having Magil think about the accident while the drug was in her body, Brunet hoped to permanently change how she remembered the crash. It worked. She did not forget the accident but was actively able to reshape her memory of the event, stripping away the terror while leaving the facts behind.

Brunet’s experiment emerges from one of the most exciting and controversial recent findings in neuroscience: that we alter our memories just by remembering them. Karim Nader of McGill—the scientist who made this discovery—hopes it means that people with PTSD can cure themselves by editing their memories. Altering remembered thoughts might also liberate people imprisoned by anxiety, obsessive-compulsive disorder, even addiction. “There is no such thing as a pharmacological cure in psychiatry,” Brunet says. “But we may be on the verge of changing that.”

[div class=attrib]More from theSource here.[end-div]

Windows on the Mind

[div class=attrib]From Scientific American:[end-div]

Once scorned as nervous tics, certain tiny, unconscious flicks of the eyes now turn out to underpin much of our ability to see. These movements may even reveal subliminal thoughts.

As you read this, your eyes are rapidly flicking from left to right in small hops, bringing each word sequentially into focus. When you stare at a person’s face, your eyes will similarly dart here and there, resting momentarily on one eye, the other eye, nose, mouth and other features. With a little introspection, you can detect this frequent flexing of your eye muscles as you scan a page, face or scene.

But these large voluntary eye movements, called saccades, turn out to be just a small part of the daily workout your eye muscles get. Your eyes never stop moving, even when they are apparently settled, say, on a person’s nose or a sailboat bobbing on the horizon. When the eyes fixate on something, as they do for 80 percent of your waking hours, they still jump and jiggle imperceptibly in ways that turn out to be essential for seeing. If you could somehow halt these miniature motions while fixing your gaze, a static scene would simply fade from view.

[div class=attrib]More from theSource here.[end-div]

The Memory Code

[div class=attrib]From Scientific American:[end-div]

Researchers are closing in on the rules that the brain uses to lay down memories. Discovery of this memory code could lead to the design of smarter computers and robots and even to new ways to peer into the human mind.

INTRODUCTION
Anyone who has ever been in an earthquake has vivid memories of it: the ground shakes, trembles, buckles and heaves; the air fills with sounds of rumbling, cracking and shattering glass; cabinets fly open; books, dishes and knickknacks tumble from shelves. We remember such episodes–with striking clarity and for years afterward–because that is what our brains evolved to do: extract information from salient events and use that knowledge to guide our responses to similar situations in the future. This ability to learn from past experience allows all animals to adapt to a world that is complex and ever changing.

For decades, neuroscientists have attempted to unravel how the brain makes memories. Now, by combining a set of novel experiments with powerful mathematical analyses and an ability to record simultaneously the activity of more than 200 neurons in awake mice, my colleagues and I have discovered what we believe is the basic mechanism the brain uses to draw vital information from experiences and turn that information into memories. Our results add to a growing body of work indicating that a linear flow of signals from one neuron to another is not enough to explain how the brain represents perceptions and memories. Rather, the coordinated activity of large populations of neurons is needed.

Furthermore, our studies indicate that neuronal populations involved in encoding memories also extract the kind of generalized concepts that allow us to transform our daily experiences into knowledge and ideas. Our findings bring biologists closer to deciphering the universal neural code: the rules the brain follows to convert collections of electrical impulses into perception, memory, knowledge and, ultimately, behavior. Such understanding could allow investigators to develop more seamless brain-machine interfaces, design a whole new generation of smart computers and robots, and perhaps even assemble a codebook of the mind that would make it possible to decipher–by monitoring neural activity–what someone remembers and thinks.
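To make the population-coding idea concrete, here is a minimal sketch of reading out an “event” from the joint activity of a couple of hundred simulated neurons. The Poisson firing rates, the two events and the nearest-centroid readout are assumptions chosen purely for illustration; this is not the analysis used in the study.

import numpy as np

# A toy illustration of decoding from population activity.
# Simulated rates, events and the readout rule are all assumptions.
rng = np.random.default_rng(0)
n_neurons, n_trials = 200, 50

# Each event evokes a characteristic (noisy) pattern across the population.
pattern_a = rng.uniform(2, 10, n_neurons)   # mean rates during event A
pattern_b = rng.uniform(2, 10, n_neurons)   # mean rates during event B
trials_a = rng.poisson(pattern_a, (n_trials, n_neurons))
trials_b = rng.poisson(pattern_b, (n_trials, n_neurons))

# "Decode" held-out trials by asking which event's average pattern they resemble.
centroid_a, centroid_b = trials_a[:40].mean(0), trials_b[:40].mean(0)
test = np.vstack([trials_a[40:], trials_b[40:]])
labels = np.array(["A"] * 10 + ["B"] * 10)
pred = np.where(np.linalg.norm(test - centroid_a, axis=1) <
                np.linalg.norm(test - centroid_b, axis=1), "A", "B")
print("decoding accuracy:", (pred == labels).mean())

The point is only that information about the event can be read out from the combined activity of the population, which is the sense in which monitoring many neurons at once can reveal what is being perceived or remembered.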

HISTORICAL PERSPECTIVE
My group’s research into the brain code grew out of work focused on the molecular basis of learning and memory. In the fall of 1999 we generated a strain of mice engineered to have improved memory. This “smart” mouse–nicknamed Doogie after the brainy young doctor in the early-1990s TV dramedy Doogie Howser, M.D.—learns faster and remembers things longer than wild-type mice. The work generated great interest and debate and even made the cover of Time magazine. But our findings left me asking, What exactly is a memory?

Scientists knew that converting perceptual experiences into long-lasting memories requires a brain region called the hippocampus. And we even knew what molecules are critical to the process, such as the NMDA receptor, which we altered to produce Doogie. But no one knew how, exactly, the activation of nerve cells in the brain represents memory. A few years ago I began to wonder if we could find a way to describe mathematically or physiologically what memory is. Could we identify the relevant neural network dynamic and visualize the activity pattern that occurs when a memory is formed?

For the better part of a century, neuroscientists had been attempting to discover which patterns of nerve cell activity represent information in the brain and how neural circuits process, modify and store information needed to control and shape behavior. Their earliest efforts involved simply trying to correlate neural activity–the frequency at which nerve cells fire–with some sort of measurable physiological or behavioral response. For example, in the mid-1920s Edgar Adrian performed electrical recordings on frog tissue and found that the firing rate of individual stretch nerves attached to a muscle varies with the amount of weight that is put on the muscle. This study was the first to suggest that information (in this case the intensity of a stimulus) can be conveyed by changes in neural activity–work for which he later won a Nobel Prize.

Since then, many researchers using a single electrode to monitor the activity of one neuron at a time have shown that, when stimulated, neurons in different areas of the brain also change their firing rates. For example, pioneering experiments by David H. Hubel and Torsten N. Wiesel demonstrated that the neurons in the primary visual cortex of cats, an area at the back of the brain, respond vigorously to the moving edges of a bar of light. Charles G. Gross of Princeton University and Robert Desimone of the Massachusetts Institute of Technology found that neurons in a different brain region of the monkey (the inferotemporal cortex) can alter their behavior in response to more complex stimuli, such as pictures of faces.

[div class=attrib]More from theSource here.[end-div]

Mirrors in the Mind

[div class=attrib]From Scientific American:[end-div]

A special class of brain cells reflects the outside world, revealing a new avenue for human understanding, connecting and learning

John watches Mary, who is grasping a flower. John knows what Mary is doing–she is picking up the flower–and he also knows why she is doing it. Mary is smiling at John, and he guesses that she will give him the flower as a present. The simple scene lasts just moments, and John’s grasp of what is happening is nearly instantaneous. But how exactly does he understand Mary’s action, as well as her intention, so effortlessly?

A decade ago most neuroscientists and psychologists would have attributed an individual’s understanding of someone else’s actions and, especially, intentions to a rapid reasoning process not unlike that used to solve a logical problem: some sophisticated cognitive apparatus in John’s brain elaborated on the information his senses took in and compared it with similar previously stored experiences, allowing John to arrive at a conclusion about what Mary was up to and why.

[div class=attrib]More from theSource here.[end-div]

The Expert Mind

[div class=attrib]From Scientific American:[end-div]

Studies of the mental processes of chess grandmasters have revealed clues to how people become experts in other fields as well.

A man walks along the inside of a circle of chess tables, glancing at each for two or three seconds before making his move. On the outer rim, dozens of amateurs sit pondering their replies until he completes the circuit. The year is 1909, the man is Jose Raul Capablanca of Cuba, and the result is a whitewash: 28 wins in as many games. The exhibition was part of a tour in which Capablanca won 168 games in a row.

How did he play so well, so quickly? And how far ahead could he calculate under such constraints? “I see only one move ahead,” Capablanca is said to have answered, “but it is always the correct one.”

[div class=attrib]More from theSource here.[end-div]

‘Thirst For Knowledge’ May Be Opium Craving

[div class=attrib]From ScienceDaily:[end-div]

Neuroscientists have proposed a simple explanation for the pleasure of grasping a new concept: The brain is getting its fix.

The “click” of comprehension triggers a biochemical cascade that rewards the brain with a shot of natural opium-like substances, said Irving Biederman of the University of Southern California. He presents his theory in an invited article in the latest issue of American Scientist.

“While you’re trying to understand a difficult theorem, it’s not fun,” said Biederman, professor of neuroscience in the USC College of Letters, Arts and Sciences.

“But once you get it, you just feel fabulous.”

The brain’s craving for a fix motivates humans to maximize the rate at which they absorb knowledge, he said.

“I think we’re exquisitely tuned to this as if we’re junkies, second by second.”

Biederman hypothesized that knowledge addiction has strong evolutionary value because mate selection correlates closely with perceived intelligence.

Only more pressing material needs, such as hunger, can suspend the quest for knowledge, he added.

The same mechanism is involved in the aesthetic experience, Biederman said, providing a neurological explanation for the pleasure we derive from art.

[div class=attrib]More from theSource here.[end-div]