Tag Archives: consciousness

Towards an Understanding of Consciousness


The modern scientific method has helped us make great strides in our understanding of much that surrounds us. From knowledge of the infinitesimally small building blocks of atoms to the vast structures of the universe, theory and experiment have enlightened us considerably over the last several hundred years.

Yet a detailed understanding of consciousness still eludes us. Despite John Locke’s intricate philosophical essay of 1690, which laid the foundations for our modern-day views of consciousness, a fundamental grasp of its mechanisms remains as elusive as our knowledge of the universe’s dark matter.

So, it’s encouraging to come across a refreshing view of consciousness, described in the context of evolutionary biology. Michael Graziano, associate professor of psychology and neuroscience at Princeton University, makes a thoughtful case for Attention Schema Theory (AST), which centers on the simple notion that there is adaptive value for the brain to build awareness. According to AST, the brain is constantly constructing and refreshing a model — in Graziano’s words an “attention schema” — that describes what its covert attention is doing from one moment to the next. The brain constructs this schema as an analog to its awareness of attention in others — a sound adaptive perception.

Yet, while this view may hold promise from a purely adaptive and evolutionary standpoint, it does have some way to go before it is able to explain how the brain’s abstraction of a holistic awareness is constructed from the physical substrate — the neurons and connections between them.

Read more of Michael Graziano’s essay, A New Theory Explains How Consciousness Evolved. Graziano is the author of Consciousness and the Social Brain, which serves as his introduction to AST. And, for a compelling rebuttal, check out R. Scott Bakker’s article, Graziano, the Attention Schema Theory, and the Neuroscientific Explananda Problem.

Unfortunately, until our experimentalists make some definitive progress in this area, our understanding will remain just as abstract as the theories themselves, however compelling. But, ideas such as these inch us towards a deeper understanding.

Image: Representation of consciousness from the seventeenth century. Robert Fludd, Utriusque cosmi maioris scilicet et minoris […] historia, tomus II (1619), tractatus I, sectio I, liber X, De triplici animae in corpore visione. Courtesy: Wikipedia. Public Domain.

Fictionalism of Free Will and Morality

In a recent opinion column William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious, and our higher reasoning functions rationalize it post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher, Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality would mean that we accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, is a tougher task, and potentially disingenuous.

Read the entire article here.

The Great Unknown: Consciousness


Much has been written in the humanities and scientific journals about consciousness. Scholars continue to probe and pontificate and theorize. And yet we seem to know more of the ocean depths and our cosmos than we do of that interminable, self-aware inner voice that sits behind our eyes.

From the Guardian:

One spring morning in Tucson, Arizona, in 1994, an unknown philosopher named David Chalmers got up to give a talk on consciousness, by which he meant the feeling of being inside your head, looking out – or, to use the kind of language that might give a neuroscientist an aneurysm, of having a soul. Though he didn’t realise it at the time, the young Australian academic was about to ignite a war between philosophers and scientists, by drawing attention to a central mystery of human life – perhaps the central mystery of human life – and revealing how embarrassingly far they were from solving it.

The scholars gathered at the University of Arizona – for what would later go down as a landmark conference on the subject – knew they were doing something edgy: in many quarters, consciousness was still taboo, too weird and new agey to take seriously, and some of the scientists in the audience were risking their reputations by attending. Yet the first two talks that day, before Chalmers’s, hadn’t proved thrilling. “Quite honestly, they were totally unintelligible and boring – I had no idea what anyone was talking about,” recalled Stuart Hameroff, the Arizona professor responsible for the event. “As the organiser, I’m looking around, and people are falling asleep, or getting restless.” He grew worried. “But then the third talk, right before the coffee break – that was Dave.” With his long, straggly hair and fondness for all-body denim, the 27-year-old Chalmers looked like he’d got lost en route to a Metallica concert. “He comes on stage, hair down to his butt, he’s prancing around like Mick Jagger,” Hameroff said. “But then he speaks. And that’s when everyone wakes up.”

The brain, Chalmers began by pointing out, poses all sorts of problems to keep scientists busy. How do we learn, store memories, or perceive things? How do you know to jerk your hand away from scalding water, or hear your name spoken across the room at a noisy party? But these were all “easy problems”, in the scheme of things: given enough time and money, experts would figure them out. There was only one truly hard problem of consciousness, Chalmers said. It was a puzzle so bewildering that, in the months after his talk, people started dignifying it with capital letters – the Hard Problem of Consciousness – and it’s this: why on earth should all those complicated brain processes feel like anything from the inside? Why aren’t we just brilliant robots, capable of retaining information, of responding to noises and smells and hot saucepans, but dark inside, lacking an inner life? And how does the brain manage it? How could the 1.4kg lump of moist, pinkish-beige tissue inside your skull give rise to something as mysterious as the experience of being that pinkish-beige lump, and the body to which it is attached?

What jolted Chalmers’s audience from their torpor was how he had framed the question. “At the coffee break, I went around like a playwright on opening night, eavesdropping,” Hameroff said. “And everyone was like: ‘Oh! The Hard Problem! The Hard Problem! That’s why we’re here!’” Philosophers had pondered the so-called “mind-body problem” for centuries. But Chalmers’s particular manner of reviving it “reached outside philosophy and galvanised everyone. It defined the field. It made us ask: what the hell is this that we’re dealing with here?”

Two decades later, we know an astonishing amount about the brain: you can’t follow the news for a week without encountering at least one more tale about scientists discovering the brain region associated with gambling, or laziness, or love at first sight, or regret – and that’s only the research that makes the headlines. Meanwhile, the field of artificial intelligence – which focuses on recreating the abilities of the human brain, rather than on what it feels like to be one – has advanced stupendously. But like an obnoxious relative who invites himself to stay for a week and then won’t leave, the Hard Problem remains. When I stubbed my toe on the leg of the dining table this morning, as any student of the brain could tell you, nerve fibres called “C-fibres” shot a message to my spinal cord, sending neurotransmitters to the part of my brain called the thalamus, which activated (among other things) my limbic system. Fine. But how come all that was accompanied by an agonising flash of pain? And what is pain, anyway?

Questions like these, which straddle the border between science and philosophy, make some experts openly angry. They have caused others to argue that conscious sensations, such as pain, don’t really exist, no matter what I felt as I hopped in anguish around the kitchen; or, alternatively, that plants and trees must also be conscious. The Hard Problem has prompted arguments in serious journals about what is going on in the mind of a zombie, or – to quote the title of a famous 1974 paper by the philosopher Thomas Nagel – the question “What is it like to be a bat?” Some argue that the problem marks the boundary not just of what we currently know, but of what science could ever explain. On the other hand, in recent years, a handful of neuroscientists have come to believe that it may finally be about to be solved – but only if we are willing to accept the profoundly unsettling conclusion that computers or the internet might soon become conscious, too.

Next week, the conundrum will move further into public awareness with the opening of Tom Stoppard’s new play, The Hard Problem, at the National Theatre – the first play Stoppard has written for the National since 2006, and the last that the theatre’s head, Nicholas Hytner, will direct before leaving his post in March. The 77-year-old playwright has revealed little about the play’s contents, except that it concerns the question of “what consciousness is and why it exists”, considered from the perspective of a young researcher played by Olivia Vinall. Speaking to the Daily Mail, Stoppard also clarified a potential misinterpretation of the title. “It’s not about erectile dysfunction,” he said.

Stoppard’s work has long focused on grand, existential themes, so the subject is fitting: when conversation turns to the Hard Problem, even the most stubborn rationalists lapse quickly into musings on the meaning of life. Christof Koch, the chief scientific officer at the Allen Institute for Brain Science, and a key player in the Obama administration’s multibillion-dollar initiative to map the human brain, is about as credible as neuroscientists get. But, he told me in December: “I think the earliest desire that drove me to study consciousness was that I wanted, secretly, to show myself that it couldn’t be explained scientifically. I was raised Roman Catholic, and I wanted to find a place where I could say: OK, here, God has intervened. God created souls, and put them into people.” Koch assured me that he had long ago abandoned such improbable notions. Then, not much later, and in all seriousness, he said that on the basis of his recent research he thought it wasn’t impossible that his iPhone might have feelings.

By the time Chalmers delivered his speech in Tucson, science had been vigorously attempting to ignore the problem of consciousness for a long time. The source of the animosity dates back to the 1600s, when René Descartes identified the dilemma that would tie scholars in knots for years to come. On the one hand, Descartes realised, nothing is more obvious and undeniable than the fact that you’re conscious. In theory, everything else you think you know about the world could be an elaborate illusion cooked up to deceive you – at this point, present-day writers invariably invoke The Matrix – but your consciousness itself can’t be illusory. On the other hand, this most certain and familiar of phenomena obeys none of the usual rules of science. It doesn’t seem to be physical. It can’t be observed, except from within, by the conscious person. It can’t even really be described. The mind, Descartes concluded, must be made of some special, immaterial stuff that didn’t abide by the laws of nature; it had been bequeathed to us by God.

This religious and rather hand-wavy position, known as Cartesian dualism, remained the governing assumption into the 18th century and the early days of modern brain study. But it was always bound to grow unacceptable to an increasingly secular scientific establishment that took physicalism – the position that only physical things exist – as its most basic principle. And yet, even as neuroscience gathered pace in the 20th century, no convincing alternative explanation was forthcoming. So little by little, the topic became taboo. Few people doubted that the brain and mind were very closely linked: if you question this, try stabbing your brain repeatedly with a kitchen knife, and see what happens to your consciousness. But how they were linked – or if they were somehow exactly the same thing – seemed a mystery best left to philosophers in their armchairs. As late as 1989, writing in the International Dictionary of Psychology, the British psychologist Stuart Sutherland could irascibly declare of consciousness that “it is impossible to specify what it is, what it does, or why it evolved. Nothing worth reading has been written on it.”

It was only in 1990 that Francis Crick, the joint discoverer of the double helix, used his position of eminence to break ranks. Neuroscience was far enough along by now, he declared in a slightly tetchy paper co-written with Christof Koch, that consciousness could no longer be ignored. “It is remarkable,” they began, “that most of the work in both cognitive science and the neurosciences makes no reference to consciousness” – partly, they suspected, “because most workers in these areas cannot see any useful way of approaching the problem”. They presented their own “sketch of a theory”, arguing that certain neurons, firing at certain frequencies, might somehow be the cause of our inner awareness – though it was not clear how.

Read the entire story here.

Image courtesy of Google Search.

Theism Versus Spirituality

Prominent neo-atheist Sam Harris continues to reject theism, and does so thoughtfully and eloquently. In his latest book, Waking Up, he continues to argue the case against religion, but makes a powerful case for spirituality. Harris defines spirituality as an inner sense of a good and powerful reality, based on sound self-awareness and insightful questioning of one’s own consciousness. This type of spirituality, quite rightly, is devoid of theistic angels and demons. Harris reveals more in his interview with Gary Gutting, professor of philosophy at the University of Notre Dame.

From the NYT:

Sam Harris is a neuroscientist and prominent “new atheist,” who along with others like Richard Dawkins, Daniel Dennett and Christopher Hitchens helped put criticism of religion at the forefront of public debate in recent years. In two previous books, “The End of Faith” and “Letter to a Christian Nation,” Harris argued that theistic religion has no place in a world of science. In his latest book, “Waking Up,” his thought takes a new direction. While still rejecting theism, Harris nonetheless makes a case for the value of “spirituality,” which he bases on his experiences in meditation. I interviewed him recently about the book and some of the arguments he makes in it.

Gary Gutting: A common basis for atheism is naturalism — the view that only science can give a reliable account of what’s in the world. But in “Waking Up” you say that consciousness resists scientific description, which seems to imply that it’s a reality beyond the grasp of science. Have you moved away from an atheistic view?

Sam Harris: I don’t actually argue that consciousness is “a reality” beyond the grasp of science. I just think that it is conceptually irreducible — that is, I don’t think we can fully understand it in terms of unconscious information processing. Consciousness is “subjective” — not in the pejorative sense of being unscientific, biased or merely personal, but in the sense that it is intrinsically first-person, experiential and qualitative.

The only thing in this universe that suggests the reality of consciousness is consciousness itself. Many philosophers have made this argument in one way or another — Thomas Nagel, John Searle, David Chalmers. And while I don’t agree with everything they say about consciousness, I agree with them on this point.

The primary approach to understanding consciousness in neuroscience entails correlating changes in its contents with changes in the brain. But no matter how reliable these correlations become, they won’t allow us to drop the first-person side of the equation. The experiential character of consciousness is part of the very reality we are studying. Consequently, I think science needs to be extended to include a disciplined approach to introspection.

G.G.: But science aims at objective truth, which has to be verifiable: open to confirmation by other people. In what sense do you think first-person descriptions of subjective experience can be scientific?

S.H.: In a very strong sense. The only difference between claims about first-person experience and claims about the physical world is that the latter are easier for others to verify. That is an important distinction in practical terms — it’s easier to study rocks than to study moods — but it isn’t a difference that marks a boundary between science and non-science. Nothing, in principle, prevents a solitary genius on a desert island from doing groundbreaking science. Confirmation by others is not what puts the “truth” in a truth claim. And nothing prevents us from making objective claims about subjective experience.

Are you thinking about Margaret Thatcher right now? Well, now you are. Were you thinking about her exactly six minutes ago? Probably not. There are answers to questions of this kind, whether or not anyone is in a position to verify them.

And certain truths about the nature of our minds are well worth knowing. For instance, the anger you felt yesterday, or a year ago, isn’t here anymore, and if it arises in the next moment, based on your thinking about the past, it will quickly pass away when you are no longer thinking about it. This is a profoundly important truth about the mind — and it can be absolutely liberating to understand it deeply. If you do understand it deeply — that is, if you are able to pay clear attention to the arising and passing away of anger, rather than merely think about why you have every right to be angry — it becomes impossible to stay angry for more than a few moments at a time. Again, this is an objective claim about the character of subjective experience. And I invite our readers to test it in the laboratory of their own minds.

G.G.: Of course, we all have some access to what other people are thinking or feeling. But that access is through probable inference and so lacks the special authority of first-person descriptions. Suppose I told you that in fact I didn’t think of Margaret Thatcher when I read your comment, because I misread your text as referring to Becky Thatcher in “The Adventures of Tom Sawyer”? If that’s true, I have evidence for it that you can’t have. There are some features of consciousness that we will agree on. But when our first-person accounts differ, then there’s no way to resolve the disagreement by looking at one another’s evidence. That’s very different from the way things are in science.

S.H.: This difference doesn’t run very deep. People can be mistaken about the world and about the experiences of others — and they can even be mistaken about the character of their own experience. But these forms of confusion aren’t fundamentally different. Whatever we study, we are obliged to take subjective reports seriously, all the while knowing that they are sometimes false or incomplete.

For instance, consider an emotion like fear. We now have many physiological markers for fear that we consider quite reliable, from increased activity in the amygdala and spikes in blood cortisol to peripheral physiological changes like sweating palms. However, just imagine what would happen if people started showing up in the lab complaining of feeling intense fear without showing any of these signs — and they claimed to feel suddenly quite calm when their amygdalae lit up on fMRI, their cortisol spiked, and their skin conductance increased. We would no longer consider these objective measures of fear to be valid. So everything still depends on people telling us how they feel and our (usually) believing them.

However, it is true that people can be very poor judges of their inner experience. That is why I think disciplined training in a technique like “mindfulness,” apart from its personal benefits, can be scientifically important.

Read the entire story here.

A Godless Universe: Mind or Mathematics

In his science column for the NYT George Johnson reviews several recent books by noted thinkers who for different reasons believe science needs to expand its borders. Philosopher Thomas Nagel and physicist Max Tegmark both agree that our current understanding of the universe is rather limited and that science needs to turn to new or alternate explanations. Nagel, still an atheist, suggests in his book Mind and Cosmos that the mind somehow needs to be considered a fundamental structure of the universe, while Tegmark, in his book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, suggests that mathematics is the core, irreducible framework of the cosmos. Two radically different ideas — yet both are correct in one respect: we still know so very little about ourselves and our surroundings.

From the NYT:

Though he probably didn’t intend anything so jarring, Nicolaus Copernicus, in a 16th-century treatise, gave rise to the idea that human beings do not occupy a special place in the heavens. Nearly 500 years after replacing the Earth with the sun as the center of the cosmic swirl, we’ve come to see ourselves as just another species on a planet orbiting a star in the boondocks of a galaxy in the universe we call home. And this may be just one of many universes — what cosmologists, some more skeptically than others, have named the multiverse.

Despite the long string of demotions, we remain confident, out here on the edge of nowhere, that our band of primates has what it takes to figure out the cosmos — what the writer Timothy Ferris called “the whole shebang.” New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.

There are skeptics who suspect we may be missing a crucial piece of the puzzle. Recently, I’ve been struck by two books exploring that possibility in very different ways. There is no reason why, in this particular century, Homo sapiens should have gathered all the pieces needed for a theory of everything. In displacing humanity from a privileged position, the Copernican principle applies not just to where we are in space but to when we are in time.

Since it was published in 2012, “Mind and Cosmos,” by the philosopher Thomas Nagel, is the book that has caused the most consternation. With his taunting subtitle — “Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False” — Dr. Nagel was rejecting the idea that there was nothing more to the universe than matter and physical forces. He also doubted that the laws of evolution, as currently conceived, could have produced something as remarkable as sentient life. That idea borders on anathema, and the book quickly met with a blistering counterattack. Steven Pinker, a Harvard psychologist, denounced it as “the shoddy reasoning of a once-great thinker.”

What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.

“Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”

Dr. Nagel finds it astonishing that the human brain — this biological organ that evolved on the third rock from the sun — has developed a science and a mathematics so in tune with the cosmos that it can predict and explain so many things.

Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs.


That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.

“Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”

Dr. Nagel is not alone in entertaining such ideas. While rejecting anything mystical, the biologist Stuart Kauffman has suggested that Darwinian theory must somehow be expanded to explain the emergence of complex, intelligent creatures. And David J. Chalmers, a philosopher, has called on scientists to seriously consider “panpsychism” — the idea that some kind of consciousness, however rudimentary, pervades the stuff of the universe.

Some of this is a matter of scientific taste. It can be just as exhilarating, as Stephen Jay Gould proposed in “Wonderful Life,” to consider the conscious mind as simply a fluke, no more inevitable than the human appendix or a starfish’s five legs. But it doesn’t seem so crazy to consider alternate explanations.

Heading off in another direction, a new book by the physicist Max Tegmark suggests that a different ingredient — mathematics — needs to be admitted into science as one of nature’s irreducible parts. In fact, he believes, it may be the most fundamental of all.

In a well-known 1960 essay, the physicist Eugene Wigner marveled at “the unreasonable effectiveness of mathematics” in explaining the world. It is “something bordering on the mysterious,” he wrote, for which “there is no rational explanation.”

The best he could offer was that mathematics is “a wonderful gift which we neither understand nor deserve.”

Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from numbers.

Read the entire article here.

You Are a Neural Computation

Since the days of Aristotle, and later Descartes, thinkers have sought to explain consciousness and free will. More than two thousand years on we are still pondering the notion; science has made great strides, and yet fundamentally we still have little idea.

Many neuroscientists, now armed with new and very precise research tools, are aiming to change this. Yet, increasingly it seems that free will may indeed be a cognitive illusion. Evidence suggests that our subconscious decides and initiates action for us long before we are aware of making a conscious decision. There seems to be no god or ghost in the machine.

From Technology Review:

It was an expedition seeking something never caught before: a single human neuron lighting up to create an urge, albeit for the minor task of moving an index finger, before the subject was even aware of feeling anything. Four years ago, Itzhak Fried, a neurosurgeon at the University of California, Los Angeles, slipped several probes, each with eight hairlike electrodes able to record from single neurons, into the brains of epilepsy patients. (The patients were undergoing surgery to diagnose the source of severe seizures and had agreed to participate in experiments during the process.) Probes in place, the patients—who were conscious—were given instructions to press a button at any time of their choosing, but also to report when they’d first felt the urge to do so.

Later, Gabriel Kreiman, a neuroscientist at Harvard Medical School and Children’s Hospital in Boston, captured the quarry. Poring over data after surgeries in 12 patients, he found telltale flashes of individual neurons in the pre-supplementary motor area (associated with movement) and the anterior cingulate (associated with motivation and attention), preceding the reported urges by anywhere from hundreds of milliseconds to several seconds. It was a direct neural measurement of the unconscious brain at work—caught in the act of formulating a volitional, or freely willed, decision. Now Kreiman and his colleagues are planning to repeat the feat, but this time they aim to detect pre-urge signatures in real time and stop the subject from performing the action—or see if that’s even possible.

A variety of imaging studies in humans have revealed that brain activity related to decision-making tends to precede conscious action. Implants in macaques and other animals have examined brain circuits involved in perception and action. But Kreiman broke ground by directly measuring a preconscious decision in humans at the level of single neurons. To be sure, the readouts came from an average of just 20 neurons in each patient. (The human brain has about 86 billion of them, each with thousands of connections.) And ultimately, those neurons fired only in response to a chain of even earlier events. But as more such experiments peer deeper into the labyrinth of neural activity behind decisions—whether they involve moving a finger or opting to buy, eat, or kill something—science could eventually tease out the full circuitry of decision-making and perhaps point to behavioral therapies or treatments. “We need to understand the neuronal basis of voluntary decision-making—or ‘freely willed’ decision-making—and its pathological counterparts if we want to help people such as drug, sex, food, and gambling addicts, or patients with obsessive-compulsive disorder,” says Christof Koch, chief scientist at the Allen Institute of Brain Science in Seattle (see “Cracking the Brain’s Codes”). “Many of these people perfectly well know that what they are doing is dysfunctional but feel powerless to prevent themselves from engaging in these behaviors.”

Kreiman, 42, believes his work challenges important Western philosophical ideas about free will. The Argentine-born neuroscientist, an associate professor at Harvard Medical School, specializes in visual object recognition and memory formation, which draw partly on unconscious processes. He has a thick mop of black hair and a tendency to pause and think a long moment before reframing a question and replying to it expansively. At the wheel of his Jeep as we drove down Broadway in Cambridge, Massachusetts, Kreiman leaned over to adjust the MP3 player—toggling between Vivaldi, Lady Gaga, and Bach. As he did so, his left hand, the one on the steering wheel, slipped to let the Jeep drift a bit over the double yellow lines. Kreiman’s view is that his neurons made him do it, and they also made him correct his small error an instant later; in short, all actions are the result of neural computations and nothing more. “I am interested in a basic age-old question,” he says. “Are decisions really free? I have a somewhat extreme view of this—that there is nothing really free about free will. Ultimately, there are neurons that obey the laws of physics and mathematics. It’s fine if you say ‘I decided’—that’s the language we use. But there is no god in the machine—only neurons that are firing.”

Our philosophical ideas about free will date back to Aristotle and were systematized by René Descartes, who argued that humans possess a God-given “mind,” separate from our material bodies, that endows us with the capacity to freely choose one thing rather than another. Kreiman takes this as his departure point. But he’s not arguing that we lack any control over ourselves. He doesn’t say that our decisions aren’t influenced by evolution, experiences, societal norms, sensations, and perceived consequences. “All of these external influences are fundamental to the way we decide what we do,” he says. “We do have experiences, we do learn, we can change our behavior.”

But the firing of a neuron that guides us one way or another is ultimately like the toss of a coin, Kreiman insists. “The rules that govern our decisions are similar to the rules that govern whether a coin will land one way or the other. Ultimately there is physics; it is chaotic in both cases, but at the end of the day, nobody will argue the coin ‘wanted’ to land heads or tails. There is no real volition to the coin.”

Testing Free Will

It’s only in the past three to four decades that imaging tools and probes have been able to measure what actually happens in the brain. A key research milestone was reached in the early 1980s when Benjamin Libet, a researcher in the physiology department at the University of California, San Francisco, conducted a remarkable study that tested the idea of conscious free will with actual data.

Libet fitted subjects with EEGs—gadgets that measure aggregate electrical brain activity through the scalp—and had them look at a clock dial that spun around every 2.8 seconds. The subjects were asked to press a button whenever they chose to do so—but told they should also take note of where the time hand was when they first felt the “wish or urge.” It turns out that the actual brain activity involved in the action began 300 milliseconds, on average, before the subject was conscious of wanting to press the button. While some scientists criticized the methods—questioning, among other things, the accuracy of the subjects’ self-reporting—the study set others thinking about how to investigate the same questions. Since then, functional magnetic resonance imaging (fMRI) has been used to map brain activity by measuring blood flow, and other studies have also measured brain activity processes that take place before decisions are made. But while fMRI transformed brain science, it was still only an indirect tool, providing very low spatial resolution and averaging data from millions of neurons. Kreiman’s own study design was the same as Libet’s, with the important addition of the direct single-neuron measurement.

When Libet was in his prime, Kreiman was a boy. As a student of physical chemistry at the University of Buenos Aires, he was interested in neurons and brains. When he went for his PhD at Caltech, his passion solidified under his advisor, Koch. Koch was deep in collaboration with Francis Crick, co-discoverer of DNA’s structure, to look for evidence of how consciousness was represented by neurons. For the star-struck kid from Argentina, “it was really life-changing,” he recalls. “Several decades ago, people said this was not a question serious scientists should be thinking about; they either had to be smoking something or have a Nobel Prize”—and Crick, of course, was a Nobelist. Crick hypothesized that studying how the brain processed visual information was one way to study consciousness (we tap unconscious processes to quickly decipher scenes and objects), and he collaborated with Koch on a number of important studies. Kreiman was inspired by the work. “I was very excited about the possibility of asking what seems to be the most fundamental aspect of cognition, consciousness, and free will in a reductionist way—in terms of neurons and circuits of neurons,” he says.

One thing was in short supply: humans willing to have scientists cut open their skulls and poke at their brains. One day in the late 1990s, Kreiman attended a journal club—a kind of book club for scientists reviewing the latest literature—and came across a paper by Fried on how to do brain science in people getting electrodes implanted in their brains to identify the source of severe epileptic seizures. Before he’d heard of Fried, “I thought examining the activity of neurons was the domain of monkeys and rats and cats, not humans,” Kreiman says. Crick introduced Koch to Fried, and soon Koch, Fried, and Kreiman were collaborating on studies that investigated human neural activity, including the experiment that made the direct neural measurement of the urge to move a finger. “This was the opening shot in a new phase of the investigation of questions of voluntary action and free will,” Koch says.

Read the entire article here.

I Think, Therefore I Am, Not Robot


A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
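As a minimal sketch of that gate (my own illustration in Python, not code from the article), note how two input bits are integrated into a single output bit:

```python
# XOR: output 0 if the inputs match, 1 if they differ.
def xor_gate(a: int, b: int) -> int:
    return a ^ b

# Enumerate the truth table.
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a}, B={b} -> output {xor_gate(a, b)}")

# Output 1 could have come from (0, 1) or (1, 0); output 0 from
# (0, 0) or (1, 1). Two bits went in, only one came out, so the
# inputs cannot be recovered from the output alone.
```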

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.
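To make that contrast concrete, here is a toy sketch (my own illustration, not Maguire’s formal definition): a plain list is easy to edit selectively, while information folded into a running hash chain is not.

```python
import hashlib

# Compiled, not integrated: an album is just a list,
# so removing one photo is a cheap, selective edit.
album = ["beach.jpg", "party.jpg", "wedding.jpg"]
album.remove("party.jpg")

# Integrated (toy model): each new "memory" is hashed together with
# everything stored so far, leaving no separable piece to delete.
memory = b""
for scene in ("beach", "party", "wedding"):
    memory = hashlib.sha256(memory + scene.encode()).digest()

# "Editing out" one scene now means rebuilding the whole chain from
# scratch; no operation removes it selectively from `memory`.
```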

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.

What of Consciousness?


As we dig into the traditional holiday fare surrounded by family and friends, it is useful to ponder whether any of it is actually real, or whether it is all inside the mind. The in-laws may be a figment of the brain, but the wine probably is real.

From the New Scientist:

Descartes might have been onto something with “I think therefore I am”, but surely “I think therefore you are” is going a bit far? Not for some of the brightest minds of 20th-century physics as they wrestled mightily with the strange implications of the quantum world.

According to prevailing wisdom, a quantum particle such as an electron or photon can only be properly described as a mathematical entity known as a wave function. Wave functions can exist as “superpositions” of many states at once. A photon, for instance, can circulate in two different directions around an optical fibre; or an electron can simultaneously spin clockwise and anticlockwise or be in two positions at once.

When any attempt is made to observe these simultaneous existences, however, something odd happens: we see only one. How do many possibilities become one physical reality?

This is the central question in quantum mechanics, and has spawned a plethora of proposals, or interpretations. The most popular is the Copenhagen interpretation, which says nothing is real until it is observed, or measured. Observing a wave function causes the superposition to collapse.
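As a toy numerical sketch of that idea (my own illustration, with assumed equal amplitudes, not code from the article), the Born rule turns a superposition’s amplitudes into the probabilities of the single outcome we actually observe:

```python
import numpy as np

# A single quantum bit in an equal superposition of states 0 and 1.
state = np.array([1.0, 1.0]) / np.sqrt(2)  # amplitudes

# Born rule: outcome probabilities are the squared amplitudes,
# and a measurement yields exactly one outcome.
probs = np.abs(state) ** 2
outcome = np.random.choice([0, 1], p=probs)
print(f"P(0)={probs[0]:.2f}, P(1)={probs[1]:.2f}, observed: {outcome}")
```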

However, Copenhagen says nothing about what exactly constitutes an observation. John von Neumann broke this silence and suggested that observation is the action of a conscious mind. It’s an idea also put forward by Max Planck, the founder of quantum theory, who said in 1931, “I regard consciousness as fundamental. I regard matter as derivative from consciousness.”

That argument relies on the view that there is something special about consciousness, especially human consciousness. Von Neumann argued that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is somehow different. It is thus able to select out one of the quantum possibilities on offer, making it real – to that mind, at least.

Henry Stapp of the Lawrence Berkeley National Laboratory in California is one of the few physicists that still subscribe to this notion: we are “participating observers” whose minds cause the collapse of superpositions, he says. Before human consciousness appeared, there existed a multiverse of potential universes, Stapp says. The emergence of a conscious mind in one of these potential universes, ours, gives it a special status: reality.

There are many objectors. One problem is that many of the phenomena involved are poorly understood. “There’s a big question in philosophy about whether consciousness actually exists,” says Matthew Donald, a philosopher of physics at the University of Cambridge. “When you add on quantum mechanics it all gets a bit confused.”

Donald prefers an interpretation that is arguably even more bizarre: “many minds”. This idea – related to the “many worlds” interpretation of quantum theory, which has each outcome of a quantum decision happen in a different universe – argues that an individual observing a quantum system sees all the many states, but each in a different mind. These minds all arise from the physical substance of the brain, and share a past and a future, but cannot communicate with each other about the present.

Though it sounds hard to swallow, this and other approaches to understanding the role of the mind in our perception of reality are all worthy of attention, Donald reckons. “I take them very seriously,” he says.

Read the entire article here.

Image courtesy of Google Search.

Dead Man Talking

Graham is a man very much alive. But his mind has convinced him that his brain is dead and that he killed it.

From the New Scientist:

Name: Graham
Condition: Cotard’s syndrome

“When I was in hospital I kept on telling them that the tablets weren’t going to do me any good ’cause my brain was dead. I lost my sense of smell and taste. I didn’t need to eat, or speak, or do anything. I ended up spending time in the graveyard because that was the closest I could get to death.”

Nine years ago, Graham woke up and discovered he was dead.

He was in the grip of Cotard’s syndrome. People with this rare condition believe that they, or parts of their body, no longer exist.

For Graham, it was his brain that was dead, and he believed that he had killed it. Suffering from severe depression, he had tried to commit suicide by taking an electrical appliance with him into the bath.

Eight months later, he told his doctor his brain had died or was, at best, missing. “It’s really hard to explain,” he says. “I just felt like my brain didn’t exist any more. I kept on telling the doctors that the tablets weren’t going to do me any good because I didn’t have a brain. I’d fried it in the bath.”

Doctors found trying to rationalise with Graham was impossible. Even as he sat there talking, breathing – living – he could not accept that his brain was alive. “I just got annoyed. I didn’t know how I could speak or do anything with no brain, but as far as I was concerned I hadn’t got one.”

Baffled, they eventually put him in touch with neurologists Adam Zeman at the University of Exeter, UK, and Steven Laureys at the University of Liège in Belgium.

“It’s the first and only time my secretary has said to me: ‘It’s really important for you to come and speak to this patient because he’s telling me he’s dead,'” says Laureys.

Limbo state

“He was a really unusual patient,” says Zeman. Graham’s belief “was a metaphor for how he felt about the world – his experiences no longer moved him. He felt he was in a limbo state caught between life and death”.

No one knows how common Cotard’s syndrome may be. A study published in 1995 of 349 elderly psychiatric patients in Hong Kong found two with symptoms resembling Cotard’s (General Hospital Psychiatry, DOI: 10.1016/0163-8343(94)00066-M). But with successful and quick treatments for mental states such as depression – the condition from which Cotard’s appears to arise most often – readily available, researchers suspect the syndrome is exceptionally rare today. Most academic work on the syndrome is limited to single case studies like Graham.

Some people with Cotard’s have reportedly died of starvation, believing they no longer needed to eat. Others have attempted to get rid of their body using acid, which they saw as the only way they could free themselves of being the “walking dead”.

Graham’s brother and carers made sure he ate, and looked after him. But it was a joyless existence. “I didn’t want to face people. There was no point,” he says, “I didn’t feel pleasure in anything. I used to idolise my car, but I didn’t go near it. All the things I was interested in went away.”

Even the cigarettes he used to relish no longer gave him a hit. “I lost my sense of smell and my sense of taste. There was no point in eating because I was dead. It was a waste of time speaking as I never had anything to say. I didn’t even really have any thoughts. Everything was meaningless.”

Low metabolism

A peek inside Graham’s brain provided Zeman and Laureys with some explanation. They used positron emission tomography to monitor metabolism across his brain. It was the first PET scan ever taken of a person with Cotard’s. What they found was shocking: metabolic activity across large areas of the frontal and parietal brain regions was so low that it resembled that of someone in a vegetative state.

Graham says he didn’t really have any thoughts about his future during that time. “I had no other option other than to accept the fact that I had no way to actually die. It was a nightmare.”

Graveyard haunt

This feeling prompted him on occasion to visit the local graveyard. “I just felt I might as well stay there. It was the closest I could get to death. The police would come and get me, though, and take me back home.”

There were some unexplained consequences of the disorder. Graham says he used to have “nice hairy legs”. But after he got Cotard’s, all the hairs fell out. “I looked like a plucked chicken! Saves shaving them I suppose…”

It’s nice to hear him joke. Over time, and with a lot of psychotherapy and drug treatment, Graham has gradually improved and is no longer in the grip of the disorder. He is now able to live independently. “His Cotard’s has ebbed away and his capacity to take pleasure in life has returned,” says Zeman.

“I couldn’t say I’m really back to normal, but I feel a lot better now and go out and do things around the house,” says Graham. “I don’t feel that brain-dead any more. Things just feel a bit bizarre sometimes.” And has the experience changed his feeling about death? “I’m not afraid of death,” he says. “But that’s not to do with what happened – we’re all going to die sometime. I’m just lucky to be alive now.”

Read the entire article here.

Image courtesy of Wikimedia / Public domain.

Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future, that they construct explanations for the past, and that our notion of the present is an entirely fictitious concoction. On the surface this makes our lives seem like nothing more than a construction taken right out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

From the New Scientist:

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

Read the entire article here.

Consciousness as Illusion?

Massimo Pigliucci over at Rationally Speaking ponders free will, moral responsibility and consciousness and, as always, presents a well-reasoned and eloquent argument — we do exist!

[div class=attrib]From Rationally Speaking:[end-div]

For some time I have been noticing the emergence of a strange trinity of beliefs among my fellow skeptics and freethinkers: an increasing number of them, it seems, don’t believe that they can make decisions (the free will debate), don’t believe that they have moral responsibility (because they don’t have free will, or because morality is relative — take your pick), and they don’t even believe that they exist as conscious beings because, you know, consciousness is an illusion.

As I have argued recently, there are sensible ways to understand human volition (a much less metaphysically loaded and more sensible term than free will) within a lawful universe (Sean Carroll agrees and, interestingly, so does my sometime opponent Eliezer Yudkowsky). I also devoted an entire series on this blog to a better understanding of what morality is, how it works, and why it ain’t relative (within the domain of social beings capable of self-reflection). Let’s talk about consciousness then.

The oft-heard claim that consciousness is an illusion is an extraordinary one, as it relegates to an entirely epiphenomenal status what is arguably the most distinctive characteristic of human beings, the very thing that seems to shape and give meaning to our lives, and presumably one of the major outcomes of millions of years of evolution pushing for a larger brain equipped with powerful frontal lobes capable of carrying out reasoning and deliberation.

Still, if science tells us that consciousness is an illusion, we must bow to that pronouncement and move on (though we apparently cannot escape the illusion, partly because we have no free will). But what is the extraordinary evidence for this extraordinary claim? To begin with, there are studies of (very few) “split brain” patients which seem to indicate that the two hemispheres of the brain — once separated — display independent consciousness (under experimental circumstances), to the point that they may even try to make the left and right sides of the body act antagonistically to each other.

But there are a couple of obvious issues here that block an easy jump from observations on those patients to grand conclusions about the illusoriness of consciousness. First off, the two hemispheres are still conscious, so at best we have evidence that consciousness is divisible, not that it is an illusion (and that subdivision presumably can proceed no further than n=2). Second, these are highly pathological situations, and though they certainly tell us something interesting about the functioning of the brain, they are informative mostly about what happens when the brain does not function. As a crude analogy, imagine sawing a car in two, noticing that the front wheels now spin independently of the rear wheels, and concluding that the synchronous rotation of the wheels in the intact car is an “illusion.” Not a good inference, is it?

Let’s pursue this illusion thing a bit further. Sometimes people argue that physics also tells us that the way we perceive the world is an illusion. After all, apparently solid objects like tables are made of quarks and the forces that bind them together, and since that’s the fundamental level of reality (well, unless you accept string theory), clearly our senses are mistaken.

But our senses are not mistaken at all; they simply function at the (biologically) appropriate level of perception of reality. We are macroscopic objects and need to navigate the world as such. It would be highly inconvenient if we could somehow perceive quantum-level phenomena directly, and in a very strong sense the solidity of a table is not an illusion at all. It is rather an emergent property of matter that our evolved senses exploit to allow us to sit down and have a nice meal at that table without worrying about the zillions of subnuclear interactions going on within it all the time.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Consciousness Art. Courtesy of Google search.[end-div]

The Mystery of Anaesthesia

Contemporary medical and surgical procedures have been utterly transformed by anaesthesia. Prior to the first use of diethyl ether as an anaesthetic in the United States in 1842, surgery, even for minor ailments, was often a painful last resort.

Nowadays the efficacy of anaesthesia is beyond question. Yet despite the development of ever more sophisticated compounds and methods of administration, we still know little about how anaesthesia actually works.

Linda Geddes over at New Scientist has a fascinating article reviewing recent advancements in our understanding of anaesthesia, and its relevance in furthering our knowledge of consciousness in general.

[div class=attrib]From the New Scientist:[end-div]

I have had two operations under general anaesthetic this year. On both occasions I awoke with no memory of what had passed between the feeling of mild wooziness and waking up in a different room. Both times I was told that the anaesthetic would make me feel drowsy, I would go to sleep, and when I woke up it would all be over.

What they didn’t tell me was how the drugs would send me into the realms of oblivion. They couldn’t. The truth is, no one knows.

The development of general anaesthesia has transformed surgery from a horrific ordeal into a gentle slumber. It is one of the commonest medical procedures in the world, yet we still don’t know how the drugs work. Perhaps this isn’t surprising: we still don’t understand consciousness, so how can we comprehend its disappearance?

That is starting to change, however, with the development of new techniques for imaging the brain or recording its electrical activity during anaesthesia. “In the past five years there has been an explosion of studies, both in terms of consciousness, but also how anaesthetics might interrupt consciousness and what they teach us about it,” says George Mashour, an anaesthetist at the University of Michigan in Ann Arbor. “We’re at the dawn of a golden era.”

Consciousness has long been one of the great mysteries of life, the universe and everything. It is something experienced by every one of us, yet we cannot even agree on how to define it. How does the small sac of jelly that is our brain take raw data about the world and transform it into the wondrous sensation of being alive? Even our increasingly sophisticated technology for peering inside the brain has, disappointingly, failed to reveal a structure that could be the seat of consciousness.

Altered consciousness doesn’t only happen under a general anaesthetic of course – it occurs whenever we drop off to sleep, or if we are unlucky enough to be whacked on the head. But anaesthetics do allow neuroscientists to manipulate our consciousness safely, reversibly and with exquisite precision.

It was a Japanese surgeon who performed the first known surgery under anaesthetic, in 1804, using a mixture of potent herbs. In the west, the first operation under general anaesthetic took place at Massachusetts General Hospital in 1846. A flask of sulphuric ether was held close to the patient’s face until he fell unconscious.

Since then a slew of chemicals have been co-opted to serve as anaesthetics, some inhaled, like ether, and some injected. The people who gained expertise in administering these agents developed into their own medical specialty. Although long overshadowed by the surgeons who patch you up, the humble “gas man” does just as important a job, holding you in the twilight between life and death.

Consciousness may often be thought of as an all-or-nothing quality – either you’re awake or you’re not – but as I experienced, there are different levels of anaesthesia (see diagram). “The process of going into and out of general anaesthesia isn’t like flipping a light switch,” says Mashour. “It’s more akin to a dimmer switch.”

A typical subject first experiences a state similar to drunkenness, which they may or may not be able to recall later, before falling unconscious, which is usually defined as failing to move in response to commands. As they progress deeper into the twilight zone, they now fail to respond to even the penetration of a scalpel – which is the point of the exercise, after all – and at the deepest levels may need artificial help with breathing.
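
As a loose illustration of Mashour’s dimmer-switch analogy, the sketch below maps a continuous “depth” value onto the coarse stages just described. The numeric thresholds are invented for the example; real depth-of-anaesthesia monitoring is far subtler.

```python
# A loose illustration of the "dimmer switch" view of anaesthesia:
# depth as a continuum rather than an on/off state. Thresholds are
# invented for illustration only.

def describe_depth(depth):
    """Map an anaesthetic 'dimmer' level (0.0 awake .. 1.0 deepest)
    to a coarse clinical description."""
    if depth < 0.2:
        return "awake and alert"
    if depth < 0.4:
        return "drunken-like state; may or may not be recalled later"
    if depth < 0.6:
        return "unconscious: fails to move in response to commands"
    if depth < 0.8:
        return "surgical depth: no response even to a scalpel"
    return "deepest plane: may need artificial help with breathing"

for level in (0.1, 0.3, 0.5, 0.7, 0.9):
    print(f"depth {level:.1f}: {describe_depth(level)}")
```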

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Replica of the inhaler used by William T. G. Morton in 1846 in the first public demonstration of surgery using ether. Courtesy of Wikipedia. [end-div]

Undiscovered

[div class=attrib]From Eurozine:[end-div]

Neurological and Darwinistic strands in the philosophy of consciousness see human beings as no more than their evolved brains. Accounting for human beings’ fundamental difference from other animals without reducing them to biology requires openness to more expansive approaches, argues Raymond Tallis.

For several decades I have been arguing against what I call biologism. This is the idea, currently dominant within secular humanist circles, that humans are essentially animals (or at least much more beastly than has been hitherto thought) and that we need therefore to look to the biological sciences, and only there, to advance our understanding of human nature. As a result of my criticism of this position I have been accused of being a Cartesian dualist, who thinks that the mind is some kind of a ghost in the machinery of the brain. Worse, it has been suggested that I am opposed to Darwinism, to neuroscience or to science itself. Worst of all, some have suggested that I have a hidden religious agenda. For the record, I regard neuroscience (which was my own area of research) as one of the greatest monuments of the human intellect; I think Cartesian dualism is a lost cause; and I believe that Darwin’s theory is supported by overwhelming evidence. Nor do I have a hidden religious agenda: I am an atheist humanist. And this is in fact the reason why I have watched the rise of biologism with such dismay: it is a consequence of the widespread assumption that the only alternative to a supernatural understanding of human beings is a strictly naturalistic one that sees us as just another kind of beast and, ultimately, as being less conscious agents than pieces of matter stitched into the material world.

This is to do humanity a gross disservice, as I think we are so much more than gifted chimps. Unpacking the most “ordinary” moment of human life reveals knowledge, skills, emotions, intuitions, a sense of past and future and of an infinitely elaborated world, that are not to be found elsewhere in the living world.

Biologism has two strands: “Neuromania” and “Darwinitis”. Neuromania arises out of the belief that human consciousness is identical with neural activity in certain parts of the brain. It follows from this that the best way to investigate what we humans truly are, to understand the origins of our beliefs, our predispositions, our morality and even our aesthetic pleasures, will be to peer into the brains of human subjects using the latest scanning technology. This way we shall know what is really going on when we are having experiences, thinking thoughts, feeling emotions, remembering memories, making decisions, being wise or silly, breaking the law, falling in love and so on.

The other strand is Darwinitis, rooted in the belief that evolutionary theory not only explains the origin of the species H. sapiens – which it does, of course – but also explains humans as they are today; that people are at bottom the organisms forged by the processes of natural selection and nothing more.

[div class=attrib]More from theSource here.[end-div]

Is Quantum Mechanics Controlling Your Thoughts?

[div class=attrib]From Discover:[end-div]

Graham Fleming sits down at an L-shaped lab bench, occupying a footprint about the size of two parking spaces. Alongside him, a couple of off-the-shelf lasers spit out pulses of light just millionths of a billionth of a second long. After snaking through a jagged path of mirrors and lenses, these minuscule flashes disappear into a smoky black box containing proteins from green sulfur bacteria, which ordinarily obtain their energy and nourishment from the sun. Inside the black box, optics manufactured to billionths-of-a-meter precision detect something extraordinary: Within the bacterial proteins, dancing electrons make seemingly impossible leaps and appear to inhabit multiple places at once.

Peering deep into these proteins, Fleming and his colleagues at the University of California at Berkeley and at Washington University in St. Louis have discovered the driving engine of a key step in photosynthesis, the process by which plants and some microorganisms convert water, carbon dioxide, and sunlight into oxygen and carbohydrates. More efficient by far in its ability to convert energy than any operation devised by man, this cascade helps drive almost all life on earth. Remarkably, photosynthesis appears to derive its ferocious efficiency not from the familiar physical laws that govern the visible world but from the seemingly exotic rules of quantum mechanics, the physics of the subatomic world. Somehow, in every green plant or photosynthetic bacterium, the two disparate realms of physics not only meet but mesh harmoniously. Welcome to the strange new world of quantum biology.

On the face of things, quantum mechanics and the biological sciences do not mix. Biology focuses on larger-scale processes, from molecular interactions between proteins and DNA up to the behavior of organisms as a whole; quantum mechanics describes the often-strange nature of electrons, protons, muons, and quarks—the smallest of the small. Many events in biology are considered straightforward, with one reaction begetting another in a linear, predictable way. By contrast, quantum mechanics is fuzzy because when the world is observed at the subatomic scale, it is apparent that particles are also waves: A dancing electron is both a tangible nugget and an oscillation of energy. (Larger objects also exist in particle and wave form, but the effect is not noticeable in the macroscopic world.)
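
The parenthetical point, that larger objects are also waves but the effect is unnoticeable at everyday scales, can be made concrete with a back-of-the-envelope calculation (my illustration, not from the article) using the de Broglie relation, wavelength = h / (mass × speed):

```python
# Back-of-the-envelope de Broglie wavelengths: why wave behaviour matters
# for electrons but is invisible for everyday objects.

H = 6.626e-34  # Planck constant, in joule-seconds

def de_broglie_wavelength(mass_kg, speed_m_s):
    """de Broglie wavelength in metres: lambda = h / (m * v)."""
    return H / (mass_kg * speed_m_s)

electron = de_broglie_wavelength(9.11e-31, 1.0e6)  # a fast electron
baseball = de_broglie_wavelength(0.145, 40.0)      # a thrown baseball

print(f"electron: {electron:.2e} m (comparable to atomic dimensions)")
print(f"baseball: {baseball:.2e} m (unmeasurably small)")
```

The electron’s wavelength comes out around 7 × 10⁻¹⁰ m, comparable to atomic scales, while the baseball’s is around 10⁻³⁴ m, which is why its wave nature can never be observed.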

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Dylan Burnette/Olympus Bioscapes Imaging Competition.[end-div]

On the Mystery of Human Consciousness

[div class=attrib]From Eurozine:[end-div]

Philosophers and natural scientists regularly dismiss consciousness as irrelevant. However, even its critics agree that consciousness is less a problem than a mystery. One way into the mystery is through an understanding of autism.

It started with a letter from Michaela Martinková:

Our eldest son, aged almost eight, has Asperger’s Syndrome (AS). It is a diagnosis that falls into the autistic spectrum, but his IQ is very much above average. In an effort to find out how he thinks, I decided that I must first find out how we think, and so I read up on the cognitive sciences and epistemology. I found what I needed there, although I have a strong feeling that precisely the way of thinking of people such as our son is missing from the mosaic of these sciences. And I think that this missing piece could rearrange the whole mosaic.

In the book Philosophy and the Cognitive Sciences, you write, among other things: “Actually the only handicap so far observed in these children (with autism and AS) is that they cannot use human psychology. They cannot postulate intentional states in their own minds and in the minds of other people.” I think that deeper knowledge of autism, and especially of Asperger’s Syndrome as its version found in people with higher IQ in the framework of autism, could be immensely enriching for the cognitive sciences. I am convinced that these people think in an entirely different way from us.

Why the present interest in autism? It is generally known that some people whose diagnosis falls within the autistic spectrum, namely people with Asperger’s Syndrome and high-functioning autism, show a remarkable combination of highly above-average intelligence and well below-average social ability. The causes of this peculiarity, though far from sufficiently clarified, are usually traced to reduced ability in the areas of verbal communication and empathy, which form the basis of social intelligence.

And why consciousness? Many people think today that, if we are to better understand ourselves and our relationships to the world and other people, the last problem we must solve is consciousness. Many others think that if we understand the brain, its structure, and its functioning, consciousness will cease to be a problem. The more critical supporters of both views agree on one thing: consciousness is not a problem; it is more a mystery. If a problem is something about which we can formulate a question, to which it is possible to seek a reasonable answer, then consciousness is a mystery, because it is still not possible to formulate a question about it that could be answered in a way open to verification or refutation by the normal methods of science. Perhaps the psychologist Daniel M. Wegner best captured the present state of knowledge with the statement: “All human experience states that we consciously control our actions, but all theories are against this.” In spite of all the unclarity and dispute about what consciousness is and how it works, the view has begun to prevail in recent years that language and consciousness are the link that makes a group of individuals into a community.

[div class=attrib]More from theSource here.[end-div]