All posts by Mike

On the mystery of human consciousness

[div class=attrib]From Eurozine:[end-div]

Philosophers and natural scientists regularly dismiss consciousness as irrelevant. However, even its critics agree that consciousness is less a problem than a mystery. One way into the mystery is through an understanding of autism.

It started with a letter from Michaela Martinková:

Our eldest son, aged almost eight, has Asperger’s Syndrome (AS). It is a diagnosis that falls into the autistic spectrum, but his IQ is very much above average. In an effort to find out how he thinks, I decided that I must find out how we think, and so I read into the cognitive sciences and epistemology. I found what I needed there, although I have an intense feeling that precisely the way of thinking of such people as our son is missing from the mosaic of these sciences. And I think that this missing piece could rearrange the whole mosaic.

In the book Philosophy and the Cognitive Sciences, you write, among other things: “Actually the only handicap so far observed in these children (with autism and AS) is that they cannot use human psychology. They cannot postulate intentional states in their own minds and in the minds of other people.” I think that deeper knowledge of autism, and especially of Asperger’s Syndrome as its version found in people with higher IQ in the framework of autism, could be immensely enriching for the cognitive sciences. I am convinced that these people think in an entirely different way from us.

Why the present interest in autism? It is generally known that some people whose diagnosis falls within the autistic spectrum, namely people with Asperger’s Syndrome and high-functioning autism, show a remarkable combination of highly above-average intelligence and well below-average social ability. The causes of this peculiarity, although far from sufficiently clarified, are usually explained by reduced ability in the areas of verbal communication and empathy, which form the basis of social intelligence.

And why consciousness? Many people think today that, if we are to better understand ourselves and our relationships to the world and other people, the last problem we must solve is consciousness. Many others think that if we understand the brain, its structure, and its functioning, consciousness will cease to be a problem. The more critical supporters of both views agree on one thing: consciousness is not so much a problem as a mystery. If a problem is something about which we can formulate a question to which a reasonable answer may be sought, then consciousness is a mystery, because it is still not possible to formulate a question about it that could be answered in a way verifiable or refutable by the normal methods of science. Perhaps the psychologist Daniel M. Wegner best captured the present state of knowledge with the statement: “All human experience states that we consciously control our actions, but all theories are against this.” In spite of all the unclarity and disputes about what consciousness is and how it works, the view has begun to prevail in recent years that language and consciousness are the link that makes a group of individuals into a community.

[div class=attrib]More from theSource here.[end-div]

Suprealist art, suprealist life

[div class=attrib]From Eurozine:[end-div]

Suprealism is a “movement” pioneered by Leonhard Lapin that combines suprematism and realism; it mirrors the “suprealist world”, where art is packaged for consumer culture.

In 1993, when I started the suprealist phase of my work, which was followed by the “Suprealist manifesto” and the exhibition at Vaal gallery in Tallinn, a prominent art critic proclaimed that it represented the “hara-kiri of the old avant-garde”. A decade has passed, and the “old avant-gardist” and his suprealism are still alive and kicking, while, as if following my prophecy, life and its cultural representations have become more and more suprealist.

The term “suprealism” emerged quite naturally: its first half originates from the “suprematism” of the early twentieth-century Russian avant-garde, which claimed to represent the highest form of being, abandoning Earth and conquering space. The other half relates to the familiar, dogmatically imposed “realism”, the only officially tolerated style under communist rule. Initially, I attempted to bring together under this concept the structures of high art and images from mass culture. The domain that attracted the most attention was, of course, pornography. During my 1996 exhibition at the Latvian Museum of Foreign Art in Riga, the room containing 30 of my “pornographic works” was closed. There were similar incidents in Bristol, where some of my pieces were censored, not to mention the angry reactions in Estonia. It is remarkable that it is art that highlights what is otherwise hypocritically hidden behind cellophane in news kiosks. But nobody is dismantling the kiosks – the rage is directed at an artist’s exhibition.

An important event in the history of suprealism happened in 2001, when the Estonian Art Museum held an exhibition on the anniversary of the nineteenth-century Estonian academic painter Johan Köler. The exhibition was advertised with posters reproducing Köler’s sugary painting “A maid at a well”, sometimes at ten times the size of the original. Since, under Soviet rule, Köler was officially turned into a precursor of socialist realism, our generation has a complex and ambiguous relationship with this master. When the 2001 exhibition repeated the old stereotypical clichés about the artist, I expressed my disappointment by relating the exhibition posters to modern commercial packaging, advertisements, and catalogues. It was the starting point of the series “Suprealist artists”, which I am still continuing, using cheap reproductions of classical and modern art along with packages, puzzles, flyers, ads, and other artifacts of the contemporary consumer world. I use them to make new visual structures for the new century.

The “rape of art” as an advertising method is becoming more and more visible: many famous twentieth-century modernists are used in some way in advertising, which brings the images of Dali, Magritte, or Picasso to the consuming masses.

[div class=attrib]More from theSource here.[end-div]

The Memory Code

[div class=attrib]From Scientific American:[end-div]

Researchers are closing in on the rules that the brain uses to lay down memories. Discovery of this memory code could lead to the design of smarter computers and robots and even to new ways to peer into the human mind.

INTRODUCTION
Anyone who has ever been in an earthquake has vivid memories of it: the ground shakes, trembles, buckles and heaves; the air fills with sounds of rumbling, cracking and shattering glass; cabinets fly open; books, dishes and knickknacks tumble from shelves. We remember such episodes–with striking clarity and for years afterward–because that is what our brains evolved to do: extract information from salient events and use that knowledge to guide our responses to similar situations in the future. This ability to learn from past experience allows all animals to adapt to a world that is complex and ever changing.

For decades, neuroscientists have attempted to unravel how the brain makes memories. Now, by combining a set of novel experiments with powerful mathematical analyses and an ability to record simultaneously the activity of more than 200 neurons in awake mice, my colleagues and I have discovered what we believe is the basic mechanism the brain uses to draw vital information from experiences and turn that information into memories. Our results add to a growing body of work indicating that a linear flow of signals from one neuron to another is not enough to explain how the brain represents perceptions and memories. Rather, the coordinated activity of large populations of neurons is needed.

Furthermore, our studies indicate that neuronal populations involved in encoding memories also extract the kind of generalized concepts that allow us to transform our daily experiences into knowledge and ideas. Our findings bring biologists closer to deciphering the universal neural code: the rules the brain follows to convert collections of electrical impulses into perception, memory, knowledge and, ultimately, behavior. Such understanding could allow investigators to develop more seamless brain-machine interfaces, design a whole new generation of smart computers and robots, and perhaps even assemble a codebook of the mind that would make it possible to decipher–by monitoring neural activity–what someone remembers and thinks.

HISTORICAL PERSPECTIVE
My group’s research into the brain code grew out of work focused on the molecular basis of learning and memory. In the fall of 1999 we generated a strain of mice engineered to have improved memory. This “smart” mouse–nicknamed Doogie after the brainy young doctor in the early-1990s TV dramedy Doogie Howser, M.D.–learns faster and remembers things longer than wild-type mice. The work generated great interest and debate and even made the cover of Time magazine. But our findings left me asking, What exactly is a memory?

Scientists knew that converting perceptual experiences into long-lasting memories requires a brain region called the hippocampus. And we even knew what molecules are critical to the process, such as the NMDA receptor, which we altered to produce Doogie. But no one knew how, exactly, the activation of nerve cells in the brain represents memory. A few years ago I began to wonder if we could find a way to describe mathematically or physiologically what memory is. Could we identify the relevant neural network dynamic and visualize the activity pattern that occurs when a memory is formed?

For the better part of a century, neuroscientists had been attempting to discover which patterns of nerve cell activity represent information in the brain and how neural circuits process, modify and store information needed to control and shape behavior. Their earliest efforts involved simply trying to correlate neural activity–the frequency at which nerve cells fire–with some sort of measurable physiological or behavioral response. For example, in the mid-1920s Edgar Adrian performed electrical recordings on frog tissue and found that the firing rate of individual stretch nerves attached to a muscle varies with the amount of weight that is put on the muscle. This study was the first to suggest that information (in this case the intensity of a stimulus) can be conveyed by changes in neural activity–work for which he later won a Nobel Prize.
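
Adrian’s finding is the prototype of what is now called rate coding. The toy sketch below is purely illustrative – the baseline, gain, and ceiling are assumptions, not Adrian’s measurements – but it captures the idea: stimulus intensity maps to a mean firing rate, and each recording yields a noisy spike count around that rate.

```python
# Toy sketch of rate coding: stimulus intensity is conveyed by how fast a
# neuron fires. Baseline, gain, and ceiling are illustrative assumptions.
import math
import random

def firing_rate(weight_grams, baseline_hz=5.0, gain=0.8, max_hz=100.0):
    """Mean firing rate (Hz) for a given weight on the muscle."""
    return min(baseline_hz + gain * weight_grams, max_hz)

def poisson_spike_count(rate_hz, window_s=1.0):
    """Sample a spike count for one recording window (Knuth's method)."""
    threshold = math.exp(-rate_hz * window_s)
    count, p = 0, 1.0
    while p > threshold:
        count += 1
        p *= random.random()
    return count - 1

for weight in (0, 20, 50, 100):
    rate = firing_rate(weight)
    print(f"{weight:3d} g -> mean {rate:5.1f} Hz, "
          f"one trial: {poisson_spike_count(rate)} spikes")
```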

Since then, many researchers using a single electrode to monitor the activity of one neuron at a time have shown that, when stimulated, neurons in different areas of the brain also change their firing rates. For example, pioneering experiments by David H. Hubel and Torsten N. Wiesel demonstrated that the neurons in the primary visual cortex of cats, an area at the back of the brain, respond vigorously to the moving edges of a bar of light. Charles G. Gross of Princeton University and Robert Desimone of the Massachusetts Institute of Technology found that neurons in a different brain region of the monkey (the inferotemporal cortex) can alter their behavior in response to more complex stimuli, such as pictures of faces.

[div class=attrib]More from theSource here.[end-div]

A Simpler Origin for Life

[div class=attrib]From Scientific American:[end-div]

Extraordinary discoveries inspire extraordinary claims. Thus, James Watson reported that immediately after he and Francis Crick uncovered the structure of DNA, Crick “winged into the Eagle (pub) to tell everyone within hearing that we had discovered the secret of life.” Their structure–an elegant double helix–almost merited such enthusiasm. Its proportions permitted information storage in a language in which four chemicals, called bases, played the same role as 26 letters do in the English language.

Further, the information was stored in two long chains, each of which specified the contents of its partner. This arrangement suggested a mechanism for reproduction: The two strands of the DNA double helix parted company, and new DNA building blocks that carry the bases, called nucleotides, lined up along the separated strands and linked up. Two double helices now existed in place of one, each a replica of the original.
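
The pairing rule the passage describes is mechanical enough to state in a few lines of code. Below is a minimal sketch of that logic (ignoring real-world details such as the antiparallel orientation of the strands): part the duplex, and each strand templates a fresh partner.

```python
# Minimal sketch of complementary base pairing and strand-templated copying.
COMPLEMENT = {"A": "T", "T": "A", "G": "C", "C": "G"}

def complement(strand):
    """Pair each base with its partner: A with T, G with C."""
    return "".join(COMPLEMENT[base] for base in strand)

def replicate(double_helix):
    """Part the two strands; each one templates a new partner strand."""
    return [(strand, complement(strand)) for strand in double_helix]

original = ("ATGCGT", complement("ATGCGT"))
for duplex in replicate(original):
    print(duplex)  # two double helices now exist in place of one
```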

[div class=attrib]More from theSource here.[end-div]

The Mystery of Methane on Mars and Titan

[div class=attrib]From Scientific American:[end-div]

It might mean life, it might mean unusual geologic activity; whichever it is, the presence of methane in the atmospheres of Mars and Titan is one of the most tantalizing puzzles in our solar system.

Of all the planets in the solar system other than Earth, Mars has arguably the greatest potential for life, either extinct or extant. It resembles Earth in so many ways: its formation process, its early climate history, its reservoirs of water, its volcanoes and other geologic processes. Microorganisms would fit right in. Another planetary body, Saturn’s largest moon Titan, also routinely comes up in discussions of extraterrestrial biology. In its primordial past, Titan possessed conditions conducive to the formation of molecular precursors of life, and some scientists believe it may have been alive then and might even be alive now.

To add intrigue to these possibilities, astronomers studying both these worlds have detected a gas that is often associated with living things: methane. It exists in small but significant quantities on Mars, and Titan is literally awash with it. A biological source is at least as plausible as a geologic one, for Mars if not for Titan. Either explanation would be fascinating in its own way, revealing either that we are not alone in the universe or that both Mars and Titan harbor large underground bodies of water together with unexpected levels of geochemical activity. Understanding the origin and fate of methane on these bodies will provide crucial clues to the processes that shape the formation, evolution and habitability of terrestrial worlds in this solar system and possibly in others.

[div class=attrib]More from theSource here.[end-div]

Can we say what we want?

[div class=attrib]From Eurozine:[end-div]

The French satirical paper Charlie Hebdo has just been acquitted of publicly insulting Muslims by reprinting the notorious Danish cartoons featuring the Prophet. Influential Islamic groups had sued it for inciting hatred. Is free speech really in danger worldwide?

The understanding and practices of freedom of expression are being challenged in the twenty-first century. Among the controversies of the past year or so that have drawn worldwide attention are the row over the Danish cartoons seen as anti-Muslim, the imprisonment of a British historian in Austria for Holocaust denial, and disputes over a French law forbidding denial of the Armenian genocide.

These debates are not new: the suppression of competing views and dissent, and of anything deemed immoral, heretical, or offensive, has dominated social, religious, and political history. They have returned to the fore in response to the stimuli of the communication revolution and of the events of 9/11. The global reach of most of our messages, including the culturally and politically specific, has made all expressions and their control a prize worth fighting for, even to the death. Does this imply that stronger restrictions on freedom of expression should be established?

Freedom of expression, including the right to access to information, is a fundamental human right, central to achieving individual freedoms and real democracy. It increases the knowledge base and participation within a society and can also secure external checks on state accountability.

Yet freedom of expression is not absolute. The extent to which expression ought to be protected or censored has been the object of many impassioned debates. Few argue that freedom of expression suffers no limits, but the line between what is permissible and what is not is always contested. Unlike many other rights, this one depends heavily on context, and its definition is mostly left to the discretion of states.

Under international human rights standards, the right to freedom of expression may be restricted in order to protect the rights or reputations of others, national security, public order, or public health or morals, provided the restriction is prescribed by law and is necessary in a democratic society. This formulation appears both in the International Covenant on Civil and Political Rights (article 19) and in the European Convention on Human Rights.

[div class=attrib]More from theSource here.[end-div]

The Movies in Our Eyes

[div class=attrib]From Scientific American:[end-div]

The retina processes much more information than anyone had ever imagined, sending a dozen different movies to the brain.

We take our astonishing visual capabilities so much for granted that few of us ever stop to consider how we actually see. For decades, scientists have likened our visual-processing machinery to a television camera: the eye’s lens focuses incoming light onto an array of photoreceptors in the retina. These light detectors magically convert those photons into electrical signals that are sent along the optic nerve to the brain for processing. But recent experiments by the two of us and others indicate that this analogy is inadequate. The retina actually performs a significant amount of preprocessing right inside the eye and then sends a series of partial representations to the brain for interpretation.

We came to this surprising conclusion after investigating the retinas of rabbits, which are remarkably similar to those in humans. (Our work with salamanders has led to similar results.) The retina, it appears, is a tiny crescent of brain matter that has been brought out to the periphery to gain more direct access to the world. How does the retina construct the representations it sends? What do they “look” like when they reach the brain’s visual centers? How do they convey the vast richness of the real world? Do they impart meaning, helping the brain to analyze a scene? These are just some of the compelling questions the work has begun to answer.

[div class=attrib]More from theSource here.[end-div]

The concept of God – and why we don’t need it

[div class=attrib]From Eurozine:[end-div]

In these newly religious times, it no longer seems superfluous to rearm the atheists with arguments. When push comes to shove, atheists can only trust their reason, writes Burkhard Müller.

Some years ago I wrote a book entitled Drawing a Line – A Critique of Christianity [Schlußstrich – Kritik des Christentums], which argued that Christianity was false: not only in terms of its historical record but fundamentally, in its very conception. I undertook to uncover this falsity as a contradiction in terms. While I do not wish to retract any of what I said at the time, I would now go beyond what I argued then in two respects.

For one thing, I no longer wish to adopt the same aggressive tone. The book was written at the beginning of the 1990s, when I was still living in Würzburg (in Bavaria), a bastion of Roman Catholicism. It is a prosperous city, powerful and conscious of the fact, which made it more than capable of provoking my ire; whereas for thirteen years now I have been living in the new East of Germany, where roughly eighty per cent of the population no longer recognize Christianity even as a rumour, where it appears as the exception, not the rule, and where one has the opportunity to reflect on the truth of the claim “this is as good as it gets”.

The second point is this: it seems to me that institutionalized, dogmatic Christianity, as expressed in the words of the Holy Scriptures and – more succinctly still – in the Credo, is losing ground. It is losing ground not only to a stupid and potentially violent strain of fundamentalism, as manifested in Islam and the American religious Right, but in Europe mostly to an often rather intellectually woolly and mawkish eclecticism. I will not be dealing here with any theological system in its doctrinal sense. I want rather to sound out the religious impulse, even – and especially – in its more diffuse form, and to get to its root. That is to say, to ask of the concept of God whether in practice it accomplishes what is expected of it.

For people do not believe in God because they have been shown the proof of his existence. All such proofs presented by philosophers and theologians through the millennia have, by their very nature, the regrettable flaw that a proof can only refer to the circumstances of existing things, whereas God, as the predecessor of all circumstances, comes before, so to speak, and outside the realm of the demonstrable. These proofs, then, all have the character of something tacked on, giving the impression of a thin veneer on a very hefty block of wood. Belief in God, where it does not merely arise out of an unquestioned tradition, demands a spontaneous act on the part of the believer which the believers themselves will tend to describe as an act of faith, their opponents as a purely arbitrary decision; one, nevertheless, that always stems from a need of some kind. People believe in God because along with this belief goes an expectation that a particular wish will be fulfilled for them, a particular problem solved. What kinds of need are these, and how can God meet them?

[div class=attrib]More from theSource here.[end-div]

A Digital Life

[div class=attrib]From Scientific American:[end-div]

New systems may allow people to record everything they see and hear–and even things they cannot sense–and to store all these data in a personal digital archive.

Human memory can be maddeningly elusive. We stumble upon its limitations every day, when we forget a friend’s telephone number, the name of a business contact or the title of a favorite book. People have developed a variety of strategies for combating forgetfulness–messages scribbled on Post-it notes, for example, or electronic address books carried in handheld devices–but important information continues to slip through the cracks. Recently, however, our team at Microsoft Research has begun a quest to digitally chronicle every aspect of a person’s life, starting with one of our own lives (Bell’s). For the past six years, we have attempted to record all of Bell’s communications with other people and machines, as well as the images he sees, the sounds he hears and the Web sites he visits–storing everything in a personal digital archive that is both searchable and secure.

Digital memories can do more than simply assist the recollection of past events, conversations and projects. Portable sensors can take readings of things that are not even perceived by humans, such as oxygen levels in the blood or the amount of carbon dioxide in the air. Computers can then scan these data to identify patterns: for instance, they might determine which environmental conditions worsen a child’s asthma. Sensors can also log the three billion or so heartbeats in a person’s lifetime, along with other physiological indicators, and warn of a possible heart attack. This information would allow doctors to spot irregularities early, providing warnings before an illness becomes serious. Your physician would have access to a detailed, ongoing health record, and you would no longer have to rack your brain to answer questions such as “When did you first feel this way?”
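
A quick back-of-envelope calculation shows where a figure like three billion heartbeats comes from, and why logging them all is tractable. The heart rate, lifespan, and storage size below are illustrative assumptions, not numbers from the article.

```python
# Back-of-envelope check of the "three billion or so heartbeats" figure.
beats_per_minute = 70                       # assumed average heart rate
minutes_per_year = 60 * 24 * 365            # 525,600
lifetime_years = 80                         # assumed lifespan

lifetime_beats = beats_per_minute * minutes_per_year * lifetime_years
print(f"{lifetime_beats:,} beats")          # 2,943,360,000 -- about 3 billion

# Even one 8-byte timestamped reading per beat is a modest archive:
print(f"{lifetime_beats * 8 / 1e9:.0f} GB") # roughly 24 GB over a lifetime
```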

[div class=attrib]More from theSource here.[end-div]

The Universe’s Invisible Hand

[div class=attrib]From Scientific American:[end-div]

Dark energy does more than hurry along the expansion of the universe. It also has a stranglehold on the shape and spacing of galaxies.

What took us so long? Only in 1998 did astronomers discover we had been missing nearly three quarters of the contents of the universe, the so-called dark energy–an unknown form of energy that surrounds each of us, tugging at us ever so slightly, holding the fate of the cosmos in its grip, but to which we are almost totally blind. Some researchers, to be sure, had anticipated that such energy existed, but even they will tell you that its detection ranks among the most revolutionary discoveries in 20th-century cosmology. Not only does dark energy appear to make up the bulk of the universe, but its existence, if it stands the test of time, will probably require the development of new theories of physics.

Scientists are just starting the long process of figuring out what dark energy is and what its implications are. One realization has already sunk in: although dark energy betrayed its existence through its effect on the universe as a whole, it may also shape the evolution of the universe’s inhabitants–stars, galaxies, galaxy clusters. Astronomers may have been staring at its handiwork for decades without realizing it.

[div class=attrib]More from theSource here.[end-div]

Evolved for Cancer?

[div class=attrib]From Scientific American:[end-div]

Natural selection lacks the power to erase cancer from our species and, some scientists argue, may even have provided tools that help tumors grow.

Natural selection is not natural perfection. Living creatures have evolved some remarkably complex adaptations, but we are still very vulnerable to disease. Among the most tragic of those ills–and perhaps most enigmatic–is cancer. A cancerous tumor is exquisitely well adapted for survival in its own grotesque way. Its cells continue to divide long after ordinary cells would stop. They destroy surrounding tissues to make room for themselves, and they trick the body into supplying them with energy to grow even larger. But the tumors that afflict us are not foreign parasites that have acquired sophisticated strategies for attacking our bodies. They are made of our own cells, turned against us. Nor is cancer some bizarre rarity: a woman in the U.S. has a 39 percent chance of being diagnosed with some type of cancer in her lifetime. A man has a 45 percent chance.

These facts make cancer a grim yet fascinating puzzle for evolutionary biologists. If natural selection is powerful enough to produce complex adaptations, from the eye to the immune system, why has it been unable to wipe out cancer? The answer, these investigators argue, lies in the evolutionary process itself. Natural selection has favored certain defenses against cancer but cannot eliminate it altogether. Ironically, natural selection may even inadvertently provide some of the tools that cancer cells can use to grow.

[div class=attrib]More from theSource here.[end-div]

The Dark Ages of the Universe

[div class=attrib]From Scientific American:[end-div]

Astronomers are trying to fill in the blank pages in our photo album of the infant universe.

When I look up into the sky at night, I often wonder whether we humans are too preoccupied with ourselves. There is much more to the universe than meets the eye on earth. As an astrophysicist I have the privilege of being paid to think about it, and it puts things in perspective for me. There are things that I would otherwise be bothered by–my own death, for example. Everyone will die sometime, but when I see the universe as a whole, it gives me a sense of longevity. I do not care so much about myself as I would otherwise, because of the big picture.

Cosmologists are addressing some of the fundamental questions that people attempted to resolve over the centuries through philosophical thinking, but we are doing so based on systematic observation and a quantitative methodology. Perhaps the greatest triumph of the past century has been a model of the universe that is supported by a large body of data. The value of such a model to our society is sometimes underappreciated. When I open the daily newspaper as part of my morning routine, I often see lengthy descriptions of conflicts between people about borders, possessions or liberties. Today’s news is often forgotten a few days later. But when one opens ancient texts that have appealed to a broad audience over a longer period of time, such as the Bible, what does one often find in the opening chapter? A discussion of how the constituents of the universe–light, stars, life–were created. Although humans are often caught up with mundane problems, they are curious about the big picture. As citizens of the universe we cannot help but wonder how the first sources of light formed, how life came into existence and whether we are alone as intelligent beings in this vast space. Astronomers in the 21st century are uniquely positioned to answer these big questions.

[div class=attrib]More from theSource here.[end-div]

Mirrors in the Mind

[div class=attrib]From Scientific American:[end-div]

A special class of brain cells reflects the outside world, revealing a new avenue for human understanding, connecting and learning.

John watches Mary, who is grasping a flower. John knows what Mary is doing–she is picking up the flower–and he also knows why she is doing it. Mary is smiling at John, and he guesses that she will give him the flower as a present. The simple scene lasts just moments, and John’s grasp of what is happening is nearly instantaneous. But how exactly does he understand Mary’s action, as well as her intention, so effortlessly?

A decade ago most neuroscientists and psychologists would have attributed an individual’s understanding of someone else’s actions and, especially, intentions to a rapid reasoning process not unlike that used to solve a logical problem: some sophisticated cognitive apparatus in John’s brain elaborated on the information his senses took in and compared it with similar previously stored experiences, allowing John to arrive at a conclusion about what Mary was up to and why.

[div class=attrib]More from theSource here.[end-div]

Viral Nanoelectronics

[div class=attrib]From Scientific American:[end-div]

M.I.T. breeds viruses that coat themselves in selected substances, then self-assemble into such devices as liquid crystals, nanowires and electrodes.

For many years, materials scientists wanted to know how the abalone, a marine snail, constructed its magnificently strong shell from unpromising minerals, so that they could make similar materials themselves. Angela M. Belcher asked a different question: Why not get the abalone to make things for us?

She put a thin glass slip between the abalone and its shell, then removed it. “We got a flat pearl,” she says, “which we could use to study shell formation on an hour-by-hour basis, without having to sacrifice the animal.” It turns out the abalone manufactures proteins that induce calcium carbonate molecules to adopt two distinct yet seamlessly melded crystalline forms–one strong, the other fast-growing. The work earned her a Ph.D. from the University of California, Santa Barbara, in 1997 and paved her way to consultancies with the pearl industry, a professorship at the Massachusetts Institute of Technology, and a founding role in a start-up company called Cambrios in Mountain View, Calif.

[div class=attrib]More from theSource here.[end-div]

A Plan to Keep Carbon in Check

[div class=attrib]By Robert H. Socolow and Stephen W. Pacala, From Scientific American:[end-div]

Getting a grip on greenhouse gases is daunting but doable. The technologies already exist. But there is no time to lose.

Retreating glaciers, stronger hurricanes, hotter summers, thinner polar bears: the ominous harbingers of global warming are driving companies and governments to work toward an unprecedented change in the historical pattern of fossil-fuel use. Faster and faster, year after year for two centuries, human beings have been transferring carbon to the atmosphere from below the surface of the earth. Today the world’s coal, oil and natural gas industries dig up and pump out about seven billion tons of carbon a year, and society burns nearly all of it, releasing carbon dioxide (CO2). Ever more people are convinced that prudence dictates a reversal of the present course of rising CO2 emissions.

The boundary separating the truly dangerous consequences of emissions from the merely unwise is probably located near (but below) a doubling of the concentration of CO2 that was in the atmosphere in the 18th century, before the Industrial Revolution began – that is, near twice the preindustrial level of roughly 280 parts per million. Every increase in concentration carries new risks, but avoiding that danger zone would reduce the likelihood of triggering major, irreversible climate changes, such as the disappearance of the Greenland ice cap. Two years ago the two of us provided a simple framework to relate future CO2 emissions to this goal.

[div class=attrib]More from theSource here.[end-div]

Plan B for Energy

[div class=attrib]From Scientific American:[end-div]

If efficiency improvements and incremental advances in today’s technologies fail to halt global warming, could revolutionary new carbon-free energy sources save the day? Don’t count on it–but don’t count it out, either.

To keep this world tolerable for life as we like it, humanity must complete a marathon of technological change whose finish line lies far over the horizon. Robert H. Socolow and Stephen W. Pacala of Princeton University have compared the feat to a multigenerational relay race [see their article “A Plan to Keep Carbon in Check”]. They outline a strategy to win the first 50-year leg by reining back carbon dioxide emissions from a century of unbridled acceleration. Existing technologies, applied both wisely and promptly, should carry us to this first milestone without trampling the global economy. That is a sound plan A.

The plan is far from foolproof, however. It depends on societies ramping up an array of carbon-reducing practices to form seven “wedges,” each of which keeps 25 billion tons of carbon in the ground and out of the air. Any slow starts or early plateaus will pull us off track. And some scientists worry that stabilizing greenhouse gas emissions will require up to 18 wedges by 2056, not the seven that Socolow and Pacala forecast in their most widely cited model.
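
For concreteness, here is the arithmetic behind those figures, assuming the wedge definition from Socolow and Pacala’s framework, in which a wedge ramps linearly from zero to one billion tons of avoided carbon per year over 50 years:

```latex
% A wedge grows from 0 to 1 GtC/yr of avoided emissions over 50 years,
% so its cumulative saving is the area of a triangle:
\[
  \text{one wedge} \;=\; \tfrac{1}{2} \times 50~\text{yr} \times 1~\tfrac{\text{GtC}}{\text{yr}}
  \;=\; 25~\text{GtC} \quad (\text{25 billion tons of carbon}),
\]
\[
  7~\text{wedges} = 175~\text{GtC}, \qquad 18~\text{wedges} = 450~\text{GtC}.
\]
```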

[div class=attrib]More from theSource here.[end-div]

The Expert Mind

[div class=attrib]From Scientific American:[end-div]

Studies of the mental processes of chess grandmasters have revealed clues to how people become experts in other fields as well.

A man walks along the inside of a circle of chess tables, glancing at each for two or three seconds before making his move. On the outer rim, dozens of amateurs sit pondering their replies until he completes the circuit. The year is 1909, the man is José Raúl Capablanca of Cuba, and the result is a whitewash: 28 wins in as many games. The exhibition was part of a tour in which Capablanca won 168 games in a row.

How did he play so well, so quickly? And how far ahead could he calculate under such constraints? “I see only one move ahead,” Capablanca is said to have answered, “but it is always the correct one.”

[div class=attrib]More from theSource here.[end-div]

A Power Grid for the Hydrogen Economy

[div class=attrib]From Scientific American:[end-div]

On the afternoon of August 14, 2003, electricity failed to arrive in New York City, plunging the eight million inhabitants of the Big Apple–along with 40 million other people throughout the northeastern U.S. and Ontario–into a tense night of darkness. After one power plant in Ohio had shut down, elevated power loads overheated high-voltage lines, which sagged into trees and short-circuited. Like toppling dominoes, the failures cascaded through the electrical grid, knocking 265 power plants offline and darkening 24,000 square kilometers.

That incident–and an even more extensive blackout that affected 56 million people in Italy and Switzerland a month later–called attention to pervasive problems with modern civilization’s vital equivalent of a biological circulatory system, its interconnected electrical networks. In North America the electrical grid has evolved in piecemeal fashion over the past 100 years. Today the more than $1-trillion infrastructure spans the continent with millions of kilometers of wire operating at up to 765,000 volts. Despite its importance, no single organization has control over the operation, maintenance or protection of the grid; the same is true in Europe. Dozens of utilities must cooperate even as they compete to generate and deliver, every second, exactly as much power as customers demand–and no more. The 2003 blackouts raised calls for greater government oversight and spurred the industry to move more quickly, through its IntelliGrid Consortium and the GridWise program of the U.S. Department of Energy, to create self-healing systems for the grid that may prevent some kinds of outages from cascading. But reliability is not the only challenge–and arguably not even the most important challenge–that the grid faces in the decades ahead.

[div class=attrib]More from theSource here.[end-div]

‘Thirst For Knowledge’ May Be Opium Craving

[div class=attrib]From ScienceDaily:[end-div]

Neuroscientists have proposed a simple explanation for the pleasure of grasping a new concept: The brain is getting its fix.

The “click” of comprehension triggers a biochemical cascade that rewards the brain with a shot of natural opium-like substances, said Irving Biederman of the University of Southern California. He presents his theory in an invited article in the latest issue of American Scientist.

“While you’re trying to understand a difficult theorem, it’s not fun,” said Biederman, professor of neuroscience in the USC College of Letters, Arts and Sciences.

“But once you get it, you just feel fabulous.”

The brain’s craving for a fix motivates humans to maximize the rate at which they absorb knowledge, he said.

“I think we’re exquisitely tuned to this as if we’re junkies, second by second.”

Biederman hypothesized that knowledge addiction has strong evolutionary value because mate selection correlates closely with perceived intelligence.

Only more pressing material needs, such as hunger, can suspend the quest for knowledge, he added.

The same mechanism is involved in the aesthetic experience, Biederman said, providing a neurological explanation for the pleasure we derive from art.

[div class=attrib]More from theSource here.[end-div]

Raiders of the lost dimension

[div class=attrib]From Los Alamos National Laboratory:[end-div]

A team of scientists working at the National High Magnetic Field Laboratory’s Pulsed Field Facility at Los Alamos has uncovered an intriguing phenomenon while studying magnetic waves in barium copper silicate, a 2,500-year-old pigment known as Han purple. The researchers discovered that when they exposed newly grown crystals of the pigment to very high magnetic fields at very low temperatures, the material entered a rarely observed state of matter. At the threshold of that state–called the quantum critical point–the waves actually lose a dimension. That is, the magnetic waves go from a three-dimensional to a two-dimensional pattern. The discovery is yet another step toward understanding the quantum mechanics of the universe.

Writing about the work in today’s issue of the scientific journal Nature, the researchers describe how they discovered that at high magnetic fields (above 23 tesla) and at temperatures between 1 and 3 kelvins (roughly minus 460 degrees Fahrenheit), the magnetic waves in Han purple crystals “exist” in a unique state of matter called a Bose-Einstein condensate (BEC). In the BEC state, magnetic waves propagate simultaneously in all three directions (up-down, forward-backward and left-right). At the quantum critical point, however, the waves stop propagating in the up-down dimension, causing the magnetic ripples to exist in only two dimensions, much the same way as ripples are confined to the surface of a pond.

“The reduced dimensionality really came as a surprise,” said Neil Harrison, an experimental physicist at the Los Alamos Pulsed Field Facility, “just when we thought we had reached an understanding of the quantum nature of its magnetic BEC.”

[div class=attrib]More from theSource here.[end-div]

Dependable Software by Design

[div class=attrib]From Scientific American:[end-div]

Computers fly our airliners and run most of the world’s banking, communications, retail and manufacturing systems. Now powerful analysis tools will at last help software engineers ensure the reliability of their designs.

When it opened 11 years ago, the new Denver International Airport was an architectural marvel, and its high-tech jewel was to be the automated baggage handler. It would autonomously route luggage around 26 miles of conveyors for rapid, seamless delivery to planes and passengers. But software problems dogged the system, delaying the airport’s opening by 16 months and adding hundreds of millions of dollars in cost overruns. Despite years of tweaking, it never ran reliably. Last summer airport managers finally pulled the plug–reverting to traditional manually loaded baggage carts and tugs with human drivers. The mechanized handler’s designer, BAE Automated Systems, was liquidated, and United Airlines, its principal user, slipped into bankruptcy, in part because of the mess.

The high price of poor software design is paid daily by millions of frustrated users. Other notorious cases include costly debacles at the U.S. Internal Revenue Service (a failed $4-billion modernization effort in 1997, followed by an equally troubled $8-billion updating project); the Federal Bureau of Investigation (a $170-million virtual case-file management system was scrapped in 2005); and the Federal Aviation Administration (a lingering and still unsuccessful attempt to renovate its aging air-traffic control system).

[div class=attrib]More from theSource here.[end-div]

The First Few Microseconds

[div class=attrib]From Scientific American:[end-div]

In recent experiments, physicists have replicated conditions of the infant universe–with startling results.

For the past five years, hundreds of scientists have been using a powerful new atom smasher at Brookhaven National Laboratory on Long Island to mimic conditions that existed at the birth of the universe. Called the Relativistic Heavy Ion Collider (RHIC, pronounced “rick”), it smashes together two opposing beams of gold nuclei traveling at nearly the speed of light. The resulting collisions between pairs of these atomic nuclei generate exceedingly hot, dense bursts of matter and energy that simulate what happened during the first few microseconds of the big bang. These brief “mini bangs” give physicists a ringside seat at some of the earliest moments of creation.

During those early moments, matter was an ultrahot, superdense brew of particles called quarks and gluons rushing hither and thither and crashing willy-nilly into one another. A sprinkling of electrons, photons and other light elementary particles seasoned the soup. This mixture had a temperature in the trillions of degrees, more than 100,000 times hotter than the sun’s core.
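
As a quick sanity check on that comparison (the solar-core temperature of roughly 15 million kelvins is a textbook figure, not taken from the article):

```python
# "More than 100,000 times hotter than the sun's core" really does land
# in the trillions of degrees.
sun_core_kelvin = 1.5e7                     # assumed textbook value
mini_bang_kelvin = sun_core_kelvin * 1e5
print(f"{mini_bang_kelvin:.1e} K")          # 1.5e+12 K: trillions of degrees
```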

[div class=attrib]More from theSource here.[end-div]

Computing with Quantum Knots

[div class=attrib]From Scientific American:[end-div]

A machine based on bizarre particles called anyons, which represents a calculation as a set of braids in spacetime, might be a shortcut to practical quantum computation.

Quantum computers promise to perform calculations believed to be impossible for ordinary computers. Some of those calculations are of great real-world importance. For example, certain widely used encryption methods could be cracked given a computer capable of breaking a large number into its component factors within a reasonable length of time. Virtually all encryption methods used for highly sensitive data are vulnerable to one quantum algorithm or another.
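
To see why factoring is the crux, consider a toy RSA-style example. The numbers below are tiny illustrations (real keys use primes hundreds of digits long), but they show that the private key falls out immediately once the public modulus is factored.

```python
# Toy RSA-style key recovery: factoring the public modulus n into p and q
# is exactly what reveals the private exponent d.
p, q = 61, 53                  # the secret prime factors
n, e = p * q, 17               # the public key: modulus 3233, exponent 17

phi = (p - 1) * (q - 1)        # computable only if you can factor n
d = pow(e, -1, phi)            # private exponent (modular inverse, Python 3.8+)

message = 42
ciphertext = pow(message, e, n)
recovered = pow(ciphertext, d, n)
print(ciphertext, recovered == message)   # True: n factored, cipher broken
```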

The extra power of a quantum computer comes about because it operates on information represented as qubits, or quantum bits, instead of bits. An ordinary classical bit can be either a 0 or a 1, and standard microchip architectures enforce that dichotomy rigorously. A qubit, in contrast, can be in a so-called superposition state, which entails proportions of 0 and 1 coexisting. One can think of the possible qubit states as points on a sphere. The north pole is a classical 1, the south pole a 0, and the points in between are all the possible superpositions of 0 and 1 [see “Rules for a Complex Quantum World,” by Michael A. Nielsen; Scientific American, November 2002]. The freedom that qubits have to roam across the entire sphere helps to give quantum computers their unique capabilities.
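
The sphere picture has a compact standard form, shown below. The parameterization is textbook quantum mechanics rather than anything specific to this article, with the poles assigned as in the passage (north pole 1, south pole 0):

```latex
% A qubit state as a point on a sphere: theta = 0 (north pole) gives the
% classical 1, theta = pi (south pole) gives the classical 0, and every
% point in between is a superposition of the two.
\[
  \lvert\psi\rangle \;=\; \cos\!\frac{\theta}{2}\,\lvert 1\rangle
  \;+\; e^{i\varphi}\,\sin\!\frac{\theta}{2}\,\lvert 0\rangle ,
  \qquad 0 \le \theta \le \pi,\quad 0 \le \varphi < 2\pi .
\]
```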

[div class=attrib]More from theSource here.[end-div]

The Limits of Reason

[div class=attrib]From Scientific American:[end-div]

Ideas on complexity and randomness originally suggested by Gottfried W. Leibniz in 1686, combined with modern information theory, imply that there can never be a “theory of everything” for all of mathematics.

In 1956 Scientific American published an article by Ernest Nagel and James R. Newman entitled “Gödel’s Proof.” Two years later the writers published a book with the same title–a wonderful work that is still in print. I was a child, not even a teenager, and I was obsessed by this little book. I remember the thrill of discovering it in the New York Public Library. I used to carry it around with me and try to explain it to other children.

It fascinated me because Kurt Gödel used mathematics to show that mathematics itself has limitations. Gödel refuted the position of David Hilbert, who about a century ago declared that there was a theory of everything for math, a finite set of principles from which one could mindlessly deduce all mathematical truths by tediously following the rules of symbolic logic. But Gödel demonstrated that mathematics contains true statements that cannot be proved that way. His result is based on two self-referential paradoxes: “This statement is false” and “This statement is unprovable.” (If the latter were provable, mathematics would prove a falsehood; if it is unprovable, then it is a true statement that lies beyond proof.)

[div class=attrib]More from theSource here.[end-div]

Unlocking the Secrets of Longevity Genes

[div class=attrib]From Scientific American:[end-div]

A handful of genes that control the body’s defenses during hard times can also dramatically improve health and prolong life in diverse organisms. Understanding how they work may reveal the keys to extending human life span while banishing diseases of old age.

You can assume quite a bit about the state of a used car just from its mileage and model year. The wear and tear of heavy driving and the passage of time will have taken an inevitable toll. The same appears to be true of aging in people, but the analogy is flawed because of a crucial difference between inanimate machines and living creatures: deterioration is not inexorable in biological systems, which can respond to their environments and use their own energy to defend and repair themselves.

At one time, scientists believed aging to be not just deterioration but an active continuation of an organism’s genetically programmed development. Once an individual achieved maturity, “aging genes” began to direct its progress toward the grave. This idea has been discredited, and conventional wisdom now holds that aging really is just wearing out over time because the body’s normal maintenance and repair mechanisms simply wane. Evolutionary natural selection, the logic goes, has no reason to keep them working once an organism has passed its reproductive age.

[div class=attrib]More from theSource here.[end-div]