I Didn’t Sin—It Was My Brain

[div class=attrib]From Discover:[end-div]

Why does being bad feel so good? Pride, envy, greed, wrath, lust, gluttony, and sloth: It might sound like just one more episode of The Real Housewives of New Jersey, but this enduring formulation of the worst of human failures has inspired great art for thousands of years. In the 14th century Dante depicted ghoulish evildoers suffering for eternity in his masterpiece, The Divine Comedy. Medieval muralists put the fear of God into churchgoers with lurid scenarios of demons and devils. More recently George Balanchine choreographed their dance.

Today these transgressions are inspiring great science, too. New research is explaining where these behaviors come from and helping us understand why we continue to engage in them—and often celebrate them—even as we declare them to be evil. Techniques such as functional magnetic resonance imaging (fMRI), which highlights metabolically active areas of the brain, now allow neuroscientists to probe the biology behind bad intentions.

The most enjoyable sins engage the brain’s reward circuitry, including evolutionarily ancient regions such as the nucleus accumbens and hypothalamus; located deep in the brain, they provide us such fundamental feelings as pain, pleasure, reward, and punishment. More disagreeable forms of sin such as wrath and envy enlist the dorsal anterior cingulate cortex (dACC). This area, buried in the front of the brain, is often called the brain’s “conflict detector,” coming online when you are confronted with contradictory information, or even simply when you feel pain. The more social sins (pride, envy, lust, wrath) recruit the medial prefrontal cortex (mPFC), brain terrain just behind the forehead, which helps shape the awareness of self.

No understanding of temptation is complete without considering restraint, and neuroscience has begun to illuminate this process as well. As we struggle to resist, inhibitory cognitive control networks involving the front of the brain activate to squelch the impulse by tempering its appeal. Meanwhile, research suggests that regions such as the caudate—partly responsible for body movement and coordination—suppress the physical impulse. It seems to be the same whether you feel a spark of lechery, a surge of jealousy, or the sudden desire to pop somebody in the mouth: The two sides battle it out, the devilish reward system versus the angelic brain regions that hold us in check.

It might be too strong to claim that evolution has wired us for sin, but excessive indulgence in lust or greed could certainly put you ahead of your competitors. “Many of these sins you could think of as virtues taken to the extreme,” says Adam Safron, a research consultant at Northwestern University whose neuroimaging studies focus on sexual behavior. “From the perspective of natural selection, you want the organism to eat, to procreate, so you make them rewarding. But there’s a potential for that process to go beyond the bounds.”

[div class=attrib]More from theSource here[end-div]

Stephen Hawking Is Making His Comeback

[div class=attrib]From Discover:[end-div]

As an undergraduate at Oxford University, Stephen William Hawking was a wise guy, a provocateur. He was popular, a lively coxswain for the crew team. Physics came easy. He slept through lectures, seldom studied, and criticized his professors. That all changed when he started graduate school at Cambridge in 1962 and subsequently learned that he had only a few years to live.

The symptoms first appeared while Hawking was still at Oxford. He could not row a scull as easily as he once had; he took a few bad, clumsy falls. A college doctor told him not to drink so much beer. By 1963 his condition had gotten bad enough that his mother brought him to a hospital in London, where he received the devastating diagnosis: motor neuron disease, as ALS is called in the United Kingdom. The prognosis was grim and final: rapid wasting of nerves and muscles, near-total paralysis, and death from respiratory failure in three to five years.

Not surprisingly, Hawking grew depressed, seeking solace in the music of Wagner (contrary to some media reports, however, he says he did not go on a drinking binge). And yet he did not disengage from life. Later in 1963 he met Jane Wilde, a student of medieval poetry at the University of London. They fell in love and resolved to make the most of what they both assumed would be a tragically short relationship. In 1965 they married, and Hawking returned to physics with newfound energy.

Also that year, Hawking had an encounter that led to his first major contribution to his field. The occasion was a talk at King's College in London given by Roger Penrose, an eminent mathematician then at Birkbeck College. Penrose had just proved something remarkable and, for physicists, disturbing: Black holes, the light-trapping chasms in space-time that form in the aftermath of the collapse of massive stars, must all contain singularities—points where space, time, and the very laws of physics fall apart.

Before Penrose’s work, many physicists had regarded singularities as mere curiosities, permitted by Einstein’s theory of general relativity but unlikely to exist. The standard assumption was that a singularity could form only if a perfectly spherical star collapsed with perfect symmetry, the kind of ideal conditions that never occur in the real world. Penrose proved otherwise. He found that any star massive enough to form a black hole upon its death must create a singularity. This realization meant that the laws of physics could not be used to describe everything in the universe; the singularity was a cosmic abyss.

At a subsequent lecture, Hawking grilled Penrose on his ideas. “He asked some awkward questions,” Penrose says. “He was very much on the ball. I had probably been a bit vague in one of my statements, and he was sharpening it up a bit. I was a little alarmed that he noticed something that I had glossed over, and that he was able to spot it so quickly.”

Hawking had just renewed his search for a subject for his Ph.D. thesis, a project he had abandoned after receiving the ALS diagnosis. His condition had stabilized somewhat, and his future no longer looked completely bleak. Now he had his subject: He wanted to apply Penrose’s approach to the cosmos at large.

Physicists have known since 1929 that the universe is expanding. Hawking reasoned that if the history of the universe could be run backward, so that the universe was shrinking instead of expanding, it would behave (mathematically at least) like a collapsing star, the same sort of phenomenon Penrose had analyzed. Hawking’s work was timely. In 1965, physicists working at Bell Labs in New Jersey discovered the cosmic microwave background radiation, the first direct evidence that the universe began with the Big Bang. But was the Big Bang a singularity, or was it a concentrated, hot ball of energy—awesome and mind-bending, but still describable by the laws of physics?

[div class=attrib]More from theSource here.[end-div]

How Much of Your Memory Is True?

[div class=attrib]From Discover:[end-div]

Rita Magil was driving down a Montreal boulevard one sunny morning in 2002 when a car came blasting through a red light straight toward her. “I slammed the brakes, but I knew it was too late,” she says. “I thought I was going to die.” The oncoming car smashed into hers, pushing her off the road and into a building with large cement pillars in front. A pillar tore through the car, stopping only about a foot from her face. She was trapped in the crumpled vehicle, but to her shock, she was still alive.

The accident left Magil with two broken ribs and a broken collarbone. It also left her with post-traumatic stress disorder (PTSD) and a desperate wish to forget. Long after her bones healed, Magil was plagued by the memory of the cement barriers looming toward her. “I would be doing regular things—cooking something, shopping, whatever—and the image would just come into my mind from nowhere,” she says. Her heart would pound; she would start to sweat and feel jumpy all over. It felt visceral and real, like something that was happening at that very moment.

Most people who survive accidents or attacks never develop PTSD. But for some, the event forges a memory that is pathologically potent, erupting into consciousness again and again. “PTSD really can be characterized as a disorder of memory,” says McGill University psychologist Alain Brunet, who studies and treats psychological trauma. “It’s about what you wish to forget and what you cannot forget.” This kind of memory is not misty and watercolored. It is relentless.

More than a year after her accident, Magil saw Brunet’s ad for an experimental treatment for PTSD, and she volunteered. She took a low dose of a common blood-pressure drug, propranolol, which reduces activity in the amygdala, a part of the brain that processes emotions. Then she listened to a taped re-creation of her car accident. She had relived that day in her mind a thousand times. The difference this time was that the drug broke the link between her factual memory and her emotional memory. Propranolol blocks the action of adrenaline, so it prevented her from tensing up and getting anxious. By having Magil think about the accident while the drug was in her body, Brunet hoped to permanently change how she remembered the crash. It worked. She did not forget the accident but was actively able to reshape her memory of the event, stripping away the terror while leaving the facts behind.

Brunet’s experiment emerges from one of the most exciting and controversial recent findings in neuroscience: that we alter our memories just by remembering them. Karim Nader of McGill—the scientist who made this discovery—hopes it means that people with PTSD can cure themselves by editing their memories. Altering remembered thoughts might also liberate people imprisoned by anxiety, obsessive-compulsive disorder, even addiction. “There is no such thing as a pharmacological cure in psychiatry,” Brunet says. “But we may be on the verge of changing that.”

[div class=attrib]More from theSource here[end-div]

Building an Interstate Highway System for Energy

[div class=attrib]From Discover:[end-div]

President Obama plans to spend billions building it. General Electric is already running slick ads touting the technology behind it. And Greenpeace declares that it is a great idea. But what exactly is a “smart grid”? According to one big-picture description, it is much of what today’s power grid is not, and more of what it must become if the United States is to replace carbon-belching, coal-fired power with renewable energy generated from sun and wind.

Today’s power grids are designed for local delivery, linking customers in a given city or region to power plants relatively nearby. But local grids are ill-suited to distributing energy from the alternative sources of tomorrow. North America’s strongest winds, most intense sunlight, and hottest geothermal springs are largely concentrated in remote regions hundreds or thousands of miles from the big cities that need electricity most. “Half of the population in the United States lives within 100 miles of the coasts, but most of the wind resources lie between North Dakota and West Texas,” says Michael Heyeck, senior vice president for transmission at the utility giant American Electric Power. Worse, those winds constantly ebb and flow, creating a variable supply.

Power engineers are already sketching the outlines of the next-generation electrical grid that will keep our homes and factories humming with clean—but fluctuating—renewable energy. The idea is to expand the grid from the top down by adding thousands of miles of robust new transmission lines, while enhancing communication from the bottom up with electronics enabling millions of homes and businesses to optimize their energy use.

The Grid We Have
When electricity leaves a power plant today, it is shuttled from place to place over high-voltage lines, those cables on steel pylons that cut across landscapes and run virtually contiguously from coast to coast. Before it reaches your home or office, the voltage is reduced incrementally by passing through one or more intermediate points, called substations. The substations process the power until it can flow to outlets in homes and businesses at the safe level of 110 volts.
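As a rough illustration of the incremental step-down the paragraph describes, here is a minimal sketch of a chain of ideal transformer stages. The specific intermediate voltage levels are assumptions chosen for illustration (they are not given in the article), and real utilities use a variety of levels.

```python
# Minimal sketch: stepping voltage down through a chain of ideal
# transformer stages. An ideal transformer scales voltage by its
# turns ratio: V_out = V_in * (N_secondary / N_primary).
# The voltage levels below are illustrative assumptions, not figures
# from the article.

def step_down(v_in: float, turns_ratio: float) -> float:
    """Voltage after one ideal step-down stage."""
    return v_in * turns_ratio

# Hypothetical chain: 345 kV transmission -> 69 kV subtransmission
# -> 13.8 kV local distribution -> 110 V at the wall outlet.
stages = [
    ("subtransmission", 69_000 / 345_000),
    ("distribution", 13_800 / 69_000),
    ("service", 110 / 13_800),
]

voltage = 345_000.0
for name, ratio in stages:
    voltage = step_down(voltage, ratio)
    print(f"{name}: {voltage:,.0f} V")
```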

The vast network of power lines delivering the juice may be interconnected, but pushing electricity all the way from one coast to the other is unthinkable with the present technology. That is because the network is an agglomeration of local systems patched together to exchange relatively modest quantities of surplus power. In fact, these systems form three distinct grids in the United States: the Eastern, Western, and Texas interconnects. Only a handful of transfer stations can move power between the different grids.

[div class=attrib]More from theSource here.[end-div]

A Scientist’s Guide to Finding Alien Life: Where, When, and in What Universe

[div class=attrib]From Discover:[end-div]

Things were not looking so good for alien life in 1976, after the Viking I spacecraft landed on Mars, stretched out its robotic arm, and gathered up a fist-size pile of red dirt for chemical testing. Results from the probe’s built-in lab were anything but encouraging. There were no clear signs of biological activity, and the pictures Viking beamed back showed a bleak, frozen desert world, backing up that grim assessment. It appeared that our best hope for finding life on another planet had blown away like dust in a Martian windstorm.

What a difference 33 years makes. Back then, Mars seemed the only remotely plausible place beyond Earth where biology could have taken root. Today our conception of life in the universe is being turned on its head as scientists are finding a whole lot of inviting real estate out there. As a result, they are beginning to think not in terms of single places to look for life but in terms of “habitable zones”—maps of the myriad places where living things could conceivably thrive beyond Earth. Such abodes of life may lie on other planets and moons throughout our galaxy, throughout the universe, and even beyond.

The pace of progress is staggering. Just last November new studies of Saturn’s moon Enceladus strengthened the case for a reservoir of warm water buried beneath its craggy surface. Nobody had ever thought of this roughly 300-mile-wide icy satellite as anything special—until the Cassini spacecraft witnessed geysers of water vapor blowing out from its surface. Now Enceladus joins Jupiter’s moon Europa on the growing list of unlikely solar system locales that seem to harbor liquid water and, in principle, the ingredients for life.

Astronomers are also closing in on a possibly huge number of Earth-like worlds around other stars. Since the mid-1990s they have already identified roughly 340 extrasolar planets. Most of these are massive gaseous bodies, but the latest searches are turning up ever-smaller worlds. Two months ago the European satellite Corot spotted an extrasolar planet less than twice the diameter of Earth (see “The Inspiring Boom in Super-Earths”), and NASA’s new Kepler probe is poised to start searching for genuine analogues of Earth later this year. Meanwhile, recent discoveries show that microorganisms are much hardier than we thought, meaning that even planets that are not terribly Earth-like might still be suited to biology.

Together, these findings indicate that Mars was only the first step of the search, not the last. The habitable zones of the cosmos are vast, it seems, and they may be teeming with life.

[div class=attrib]More from theSource here.[end-div]

The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

[div class=attrib]From Discover:[end-div]

The farther we peer into space, the more we realize that the nature of the universe cannot be understood fully by inspecting spiral galaxies or watching distant supernovas. It lies deeper. It involves our very selves.

This insight snapped into focus one day while one of us (Lanza) was walking through the woods. Looking up, he saw a huge golden orb web spider tethered to the overhead boughs. There the creature sat on a single thread, reaching out across its web to detect the vibrations of a trapped insect struggling to escape. The spider surveyed its universe, but everything beyond that gossamer pinwheel was incomprehensible. The human observer seemed as far-off to the spider as telescopic objects seem to us. Yet there was something kindred: We humans, too, lie at the heart of a great web of space and time whose threads are connected according to laws that dwell in our minds.

Is the web possible without the spider? Are space and time physical objects that would continue to exist even if living creatures were removed from the scene?

Figuring out the nature of the real world has obsessed scientists and philosophers for millennia. Three hundred years ago, the Irish empiricist George Berkeley contributed a particularly prescient observation: The only things we can perceive are our perceptions. In other words, consciousness is the matrix upon which the cosmos is apprehended. Color, sound, temperature, and the like exist only as perceptions in our head, not as absolute essences. In the broadest sense, we cannot be sure of an outside universe at all.

For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

[div class=attrib]More from theSource here.[end-div]

L’Aquila: The other casualty

[div class=attrib]18th-century Church of Santa Maria del Suffragio. Image courtesy of The New York Times.[end-div]

The earthquake in central Italy last week zeroed in on the beautiful medieval hill town of L’Aquila. It claimed the lives of 294 young and old, injured several thousand more, and made tens of thousands homeless. This is a heart-wrenching human tragedy. It’s also a cultural one. The quake razed centuries of L’Aquila’s historical buildings, broke the foundations of many of the town’s churches and public spaces, destroyed countless cultural artifacts, and forever buried much of the town’s irreplaceable art under tons of twisted iron and fractured stone.

Like many small and lesser known towns in Italy, L’Aquila did not boast a roster of works by “a-list” artists on its walls, ceilings and piazzas; no Michelangelos or Da Vincis here, no works by Giotto or Raphael. And yet, the cultural loss is no less significant, for the quake destroyed much of the common art that the citizens of L’Aquila shared as a social bond. It’s the everyday art that they passed on their way to home or school or work; the fountains in the piazzas, the ornate porticos, the painted building facades, the hand-carved doors, the marble statues on street corners, the frescoes and paintings by local artists hanging on the ordinary walls. It’s this everyday art – the art that surrounded and nourished the citizens of L’Aquila – that is gone.

New York Times columnist Michael Kimmelman put it this way in his April 11, 2009, article:

Italy is not like America. Art isn’t reduced here to a litany of obscene auction prices or lamentations over the bursting bubble of shameless excess. It’s a matter of daily life, linking home and history. Italians don’t visit museums much, truth be told, because they already live in them and can’t live without them. The art world might retrieve a useful lesson from the rubble.

I don’t fully agree with Mr. Kimmelman. There’s plenty of excess and pretentiousness in the salons of Paris, London and even Beijing and Mumbai, not just the serious art houses of New York. And yet, he has accurately observed the plight of L’Aquila. How often have you seen people confronted with the aftermath of a natural (or manmade) tragedy sifting through the remains, looking for a precious artifact – a sentimental photo, a memorable painting, a meaningful gift? These tragic situations often make people realize what is truly precious (aside from life and family and friends), and it’s not the plasma TV.

The Strange Forests that Drink—and Eat—Fog

[div class=attrib]From Discover:[end-div]

On the rugged roadway approaching Fray Jorge National Park in north-central Chile, you are surrounded by desert. This area receives less than six inches of rain a year, and the dry terrain is more suggestive of the badlands of the American Southwest than of the lush landscapes of the Amazon. Yet as the road climbs, there is an improbable shift. Perched atop the coastal mountains here, some 1,500 to 2,000 feet above the level of the nearby Pacific Ocean, are patches of vibrant rain forest covering up to 30 acres apiece. Trees stretch as much as 100 feet into the sky, with ferns, mosses, and bromeliads adorning their canopies. Then comes a second twist: As you leave your car and follow a rising path from the shrub into the forest, it suddenly starts to rain. This is not rain from clouds in the sky above, but fog dripping from the tree canopy. These trees are so efficient at snatching moisture out of the air that the fog provides them with three-quarters of all the water they need.

Understanding these pocket rain forests and how they sustain themselves in the middle of a rugged desert has become the life’s work of a small cadre of scientists who are only now beginning to fully appreciate Fray Jorge’s third and deepest surprise: The trees that grow here do more than just drink the fog. They eat it too.

Fray Jorge lies at the north end of a vast rain forest belt that stretches southward some 600 miles to the tip of Chile. In the more southerly regions of this zone, the forest is wetter, thicker, and more contiguous, but it still depends on fog to survive dry summer conditions. Kathleen C. Weathers, an ecosystem scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York, has been studying the effects of fog on forest ecosystems for 25 years, and she still cannot quite believe how it works. “One step inside a fog forest and it’s clear that you’ve entered a remarkable ecosystem,” she says. “The ways in which trees, leaves, mosses, and bromeliads have adapted to harvest tiny droplets of water that hang in the atmosphere is unparalleled.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Juan J. Armesto/Foundation Senda Darwin Archive[end-div]

CERN celebrates 20th anniversary of World Wide Web

theDiagonal doesn’t normally post “newsy” items, but we are making an exception in this case for two reasons: first, the “web” wasn’t around in 1989, so we wouldn’t have been able to post a news release on our blog announcing its birth; second, in 1989 Tim Berners-Lee’s then manager greeted his proposal with a “Vague, but exciting” annotation, so without the benefit of the hindsight we now have, and lacking the foresight we so desire, we might just have dismissed it. The rest, as they say, “is history.”

[div class=attrib]From Interactions.org:[end-div]

Web inventor Tim Berners-Lee today returned to the birthplace of his brainchild, 20 years after submitting his paper ‘Information Management: A Proposal’ to his manager Mike Sendall in March 1989. By writing the words ‘Vague, but exciting’ on the document’s cover, and giving Berners-Lee the go-ahead to continue, Sendall signed into existence the information revolution of our time: the World Wide Web. In September the following year, Berners-Lee took delivery of a computer called a NeXT cube, and by December 1990 the Web was up and running, albeit between just a couple of computers at CERN*.

Today’s event takes a look back at some of the early history, and pre-history, of the World Wide Web at CERN, includes a keynote speech from Tim Berners-Lee, and concludes with a series of talks from some of today’s Web pioneers.

“It’s a pleasure to be back at CERN today,” said Berners-Lee. “CERN has come a long way since 1989, and so has the Web, but its roots will always be here.”

The World Wide Web is undoubtedly the most well known spin-off from CERN, but it’s not the only one. Technologies developed at CERN have found applications in domains as varied as solar energy collection and medical imaging.

“When CERN scientists find a technological hurdle in the way of their ambitions, they have a tendency to solve it,” said CERN Director General Rolf Heuer. “I’m pleased to say that the spirit of innovation that allowed Tim Berners-Lee to invent the Web at CERN, and allowed CERN to nurture it, is alive and well today.”

[div class=attrib]More from theSource here.[end-div]

Evolution by Intelligent Design

[div class=attrib]From Discover:[end-div]

“There are no shortcuts in evolution,” famed Supreme Court justice Louis Brandeis once said. He might have reconsidered those words if he could have foreseen the coming revolution in biotechnology, including the ability to alter genes and manipulate stem cells. These breakthroughs could bring on an age of directed reproduction and evolution in which humans will bypass the incremental process of natural selection and set off on a high-speed genetic course of their own. Here are some of the latest and greatest advances.

Embryos From the Palm of Your Hand
In as little as five years, scientists may be able to create sperm and egg cells from any cell in the body, enabling infertile couples, gay couples, or sterile people to reproduce. The technique could also enable one person to provide both sperm and egg for an offspring—an act of “ultimate incest,” according to a report from the Hinxton Group, an international consortium of scientists and bioethicists whose members include such heavyweights as Ruth Faden, director of the Johns Hopkins Berman Institute of Bioethics, and Peter J. Donovan, a professor of biochemistry at the University of California at Irvine.

The Hinxton Group’s prediction comes in the wake of recent news that scientists at the University of Wisconsin and Kyoto University in Japan have transformed adult human skin cells into pluripotent stem cells, the powerhouse cells that can self-replicate (perhaps indefinitely) and develop into almost any kind of cell in the body. In evolutionary terms, the ability to change one type of cell into others—including a sperm or egg cell, or even an embryo—means that humans can now wrest control of reproduction away from nature, notes Robert Lanza, a scientist at Advanced Cell Technology in Massachusetts. “With this breakthrough we now have a working technology whereby anyone can pass on their genes to a child by using just a few skin cells,” he says.

[div class=attrib]More from theSource here.[end-div]

Is Quantum Mechanics Controlling Your Thoughts?

[div class=attrib]From Discover:[end-div]

Graham Fleming sits down at an L-shaped lab bench, occupying a footprint about the size of two parking spaces. Alongside him, a couple of off-the-shelf lasers spit out pulses of light just millionths of a billionth of a second long. After snaking through a jagged path of mirrors and lenses, these minuscule flashes disappear into a smoky black box containing proteins from green sulfur bacteria, which ordinarily obtain their energy and nourishment from the sun. Inside the black box, optics manufactured to billionths-of-a-meter precision detect something extraordinary: Within the bacterial proteins, dancing electrons make seemingly impossible leaps and appear to inhabit multiple places at once.

Peering deep into these proteins, Fleming and his colleagues at the University of California at Berkeley and at Washington University in St. Louis have discovered the driving engine of a key step in photosynthesis, the process by which plants and some microorganisms convert water, carbon dioxide, and sunlight into oxygen and carbohydrates. More efficient by far in its ability to convert energy than any operation devised by man, this cascade helps drive almost all life on earth. Remarkably, photosynthesis appears to derive its ferocious efficiency not from the familiar physical laws that govern the visible world but from the seemingly exotic rules of quantum mechanics, the physics of the subatomic world. Somehow, in every green plant or photosynthetic bacterium, the two disparate realms of physics not only meet but mesh harmoniously. Welcome to the strange new world of quantum biology.

On the face of things, quantum mechanics and the biological sciences do not mix. Biology focuses on larger-scale processes, from molecular interactions between proteins and DNA up to the behavior of organisms as a whole; quantum mechanics describes the often-strange nature of electrons, protons, muons, and quarks—the smallest of the small. Many events in biology are considered straightforward, with one reaction begetting another in a linear, predictable way. By contrast, quantum mechanics is fuzzy because when the world is observed at the subatomic scale, it is apparent that particles are also waves: A dancing electron is both a tangible nugget and an oscillation of energy. (Larger objects also exist in particle and wave form, but the effect is not noticeable in the macroscopic world.)

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Dylan Burnette/Olympus Bioscapes Imaging Competition.[end-div]

Invisibility Becomes More than Just a Fantasy

[div class=attrib]From Discover:[end-div]

Two years ago a team of engineers amazed the world (Harry Potter fans in particular) by developing the technology needed to make an invisibility cloak. Now researchers are creating laboratory-engineered wonder materials that can conceal objects from almost anything that travels as a wave. That includes light and sound and—at the subatomic level—matter itself. And lest you think that cloaking applies only to the intangible world, 2008 even brought a plan for using cloaking techniques to protect shorelines from giant incoming waves.

Engineer Xiang Zhang, whose University of California at Berkeley lab is behind much of this work, says, “We can design materials that have properties that never exist in nature.”

These engineered substances, known as metamaterials, get their unusual properties from their size and shape, not their chemistry. Because of the way they are composed, they can shuffle waves—be they of light, sound, or water—away from an object. To cloak something, concentric rings of the metamaterial are placed around the object to be concealed. Tiny structures—like loops or cylinders—within the rings divert the incoming waves around the object, preventing both reflection and absorption. The waves meet up again on the other side, appearing just as they would if nothing were there.

The first invisibility cloak, designed by engineers at Duke University and Imperial College London, worked for only a narrow band of microwaves. Xiang and his colleagues created metamaterials that can bend visible light backward—a much greater challenge because visible light waves are so small, under 700 nanometers wide. That meant the engineers had to devise cloaking components only tens of nanometers apart.

[div class=attrib]More from theSource here.[end-div]

Why I Blog

[div class=attrib]By Andrew Sullivan for The Atlantic[end-div]

The word blog is a conflation of two words: Web and log. It contains in its four letters a concise and accurate self-description: it is a log of thoughts and writing posted publicly on the World Wide Web. In the monosyllabic vernacular of the Internet, Web log soon became the word blog.

This form of instant and global self-publishing, made possible by technology widely available only for the past decade or so, allows for no retroactive editing (apart from fixing minor typos or small glitches) and removes from the act of writing any considered or lengthy review. It is the spontaneous expression of instant thought—impermanent beyond even the ephemera of daily journalism. It is accountable in immediate and unavoidable ways to readers and other bloggers, and linked via hypertext to continuously multiplying references and sources. Unlike any single piece of print journalism, its borders are extremely porous and its truth inherently transitory. The consequences of this for the act of writing are still sinking in.

A ship’s log owes its name to a small wooden board, often weighted with lead, that was for centuries attached to a line and thrown over the stern. The weight of the log would keep it in the same place in the water, like a provisional anchor, while the ship moved away. By measuring the length of line used up in a set period of time, mariners could calculate the speed of their journey (the rope itself was marked by equidistant “knots” for easy measurement). As a ship’s voyage progressed, the course came to be marked down in a book that was called a log.
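The arithmetic the passage describes is simple to sketch: the length of line run out over a fixed interval gives the ship's speed, and one knot is one nautical mile (about 6,076 feet) per hour. A minimal sketch, with illustrative sample numbers that are not taken from the essay:

```python
# Minimal sketch of the ship's-log calculation: line paid out over a
# fixed interval, converted to nautical miles per hour (knots).

FEET_PER_NAUTICAL_MILE = 6076.12  # standard conversion

def speed_in_knots(line_paid_out_ft: float, interval_s: float) -> float:
    """Speed implied by paying out line_paid_out_ft feet of line in interval_s seconds."""
    nautical_miles = line_paid_out_ft / FEET_PER_NAUTICAL_MILE
    hours = interval_s / 3600.0
    return nautical_miles / hours

# Illustrative numbers: about 236 feet of line run out while a
# 28-second glass empties works out to roughly 5 knots.
print(f"{speed_in_knots(236, 28):.1f} knots")
```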

In journeys at sea that took place before radio or radar or satellites or sonar, these logs were an indispensable source for recording what actually happened. They helped navigators surmise where they were and how far they had traveled and how much longer they had to stay at sea. They provided accountability to a ship’s owners and traders. They were designed to be as immune to faking as possible. Away from land, there was usually no reliable corroboration of events apart from the crew’s own account in the middle of an expanse of blue and gray and green; and in long journeys, memories always blur and facts disperse. A log provided as accurate an account as could be gleaned in real time.

As you read a log, you have the curious sense of moving backward in time as you move forward in pages—the opposite of a book. As you piece together a narrative that was never intended as one, it seems—and is—more truthful. Logs, in this sense, were a form of human self-correction. They amended for hindsight, for the ways in which human beings order and tidy and construct the story of their lives as they look back on them. Logs require a letting-go of narrative because they do not allow for a knowledge of the ending. So they have plot as well as dramatic irony—the reader will know the ending before the writer did.

[div class=attrib]More from theSource here.[end-div]

The LHC Begins Its Search for the “God Particle”

[div class=attrib]From Discover:[end-div]

The most astonishing thing about the Large Hadron Collider (LHC), the ring-shaped particle accelerator that revved up for the first time on September 10 in a tunnel near Geneva, is that it ever got built. Twenty-six nations pitched in more than $8 billion to fund the project. Then CERN—the European Organization for Nuclear Research—enlisted the help of 5,000 scientists and engineers to construct a machine of unprecedented size, complexity, and ambition.

Measuring almost 17 miles in circumference, the LHC uses 9,300 superconducting magnets, cooled by liquid helium to 1.9 kelvins above absolute zero (–271.3°C), to accelerate two streams of protons in opposite directions. It has detectors as big as apartment buildings to find out what happens when these protons cross paths and collide at 99.999999 percent of the speed of light. Yet roughly the same percentage of the human race has no idea what the LHC’s purpose is. Might it destroy the earth by spawning tiny, ravenous black holes? (Not a chance, physicists say. Collisions more energetic than the ones at the LHC happen naturally all the time, and we are still here.)
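To get a feel for what “99.999999 percent of the speed of light” means, here is a back-of-envelope sketch; the proton rest energy of roughly 0.938 GeV is a standard figure rather than one quoted in the article, and the result is only an order-of-magnitude illustration of the beam energy.

```python
import math

# Speed quoted in the article: 99.999999 percent of the speed of light.
beta = 0.99999999

# Lorentz factor gamma = 1 / sqrt(1 - beta^2), written as
# (1 - beta) * (1 + beta) to keep precision when beta is this close to 1.
gamma = 1.0 / math.sqrt((1.0 - beta) * (1.0 + beta))

# Total energy per proton is roughly gamma times the proton rest energy
# (about 0.938 GeV, a standard value not taken from the article).
proton_rest_energy_gev = 0.938
energy_tev = gamma * proton_rest_energy_gev / 1000.0

print(f"Lorentz factor: {gamma:,.0f}")              # roughly 7,000
print(f"Energy per proton: ~{energy_tev:.1f} TeV")  # a few TeV
```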

In fact, the goal of the LHC is at once simple and grandiose: It was created to discover new particles. One of the most sought of these is the Higgs boson, also known as the God particle because, according to current theory, it endowed all other particles with mass. Or perhaps the LHC will find “supersymmetric” particles, exotic partners to known particles like electrons and quarks. Such a discovery would be a big step toward developing a unified description of the four fundamental forces—the “theory of everything” that would explain all the basic interactions in the universe. As a bonus, some of those supersymmetric particles might turn out to be dark matter, the unseen stuff that seems to hold galaxies together.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib] Image courtesy of Maximillien Brice/CERN.[end-div]

What is art? The answer, from a little bird?

I’ve been pondering a concrete answer to this question, and others like it for some time. I do wonder “what is art?” and “what is great art?” and “what distinguishes fine art from its non-fine cousins?” and “what makes some art better than other art?”

In formulating my answers to these questions I’ve been looking inward and searching outward. I’ve been digesting the musings of our great philosophers and eminent scholars and authors. I’m close to penning some blog-worthy articles that crystallize my current thinking on the subject, but I’m not quite ready. Not yet. So, in the meantime you and I will have to make do with deep thoughts on the subject of art from some of my friends…

[youtube]pDo_vs3Aip4[/youtube]

The Vogels. Or, how to become a world class art collector on a postal clerk’s salary

I’m missing Art Basel | Miami this year. Last year’s event and surrounding shows displayed so much contemporary (and some modern) art, from so many artists and galleries that my head was buzzing for days afterward. This year I have our art251 gallery to co-run, so I’ve been visiting Art Basel virtually – reading the press releases, following the exhibitors and tuning in to the podcasts and vids, using the great tubes of the internet.

The best story by far to emerge this year from Art Basel | Miami is the continuing odyssey of Herb and Dorothy Vogel, their passion for contemporary art and their outstanding collection. On December 5, the documentary “Herb and Dorothy” was screened at Art Basel’s Art Loves Film night. And so their real-life art fairytale goes something like this…

[youtube]fMuYV_qvyEk[/youtube]

Over the last 40-plus years they have amassed a cutting-edge, world-class collection of contemporary art. In all they have collected around 4,000 works. Over time they have crammed art into every spare inch of space inside their one-bedroom Manhattan apartment. In 1992 they gave around 2,000 important pieces – paintings, drawings and sculptures – to the National Gallery of Art, in Washington, D.C. Then, in April of this year, the National Gallery announced that an additional 2,500 of the Vogels’ artworks would go to museums across the country: fifty works for fifty states. The National Gallery simply didn’t have enough space to house the Vogels’ immense collection.

So, why is this story so compelling?

Well, it’s compelling because they are just like you and me. They are not super-rich, they have no condo in Aspen, nor do they moor a yacht in Monte Carlo. They’re not hedge fund managers. They didn’t make a fortune before the dot-com bubble burst.

Herb Vogel, 86, is a retired postal clerk and Dorothy Vogel, 76, a retired librarian. They started collecting art in the 1960s and continue to this day. Their plan was simple and guided by two rules: the art had to be affordable, and small enough to fit in their apartment. Early on they decided to use Herb’s income for buying art and Dorothy’s for living expenses. Though now retired, they still follow the plan. They collect art because they love art and love finding new art. In Dorothy’s words,

“We didn’t buy this art to make money… We did it to enjoy the art. And you know, it gives you a nice feeling to actually own it, and have it about you. … We started buying art for ourselves, in the 1960s, and from the beginning we chose carefully.”

More telling is Dorothy’s view of the art world, and the New York art scene:

“We never really got close to other people who collect… Most collectors have a lot of money, and they don’t go about their collecting in quite the same way. My husband had wanted to be an artist, and I learned from him. We were living vicariously through the work of every artist we bought. At some point, we realized that collecting this art was a sort of creative act. It became our art, in more ways than one. … I enjoyed the search, I guess. The looking and the finding. When you go to a store, and you’re searching for your size, don’t you get satisfaction when you find it?”

And Herb adds the final words:

“The art itself.”

So, within their modest means and limitations they have proved to be visionaries; many of the artists they supported early on have since become world-renowned. And they have taken their rightful place among the great art collectors of the world, such as Getty and Rockefeller, and Broad and Saatchi. The Vogels used their limitations to their advantage – helping them focus, rather than being a hindrance. Above all, they used their eyes to find and collect great art, not their ears.

Why has manga become a global cultural product?

[div class=attrib]From Eurozine:[end-div]

In the West, manga has become a key part of the cultural accompaniment to economic globalization. No mere side-effect of Japan’s economic power, writes Jean-Marie Bouissou, manga is ideally suited to the cultural obsessions of the early twenty-first century.

Multiple paradoxes

Paradox surrounds the growth of manga in western countries such as France, Italy and the USA since the 1970s, and of genres descended from it: anime (cartoons), television serials and video games. The first paradox is that, whereas western countries have always imagined their culture and values as universal and sought to spread them (if only as cover for their imperial ambitions), Japan has historically been sceptical about sharing its culture with the world. The Shinto religion, for example, is perhaps unique in being strictly “national”: the very idea of a “Shintoist” foreigner would strike the Japanese as absurd.

The second paradox is that manga, in the form it has taken since 1945, is shot through with a uniquely Japanese historical experience. It depicts the trauma of a nation opened at gunpoint in 1853 by the “black ships” of Commodore Matthew Perry, frog-marched into modernity, and dragged into a contest with the West which ended in the holocaust of Hiroshima. It was this nation’s children – call them “Generation Tezuka” – who became the first generation of mangaka [manga creators]. They had seen their towns flattened by US bombers, their fathers defeated, their emperor stripped of his divinity, and their schoolbooks and the value-system they contained cast into the dustbin of history.

This defeated nation rebuilt itself through self-sacrificing effort and scarcely twenty years later had become the second economic power of the free world. Yet it received neither recognition (the 1980s were the years of “Japan-bashing” in the West), nor the security to which it aspired, before its newly-regained pride was crushed once more by the long crisis of the 1990s. Such a trajectory – unique, convulsive, dramatic, overshadowed by racial discrimination – differs radically from that of the old European powers, or that of young, triumphant America. Hence, it is all the more stunning that its collective imagination has spawned a popular culture capable of attaining “universality”.

At the start of the twenty-first century, Japan has become the world’s second largest exporter of cultural products. Manga has conquered 45 per cent of the French comic market, and Shonen Jump – the most important manga weekly for Japanese teenagers, whose circulation reached 6 million during the mid-1990s – has begun appearing in an American version. Manga, long considered fit only for children or poorly-educated youths, is starting to seduce a sophisticated generation of French thirty-somethings. This deserves an explanation.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of readbestmanga.[end-div]

Sex appeal

[div class=attrib]From Eurozine:[end-div]

Having condemned hyper-sexualized culture, the American religious Right is now wildly pro-sex, as long as it is marital sex. By replacing the language of morality with the secular notion of self-esteem, repression has found its way back onto school curricula – to the detriment of girls and women in particular. “We are living through an assault on female sexual independence”, writes Dagmar Herzog.

“Waves of pleasure flow over me; it feels like sliding down a mountain waterfall,” rhapsodises one delighted woman. Another recalls: “It’s like having a million tiny pleasure balloons explode inside of me all at once.”

These descriptions come not from Cosmopolitan, not from an erotic website, not from a Black Lace novel and certainly not from a porn channel. They are, believe it or not, part of the new philosophy of the Religious Right in America. We’ve always known that sex sells. Well, now it’s being used to sell both God and the Republicans in one extremely suggestive package. And in dressing up the old repressive values in fishnet stockings and flouncy lingerie, the forces of conservatism have beaten the liberals at their own game.

Choose almost any sex-related issue. From pornography and sex education to reproductive rights and treatment for sexually transmitted diseases, Americans have allowed a conservative religious movement not only to dictate the terms of conversation but also to change the nation’s laws and public health policies. And meanwhile American liberals have remained defensive and tongue-tied.

So how did the Religious Right – that avid and vocal movement of politicised conservative evangelical Protestants (joined together also with a growing number of conservative Catholics) – manage so effectively to harness what has traditionally been the province of the permissive left?

Quite simply, it has changed tactics and is now going out of its way to assert, loudly and enthusiastically, that, in contrast to what is generally believed, it is far from being sexually uptight. On the contrary, it is wildly pro-sex, provided it’s marital sex. Evangelical conservatives in particular have begun not only to rail against the evils of sexual misery within marriage (and the way far too many wives feel like not much more than sperm depots for insensitive, emotionally absent husbands), but also, in the most graphically detailed, explicit terms, to eulogise about the prospect of ecstasy.

[div class=attrib]More from theSource here.[end-div]

The society of the query and the Googlization of our lives

[div class=attrib]From Eurozine:[end-div]

“There is only one way to turn signals into information, through interpretation”, wrote the computer critic Joseph Weizenbaum. As Google’s hegemony over online content increases, argues Geert Lovink, we should stop searching and start questioning.

A spectre haunts the world’s intellectual elites: information overload. Ordinary people have hijacked strategic resources and are clogging up once carefully policed media channels. Before the Internet, the mandarin classes rested on the idea that they could separate “idle talk” from “knowledge”. With the rise of Internet search engines it is no longer possible to distinguish between patrician insights and plebeian gossip. The distinction between high and low, and their co-mingling on occasions of carnival, belong to a bygone era and should no longer concern us. Nowadays an altogether new phenomenon is causing alarm: search engines rank according to popularity, not truth. Search is the way we now live. With the dramatic increase of accessed information, we have become hooked on retrieval tools. We look for telephone numbers, addresses, opening times, a person’s name, flight details, best deals and in a frantic mood declare the ever growing pile of grey matter “data trash”. Soon we will search and only get lost. Old hierarchies of communication have not only imploded, communication itself has assumed the status of cerebral assault. Not only has popular noise risen to unbearable levels, we can no longer stand yet another request from colleagues and even a benign greeting from friends and family has acquired the status of a chore with the expectation of reply. The educated class deplores the fact that chatter has entered the hitherto protected domain of science and philosophy, when instead they should be worrying about who is going to control the increasingly centralized computing grid.

What today’s administrators of noble simplicity and quiet grandeur cannot express, we should say for them: there is a growing discontent with Google and the way the Internet organizes information retrieval. The scientific establishment has lost control over one of its key research projects – the design and ownership of computer networks, now used by billions of people. How did so many people end up being that dependent on a single search engine? Why are we repeating the Microsoft saga once again? It seems boring to complain about a monopoly in the making when average Internet users have such a multitude of tools at their disposal to distribute power. One possible way to overcome this predicament would be to positively redefine Heidegger’s Gerede. Instead of a culture of complaint that dreams of an undisturbed offline life and radical measures to filter out the noise, it is time to openly confront the trivial forms of Dasein today found in blogs, text messages and computer games. Intellectuals should no longer portray Internet users as secondary amateurs, cut off from a primary and primordial relationship with the world. There is a greater issue at stake and it requires venturing into the politics of informatic life. It is time to address the emergence of a new type of corporation that is rapidly transcending the Internet: Google.

The World Wide Web, which should have realized the infinite library Borges described in his short story The Library of Babel (1941), is seen by many of its critics as nothing but a variation of Orwell’s Big Brother (1948). The ruler, in this case, has turned from an evil monster into a collection of cool youngsters whose corporate responsibility slogan is “Don’t be evil”. Guided by a much older and experienced generation of IT gurus (Eric Schmidt), Internet pioneers (Vint Cerf) and economists (Hal Varian), Google has expanded so fast, and in such a wide variety of fields, that there is virtually no critic, academic or business journalist who has been able to keep up with the scope and speed with which Google developed in recent years. New applications and services pile up like unwanted Christmas presents. Just add Google’s free email service Gmail, the video sharing platform YouTube, the social networking site Orkut, GoogleMaps and GoogleEarth, its main revenue service AdWords with the Pay-Per-Click advertisements, office applications such as Calendar, Talks and Docs. Google not only competes with Microsoft and Yahoo, but also with entertainment firms, public libraries (through its massive book scanning program) and even telecom firms. Believe it or not, the Google Phone is coming soon. I recently heard a less geeky family member saying that she had heard that Google was much better and easier to use than the Internet. It sounded cute, but she was right. Not only has Google become the better Internet, it is taking over software tasks from your own computer so that you can access these data from any terminal or handheld device. Apple’s MacBook Air is a further indication of the migration of data to privately controlled storage bunkers. Security and privacy of information are rapidly becoming the new economy and technology of control. And the majority of users, and indeed companies, are happily abandoning the power to self-govern their informational resources.

[div class=attrib]More from theSource here.[end-div]

Manufactured scarcity

[div class=attrib]From Eurozine:[end-div]

“Manufacturing scarcity” is the new watchword in “Green capitalism”. James Heartfield explains how for the energy sector, it has become a license to print money. Increasing profits by cutting output was pioneered by Enron in the 1990s; now the model of restricted supply together with domestic energy generation is promoted worldwide.

The corporate raiders of the 1980s first worked out that you might be able to make more money downsizing, or even breaking up industry than building it up. It is a perverse result of the profit motive that private gain should grow out of public decay. But even the corporate raiders never dreamt of making deindustrialisation into an avowed policy goal which the rest of us would pay for.

What some of the cannier Green Capitalists realised is that scarcity increases price, and manufacturing scarcity can increase returns. What could be more old hat, they said, than trying to make money by making things cheaper? Entrepreneurs disdained the “fast moving consumer goods” market.

Of course there is a point to all this. If labour gets too efficient the chances of wringing more profits from industry get less. The more productive labour is, the lower, in the end, will be the rate of return on investments. That is because the source of new value is living labour; but greater investment in new technologies tends to replace living labour with machines, which produce no additional value of their own.[2] Over time the rate of return must fall. Business theory calls this the diminishing rate of return.[3] Businessmen know it as the “race for the bottom” – the competitive pressure to make goods cheaper and cheaper, making it that much harder to sell enough to make a profit. Super efficient labour would make the capitalistic organisation of industry redundant. Manufacturing scarcity, restricting output and so driving up prices is one short-term way to secure profits and maybe even the profit-system. Of course that would also mean abandoning the historic justification for capitalism, that it increased output and living standards. Environmentalism might turn out to be the way to save capitalism, just at the point when industrial development had shown it to be redundant.

[div class=attrib]More from theSource here.[end-div]

Artists beware! You may be outsourced next to…

China perhaps, or even a dog!


As you know, a vast amount of global manufacturing is outsourced to China. In fact, a fair amount of so-called “original” art now comes from China as well, where art factories of “copyworkers” are busy reproducing works by old masters or, for a few extra yuan, originals in this or that particular style. For instance, the city of Dafen, China, manufactures more “Van Goghs” in a couple of weeks than the real Van Gogh created in his entire lifetime. Dafen produces some great bargains — $2 for an unframed old master, $3 for a custom version (prices before enormous markup) — if you like to buy your art by the square foot.

You’ve probably also seen miscellaneous watercolors emanating from talented elephants in Thailand, the late Congo’s tempera paintings auctioned at Bonhams, or the German artist chimpanzee who, with her handlers, recently fooled an expert into believing her work was that of Ernst Wilhelm Nay.

Well, now comes a second biography of Tilamook Cheddar, or Tillie, the most successful animal painter in the history of, well, animal painters. Tillie, a Jack Russell terrier from Brooklyn, NY, has been painting for around 7 years, and has headlined 17 solo shows across the country and in Europe.

Despite these somewhat disturbing developments, I think artists will be around for some time. But, what about gallerists and art dealers? Could you see the Toshiba robot or a couple of (smart) lab rats or an Art-o-mat replacing your friendly gallery owners? Please don’t answer this one!

Portrait of The Dog. Image courtesy of T. Cheddar.

Robert Rauschenberg, American Artist, Dies at 82

[div class=attrib]From The New York Times:[end-div]

Robert Rauschenberg, the irrepressibly prolific American artist who time and again reshaped art in the 20th century, died on Monday night at his home on Captiva Island, Fla. He was 82.

The cause was heart failure, said Arne Glimcher, chairman of PaceWildenstein, the Manhattan gallery that represents Mr. Rauschenberg.

Mr. Rauschenberg’s work gave new meaning to sculpture. “Canyon,” for instance, consisted of a stuffed bald eagle attached to a canvas. “Monogram” was a stuffed goat girdled by a tire atop a painted panel. “Bed” entailed a quilt, sheet and pillow, slathered with paint, as if soaked in blood, framed on the wall. All became icons of postwar modernism.

A painter, photographer, printmaker, choreographer, onstage performer, set designer and, in later years, even a composer, Mr. Rauschenberg defied the traditional idea that an artist stick to one medium or style. He pushed, prodded and sometimes reconceived all the mediums in which he worked.

Building on the legacies of Marcel Duchamp, Kurt Schwitters, Joseph Cornell and others, he helped obscure the lines between painting and sculpture, painting and photography, photography and printmaking, sculpture and photography, sculpture and dance, sculpture and technology, technology and performance art — not to mention between art and life.

Mr. Rauschenberg was also instrumental in pushing American art onward from Abstract Expressionism, the dominant movement when he emerged, during the early 1950s. He became a transformative link between artists like Jackson Pollock and Willem de Kooning and those who came next, artists identified with Pop, Conceptualism, Happenings, Process Art and other new kinds of art in which he played a signal role.

No American artist, Jasper Johns once said, invented more than Mr. Rauschenberg. Mr. Johns, John Cage, Merce Cunningham and Mr. Rauschenberg, without sharing exactly the same point of view, collectively defined this new era of experimentation in American culture.

Apropos of Mr. Rauschenberg, Cage once said, “Beauty is now underfoot wherever we take the trouble to look.” Cage meant that people had come to see, through Mr. Rauschenberg’s efforts, not just that anything, including junk on the street, could be the stuff of art (this wasn’t itself new), but that it could be the stuff of an art aspiring to be beautiful — that there was a potential poetics even in consumer glut, which Mr. Rauschenberg celebrated.

“I really feel sorry for people who think things like soap dishes or mirrors or Coke bottles are ugly,” he once said, “because they’re surrounded by things like that all day long, and it must make them miserable.”

The remark reflected the optimism and generosity of spirit that Mr. Rauschenberg became known for. His work was likened to a St. Bernard: uninhibited and mostly good-natured. He could be the same way in person. When he became rich, he gave millions of dollars to charities for women, children, medical research, other artists and Democratic politicians.

A brash, garrulous, hard-drinking, open-faced Southerner, he had a charm and peculiar Delphic felicity with language that masked a complex personality and an equally multilayered emotional approach to art, which evolved as his stature did. Having begun by making quirky, small-scale assemblages out of junk he found on the street in downtown Manhattan, he spent increasing time in his later years, after he had become successful and famous, on vast international, ambassadorial-like projects and collaborations.

Conceived in his immense studio on the island of Captiva, off southwest Florida, these projects were of enormous size and ambition; for many years he worked on one that grew literally to exceed the length of its title, “The 1/4 Mile or 2 Furlong Piece.” They generally did not live up to his earlier achievements. Even so, he maintained an equanimity toward the results. Protean productivity went along with risk, he felt, and risk sometimes meant failure.

The process — an improvisatory, counterintuitive way of doing things — was always what mattered most to him. “Screwing things up is a virtue,” he said when he was 74. “Being correct is never the point. I have an almost fanatically correct assistant, and by the time she re-spells my words and corrects my punctuation, I can’t read what I wrote. Being right can stop all the momentum of a very interesting idea.”

This attitude also inclined him, as the painter Jack Tworkov once said, “to see beyond what others have decided should be the limits of art.”

He “keeps asking the question — and it’s a terrific question philosophically, whether or not the results are great art,” Mr. Tworkov said, “and his asking it has influenced a whole generation of artists.”

A Wry, Respectful Departure

That generation was the one that broke from Pollock and company. Mr. Rauschenberg maintained a deep but mischievous respect for Abstract Expressionist heroes like de Kooning and Barnett Newman. Famously, he once painstakingly erased a drawing by de Kooning, an act both of destruction and devotion. Critics regarded the all-black paintings and all-red paintings he made in the early 1950s as spoofs of de Kooning and Pollock. The paintings had roiling, bubbled surfaces made from scraps of newspapers embedded in paint.

But these were just as much homages as they were parodies. De Kooning, himself a parodist, had incorporated bits of newspapers in pictures, and Pollock stuck cigarette butts to canvases.

Mr. Rauschenberg’s “Automobile Tire Print,” from the early 1950s — resulting from Cage’s driving an inked tire of a Model A Ford over 20 sheets of white paper — poked fun at Newman’s famous “zip” paintings.

At the same time, Mr. Rauschenberg was expanding on Newman’s art. The tire print transformed Newman’s zip — an abstract line against a monochrome backdrop with spiritual pretensions — into an artifact of everyday culture, which for Mr. Rauschenberg had its own transcendent dimension.

Mr. Rauschenberg frequently alluded to cars and spaceships, even incorporating real tires and bicycles into his art. This partly reflected his own restless, peripatetic imagination. The idea of movement was logically extended when he took up dance and performance.

There was, beneath this, a darkness to many of his works, notwithstanding their irreverence. “Bed” (1955) was gothic. The all-black paintings were solemn and shuttered. The red paintings looked charred, with strips of fabric akin to bandages, from which paint dripped like blood. “Interview” (1955), which resembled a cabinet or closet with a door, enclosing photos of bullfighters, a pinup, a Michelangelo nude, a fork and a softball, suggested some black-humored encoded erotic message.

There were many other images of downtrodden and lonely people, rapt in thought; pictures of ancient frescoes, out of focus as if half remembered; photographs of forlorn, neglected sites; bits and pieces of faraway places conveying a kind of nostalgia or remoteness. In bringing these things together, the art implied consolation.

Mr. Rauschenberg, who knew that not everybody found it easy to grasp the open-endedness of his work, once described to the writer Calvin Tomkins an encounter with a woman who had reacted skeptically to “Monogram” (1955-59) and “Bed” in his 1963 retrospective at the Jewish Museum, one of the events that secured Mr. Rauschenberg’s reputation: “To her, all my decisions seemed absolutely arbitrary — as though I could just as well have selected anything at all — and therefore there was no meaning, and that made it ugly.”

[div class=attrib]More from theSource here.[end-div]

Art Review | ‘Color as Field’: Weightless Color, Floating Free

[div class=attrib]From The New York Times:[end-div]

Starting in the late 1950s, the great American art critic Clement Greenberg had eyes only for Color Field painting. This was the lighter-than-air abstract style, with its emphasis on stain painting and visual gorgeousness, introduced by Helen Frankenthaler and taken up by Morris Louis, Kenneth Noland and Jules Olitski.

With the insistent support of Greenberg and his acolytes, Color Field soared as the next big, historically inevitable thing after Jackson Pollock. Then over the course of the 1970s it crashed and burned and dropped from sight. Pop and Minimal Art, which Greenberg disparaged, had more diverse critical support and greater influence on younger artists. Then Post-Minimalism came along, exploding any notion of art’s neatly linear progression.

Now Color Field painting — or as Greenberg preferred to call it, Post-Painterly Abstraction — is being reconsidered in a big way in “Color as Field: American Painting, 1950-1975,” a timely, provocative — if far from perfect — exhibition at the Smithsonian American Art Museum here. It has been organized by the American Federation of Arts and selected by the independent curator and critic Karen Wilkin. She and Carl Belz, former director of the Rose Art Museum at Brandeis University, have written essays for the catalog.

It is wonderful to see some of this work float free of the Greenbergian claims for greatness and inevitability (loyally retraced by Ms. Wilkin in her essay), and float it does, at least the best of it. The exhibition begins with the vista of Mr. Olitski’s buoyant, goofily sexy “Cleopatra Flesh” of 1962, looming at the end of a long hallway. The work sums up the fantastic soft power that these artists could elicit from brilliant color, scale and judicious amounts of pristine raw canvas. A huge blue motherly curve nearly encircles a large black planet while luring a smaller red planet into the fold, calling to mind an abstracted stuffed toy.

It is a perfect, exhilarating example of what Mr. Belz calls “one-shot painting” and likens to jazz improvisation. Basic to the thrill is our understanding that the stain painting technique involved a few rapid, skilled but unrehearsed gestures, and that raw canvas offered no chance for revision. “Cleopatra Flesh” is an act of joyful derring-do.

The “one-shot painting” stain technique of Color Field was the innovation of Helen Frankenthaler, first accomplished in “Mountains and Sea,” made in 1952, when she was 24 and unknown. (It is not in this exhibition, but the method is conveyed by her 1957 “Seven Types of Ambiguity,” with its great gray splashes punctuated by peninsulas of red, yellow and blue.) The technique negotiated a common ground between Pollock’s heroic no-brush drip style and the expanses of saturated color favored especially by Barnett Newman and Mark Rothko.

In Greenberg’s eyes the torch of Abstract Expressionism (the cornerstone of his power as a critic) was being carried forward by Ms. Frankenthaler’s spirited reformulation, followed by Mr. Louis’s languid pours; Mr. Noland’s radiant targets; Mr. Olitski’s carefully controlled stains and (later) diaphanous sprayed surfaces. And this continuity confirmed the central premise of Greenbergian formalism: that all modern art mediums would be meekly reduced to their essences; for painting that meant abstractness, flatness and weightless color. As you can imagine, that didn’t leave anyone, not even the anointed few, with much to do.

Revisionist this show is not. Its 38 canvases represent 17 painters, including a selection of works by Abstract Expressionist precursors titled “Origins of Color Field.” The elders tend to look as light and jazzy as their juniors; Adolph Gottlieb, Hans Hofmann and Robert Motherwell, all present, were ultimately as much a part of Color Field as Abstract Expressionism. But even Newman’s “Horizontal Light” of 1949 seems undeniably flashy; its field of dark red is split by a narrow aqua band, called a zip, that seems to speed across the canvas. Rothko’s 1951 “Number 18,” with its shifting borders and cloud-squares of white, red and pink, has a cheerful, scintillating forthrightness.

This forthrightness expands into dazzling instantaneousness in the works of Ms. Frankenthaler and Mr. Louis, where it sometimes seems that the paint is still wet and seeping into the canvas. Ms. Frankenthaler’s high-wire act is especially evident in the jagged pools and terraces of color in the aptly titled “Flood” and in “Interior Landscape,” which centers on a single, exuberant splash. Mr. Louis manages a similar tension while seeming completely relaxed. In “Floral V,” where an inky black washes like a wave over a bouquet of brilliantly colored plumes, he achieves a silent grandeur, like a Frankenthaler with the sound off.

After the Frankenthaler and Louis works, this show dwindles into a subdued free-for-all, as most artists settle into more predetermined ways of working. Often big scale and simple composition add up to emptiness, especially when the signs of derring-do recede. Both Mr. Olitski and especially Mr. Noland are poorly represented. In Mr. Noland’s square “Space Jog,” Newman’s zips run perpendicular to one another, forming a pastel plaid on a sprayed ground of sky blue, like a Mondrian bed sheet.

[div class=attrib]More from theSource here.[end-div]

Shopping town USA

[div class=attrib]From Eurozine:[end-div]

In the course of his life, Victor Gruen completed major urban interventions in the US and western Europe that fundamentally altered western urban development. Anette Baldauf describes how Gruen’s fame rests mostly on the insertion of commercial machines into the decentred US suburbs. These so-called “shopping towns” were supposed to strengthen civic life and structure the amorphous, mono-functional agglomerations of suburban sprawl. Yet within a decade, Gruen’s designs had become the architectural extension of the policies of racial and gender segregation underlying the US postwar consumer utopia.

In 1943, the US magazine Architectural Forum invited Victor Gruen and his wife Elsie Krummeck to take part in an exchange of visions for the architectonic shaping of the postwar period. The editors of the issue, entitled Architecture 194x, appealed to recognised modernists such as Mies van der Rohe and Charles Eames to design parts of a model town for the year “194x”, in other words for an unspecified year by which time the Second World War would have ended. The Gruen & Krummeck partnership was to design a prototype for a “regional shopping centre”. The editors specified that the shopping centre was to be situated on the outskirts of the city, on a traffic island between two highways, and would supplement the pedestrian zone downtown. “How can shopping be made more inviting?”, the editors asked Gruen & Krummeck, who, at the time of the competition, were famous for their spectacular glass designs for boutiques on Fifth Avenue and for national department store chains on the outskirts of US cities.

The two architects responded to the commission to build a “small neighbourhood shopping centre” with a design that far exceeded the specified size and function of the centre. Gruen later explained that the project reflected the couple’s dissatisfaction with Los Angeles, where long distances between shops, regular traffic jams, and an absence of pedestrian zones made shopping tiresome work. Gruen and Krummeck saw in Los Angeles the blueprint of an “automotive-rich postwar America”. Their counter-design was oriented towards the traditional main squares of European cities. Hence, they suggested two central structural interventions: first, the automobile and the shopper were to be assigned two distinct spatial units, and second, space for consumption and civic space were to be merged. Working from this premise, Gruen and Krummeck designed a centre organised around a spacious green square – with garden restaurants, milk bars, and music stands. The design integrated 28 shops and 13 public facilities; among the latter were a library, a post office, a theatre, a lecture hall, a night club, a nursery, a play room, and a pony stable.

The editors of Architectural Forum rejected Gruen and Krummeck’s design. They insisted upon a reduced “regional shopping centre” and urged the architects to rework their submission along these lines. Gruen and Krummeck responded with an adjustment that would later prove crucial: they abandoned the idea of a green square in the centre of the complex and suggested instead a closed, round building made of glass. They surrounded the inwardly directed shopping complex with two rings. The first ring was to serve as a pedestrian zone, the second as a car park. This design also failed to please. George Nelson, the editor-in-chief, was scandalised and argued that by removing the central square, the space for sitting around and strolling was lost. For him, the shopping centre as closed space was inconceivable. Eventually, Gruen and Krummeck submitted a design for a conventional shopping centre with shops arranged in a “U” shape around a courtyard. Clearly, those who would celebrate the closed shopping centre a few years later were not yet active. It was only a decade later that Gruen was able to convince two leading department-store owners of the profitability of a self-enclosed shopping centre. Excluding cars, street traders, animals, and other potential disturbances, and supported by surveillance technology, the shopping mall would embody the ideal-typical values of suburban lifestyles – order, cleanliness, and safety. Public judgement of Gruen’s “architecture of introversion” fundamentally changed, then, in the course of the 1950s. What was it, exactly, that led to this revised evaluation of a closed, inwardly directed space of consumption?

[div class=attrib]More from theSource here.[end-div]

A Solar Grand Plan

[div class=attrib]From Scientific American:[end-div]

By 2050 solar power could end U.S. dependence on foreign oil and slash greenhouse gas emissions.

High prices for gasoline and home heating oil are here to stay. The U.S. is at war in the Middle East at least in part to protect its foreign oil interests. And as China, India and other nations rapidly increase their demand for fossil fuels, future fighting over energy looms large. In the meantime, power plants that burn coal, oil and natural gas, as well as vehicles everywhere, continue to pour millions of tons of pollutants and greenhouse gases into the atmosphere annually, threatening the planet.

Well-meaning scientists, engineers, economists and politicians have proposed various steps that could slightly reduce fossil-fuel use and emissions. These steps are not enough. The U.S. needs a bold plan to free itself from fossil fuels. Our analysis convinces us that a massive switch to solar power is the logical answer.

  • A massive switch from coal, oil, natural gas and nuclear power plants to solar power plants could supply 69 percent of the U.S.’s electricity and 35 percent of its total energy by 2050.
  • A vast area of photovoltaic cells would have to be erected in the Southwest. Excess daytime energy would be stored as compressed air in underground caverns to be tapped during nighttime hours.
  • Large solar concentrator power plants would be built as well.
  • A new direct-current power transmission backbone would deliver solar electricity across the country.
  • But $420 billion in subsidies from 2011 to 2050 would be required to fund the infrastructure and make it cost-competitive.

[div class=attrib]More from theSource here.[end-div]

France: return to Babel

[div class=attrib]From Eurozine:[end-div]

Each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language, says Marc Hatzfeld. The language is then guarded by men of letters and by strict rules that leave little room for variety of expression. Against this backdrop, immigrants from ever more distant shores have arrived in France, bringing with them a different style of expression and another, more fluid, concept of language.

Today more than ever, the language issue, which might at one time have segued gracefully between pleasure in sense and sensual pleasure, is being seized on and exploited for political ends. Much of this we can put down to the concept of the nation-state, that symbolic and once radical item that was assigned the task of consolidating the fragmented political power of the time. During the long centuries from the end of the Middle Ages to the close of the Ancien Régime, this triumphant political logic sought to bind together nation, language and religion. East of the Rhine, for instance, this was particularly true of the links between nation and religion; west of the Rhine, it focused more on language. From Villers-Cotterêts[1] on, language – operating almost coercively – served as an instrument of political unification. The periodic alternation between an imperial style that was both permissive and varied when it came to customary practice, and the homogeneous and monolithic style adopted on the national front, led to constant comings and goings in the relationship between language and political power.

In France, the revocation of the Edict of Nantes by Louis XIV in 1685 resolved the relationship between nation and religion and gave language a more prominent role in defining nationality. Not long after, the language itself – by now regarded as public property – became a ward of state entitled to public protection. Taking things one step further, the eighteenth century philosophers of the Enlightenment conceived the idea of a coherent body of subject people and skilfully exploited this to clip the wings of a fabled absolute monarch in the name of another, equally mythical, form of sovereignty. All that remained was to organise the country institutionally. Henceforth, the idea that the allied forces of people, nation and language together made up the same collective history was pursued with zeal.

What we see as a result is this curious emergence of language itself as a concept. Making use of a fiction that reached down from a great height to penetrate a cultural reality that was infinitely more subtle and flexible, each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language. While we in Europe enjoy as many ways of speaking as there are localities and occupations, there are administrative and symbolic demands to fabricate the fantasy of a language that clerics and men of letters would appropriate to themselves. It is these who, in the wake of the politicians, help to eliminate the variety of ways people have of expressing themselves and of understanding one another. Some scholars, falling into what they fail to see is a highly politicised trap, complete this process by coming up with a scientific construct heavily dependent on the influence of mathematical theories such as those of de Saussure and, above all, of Jakobson. Paradoxically, this body of work relies on a highly malleable, mobile, elastic reality to develop the tight, highly structured concept that is “language” (Jacques Lacan). And from that point, language itself becomes a prisoner of Lacan’s own system – linguistics.

[div class=attrib]More from theSource here.[end-div]

The Great Cosmic Roller-Coaster Ride

[div class=attrib]From Scientific American:[end-div]

Could cosmic inflation be a sign that our universe is embedded in a far vaster realm?

You might not think that cosmologists could feel claustrophobic in a universe that is 46 billion light-years in radius and filled with sextillions of stars. But one of the emerging themes of 21st-century cosmology is that the known universe, the sum of all we can see, may just be a tiny region in the full extent of space. Various types of parallel universes that make up a grand “multiverse” often arise as side effects of cosmological theories. We have little hope of ever directly observing those other universes, though, because they are either too far away or somehow detached from our own universe.

Some parallel universes, however, could be separate from but still able to interact with ours, in which case we could detect their direct effects. The possibility of these worlds came to cosmologists’ attention by way of string theory, the leading candidate for the foundational laws of nature. Although the eponymous strings of string theory are extremely small, the principles governing their properties also predict new kinds of larger membranelike objects—“branes,” for short. In particular, our universe may be a three-dimensional brane in its own right, living inside a nine-dimensional space. The reshaping of higher-dimensional space and collisions between different universes may have led to some of the features that astronomers observe today.

[div class=attrib]More from theSource here.[end-div]

Windows on the Mind

[div class=attrib]From Scientific American:[end-div]

Once scorned as nervous tics, certain tiny, unconscious flicks of the eyes now turn out to underpin much of our ability to see. These movements may even reveal subliminal thoughts.

As you read this, your eyes are rapidly flicking from left to right in small hops, bringing each word sequentially into focus. When you stare at a person’s face, your eyes will similarly dart here and there, resting momentarily on one eye, the other eye, nose, mouth and other features. With a little introspection, you can detect this frequent flexing of your eye muscles as you scan a page, face or scene.

But these large voluntary eye movements, called saccades, turn out to be just a small part of the daily workout your eye muscles get. Your eyes never stop moving, even when they are apparently settled, say, on a person’s nose or a sailboat bobbing on the horizon. When the eyes fixate on something, as they do for 80 percent of your waking hours, they still jump and jiggle imperceptibly in ways that turn out to be essential for seeing. If you could somehow halt these miniature motions while fixing your gaze, a static scene would simply fade from view.

[div class=attrib]More from theSource here.[end-div]

On the mystery of human consciousness

[div class=attrib]From Eurozine:[end-div]

Philosophers and natural scientists regularly dismiss consciousness as irrelevant. However, even its critics agree that consciousness is less a problem than a mystery. One way into the mystery is through an understanding of autism.

It started with a letter from Michaela Martinková:

Our eldest son, aged almost eight, has Asperger’s Syndrome (AS). It is a diagnosis that falls into the autistic spectrum, but his IQ is very much above average. In an effort to find out how he thinks, I decided that I must first find out how we think, and so I read up on the cognitive sciences and epistemology. I found what I needed there, although I have an intense feeling that precisely the way of thinking of people like our son is missing from the mosaic of these sciences. And I think that this missing piece could rearrange the whole mosaic.

In the book Philosophy and the Cognitive Sciences, you write, among other things: “Actually the only handicap so far observed in these children (with autism and AS) is that they cannot use human psychology. They cannot postulate intentional states in their own minds and in the minds of other people.” I think that deeper knowledge of autism, and especially of Asperger’s Syndrome as its version found in people with higher IQ in the framework of autism, could be immensely enriching for the cognitive sciences. I am convinced that these people think in an entirely different way from us.

Why the present interest in autism? It is generally known that some people whose diagnosis falls within the autistic spectrum, namely people with Asperger’s Syndrome and high-functioning autism, show a remarkable combination of highly above-average intelligence and well below-average social ability. The causes of this peculiarity, although far from being sufficiently clarified, are usually explained by reduced ability in the areas of verbal communication and empathy, which form the basis of social intelligence.

And why consciousness? Many people think today that, if we are to better understand ourselves and our relationships to the world and other people, the last problem we must solve is consciousness. Many others think that if we understand the brain, its structure, and its functioning, consciousness will cease to be a problem. The more critical supporters of both views agree on one thing: consciousness is not a problem, it is more a mystery. If a problem is something about which we can formulate a question to which it is possible to seek a reasonable answer, then consciousness is a mystery, because it is still not possible to formulate a question about it that could be answered in a way the normal methods of science could verify or refute. Perhaps the psychologist Daniel M. Wegner best captured the present state of knowledge with the statement: “All human experience states that we consciously control our actions, but all theories are against this.” In spite of all the lack of clarity and the disputes about what consciousness is and how it works, the view has begun to prevail in recent years that language and consciousness are the link that makes a group of individuals into a community.

[div class=attrib]More from theSource here.[end-div]