All posts by Mike

A beautiful and dangerous idea: art that sells itself

Artist Caleb Larsen seems to have the right idea. Rather than relying on the subjective wants and needs of galleries and the dubious nature of the secondary art market (and some equally dubious auctioneers), his art sells itself.

His work, entitled “A Tool to Deceive and Slaughter”, is an 8-inch opaque, black acrylic cube. But while the exterior may be simplicity itself, the interior holds a fascinating premise. The cube is connected to the internet. In fact, it’s connected to eBay, where through some hidden hardware and custom programming it constantly auctions itself.

As Caleb Larsen describes it:

Combining Robert Morris’ “Box With the Sound of Its Own Making” with Baudrillard’s writing on the art auction, this sculpture exists in eternal transactional flux. It is a physical sculpture that is perpetually attempting to auction itself on eBay.

Every ten minutes the black box pings a server over its Ethernet connection to check whether it is listed for sale on eBay. If its auction has ended or the piece has sold, it automatically creates a new auction for itself.

If a person buys it on eBay, the current owner is required to ship it to the new owner. The new owner must then plug it back into an Ethernet connection, and the cycle repeats.
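The artist’s actual firmware isn’t public, so the control loop can only be sketched. Here is a minimal simulation of the logic described above; the names `next_action` and `run_cycle` are my own, and the whole thing is a hypothetical stand-in for whatever the cube really runs:

```python
# Hypothetical sketch of the cube's control loop as described in the
# article: poll every ten minutes, relist whenever the auction has
# ended or the piece has sold.

POLL_INTERVAL_SECONDS = 10 * 60  # the piece checks every ten minutes

def next_action(status):
    """Map an auction status to the box's next move.

    status: "active" (auction still running), "ended" (expired unsold),
    or "sold" (a buyer won and the piece must be re-auctioned).
    """
    return "wait" if status == "active" else "relist"

def run_cycle(statuses):
    """Simulate a sequence of polls, returning the action taken at each."""
    return [next_action(s) for s in statuses]
```

In the real piece, a “relist” would also trigger the transfer obligations in the purchase agreement; here it is just a label.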

The purchase agreement on eBay is quite rigorous, including stipulations such as: the buyer must keep the artwork connected to the internet at all times, with disconnections allowed only for transportation; upon purchase the artwork must be re-auctioned; and failure to follow all terms of the agreement forfeits the artwork’s status as a genuine work of art.

The artist was also smart enough to gain a slice of the secondary market, by requiring each buyer to return to the artist 15 percent of the appreciated value from each sale. Christie’s and Sotheby’s eat your hearts out.
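The resale royalty is easy to make concrete. A small sketch of the 15 percent cut of appreciated value, with a made-up pair of prices and a function name of my own choosing:

```python
def artist_cut(previous_price, sale_price, rate=0.15):
    """Artist's share: 15 percent of the gain over the prior sale price.

    If the piece sells at or below its previous price, there is no
    appreciation and nothing is owed.
    """
    gain = max(0.0, sale_price - previous_price)
    return rate * gain

# Hypothetical resale: bought at $5,000, resold at $7,000.
# The artist's cut is 15 percent of the $2,000 gain, i.e. $300.
```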

Besides trying to put auctioneers out of work, the artist has broader intentions in mind, particularly when viewed alongside his larger body of work. The piece goes to the heart of the “how” and the “why” of the art market. By placing the artwork in a constant state of transactional fluidity – it is never permanently in the hands of its new owner – it forces us to question the nature of art in relation to its market and the nature of collecting. The work can never unquestionably be owned and collected, since it is always possible that someone else will come along, enter the auction, and win. Indeed, the first “owner” of the piece says this was part of the appeal. Terence Spies, a California collector, attests,

I had a really strong reaction right after I won the auction. I have this thing, and I really want to keep it, but the reason I want to keep it is that it might leave… The process of the piece really gets to some of the reasons why you might be collecting art in the first place.

Now, of course, owning anything is transient. The Egyptian pharaohs tried taking their possessions into the “afterlife,” but to this day they are constantly thwarted by tomb raiders and archeologists. Perhaps to some the chase, the process of collecting, is the goal, rather than owning the art itself. As I believe Caleb Larsen intended, he has really given me something to ponder. How different, really, is owning this self-selling art from wandering through the world’s museums and galleries to “own” a Picasso or Warhol or Monet for five minutes? Ironically, our works live on, and it is we who are transient. So I think Caleb Larsen’s title for the work should be taken tongue in cheek, for it is we who are deceiving ourselves.

The Real Rules for Time Travelers

[div class=attrib]From Discover:[end-div]

People all have their own ideas of what a time machine would look like. If you are a fan of the 1960 movie version of H. G. Wells’s classic novel, it would be a steampunk sled with a red velvet chair, flashing lights, and a giant spinning wheel on the back. For those whose notions of time travel were formed in the 1980s, it would be a souped-up stainless steel sports car. Details of operation vary from model to model, but they all have one thing in common: When someone actually travels through time, the machine ostentatiously dematerializes, only to reappear many years in the past or future. And most people could tell you that such a time machine would never work, even if it looked like a DeLorean.

They would be half right: That is not how time travel might work, but time travel in some other form is not necessarily off the table. Since time is kind of like space (the four dimensions go hand in hand), a working time machine would zoom off like a rocket rather than disappearing in a puff of smoke. Einstein described our universe in four dimensions: the three dimensions of space and one of time. So traveling back in time is nothing more or less than the fourth-dimensional version of walking in a circle. All you would have to do is use an extremely strong gravitational field, like that of a black hole, to bend space-time. From this point of view, time travel seems quite difficult but not obviously impossible.

These days, most people feel comfortable with the notion of curved space-time. What they trip up on is actually a more difficult conceptual problem, the time travel paradox. This is the worry that someone could go back in time and change the course of history. What would happen if you traveled into the past, to a time before you were born, and murdered your parents? Put more broadly, how do we avoid changing the past as we think we have already experienced it? At the moment, scientists don’t know enough about the laws of physics to say whether these laws would permit the time equivalent of walking in a circle—or, in the parlance of time travelers, a “closed timelike curve.” If they don’t permit it, there is obviously no need to worry about paradoxes. If physics is not an obstacle, however, the problem could still be constrained by logic. Do closed timelike curves necessarily lead to paradoxes?

If they do, then they cannot exist, simple as that. Logical contradictions cannot occur. More specifically, there is only one correct answer to the question “What happened at the vicinity of this particular event in space-time?” Something happens: You walk through a door, you are all by yourself, you meet someone else, you somehow never showed up, whatever it may be. And that something is whatever it is, and was whatever it was, and will be whatever it will be, once and forever. If, at a certain event, your grandfather and grandmother were getting it on, that’s what happened at that event. There is nothing you can do to change it, because it happened. You can no more change events in your past in a space-time with closed timelike curves than you can change events that already happened in ordinary space-time, with no closed timelike curves.

[div class=attrib]More from theSource here.[end-div]

Human Culture, an Evolutionary Force

[div class=attrib]From The New York Times:[end-div]

As with any other species, human populations are shaped by the usual forces of natural selection, like famine, disease or climate. A new force is now coming into focus. It is one with a surprising implication — that for the last 20,000 years or so, people have inadvertently been shaping their own evolution.

The force is human culture, broadly defined as any learned behavior, including technology. The evidence of its activity is the more surprising because culture has long seemed to play just the opposite role. Biologists have seen it as a shield that protects people from the full force of other selective pressures, since clothes and shelter dull the bite of cold and farming helps build surpluses to ride out famine.

Because of this buffering action, culture was thought to have blunted the rate of human evolution, or even brought it to a halt, in the distant past. Many biologists are now seeing the role of culture in a quite different light.

Although it does shield people from other forces, culture itself seems to be a powerful force of natural selection. People adapt genetically to sustained cultural changes, like new diets. And this interaction works more quickly than other selective forces, “leading some practitioners to argue that gene-culture co-evolution could be the dominant mode of human evolution,” Kevin N. Laland and colleagues wrote in the February issue of Nature Reviews Genetics. Dr. Laland is an evolutionary biologist at the University of St. Andrews in Scotland.

The idea that genes and culture co-evolve has been around for several decades but has started to win converts only recently. Two leading proponents, Robert Boyd of the University of California, Los Angeles, and Peter J. Richerson of the University of California, Davis, have argued for years that genes and culture were intertwined in shaping human evolution. “It wasn’t like we were despised, just kind of ignored,” Dr. Boyd said. But in the last few years, references by other scientists to their writings have “gone up hugely,” he said.

The best evidence available to Dr. Boyd and Dr. Richerson for culture being a selective force was the lactose tolerance found in many northern Europeans. Most people switch off the gene that digests the lactose in milk shortly after they are weaned, but in northern Europeans — the descendants of an ancient cattle-rearing culture that emerged in the region some 6,000 years ago — the gene is kept switched on in adulthood.

Lactose tolerance is now well recognized as a case in which a cultural practice — drinking raw milk — has caused an evolutionary change in the human genome. Presumably the extra nutrition was of such great advantage that adults able to digest milk left more surviving offspring, and the genetic change swept through the population.

[div class=attrib]More from theSource here.[end-div]

Art world swoons over Romania’s homeless genius

[div class=attrib]From The Guardian:[end-div]

The guests were chic, the bordeaux was sipped with elegant restraint and the hostess was suitably glamorous in a canary yellow cocktail dress. To an outside observer who made it past the soirée privée sign on the door of the Anne de Villepoix gallery on Thursday night, it would have seemed the quintessential Parisian art viewing.

Yet that would be leaving one crucial factor out of the equation: the man whose creations the crowd had come to see. In his black cowboy hat and pressed white collar, Ion Barladeanu looked every inch the established artist as he showed guests around the exhibition. But until 2007 no one had ever seen his work, and until mid-2008 he was living in the rubbish tip of a Bucharest tower block.

Today, in the culmination of a dream for a Romanian who grew up adoring Gallic film stars and treasures a miniature Eiffel Tower he once found in a bin, Barladeanu will see his first French exhibition open to the general public.

Dozens of collages he created from scraps of discarded magazines during and after the Communist regime of Nicolae Ceausescu are on sale for more than €1,000 (£895) each. They are being hailed as politically brave and culturally irreverent.

For the 63-year-old artist, the journey from the streets of Bucharest to the galleries of Europe has finally granted him recognition. “I feel as if I have been born again,” he said, as some of France’s leading collectors and curators jostled for position to see his collages. “Now I feel like a prince. A pauper can become a prince. But he can go back to being a pauper too.”

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (a teraflop being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
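Taking the article’s figures at face value, the scale of the simulation is simple arithmetic; a quick back-of-envelope sketch:

```python
# Back-of-envelope scale of the Blue Gene/P setup, using only the
# figures quoted in the article above.
processors = 16_000            # "more than 16,000 processors"
neurons_per_processor = 1_000  # ~one thousand virtual neurons each
racks = 4                      # four refrigerator-size machines
teraflops_per_rack = 56        # peak speed per machine

total_neurons = processors * neurons_per_processor  # 16 million
total_teraflops = racks * teraflops_per_rack        # 224 peak teraflops

# For comparison, a human brain has on the order of 10**11 neurons,
# so this configuration covers only a tiny fraction of that scale --
# consistent with the trial runs simulating a single rat cortical column.
```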

[div class=attrib]More from theSource here.[end-div]

The Graphene Revolution

[div class=attrib]From Discover:[end-div]

Flexible, see-through, one-atom-thick sheets of carbon could be a key component for futuristic solar cells, batteries, and roll-up LCD screens—and perhaps even microchips.

Under a transmission electron microscope it looks deceptively simple: a grid of hexagons resembling a volleyball net or a section of chicken wire. But graphene, a form of carbon that can be produced in sheets only one atom thick, seems poised to shake up the world of electronics. Within five years, it could begin powering faster and better transistors, computer chips, and LCD screens, according to researchers who are smitten with this new supermaterial.

Graphene’s standout trait is its uncanny facility with electrons, which can travel much more quickly through it than they can through silicon. As a result, graphene-based computer chips could be thousands of times as efficient as existing ones. “What limits conductivity in a normal material is that electrons will scatter,” says Michael Strano, a chemical engineer at MIT. “But with graphene the electrons can travel very long distances without scattering. It’s like the thinnest, most stable electrical conducting framework you can think of.”

In 2009 another MIT researcher, Tomas Palacios, devised a graphene chip that doubles the frequency of an electromagnetic signal. Using multiple chips could make the outgoing signal many times higher in frequency than the original. Because frequency determines the clock speed of the chip, boosting it enables faster transfer of data through the chip. Graphene’s extreme thinness means that it is also practically transparent, making it ideal for transmitting signals in devices containing solar cells or LEDs.
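The cascading claim is straightforward to express: if each chip doubles the frequency, chaining n of them multiplies it by 2^n. A sketch (the function name is mine, and real stages would add loss and noise that this idealization ignores):

```python
def cascaded_frequency(f_in_hz, n_doublers):
    """Idealized output frequency after chaining n frequency-doubler chips.

    Each stage doubles its input, so n stages multiply the original
    signal frequency by 2**n.
    """
    return f_in_hz * (2 ** n_doublers)

# A 1 GHz signal through three doublers comes out at 8 GHz.
```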

[div class=attrib]More from theSource here.[end-div]

J. Craig Venter

[div class=attrib]From Discover:[end-div]

J. Craig Venter keeps riding the cusp of each new wave in biology. When researchers started analyzing genes, he launched the Institute for Genomic Research (TIGR) in 1992, and in 1995 his team decoded the genome of a bacterium for the first time. When the government announced its plan to map the human genome, he claimed he would do it first—and then he delivered results in 2001, years ahead of schedule. Armed with a deep understanding of how DNA works, Venter is now moving on to an even more extraordinary project. Starting with the stunning genetic diversity that exists in the wild, he is aiming to build custom-designed organisms that could produce clean energy, help feed the planet, and treat cancer. Venter has already transferred the genome of one species into the cell body of another. This past year he reached a major milestone, using the machinery of yeast to manufacture a genome from scratch. When he combines the steps—perhaps next year—he will have crafted a truly synthetic organism. Senior editor Pamela Weintraub discussed the implications of these efforts with Venter in DISCOVER’s editorial offices.

Here you are talking about constructing life, but you started out in deconstruction: charting the human genome, piece by piece.
Actually, I started out smaller, studying the adrenaline receptor. I was looking at one protein and its single gene for a decade. Then, in the late 1980s, I was drawn to the idea of the whole genome, and I stopped everything and switched my lab over. I had the first automatic DNA sequencer. It was the ultimate in reductionist biology—getting down to the genetic code, interpreting what it meant, including all 6 billion letters of my own genome. Only by understanding things at that level can we turn around and go the other way.

In your latest work you are trying to create “synthetic life.” What is that?
It’s a catchy phrase that people have begun using to replace “molecular biology.” The term has been overused, so we have defined a separate field that we call synthetic genomics—the digitization of biology using only DNA and RNA. You start by sequencing genomes and putting their digital code into a computer. Then you use the computer to take that information and design new life-forms.

How do you build a life-form? Throw in some mitochondria here and some ribosomes there, surround it all with a membrane—and voilà?
We started down that road, but now we are coming from the other end. We’re starting with the accomplishments of three and a half billion years of evolution by using what we call the software of life: DNA. Our software builds its own hardware. By writing new software, we can come up with totally new species. It would be as if once you put new software in your computer, somehow a whole new machine would materialize. We’re software engineers rather than construction workers.

[div class=attrib]More from theSource here[end-div]

Five Big Additions to Darwin’s Theory of Evolution

[div class=attrib]From Discover:[end-div]

Charles Darwin would have turned 200 in 2009, the same year his book On the Origin of Species celebrated its 150th anniversary. Today, with the perspective of time, Darwin’s theory of evolution by natural selection looks as impressive as ever. In fact, the double anniversary year saw progress on fronts that Darwin could never have anticipated, bringing new insights into the origin of life—a topic that contributed to his panic attacks, heart palpitations, and, as he wrote, “for 25 years extreme spasmodic daily and nightly flatulence.” One can only dream of what riches await in the biology textbooks of 2159.

1. Evolution happens on the inside, too. The battle for survival is waged not just between the big dogs but within the dog itself, as individual genes jockey for prominence. From the moment of conception, a father’s genes favor offspring that are large, strong, and aggressive (the better to court the ladies), while the mother’s genes incline toward smaller progeny that will be less of a burden, making it easier for her to live on and procreate. Genome-versus-genome warfare produces kids that are somewhere in between.

Not all genetic conflicts are resolved so neatly. In flour beetles, babies that do not inherit the selfish genetic element known as Medea succumb to a toxin while developing in the egg. Some unborn mice suffer the same fate. Such spiteful genes have become widespread not by helping flour beetles and mice survive but by eliminating individuals that do not carry the killer’s code. “There are two ways of winning a race,” says Caltech biologist Bruce Hay. “Either you can be better than everyone else, or you can whack the other guys on the legs.”

Hay is trying to harness the power of such genetic cheaters, enlisting them in the fight against malaria. He created a Medea-like DNA element that spreads through experimental fruit flies like wildfire, permeating an entire population within 10 generations. This year he and his team have been working on encoding immune-system boosters into those Medea genes, which could then be inserted into male mosquitoes. If it works, the modified mosquitoes should quickly replace competitors who do not carry the new genes; the enhanced immune systems of the new mosquitoes, in turn, would resist the spread of the malaria parasite.

2. Identity is not written just in the genes. According to modern evolutionary theory, there is no way that what we eat, do, and encounter can override the basic rules of inheritance: What is in the genes stays in the genes. That single rule secured Darwin’s place in the science books. But now biologists are finding that nature can break those rules. This year Eva Jablonka, a theoretical biologist at Tel Aviv University, published a compendium of more than 100 hereditary changes that are not carried in the DNA sequence. This “epigenetic” inheritance spans bacteria, fungi, plants, and animals.

[div class=attrib]More from theSource here.[end-div]

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
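The structure Acquisti exploited can be sketched directly. The decomposition below follows the historical area/group/serial format described above; the function names and the narrowed ranges are hypothetical, chosen only to show how a pool of roughly 1,000 candidates can arise:

```python
def split_ssn(ssn):
    """Decompose a 9-digit SSN string into its three historical parts:
    area (3 digits), group (2 digits), serial (4 digits)."""
    assert len(ssn) == 9 and ssn.isdigit()
    return ssn[:3], ssn[3:5], ssn[5:]

def candidate_pool(n_areas, n_groups, n_serials):
    """Guesses remaining after birth data narrows each field.

    With no structure, a 9-digit number has 10**9 possibilities.  If
    birthplace pins down the area number and birth date narrows the
    plausible group and serial ranges, the pool collapses to the
    product below.
    """
    return n_areas * n_groups * n_serials

# Hypothetical narrowing: 1 area, 2 plausible groups, 500 serials
# leaves only 1,000 candidates -- the kind of collapse behind the
# "as few as 1,000 tries" figure reported in the study.
```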

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]

Are Black Holes the Architects of the Universe?

[div class=attrib]From Discover:[end-div]

Black holes are finally winning some respect. After long regarding them as agents of destruction or dismissing them as mere by-products of galaxies and stars, scientists are recalibrating their thinking. Now it seems that black holes debuted in a constructive role and appeared unexpectedly soon after the Big Bang. “Several years ago, nobody imagined that there were such monsters in the early universe,” says Penn State astrophysicist Yuexing Li. “Now we see that black holes were essential in creating the universe’s modern structure.”

Black holes, tortured regions of space where the pull of gravity is so intense that not even light can escape, did not always have such a high profile. They were once thought to be very rare; in fact, Albert Einstein did not believe they existed at all. Over the past several decades, though, astronomers have realized that black holes are not so unusual after all: Supermassive ones, millions or billions of times as hefty as the sun, seem to reside at the center of most, if not all, galaxies. Still, many people were shocked in 2003 when a detailed sky survey found that giant black holes were already common nearly 13 billion years ago, when the universe was less than a billion years old. Since then, researchers have been trying to figure out where these primordial holes came from and how they influenced the cosmic events that followed.

In August, researchers at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University ran a supercomputer simulation of the early universe and provided a tantalizing glimpse into the lives of the first black holes. The story began 200 million years after the Big Bang, when the universe’s first stars formed. These beasts, about 100 times the mass of the sun, were so large and energetic that they burned all their hydrogen fuel in just a few million years. With no more energy from hydrogen fusion to counteract the enormous inward pull of their gravity, the stars collapsed until all of their mass was compressed into a point of infinite density.

The first-generation black holes were puny compared with the monsters we see at the centers of galaxies today. They grew only slowly at first—adding just 1 percent to their bulk in the next 200 million years—because the hyperactive stars that spawned them had blasted away most of the nearby gas that they could have devoured. Nevertheless, those modest-size black holes left a big mark by performing a form of stellar birth control: Radiation from the trickle of material falling into the holes heated surrounding clouds of gas to about 5,000 degrees Fahrenheit, so hot that the gas could no longer easily coalesce. “You couldn’t really form stars in that stuff,” says Marcelo Alvarez, lead author of the Kavli study.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of KIPAC/SLAC/M.Alvarez, T. Able, and J. Wise.[end-div]

Will Our Universe Collide With a Neighboring One?

[div class=attrib]From Discover:[end-div]

Relaxing on an idyllic beach on Grand Cayman Island in the Caribbean, Anthony Aguirre vividly describes the worst natural disaster he can imagine. It is, in fact, probably the worst natural disaster that anyone could imagine. An asteroid impact would be small potatoes compared with this kind of event: a catastrophic encounter with an entire other universe.

As an alien cosmos came crashing into ours, its outer boundary would look like a wall racing forward at nearly the speed of light; behind that wall would lie a set of physical laws totally different from ours that would wreck everything they touched in our universe. “If we could see things in ultraslow motion, we’d see a big mirror in the sky rushing toward us because light would be reflected by the wall,” says Aguirre, a youthful physicist at the University of California at Santa Cruz. “After that we wouldn’t see anything—because we’d all be dead.”

There is a sober purpose behind this apocalyptic glee. Aguirre is one of a growing cadre of cosmologists who theorize that our universe is just one of many in a “multiverse” of universes. In their effort to grasp the implications of this idea, they have been calculating the odds that universes could interact with their neighbors or even smash into each other. While investigating what kind of gruesome end might result, they have stumbled upon a few surprises. There are tantalizing hints that our universe has already survived such a collision—and bears the scars to prove it.

Aguirre has organized a conference on Grand Cayman to address just such mind-boggling matters. The conversations here venture into multiverse mishaps and other matters of cosmological genesis and destruction. At first blush the setting seems incongruous: The tropical sun beats down dreamily, the smell of broken coconuts drifts from beneath the palm trees, and the ocean roars rhythmically in the background. But the locale is perhaps fitting. The winds are strong for this time of year, reminding the locals of Hurricane Ivan, which devastated the capital city of George Town in 2004, lifting whole apartment blocks and transporting buildings across streets. In nature, peace and violence are never far from each other.

Much of today’s interest in multiple universes stems from concepts developed in the early 1980s by the pioneering cosmologists Alan Guth at MIT and Andrei Linde, then at the Lebedev Physical Institute in Moscow. Guth proposed that our universe went through an incredibly rapid growth spurt, known as inflation, in the first 10^-30 second or so after the Big Bang. Such extreme expansion, driven by a powerful repulsive energy that quickly dissipated as the universe cooled, would solve many mysteries. Most notably, inflation could explain why the cosmos as we see it today is amazingly uniform in all directions. If space was stretched mightily during those first instants of existence, any extreme lumpiness or hot and cold spots would have immediately been smoothed out. This theory was modified by Linde, who had hit on a similar idea independently. Inflation made so much sense that it quickly became a part of the mainstream model of cosmology.

Soon after, Linde and Alex Vilenkin at Tufts University came to the startling realization that inflation may not have been a onetime event. If it could happen once, it could—and indeed should—happen again and again for eternity. Stranger still, every eruption of inflation would create a new bubble of space and energy. The result: an infinite progression of new universes, each bursting forth with its own laws of physics.

In such a bubbling multiverse of universes, it seems inevitable that universes would sometimes collide. But for decades cosmologists neglected this possibility, reckoning that the odds were small and that if it happened, the results would be irrelevant because anyone and anything near the collision would be annihilated.

[div class=attrib]More from theSource here.[end-div]

I Didn’t Sin—It Was My Brain

[div class=attrib]From Discover:[end-div]

Why does being bad feel so good? Pride, envy, greed, wrath, lust, gluttony, and sloth: It might sound like just one more episode of The Real Housewives of New Jersey, but this enduring formulation of the worst of human failures has inspired great art for thousands of years. In the 14th century Dante depicted ghoulish evildoers suffering for eternity in his masterpiece, The Divine Comedy. Medieval muralists put the fear of God into churchgoers with lurid scenarios of demons and devils. More recently George Balanchine choreographed their dance.

Today these transgressions are inspiring great science, too. New research is explaining where these behaviors come from and helping us understand why we continue to engage in them—and often celebrate them—even as we declare them to be evil. Techniques such as functional magnetic resonance imaging (fMRI), which highlights metabolically active areas of the brain, now allow neuroscientists to probe the biology behind bad intentions.

The most enjoyable sins engage the brain’s reward circuitry, including evolutionarily ancient regions such as the nucleus accumbens and hypothalamus; located deep in the brain, they provide us such fundamental feelings as pain, pleasure, reward, and punishment. More disagreeable forms of sin such as wrath and envy enlist the dorsal anterior cingulate cortex (dACC). This area, buried in the front of the brain, is often called the brain’s “conflict detector,” coming online when you are confronted with contradictory information, or even simply when you feel pain. The more social sins (pride, envy, lust, wrath) recruit the medial prefrontal cortex (mPFC), brain terrain just behind the forehead, which helps shape the awareness of self.

No understanding of temptation is complete without considering restraint, and neuroscience has begun to illuminate this process as well. As we struggle to resist, inhibitory cognitive control networks involving the front of the brain activate to squelch the impulse by tempering its appeal. Meanwhile, research suggests that regions such as the caudate—partly responsible for body movement and coordination—suppress the physical impulse. It seems to be the same whether you feel a spark of lechery, a surge of jealousy, or the sudden desire to pop somebody in the mouth: The two sides battle it out, the devilish reward system versus the angelic brain regions that hold us in check.

It might be too strong to claim that evolution has wired us for sin, but excessive indulgence in lust or greed could certainly put you ahead of your competitors. “Many of these sins you could think of as virtues taken to the extreme,” says Adam Safron, a research consultant at Northwestern University whose neuroimaging studies focus on sexual behavior. “From the perspective of natural selection, you want the organism to eat, to procreate, so you make them rewarding. But there’s a potential for that process to go beyond the bounds.”

[div class=attrib]More from theSource here.[end-div]

Stephen Hawking Is Making His Comeback

[div class=attrib]From Discover:[end-div]

As an undergraduate at Oxford University, Stephen William Hawking was a wise guy, a provocateur. He was popular, a lively coxswain for the crew team. Physics came easy. He slept through lectures, seldom studied, and criticized his professors. That all changed when he started graduate school at Cambridge in 1962 and subsequently learned that he had only a few years to live.

The symptoms first appeared while Hawking was still at Oxford. He could not row a scull as easily as he once had; he took a few bad, clumsy falls. A college doctor told him not to drink so much beer. By 1963 his condition had gotten bad enough that his mother brought him to a hospital in London, where he received the devastating diagnosis: motor neuron disease, as ALS is called in the United Kingdom. The prognosis was grim and final: rapid wasting of nerves and muscles, near-total paralysis, and death from respiratory failure in three to five years.

Not surprisingly, Hawking grew depressed, seeking solace in the music of Wagner (contrary to some media reports, however, he says he did not go on a drinking binge). And yet he did not disengage from life. Later in 1963 he met Jane Wilde, a student of medieval poetry at the University of London. They fell in love and resolved to make the most of what they both assumed would be a tragically short relationship. In 1965 they married, and Hawking returned to physics with newfound energy.

Also that year, Hawking had an encounter that led to his first major contribution to his field. The occasion was a talk at King’s College in London given by Roger Penrose, an eminent mathematician then at Birkbeck College. Penrose had just proved something remarkable and, for physicists, disturbing: Black holes, the light-trapping chasms in space-time that form in the aftermath of the collapse of massive stars, must all contain singularities—points where space, time, and the very laws of physics fall apart.

Before Penrose’s work, many physicists had regarded singularities as mere curiosities, permitted by Einstein’s theory of general relativity but unlikely to exist. The standard assumption was that a singularity could form only if a perfectly spherical star collapsed with perfect symmetry, the kind of ideal conditions that never occur in the real world. Penrose proved otherwise. He found that any star massive enough to form a black hole upon its death must create a singularity. This realization meant that the laws of physics could not be used to describe everything in the universe; the singularity was a cosmic abyss.

At a subsequent lecture, Hawking grilled Penrose on his ideas. “He asked some awkward questions,” Penrose says. “He was very much on the ball. I had probably been a bit vague in one of my statements, and he was sharpening it up a bit. I was a little alarmed that he noticed something that I had glossed over, and that he was able to spot it so quickly.”

Hawking had just renewed his search for a subject for his Ph.D. thesis, a project he had abandoned after receiving the ALS diagnosis. His condition had stabilized somewhat, and his future no longer looked completely bleak. Now he had his subject: He wanted to apply Penrose’s approach to the cosmos at large.

Physicists have known since 1929 that the universe is expanding. Hawking reasoned that if the history of the universe could be run backward, so that the universe was shrinking instead of expanding, it would behave (mathematically at least) like a collapsing star, the same sort of phenomenon Penrose had analyzed. Hawking’s work was timely. In 1965, physicists working at Bell Labs in New Jersey discovered the cosmic microwave background radiation, the first direct evidence that the universe began with the Big Bang. But was the Big Bang a singularity, or was it a concentrated, hot ball of energy—awesome and mind-bending, but still describable by the laws of physics?

[div class=attrib]More from theSource here.[end-div]

How Much of Your Memory Is True?

[div class=attrib]From Discover:[end-div]

Rita Magil was driving down a Montreal boulevard one sunny morning in 2002 when a car came blasting through a red light straight toward her. “I slammed the brakes, but I knew it was too late,” she says. “I thought I was going to die.” The oncoming car smashed into hers, pushing her off the road and into a building with large cement pillars in front. A pillar tore through the car, stopping only about a foot from her face. She was trapped in the crumpled vehicle, but to her shock, she was still alive.

The accident left Magil with two broken ribs and a broken collarbone. It also left her with post-traumatic stress disorder (PTSD) and a desperate wish to forget. Long after her bones healed, Magil was plagued by the memory of the cement barriers looming toward her. “I would be doing regular things—cooking something, shopping, whatever—and the image would just come into my mind from nowhere,” she says. Her heart would pound; she would start to sweat and feel jumpy all over. It felt visceral and real, like something that was happening at that very moment.

Most people who survive accidents or attacks never develop PTSD. But for some, the event forges a memory that is pathologically potent, erupting into consciousness again and again. “PTSD really can be characterized as a disorder of memory,” says McGill University psychologist Alain Brunet, who studies and treats psychological trauma. “It’s about what you wish to forget and what you cannot forget.” This kind of memory is not misty and watercolored. It is relentless.

More than a year after her accident, Magil saw Brunet’s ad for an experimental treatment for PTSD, and she volunteered. She took a low dose of a common blood-pressure drug, propranolol, that reduces activity in the amygdala, a part of the brain that processes emotions. Then she listened to a taped re-creation of her car accident. She had relived that day in her mind a thousand times. The difference this time was that the drug broke the link between her factual memory and her emotional memory. Propranolol blocks the action of adrenaline, so it prevented her from tensing up and getting anxious. By having Magil think about the accident while the drug was in her body, Brunet hoped to permanently change how she remembered the crash. It worked. She did not forget the accident but was actively able to reshape her memory of the event, stripping away the terror while leaving the facts behind.

Brunet’s experiment emerges from one of the most exciting and controversial recent findings in neuroscience: that we alter our memories just by remembering them. Karim Nader of McGill—the scientist who made this discovery—hopes it means that people with PTSD can cure themselves by editing their memories. Altering remembered thoughts might also liberate people imprisoned by anxiety, obsessive-compulsive disorder, even addiction. “There is no such thing as a pharmacological cure in psychiatry,” Brunet says. “But we may be on the verge of changing that.”

[div class=attrib]More from theSource here.[end-div]

Building an Interstate Highway System for Energy

[div class=attrib]From Discover:[end-div]

President Obama plans to spend billions building it. General Electric is already running slick ads touting the technology behind it. And Greenpeace declares that it is a great idea. But what exactly is a “smart grid”? According to one big-picture description, it is much of what today’s power grid is not, and more of what it must become if the United States is to replace carbon-belching, coal-fired power with renewable energy generated from sun and wind.

Today’s power grids are designed for local delivery, linking customers in a given city or region to power plants relatively nearby. But local grids are ill-suited to distributing energy from the alternative sources of tomorrow. North America’s strongest winds, most intense sunlight, and hottest geothermal springs are largely concentrated in remote regions hundreds or thousands of miles from the big cities that need electricity most. “Half of the population in the United States lives within 100 miles of the coasts, but most of the wind resources lie between North Dakota and West Texas,” says Michael Heyeck, senior vice president for transmission at the utility giant American Electric Power. Worse, those winds constantly ebb and flow, creating a variable supply.

Power engineers are already sketching the outlines of the next-generation electrical grid that will keep our homes and factories humming with clean—but fluctuating—renewable energy. The idea is to expand the grid from the top down by adding thousands of miles of robust new transmission lines, while enhancing communication from the bottom up with electronics enabling millions of homes and businesses to optimize their energy use.

The Grid We Have
When electricity leaves a power plant today, it is shuttled from place to place over high-voltage lines, those cables on steel pylons that cut across landscapes and stretch in a nearly unbroken web from coast to coast. Before it reaches your home or office, the voltage is reduced incrementally by passing through one or more intermediate points, called substations. The substations step the power down until it can flow to outlets in homes and businesses at the safe level of 110 volts.

The vast network of power lines delivering the juice may be interconnected, but pushing electricity all the way from one coast to the other is unthinkable with the present technology. That is because the network is an agglomeration of local systems patched together to exchange relatively modest quantities of surplus power. In fact, these systems form three distinct grids in the United States: the Eastern, Western, and Texas interconnects. Only a handful of transfer stations can move power between the different grids.

[div class=attrib]More from theSource here.[end-div]

A Scientist’s Guide to Finding Alien Life: Where, When, and in What Universe

[div class=attrib]From Discover:[end-div]

Things were not looking so good for alien life in 1976, after the Viking I spacecraft landed on Mars, stretched out its robotic arm, and gathered up a fist-size pile of red dirt for chemical testing. Results from the probe’s built-in lab were anything but encouraging. There were no clear signs of biological activity, and the pictures Viking beamed back showed a bleak, frozen desert world, backing up that grim assessment. It appeared that our best hope for finding life on another planet had blown away like dust in a Martian windstorm.

What a difference 33 years makes. Back then, Mars seemed the only remotely plausible place beyond Earth where biology could have taken root. Today our conception of life in the universe is being turned on its head as scientists are finding a whole lot of inviting real estate out there. As a result, they are beginning to think not in terms of single places to look for life but in terms of “habitable zones”—maps of the myriad places where living things could conceivably thrive beyond Earth. Such abodes of life may lie on other planets and moons throughout our galaxy, throughout the universe, and even beyond.

The pace of progress is staggering. Just last November new studies of Saturn’s moon Enceladus strengthened the case for a reservoir of warm water buried beneath its craggy surface. Nobody had ever thought of this roughly 300-mile-wide icy satellite as anything special—until the Cassini spacecraft witnessed geysers of water vapor blowing out from its surface. Now Enceladus joins Jupiter’s moon Europa on the growing list of unlikely solar system locales that seem to harbor liquid water and, in principle, the ingredients for life.

Astronomers are also closing in on a possibly huge number of Earth-like worlds around other stars. Since the mid-1990s they have already identified roughly 340 extrasolar planets. Most of these are massive gaseous bodies, but the latest searches are turning up ever-smaller worlds. Two months ago the European satellite Corot spotted an extrasolar planet less than twice the diameter of Earth (see “The Inspiring Boom in Super-Earths”), and NASA’s new Kepler probe is poised to start searching for genuine analogues of Earth later this year. Meanwhile, recent discoveries show that microorganisms are much hardier than we thought, meaning that even planets that are not terribly Earth-like might still be suited to biology.

Together, these findings indicate that Mars was only the first step of the search, not the last. The habitable zones of the cosmos are vast, it seems, and they may be teeming with life.

[div class=attrib]More from theSource here.[end-div]

The Biocentric Universe Theory: Life Creates Time, Space, and the Cosmos Itself

[div class=attrib]From Discover:[end-div]

The farther we peer into space, the more we realize that the nature of the universe cannot be understood fully by inspecting spiral galaxies or watching distant supernovas. It lies deeper. It involves our very selves.

This insight snapped into focus one day while one of us (Lanza) was walking through the woods. Looking up, he saw a huge golden orb web spider tethered to the overhead boughs. There the creature sat on a single thread, reaching out across its web to detect the vibrations of a trapped insect struggling to escape. The spider surveyed its universe, but everything beyond that gossamer pinwheel was incomprehensible. The human observer seemed as far-off to the spider as telescopic objects seem to us. Yet there was something kindred: We humans, too, lie at the heart of a great web of space and time whose threads are connected according to laws that dwell in our minds.

Is the web possible without the spider? Are space and time physical objects that would continue to exist even if living creatures were removed from the scene?

Figuring out the nature of the real world has obsessed scientists and philosophers for millennia. Three hundred years ago, the Irish empiricist George Berkeley contributed a particularly prescient observation: The only thing we can perceive are our perceptions. In other words, consciousness is the matrix upon which the cosmos is apprehended. Color, sound, temperature, and the like exist only as perceptions in our head, not as absolute essences. In the broadest sense, we cannot be sure of an outside universe at all.

For centuries, scientists regarded Berkeley’s argument as a philosophical sideshow and continued to build physical models based on the assumption of a separate universe “out there” into which we have each individually arrived. These models presume the existence of one essential reality that prevails with us or without us. Yet since the 1920s, quantum physics experiments have routinely shown the opposite: Results do depend on whether anyone is observing. This is perhaps most vividly illustrated by the famous two-slit experiment. When someone watches a subatomic particle or a bit of light pass through the slits, the particle behaves like a bullet, passing through one hole or the other. But if no one observes the particle, it exhibits the behavior of a wave that can inhabit all possibilities—including somehow passing through both holes at the same time.

Some of the greatest physicists have described these results as so confounding they are impossible to comprehend fully, beyond the reach of metaphor, visualization, and language itself. But there is another interpretation that makes them sensible. Instead of assuming a reality that predates life and even creates it, we propose a biocentric picture of reality. From this point of view, life—particularly consciousness—creates the universe, and the universe could not exist without us.

[div class=attrib]More from theSource here.[end-div]

L’Aquila: The other casualty

18th-century Church of Santa Maria del Suffragio. Image courtesy of The New York Times.

The earthquake in central Italy last week zeroed in on the beautiful medieval hill town of L’Aquila. It claimed the lives of 294 young and old, injured several thousand more, and made tens of thousands homeless. This is a heart-wrenching human tragedy. It’s also a cultural one. The quake razed centuries of L’Aquila’s historical buildings, broke the foundations of many of the town’s churches and public spaces, destroyed countless cultural artifacts, and forever buried much of the town’s irreplaceable art under tons of twisted iron and fractured stone.

Like many small and lesser-known towns in Italy, L’Aquila did not boast a roster of works by “a-list” artists on its walls, ceilings and piazzas; no Michelangelos or Da Vincis here, no works by Giotto or Raphael. And yet, the cultural loss is no less significant, for the quake destroyed much of the common art that the citizens of L’Aquila shared as a social bond. It’s the everyday art that they passed on their way to home or school or work; the fountains in the piazzas, the ornate porticos, the painted building facades, the hand-carved doors, the marble statues on street corners, the frescoes and paintings by local artists hanging on the ordinary walls. It’s this everyday art – the art that surrounded and nourished the citizens of L’Aquila – that is gone.

New York Times columnist Michael Kimmelman put it this way in his April 11, 2009, article:

Italy is not like America. Art isn’t reduced here to a litany of obscene auction prices or lamentations over the bursting bubble of shameless excess. It’s a matter of daily life, linking home and history. Italians don’t visit museums much, truth be told, because they already live in them and can’t live without them. The art world might retrieve a useful lesson from the rubble.

I don’t fully agree with Mr. Kimmelman. There’s plenty of excess and pretentiousness in the salons of Paris, London and even Beijing and Mumbai, not just the serious art houses of New York. And yet, he has accurately observed the plight of L’Aquila. How often have you seen people confronted with the aftermath of a natural (or manmade) tragedy sifting through the remains, looking for a precious artifact – a sentimental photo, a memorable painting, a meaningful gift? These tragic situations often make people realize what is truly precious (aside from life and family and friends), and it’s not the plasma TV.

The Strange Forests that Drink—and Eat—Fog

[div class=attrib]From Discover:[end-div]

On the rugged roadway approaching Fray Jorge National Park in north-central Chile, you are surrounded by desert. This area receives less than six inches of rain a year, and the dry terrain is more suggestive of the badlands of the American Southwest than of the lush landscapes of the Amazon. Yet as the road climbs, there is an improbable shift. Perched atop the coastal mountains here, some 1,500 to 2,000 feet above the level of the nearby Pacific Ocean, are patches of vibrant rain forest covering up to 30 acres apiece. Trees stretch as much as 100 feet into the sky, with ferns, mosses, and bromeliads adorning their canopies. Then comes a second twist: As you leave your car and follow a rising path from the scrub into the forest, it suddenly starts to rain. This is not rain from clouds in the sky above, but fog dripping from the tree canopy. These trees are so efficient at snatching moisture out of the air that the fog provides them with three-quarters of all the water they need.

Understanding these pocket rain forests and how they sustain themselves in the middle of a rugged desert has become the life’s work of a small cadre of scientists who are only now beginning to fully appreciate Fray Jorge’s third and deepest surprise: The trees that grow here do more than just drink the fog. They eat it too.

Fray Jorge lies at the north end of a vast rain forest belt that stretches southward some 600 miles to the tip of Chile. In the more southerly regions of this zone, the forest is wetter, thicker, and more contiguous, but it still depends on fog to survive dry summer conditions. Kathleen C. Weathers, an ecosystem scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York, has been studying the effects of fog on forest ecosystems for 25 years, and she still cannot quite believe how it works. “One step inside a fog forest and it’s clear that you’ve entered a remarkable ecosystem,” she says. “The ways in which trees, leaves, mosses, and bromeliads have adapted to harvest tiny droplets of water that hang in the atmosphere is unparalleled.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Juan J. Armesto/Foundation Senda Darwin Archive[end-div]

Evolution by Intelligent Design

[div class=attrib]From Discover:[end-div]

“There are no shortcuts in evolution,” famed Supreme Court justice Louis Brandeis once said. He might have reconsidered those words if he could have foreseen the coming revolution in biotechnology, including the ability to alter genes and manipulate stem cells. These breakthroughs could bring on an age of directed reproduction and evolution in which humans will bypass the incremental process of natural selection and set off on a high-speed genetic course of their own. Here are some of the latest and greatest advances.

Embryos From the Palm of Your Hand
In as little as five years, scientists may be able to create sperm and egg cells from any cell in the body, enabling infertile couples, gay couples, or sterile people to reproduce. The technique could also enable one person to provide both sperm and egg for an offspring—an act of “ultimate incest,” according to a report from the Hinxton Group, an international consortium of scientists and bioethicists whose members include such heavyweights as Ruth Faden, director of the Johns Hopkins Berman Institute of Bioethics, and Peter J. Donovan, a professor of biochemistry at the University of California at Irvine.

The Hinxton Group’s prediction comes in the wake of recent news that scientists at the University of Wisconsin and Kyoto University in Japan have transformed adult human skin cells into pluripotent stem cells, the powerhouse cells that can self-replicate (perhaps indefinitely) and develop into almost any kind of cell in the body. In evolutionary terms, the ability to change one type of cell into others—including a sperm or egg cell, or even an embryo—means that humans can now wrest control of reproduction away from nature, notes Robert Lanza, a scientist at Advanced Cell Technology in Massachusetts. “With this breakthrough we now have a working technology whereby anyone can pass on their genes to a child by using just a few skin cells,” he says.

[div class=attrib]More from theSource here.[end-div]

Is Quantum Mechanics Controlling Your Thoughts?

[div class=attrib]From Discover:[end-div]

Graham Fleming sits down at an L-shaped lab bench, occupying a footprint about the size of two parking spaces. Alongside him, a couple of off-the-shelf lasers spit out pulses of light just millionths of a billionth of a second long. After snaking through a jagged path of mirrors and lenses, these minuscule flashes disappear into a smoky black box containing proteins from green sulfur bacteria, which ordinarily obtain their energy and nourishment from the sun. Inside the black box, optics manufactured to billionths-of-a-meter precision detect something extraordinary: Within the bacterial proteins, dancing electrons make seemingly impossible leaps and appear to inhabit multiple places at once.

Peering deep into these proteins, Fleming and his colleagues at the University of California at Berkeley and at Washington University in St. Louis have discovered the driving engine of a key step in photosynthesis, the process by which plants and some microorganisms convert water, carbon dioxide, and sunlight into oxygen and carbohydrates. More efficient by far in its ability to convert energy than any operation devised by man, this cascade helps drive almost all life on earth. Remarkably, photosynthesis appears to derive its ferocious efficiency not from the familiar physical laws that govern the visible world but from the seemingly exotic rules of quantum mechanics, the physics of the subatomic world. Somehow, in every green plant or photosynthetic bacterium, the two disparate realms of physics not only meet but mesh harmoniously. Welcome to the strange new world of quantum biology.

On the face of things, quantum mechanics and the biological sciences do not mix. Biology focuses on larger-scale processes, from molecular interactions between proteins and DNA up to the behavior of organisms as a whole; quantum mechanics describes the often-strange nature of electrons, protons, muons, and quarks—the smallest of the small. Many events in biology are considered straightforward, with one reaction begetting another in a linear, predictable way. By contrast, quantum mechanics is fuzzy because when the world is observed at the subatomic scale, it is apparent that particles are also waves: A dancing electron is both a tangible nugget and an oscillation of energy. (Larger objects also exist in particle and wave form, but the effect is not noticeable in the macroscopic world.)

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Dylan Burnette/Olympus Bioscapes Imaging Competition.[end-div]

Invisibility Becomes More than Just a Fantasy

[div class=attrib]From Discover:[end-div]

Two years ago a team of engineers amazed the world (Harry Potter fans in particular) by developing the technology needed to make an invisibility cloak. Now researchers are creating laboratory-engineered wonder materials that can conceal objects from almost anything that travels as a wave. That includes light and sound and—at the subatomic level—matter itself. And lest you think that cloaking applies only to the intangible world, 2008 even brought a plan for using cloaking techniques to protect shorelines from giant incoming waves.

Engineer Xiang Zhang, whose University of California at Berkeley lab is behind much of this work, says, “We can design materials that have properties that never exist in nature.”

These engineered substances, known as metamaterials, get their unusual properties from their size and shape, not their chemistry. Because of the way they are composed, they can shuffle waves—be they of light, sound, or water—away from an object. To cloak something, concentric rings of the metamaterial are placed around the object to be concealed. Tiny structures—like loops or cylinders—within the rings divert the incoming waves around the object, preventing both reflection and absorption. The waves meet up again on the other side, appearing just as they would if nothing were there.

The first invisibility cloak, designed by engineers at Duke University and Imperial College London, worked for only a narrow band of microwaves. Xiang and his colleagues created metamaterials that can bend visible light backward—a much greater challenge because visible light waves are so small, under 700 nanometers wide. That meant the engineers had to devise cloaking components only tens of nanometers apart.

[div class=attrib]More from theSource here.[end-div]

Why I Blog

[div class=attrib]By Andrew Sullivan for The Atlantic[end-div]

The word blog is a conflation of two words: Web and log. It contains in its four letters a concise and accurate self-description: it is a log of thoughts and writing posted publicly on the World Wide Web. In the monosyllabic vernacular of the Internet, Web log soon became the word blog.

This form of instant and global self-publishing, made possible by technology widely available only for the past decade or so, allows for no retroactive editing (apart from fixing minor typos or small glitches) and removes from the act of writing any considered or lengthy review. It is the spontaneous expression of instant thought—impermanent beyond even the ephemera of daily journalism. It is accountable in immediate and unavoidable ways to readers and other bloggers, and linked via hypertext to continuously multiplying references and sources. Unlike any single piece of print journalism, its borders are extremely porous and its truth inherently transitory. The consequences of this for the act of writing are still sinking in.

A ship’s log owes its name to a small wooden board, often weighted with lead, that was for centuries attached to a line and thrown over the stern. The weight of the log would keep it in the same place in the water, like a provisional anchor, while the ship moved away. By measuring the length of line used up in a set period of time, mariners could calculate the speed of their journey (the rope itself was marked by equidistant “knots” for easy measurement). As a ship’s voyage progressed, the course came to be marked down in a book that was called a log.
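The arithmetic behind the chip log is simple enough to sketch. The specific figures below are traditional conventions, not stated in the text: knots were commonly tied every 47 feet 3 inches along the line and timed against a 28-second sand glass.

```python
# Estimate a ship's speed from a chip log reading.
# Assumed traditional values (not from the text above): knots tied
# every 47.25 feet of line, timed with a 28-second sand glass.
NAUTICAL_MILE_FT = 6076.12

def speed_in_knots(line_paid_out_ft: float, seconds: float) -> float:
    """Speed in nautical miles per hour, given line run out over a timed interval."""
    feet_per_second = line_paid_out_ft / seconds
    return feet_per_second * 3600 / NAUTICAL_MILE_FT

# One knot-spacing of line paid out in one glass turn comes to almost
# exactly one nautical mile per hour:
print(round(speed_in_knots(47.25, 28), 3))
```

With those traditional spacings, each knot of rope paid out per glass corresponds to roughly one nautical mile per hour, which is why "knot" came to name the unit of speed itself.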

In journeys at sea that took place before radio or radar or satellites or sonar, these logs were an indispensable source for recording what actually happened. They helped navigators surmise where they were and how far they had traveled and how much longer they had to stay at sea. They provided accountability to a ship’s owners and traders. They were designed to be as immune to faking as possible. Away from land, there was usually no reliable corroboration of events apart from the crew’s own account in the middle of an expanse of blue and gray and green; and in long journeys, memories always blur and facts disperse. A log provided as accurate an account as could be gleaned in real time.

As you read a log, you have the curious sense of moving backward in time as you move forward in pages—the opposite of a book. As you piece together a narrative that was never intended as one, it seems—and is—more truthful. Logs, in this sense, were a form of human self-correction. They amended for hindsight, for the ways in which human beings order and tidy and construct the story of their lives as they look back on them. Logs require a letting-go of narrative because they do not allow for a knowledge of the ending. So they have plot as well as dramatic irony—the reader will know the ending before the writer did.

[div class=attrib]More from theSource here.[end-div]

The LHC Begins Its Search for the “God Particle”

[div class=attrib]From Discover:[end-div]

The most astonishing thing about the Large Hadron Collider (LHC), the ring-shaped particle accelerator that revved up for the first time on September 10 in a tunnel near Geneva, is that it ever got built. Twenty-six nations pitched in more than $8 billion to fund the project. Then CERN—the European Organization for Nuclear Research—enlisted the help of 5,000 scientists and engineers to construct a machine of unprecedented size, complexity, and ambition.

Measuring almost 17 miles in circumference, the LHC uses 9,300 superconducting magnets, cooled by liquid helium to 1.9 kelvin above absolute zero (–271.25º C), to accelerate two streams of protons in opposite directions. It has detectors as big as apartment buildings to find out what happens when these protons cross paths and collide at 99.999999 percent of the speed of light. Yet roughly the same percentage of the human race has no idea what the LHC’s purpose is. Might it destroy the earth by spawning tiny, ravenous black holes? (Not a chance, physicists say. Collisions more energetic than the ones at the LHC happen naturally all the time, and we are still here.)
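That 99.999999-percent figure can be made concrete with the standard relativistic formulas (a back-of-the-envelope sketch; the proton rest energy below is the CODATA value, and results are rounded):

```python
import math

PROTON_REST_ENERGY_GEV = 0.938272  # proton rest-mass energy, GeV

def lorentz_factor(beta: float) -> float:
    """gamma = 1 / sqrt(1 - beta^2), where beta = v/c."""
    return 1.0 / math.sqrt(1.0 - beta * beta)

beta = 0.99999999          # 99.999999 percent of the speed of light
gamma = lorentz_factor(beta)
energy_gev = gamma * PROTON_REST_ENERGY_GEV

# gamma ≈ 7071: each proton carries roughly 7,000 times its rest energy,
# i.e. on the order of 6.6 TeV -- close to the LHC's design beam energy.
print(f"gamma ≈ {gamma:.0f}, proton energy ≈ {energy_gev:.0f} GeV")
```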

In fact, the goal of the LHC is at once simple and grandiose: It was created to discover new particles. One of the most sought of these is the Higgs boson, also known as the God particle because, according to current theory, it endowed all other particles with mass. Or perhaps the LHC will find “supersymmetric” particles, exotic partners to known particles like electrons and quarks. Such a discovery would be a big step toward developing a unified description of the four fundamental forces—the “theory of everything” that would explain all the basic interactions in the universe. As a bonus, some of those supersymmetric particles might turn out to be dark matter, the unseen stuff that seems to hold galaxies together.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib] Image courtesy of Maximillien Brice/CERN.[end-div]

What is art? The answer, from a little bird?

I’ve been pondering a concrete answer to this question, and others like it, for some time. I do wonder “what is art?” and “what is great art?” and “what distinguishes fine art from its non-fine cousins?” and “what makes some art better than other art?”

In formulating my answers to these questions I’ve been looking inward and searching outward. I’ve been digesting the musings of our great philosophers and eminent scholars and authors. I’m close to penning some blog-worthy articles that crystallize my current thinking on the subject, but I’m not quite ready. Not yet. So, in the meantime you and I will have to make do with deep thoughts on the subject of art from some of my friends…

[youtube]pDo_vs3Aip4[/youtube]

The Vogels. Or, how to become a world class art collector on a postal clerk’s salary

I’m missing Art Basel | Miami this year. Last year’s event and surrounding shows displayed so much contemporary (and some modern) art, from so many artists and galleries that my head was buzzing for days afterward. This year I have our art251 gallery to co-run, so I’ve been visiting Art Basel virtually – reading the press releases, following the exhibitors and tuning in to the podcasts and vids, using the great tubes of the internet.

The best story by far to emerge this year from Art Basel | Miami is the continuing odyssey of Herb and Dorothy Vogel, their passion for contemporary art and their outstanding collection. On December 5, the documentary “Herb and Dorothy” was screened at Art Basel’s Art Loves Film night. And so their real-life art fairytale goes something like this…

[youtube]fMuYV_qvyEk[/youtube]

Over the last 40-plus years they have amassed a cutting-edge, world-class collection of contemporary art. In all they have collected around 4,000 works. Over time they have crammed art into every spare inch of space inside their one-bedroom Manhattan apartment. In 1992 they gave around 2,000 important pieces – paintings, drawings and sculptures – to the National Gallery of Art in Washington, D.C. Then, in April of this year the National Gallery announced that an additional 2,500 of the Vogels’ artworks would go to museums across the country: fifty works for fifty states. The National Gallery simply didn’t have enough space to house the Vogels’ immense collection.

So, why is this story so compelling?

Well, it’s compelling because they are just like you and me. They are not super-rich, they have no condo in Aspen, nor do they moor a yacht in Monte Carlo. They’re not hedge fund managers. They didn’t make a fortune before the dot-com bubble burst.

Herb Vogel, 86, is a retired postal clerk and Dorothy Vogel, 76, a retired librarian. They started collecting art in the 1960s and continue to this day. Their plan was simple and guided by two rules: the art had to be affordable, and small enough to fit in their apartment. Early on they decided to use Herb’s income for buying art and Dorothy’s for living expenses. Though now retired, they still follow the plan. They collect art because they love art, and they love finding new art. In Dorothy’s words,

“We didn’t buy this art to make money… We did it to enjoy the art. And you know, it gives you a nice feeling to actually own it, and have it about you. … We started buying art for ourselves, in the 1960s, and from the beginning we chose carefully.”

More telling is Dorothy’s view of the art world, and the New York art scene:

“We never really got close to other people who collect… Most collectors have a lot of money, and they don’t go about their collecting in quite the same way. My husband had wanted to be an artist, and I learned from him. We were living vicariously through the work of every artist we bought. At some point, we realized that collecting this art was a sort of creative act. It became our art, in more ways than one. … I enjoyed the search, I guess. The looking and the finding. When you go to a store, and you’re searching for your size, don’t you get satisfaction when you find it?”

And Herb adds the final words:

“The art itself.”

So, within their modest means and limitations they have proved to be visionaries; many of the artists they supported early on have since become world-renowned. And, they have taken their rightful place among the great art collectors of the world, such as Getty and Rockefeller, and Broad and Saatchi. The Vogels used their limitations to their advantage – helping them focus, rather than being a hindrance. Above all, they used their eyes to find and collect great art, not their ears.

Artists beware! You may be outsourced next to…

China perhaps, or even a dog!

As you know, a vast amount of global manufacturing is outsourced to China. In fact, a fair deal of so-called “original” art now comes from China as well, where art factories of “copyworkers” are busy reproducing works by old masters or, for a few extra yuan, originals in this or that particular style. For instance, the city of Dafen, China, manufactures more “Van Goghs” in a couple of weeks than the real Van Gogh created in his entire lifetime. Dafen produces some great bargains — $2 for an unframed old master, $3 for a custom version (prices before enormous markup) — if you like to buy your art by the square foot.

You’ve probably also seen miscellaneous watercolors emanating from talented elephants in Thailand, the late Congo’s tempera paintings auctioned at Bonhams, or the German artist chimpanzee who, with her handlers, recently fooled an expert into believing her work was that of Ernst Wilhelm Nay.

Well, now comes a second biography of Tillamook Cheddar, or Tillie, the most successful animal painter in the history of, well, animal painters. Tillie, a Jack Russell terrier from Brooklyn, NY, has been painting for around 7 years, and has headlined 17 solo shows across the country and in Europe.

Despite these somewhat disturbing developments, I think artists will be around for some time. But, what about gallerists and art dealers? Could you see the Toshiba robot or a couple of (smart) lab rats or an Art-o-mat replacing your friendly gallery owners? Please don’t answer this one!

Portrait of The Dog. Image courtesy of T.Cheddar.

Robert Rauschenberg, American Artist, Dies at 82

[div class=attrib]From The New York Times:[end-div]

Robert Rauschenberg, the irrepressibly prolific American artist who time and again reshaped art in the 20th century, died on Monday night at his home on Captiva Island, Fla. He was 82.

The cause was heart failure, said Arne Glimcher, chairman of PaceWildenstein, the Manhattan gallery that represents Mr. Rauschenberg.

Mr. Rauschenberg’s work gave new meaning to sculpture. “Canyon,” for instance, consisted of a stuffed bald eagle attached to a canvas. “Monogram” was a stuffed goat girdled by a tire atop a painted panel. “Bed” entailed a quilt, sheet and pillow, slathered with paint, as if soaked in blood, framed on the wall. All became icons of postwar modernism.

A painter, photographer, printmaker, choreographer, onstage performer, set designer and, in later years, even a composer, Mr. Rauschenberg defied the traditional idea that an artist stick to one medium or style. He pushed, prodded and sometimes reconceived all the mediums in which he worked.

Building on the legacies of Marcel Duchamp, Kurt Schwitters, Joseph Cornell and others, he helped obscure the lines between painting and sculpture, painting and photography, photography and printmaking, sculpture and photography, sculpture and dance, sculpture and technology, technology and performance art — not to mention between art and life.

Mr. Rauschenberg was also instrumental in pushing American art onward from Abstract Expressionism, the dominant movement when he emerged, during the early 1950s. He became a transformative link between artists like Jackson Pollock and Willem de Kooning and those who came next, artists identified with Pop, Conceptualism, Happenings, Process Art and other new kinds of art in which he played a signal role.

No American artist, Jasper Johns once said, invented more than Mr. Rauschenberg. Mr. Johns, John Cage, Merce Cunningham and Mr. Rauschenberg, without sharing exactly the same point of view, collectively defined this new era of experimentation in American culture.

Apropos of Mr. Rauschenberg, Cage once said, “Beauty is now underfoot wherever we take the trouble to look.” Cage meant that people had come to see, through Mr. Rauschenberg’s efforts, not just that anything, including junk on the street, could be the stuff of art (this wasn’t itself new), but that it could be the stuff of an art aspiring to be beautiful — that there was a potential poetics even in consumer glut, which Mr. Rauschenberg celebrated.

“I really feel sorry for people who think things like soap dishes or mirrors or Coke bottles are ugly,” he once said, “because they’re surrounded by things like that all day long, and it must make them miserable.”

The remark reflected the optimism and generosity of spirit that Mr. Rauschenberg became known for. His work was likened to a St. Bernard: uninhibited and mostly good-natured. He could be the same way in person. When he became rich, he gave millions of dollars to charities for women, children, medical research, other artists and Democratic politicians.

A brash, garrulous, hard-drinking, open-faced Southerner, he had a charm and peculiar Delphic felicity with language that masked a complex personality and an equally multilayered emotional approach to art, which evolved as his stature did. Having begun by making quirky, small-scale assemblages out of junk he found on the street in downtown Manhattan, he spent increasing time in his later years, after he had become successful and famous, on vast international, ambassadorial-like projects and collaborations.

Conceived in his immense studio on the island of Captiva, off southwest Florida, these projects were of enormous size and ambition; for many years he worked on one that grew literally to exceed the length of its title, “The 1/4 Mile or 2 Furlong Piece.” They generally did not live up to his earlier achievements. Even so, he maintained an equanimity toward the results. Protean productivity went along with risk, he felt, and risk sometimes meant failure.

The process — an improvisatory, counterintuitive way of doing things — was always what mattered most to him. “Screwing things up is a virtue,” he said when he was 74. “Being correct is never the point. I have an almost fanatically correct assistant, and by the time she re-spells my words and corrects my punctuation, I can’t read what I wrote. Being right can stop all the momentum of a very interesting idea.”

This attitude also inclined him, as the painter Jack Tworkov once said, “to see beyond what others have decided should be the limits of art.”

He “keeps asking the question — and it’s a terrific question philosophically, whether or not the results are great art,” Mr. Tworkov said, “and his asking it has influenced a whole generation of artists.”

A Wry, Respectful Departure

That generation was the one that broke from Pollock and company. Mr. Rauschenberg maintained a deep but mischievous respect for Abstract Expressionist heroes like de Kooning and Barnett Newman. Famously, he once painstakingly erased a drawing by de Kooning, an act both of destruction and devotion. Critics regarded the all-black paintings and all-red paintings he made in the early 1950s as spoofs of de Kooning and Pollock. The paintings had roiling, bubbled surfaces made from scraps of newspapers embedded in paint.

But these were just as much homages as they were parodies. De Kooning, himself a parodist, had incorporated bits of newspapers in pictures, and Pollock stuck cigarette butts to canvases.

Mr. Rauschenberg’s “Automobile Tire Print,” from the early 1950s — resulting from Cage’s driving an inked tire of a Model A Ford over 20 sheets of white paper — poked fun at Newman’s famous “zip” paintings.

At the same time, Mr. Rauschenberg was expanding on Newman’s art. The tire print transformed Newman’s zip — an abstract line against a monochrome backdrop with spiritual pretensions — into an artifact of everyday culture, which for Mr. Rauschenberg had its own transcendent dimension.

Mr. Rauschenberg frequently alluded to cars and spaceships, even incorporating real tires and bicycles into his art. This partly reflected his own restless, peripatetic imagination. The idea of movement was logically extended when he took up dance and performance.

There was, beneath this, a darkness to many of his works, notwithstanding their irreverence. “Bed” (1955) was gothic. The all-black paintings were solemn and shuttered. The red paintings looked charred, with strips of fabric akin to bandages, from which paint dripped like blood. “Interview” (1955), which resembled a cabinet or closet with a door, enclosing photos of bullfighters, a pinup, a Michelangelo nude, a fork and a softball, suggested some black-humored encoded erotic message.

There were many other images of downtrodden and lonely people, rapt in thought; pictures of ancient frescoes, out of focus as if half remembered; photographs of forlorn, neglected sites; bits and pieces of faraway places conveying a kind of nostalgia or remoteness. In bringing these things together, the art implied consolation.

Mr. Rauschenberg, who knew that not everybody found it easy to grasp the open-endedness of his work, once described to the writer Calvin Tomkins an encounter with a woman who had reacted skeptically to “Monogram” (1955-59) and “Bed” in his 1963 retrospective at the Jewish Museum, one of the events that secured Mr. Rauschenberg’s reputation: “To her, all my decisions seemed absolutely arbitrary — as though I could just as well have selected anything at all — and therefore there was no meaning, and that made it ugly.

[div class=attrib]More from theSource here.[end-div]

Art Review | ‘Color as Field’: Weightless Color, Floating Free

[div class=attrib]From The New York Times:[end-div]

Starting in the late 1950s the great American art critic Clement Greenberg had eyes only for Color Field painting. This was the lighter-than-air abstract style, with its emphasis on stain painting and visual gorgeousness, introduced by Helen Frankenthaler and followed by Morris Louis, Kenneth Noland and Jules Olitski.

With the insistent support of Greenberg and his acolytes, Color Field soared as the next big, historically inevitable thing after Jackson Pollock. Then over the course of the 1970s it crashed and burned and dropped from sight. Pop and Minimal Art, which Greenberg disparaged, had more diverse critical support and greater influence on younger artists. Then Post-Minimalism came along, exploding any notion of art’s neatly linear progression.

Now Color Field painting — or as Greenberg preferred to call it, Post-Painterly Abstraction — is being reconsidered in a big way in “Color as Field: American Painting, 1950-1975,” a timely, provocative — if far from perfect — exhibition at the Smithsonian American Art Museum here. It has been organized by the American Federation of Arts and selected by the independent curator and critic Karen Wilkin. She and Carl Belz, former director of the Rose Art Museum at Brandeis University, have written essays for the catalog.

It is wonderful to see some of this work float free of the Greenbergian claims for greatness and inevitability (loyally retraced by Ms. Wilkin in her essay), and float it does, at least the best of it. The exhibition begins with the vista of Mr. Olitski’s buoyant, goofily sexy “Cleopatra Flesh” of 1962, looming at the end of a long hallway. The work sums up the fantastic soft power that these artists could elicit from brilliant color, scale and judicious amounts of pristine raw canvas. A huge blue motherly curve nearly encircles a large black planet while luring a smaller red planet into the fold, calling to mind an abstracted stuffed toy.

It is a perfect, exhilarating example of what Mr. Belz calls “one-shot painting” and likens to jazz improvisation. Basic to the thrill is our understanding that the stain painting technique involved a few rapid, skilled but unrehearsed gestures, and that raw canvas offered no chance for revision. “Cleopatra Flesh” is an act of joyful derring-do.

The “one-shot painting” stain technique of color field was the innovation of Helen Frankenthaler, first accomplished in “Mountains and Sea,” made in 1952, when she was 24 and unknown. (It is not in this exhibition, but the method is conveyed by her 1957 “Seven Types of Ambiguity,” with its great gray splashes punctuated by peninsulas of red, yellow and blue.) The technique negotiated a common ground between Pollock’s heroic no-brush drip style and the expanses of saturated color favored especially by Barnett Newman and Mark Rothko.

In Greenberg’s eyes the torch of Abstract Expressionism (the cornerstone of his power as a critic) was being carried forward by Ms. Frankenthaler’s spirited reformulation, followed by Mr. Louis’s languid pours; Mr. Noland’s radiant targets; Mr. Olitski’s carefully controlled stains and (later) diaphanous sprayed surfaces. And this continuity confirmed the central premise of Greenbergian formalism: that all modern art mediums would be meekly reduced to their essences; for painting that meant abstractness, flatness and weightless color. As you can imagine, that didn’t leave anyone, not even the anointed few, with much to do.

Revisionist this show is not. Its 38 canvases represent 17 painters, including a selection of works by Abstract Expressionist precursors titled “Origins of Color Field.” The elders tend to look as light and jazzy as their juniors; Adolph Gottlieb, Hans Hofmann and Robert Motherwell, all present, were ultimately as much a part of Color Field as Abstract Expressionism. But even Newman’s “Horizontal Light” of 1949 seems undeniably flashy; its field of dark red is split by a narrow aqua band, called a zip, that seems to speed across the canvas. Rothko’s 1951 “Number 18,” with its shifting borders and cloud-squares of white, red and pink, has a cheerful, scintillating forthrightness.

This forthrightness expands into dazzling instantaneousness in the works of Ms. Frankenthaler and Mr. Louis, where it sometimes seems that the paint is still wet and seeping into the canvas. Ms. Frankenthaler’s high-wire act is especially evident in the jagged pools and terraces of color in the aptly titled “Flood” and in “Interior Landscape,” which centers on a single, exuberant splash. Mr. Louis manages a similar tension while seeming completely relaxed. In “Floral V,” where an inky black washes like a wave over a bouquet of brilliantly colored plumes, he achieves a silent grandeur, like a Frankenthaler with the sound off.

After the Frankenthaler and Louis works, this show dwindles into a subdued free-for-all, as most artists settle into more predetermined ways of working. Often big scale and simple composition add up to emptiness, especially when the signs of derring-do recede. Both Mr. Olitski and especially Mr. Noland are poorly represented. In Mr. Noland’s square “Space Jog,” Newman’s zips run perpendicular to one another, forming a pastel plaid on a sprayed ground of sky blue, like a Mondrian bed sheet.

[div class=attrib]More from theSource here.[end-div]

Quantum Trickery: Testing Einstein’s Strangest Theory

[div class=attrib]From the New York Times:[end-div]

Einstein said there would be days like this.

This fall scientists announced that they had put a half dozen beryllium atoms into a “cat state.”

No, they were not sprawled along a sunny windowsill. To a physicist, a “cat state” is the condition of being two diametrically opposed conditions at once, like black and white, up and down, or dead and alive.

These atoms were each spinning clockwise and counterclockwise at the same time. Moreover, like miniature Rockettes they were all doing whatever it was they were doing together, in perfect synchrony. Should one of them realize, like the cartoon character who runs off a cliff and doesn’t fall until he looks down, that it is in a metaphysically untenable situation and decide to spin only one way, the rest would instantly fall in line, whether they were across a test tube or across the galaxy.

The idea that measuring the properties of one particle could instantaneously change the properties of another one (or a whole bunch) far away is strange to say the least – almost as strange as the notion of particles spinning in two directions at once. The team that pulled off the beryllium feat, led by Dietrich Leibfried at the National Institute of Standards and Technology, in Boulder, Colo., hailed it as another step toward computers that would use quantum magic to perform calculations.
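The “cat state” described here is what physicists call a GHZ state: an equal superposition of all-atoms-up and all-atoms-down. The striking statistical signature — measure one atom, and the rest instantly agree — can be sketched as a toy simulation (pure Python, no quantum library; this reproduces the measurement statistics, not the physics of real trapped ions):

```python
import random

def measure_ghz(n_atoms: int) -> list[str]:
    """Measure an n-atom GHZ (cat) state, |all clockwise> + |all counterclockwise>.

    Measuring the first atom collapses the whole register, so every
    remaining atom is guaranteed to give the same result.
    """
    first = random.choice(["clockwise", "counterclockwise"])  # 50/50 outcome
    return [first] * n_atoms  # perfect correlation across all atoms

# Six beryllium atoms, as in the NIST experiment: all outcomes agree.
outcomes = measure_ghz(6)
print(outcomes)
assert len(set(outcomes)) == 1  # the atoms never disagree
```

What the toy version cannot capture — and what Bell-test experiments establish — is that no pre-agreed “hidden” answer stored in each atom can reproduce all the correlations quantum mechanics predicts; that is precisely the loophole Einstein, Podolsky and Rosen hoped for and later experiments closed.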

But it also served as another demonstration of how weird the world really is according to the rules, known as quantum mechanics.

The joke is on Albert Einstein, who, back in 1935, dreamed up this trick of synchronized atoms – “spooky action at a distance,” as he called it – as an example of the absurdity of quantum mechanics.

“No reasonable definition of reality could be expected to permit this,” he, Boris Podolsky and Nathan Rosen wrote in a paper in 1935.

Today that paper, written when Einstein was a relatively ancient 56 years old, is the most cited of Einstein’s papers. But far from demolishing quantum theory, that paper wound up as the cornerstone for the new field of quantum information.

Nary a week goes by that does not bring news of another feat of quantum trickery once only dreamed of in thought experiments: particles (or at least all their properties) being teleported across the room in a microscopic version of Star Trek beaming; electrical “cat” currents that circle a loop in opposite directions at the same time; more and more particles farther and farther apart bound together in Einstein’s spooky embrace now known as “entanglement.” At the University of California, Santa Barbara, researchers are planning an experiment in which a small mirror will be in two places at once.

[div class=attrib]More from theSource here.[end-div]