Social networking: Failure to connect

[div class=attrib]From the Guardian:[end-div]

The first time I joined Facebook, I had to quit again immediately. It was my first week of university. I was alone, along with thousands of other students, in a sea of club nights and quizzes and tedious conversations about other people’s A-levels. This was back when the site was exclusively for students. I had been told, in no uncertain terms, that joining was mandatory. Failure to do so was a form of social suicide worse even than refusing to drink alcohol. I had no choice. I signed up.

Users of Facebook will know the site has one immutable feature. You don’t have to post a profile picture, or share your likes and dislikes with the world, though both are encouraged. You can avoid the news feed, the apps, the tweet-like status updates. You don’t even have to choose a favourite quote. The one thing you cannot get away from is your friend count. It is how Facebook keeps score.

Five years ago, on probably the loneliest week of my life, my newly created Facebook page looked me square in the eye and announced: “You have 0 friends.” I closed the account.

Facebook is not a good place for a lonely person, and not just because of how precisely it quantifies your isolation. The news feed, the default point of entry to the site, is a constantly updated stream of your every friend’s every activity, opinion and photograph. It is a Twitter feed in glorious technicolour, complete with pictures, polls and videos. It exists to make sure you know exactly how much more popular everyone else is, casually informing you that 14 of your friends were tagged in the album “Fun without Tom Meltzer”. It can be, to say the least, disheartening. Without a real-world social network with which to interact, social networking sites act as proof of the old cliché: you’re never so alone as when you’re in a crowd.

The pressures put on teenagers by sites such as Facebook are well-known. Reports of cyber-bullying, happy-slapping, even self-harm and suicide attempts motivated by social networking sites have become increasingly common in the eight years since Friendster – and then MySpace, Bebo and Facebook – launched. But the subtler side-effects for a generation that has grown up with these sites are only now being felt. In March this year, the NSPCC published a detailed breakdown of calls made to ChildLine in the last five years. Though overall the number of calls from children and teenagers had risen by just 10%, calls about loneliness had nearly tripled, from 1,853 five years ago to 5,525 in 2009. Among boys, the number of calls about loneliness was more than five times higher than it had been in 2004.

This is not just a teenage problem. In May, the Mental Health Foundation released a report called The Lonely Society? Its survey found that 53% of 18-34-year-olds had felt depressed because of loneliness, compared with just 32% of people over 55. The question of why was, in part, answered by another of the report’s findings: nearly a third of young people said they spent too much time communicating online and not enough in person.

[div class=attrib]More from theSource here.[end-div]

What is HTML5

There is much going on in the world of internet and web standards, including the gradual roll-out of IPv6 and HTML5. HTML5 is a much more functional markup language than its predecessors and is better suited to developing richer user interfaces and interactions. The major highlights of HTML5 are captured in the infographic below.

[div class=attrib]From Focus.com:[end-div]

[div class=attrib]More from theSource here.[end-div]

Sergey Brin’s Search for a Parkinson’s Cure

[div class=attrib]From Wired:[end-div]

Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.

Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)

The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”

There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.

Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.

That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.

“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.

Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries an LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
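Brin’s back-of-the-envelope arithmetic is easy to retrace. Here is a minimal sketch of the calculation described above, in Python; the halving factors are his own rough guesses as quoted in the article, not measured quantities:

```python
# Brin's back-of-the-envelope Parkinson's risk estimate, as described above.
# The factors are his rough guesses, not clinical figures.
baseline = 0.50                         # he "splits the difference" of the 30-75% LRRK2 range
after_lifestyle = baseline * 0.5        # diet, exercise, green tea: risk "down by half"
after_research = after_lifestyle * 0.5  # expected progress in neuroscience: half again

print(f"after lifestyle changes: {after_lifestyle:.1%}")  # 25.0%
print(f"after research progress: {after_research:.1%}")   # 12.5%, which the article rounds to about 13%
# His $50 million of funding, he figures, pushes the number to "somewhere under
# 10 percent", against an average American risk of about 1 percent.
```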

It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.

[div class=attrib]More from theSource here.[end-div]

The internet: Everything you ever need to know

[div class=attrib]From The Observer:[end-div]

In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.

A funny thing happened to us on the way to the future. The internet went from being something exotic to being boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.

The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.

Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.

All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?

So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.

But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself by merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.

But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”

[div class=attrib]More from theSource here.[end-div]

The Evolution of the Physicist’s Picture of Nature

[div class=attrib]From Scientific American:[end-div]

Editor’s Note: We are republishing this article by Paul Dirac from the May 1963 issue of Scientific American, as it might be of interest to listeners to the June 24, 2010, and June 25, 2010 Science Talk podcasts, featuring award-winning writer and physicist Graham Farmelo discussing The Strangest Man, his biography of the Nobel Prize-winning British theoretical physicist.

In this article I should like to discuss the development of general physical theory: how it developed in the past and how one may expect it to develop in the future. One can look on this continual development as a process of evolution, a process that has been going on for several centuries.

The first main step in this process of evolution was brought about by Newton. Before Newton, people looked on the world as being essentially two-dimensional (the two dimensions in which one can walk about), and the up-and-down dimension seemed to be something essentially different. Newton showed how one can look on the up-and-down direction as being symmetrical with the other two directions, by bringing in gravitational forces and showing how they take their place in physical theory. One can say that Newton enabled us to pass from a picture with two-dimensional symmetry to a picture with three-dimensional symmetry.

Einstein made another step in the same direction, showing how one can pass from a picture with three-dimensional symmetry to a picture with four-dimensional symmetry. Einstein brought in time and showed how it plays a role that is in many ways symmetrical with the three space dimensions. However, this symmetry is not quite perfect. With Einstein’s picture one is led to think of the world from a four-dimensional point of view, but the four dimensions are not completely symmetrical. There are some directions in the four-dimensional picture that are different from others: directions that are called null directions, along which a ray of light can move; hence the four-dimensional picture is not completely symmetrical. Still, there is a great deal of symmetry among the four dimensions. The only lack of symmetry, so far as concerns the equations of physics, is in the appearance of a minus sign in the equations with respect to the time dimension as compared with the three space dimensions [see top equation in diagram].
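The diagram referred to is not reproduced here. Its “top equation” is presumably the familiar four-dimensional interval of special relativity, in which the single minus sign attaches to the time term (the overall sign being a matter of convention):

```latex
% The invariant interval of special relativity (one common convention):
% the three space terms carry one sign and the time term the other --
% the lone asymmetry among the four dimensions that Dirac refers to.
s^2 = x^2 + y^2 + z^2 - c^2 t^2
```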

[Figure: four-dimensional symmetry equation and Schrödinger’s equations]

We have, then, the development from the three-dimensional picture of the world to the four-dimensional picture. The reader will probably not be happy with this situation, because the world still appears three-dimensional to his consciousness. How can one bring this appearance into the four-dimensional picture that Einstein requires the physicist to have?

What appears to our consciousness is really a three-dimensional section of the four-dimensional picture. We must take a three-dimensional section to give us what appears to our consciousness at one time; at a later time we shall have a different three-dimensional section. The task of the physicist consists largely of relating events in one of these sections to events in another section referring to a later time. Thus the picture with four-dimensional symmetry does not give us the whole situation. This becomes particularly important when one takes into account the developments that have been brought about by quantum theory. Quantum theory has taught us that we have to take the process of observation into account, and observations usually require us to bring in the three-dimensional sections of the four-dimensional picture of the universe.

The special theory of relativity, which Einstein introduced, requires us to put all the laws of physics into a form that displays four-dimensional symmetry. But when we use these laws to get results about observations, we have to bring in something additional to the four-dimensional symmetry, namely the three-dimensional sections that describe our consciousness of the universe at a certain time.

Einstein made another most important contribution to the development of our physical picture: he put forward the general theory of relativity, which requires us to suppose that the space of physics is curved. Before this, physicists had always worked with a flat space, the three-dimensional flat space of Newton which was then extended to the four-dimensional flat space of special relativity. General relativity made a really important contribution to the evolution of our physical picture by requiring us to go over to curved space. The general requirements of this theory mean that all the laws of physics can be formulated in curved four-dimensional space, and that they show symmetry among the four dimensions. But again, when we want to bring in observations, as we must if we look at things from the point of view of quantum theory, we have to refer to a section of this four-dimensional space. With the four-dimensional space curved, any section that we make in it also has to be curved, because in general we cannot give a meaning to a flat section in a curved space. This leads us to a picture in which we have to take curved three-dimensional sections in the curved four-dimensional space and discuss observations in these sections.

During the past few years people have been trying to apply quantum ideas to gravitation as well as to the other phenomena of physics, and this has led to a rather unexpected development, namely that when one looks at gravitational theory from the point of view of the sections, one finds that there are some degrees of freedom that drop out of the theory. The gravitational field is a tensor field with 10 components. One finds that six of the components are adequate for describing everything of physical importance and the other four can be dropped out of the equations. One cannot, however, pick out the six important components from the complete set of 10 in any way that does not destroy the four-dimensional symmetry. Thus if one insists on preserving four-dimensional symmetry in the equations, one cannot adapt the theory of gravitation to a discussion of measurements in the way quantum theory requires without being forced to a more complicated description than is needed by the physical situation. This result has led me to doubt how fundamental the four-dimensional requirement in physics is. A few decades ago it seemed quite certain that one had to express the whole of physics in four-dimensional form. But now it seems that four-dimensional symmetry is not of such overriding importance, since the description of nature sometimes gets simplified when one departs from it.
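Dirac’s count of ten components is simply the arithmetic of a symmetric tensor in four dimensions; a brief note on that step (the split into six physical and four removable components is his statement, quoted above):

```latex
% A symmetric tensor g_{\mu\nu} = g_{\nu\mu} in n dimensions has n(n+1)/2
% independent components, so for n = 4:
\frac{n(n+1)}{2} \;=\; \frac{4 \cdot 5}{2} \;=\; 10
```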

Now I should like to proceed to the developments that have been brought about by quantum theory. Quantum theory is the discussion of very small things, and it has formed the main subject of physics for the past 60 years. During this period physicists have been amassing quite a lot of experimental information and developing a theory to correspond to it, and this combination of theory and experiment has led to important developments in the physicist’s picture of the world.

[div class=attrib]More from theSource here.[end-div]

What Is I.B.M.’s Watson?

[div class=attrib]From The New York Times:[end-div]

“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.
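To make the contrast concrete: a search engine ranks documents that merely mention the query terms, while a question-answering system must go a step further and commit to a single answer. The toy sketch below has nothing to do with Watson’s actual architecture; it only illustrates that extra step, using crude word overlap for retrieval and an equally crude rule for answer extraction:

```python
# Toy contrast between document retrieval and question answering. This is NOT
# how Watson works; it only illustrates the difference between pointing at a
# document and committing to a single answer.
PASSAGES = [
    "Dubai is home to the Burj Khalifa, the tallest tower in the world.",
    "Toronto's CN Tower was once the world's tallest free-standing structure.",
]

def retrieve(clue: str, passages: list[str]) -> list[str]:
    """Search-engine step: rank passages by crude word overlap with the clue."""
    clue_words = set(clue.lower().split())
    return sorted(passages,
                  key=lambda p: len(clue_words & set(p.lower().split())),
                  reverse=True)

def answer(clue: str, passages: list[str]) -> str:
    """QA step: commit to one candidate -- here, crudely, the first capitalized
    term in the best-matching passage that is not already part of the clue."""
    clue_words = set(clue.lower().split())
    best_passage = retrieve(clue, passages)[0]
    for word in best_passage.replace(",", "").replace(".", "").split():
        if word[0].isupper() and word.lower() not in clue_words:
            return word
    return ""

if __name__ == "__main__":
    clue = "They say it's the tallest tower in the world"
    print(retrieve(clue, PASSAGES)[0])  # a whole document, as a search engine would return
    print(answer(clue, PASSAGES))       # a single committed answer: "Dubai"
```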

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.

[div class=attrib]More from theSource here.[end-div]

Mind Over Mass Media

[div class=attrib]From the New York Times:[end-div]

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Upon Nothing

[div class=attrib]By Robert Pinsky for Slate:[end-div]

The quality of wit, like the Hindu god Shiva, both creates and destroys—sometimes, both at once: The flash of understanding negates a trite or complacent way of thinking, and that stroke of obliteration at the same time creates a new form of insight and a laugh of recognition.

Also like Shiva, wit dances. Leaping gracefully, balancing speed and poise, it can re-embody and refresh old material. Negation itself, for example—verbal play with words like nothing and nobody: In one of the oldest jokes in literature, when the menacing Polyphemus asks Odysseus for his name, Odysseus tricks the monster by giving his name as the Greek equivalent of Nobody.

Another, immensely moving version of that Homeric joke (it may have been old even when Homer used it) is central to the best-known song of the great American comic Bert Williams (1874-1922). You can hear Williams’ funny, heart-rending, subtle rendition of the song (music by Williams, lyrics by Alex Rogers) at the University of California’s Cylinder Preservation and Digitization site.

The lyricist Rogers, I suspect, was aided by Williams’ improvisations as well as his virtuoso delivery. The song’s language is sharp and plain. The plainness, an almost throw-away surface, allows Williams to weave the refrain-word “Nobody” into an intricate fabric of jaunty pathos, savage lament, sly endurance—all in three syllables, with the dialect bent and stretched and released:

When life seems full of clouds and rain,
And I am full of nothing and pain,
Who soothes my thumpin’, bumpin’ brain?
Nobody.

When winter comes with snow and sleet,
And me with hunger, and cold feet—
Who says, “Here’s twenty-five cents
Go ahead and get yourself somethin’ to eat”?
Nobody.

I ain’t never done nothin’ to Nobody.
I ain’t never got nothin’ from Nobody, no time.
And, until I get somethin’ from somebody sometime,
I’ll never do nothin’ for Nobody, no time.

In his poem “Upon Nothing,” John Wilmot (1647-80), also known as the earl of Rochester, deploys wit as a flashing blade of skepticism, slashing away not only at a variety of human behaviors and beliefs, not only at false authorities and hollow reverences, not only at language, but at knowledge—at thought itself:

“Upon Nothing”

………………………1
Nothing, thou elder brother ev’n to Shade
Thou hadst a being ere the world was made,
And, well fixed, art alone of ending not afraid.

………………………2
Ere Time and Place were, Time and Place were not,
When primitive Nothing Something straight begot,
Then all proceeded from the great united What.

………………………3
Something, the general attribute of all,
Severed from thee, its sole original,
Into thy boundless self must undistinguished fall.

………………………4
Yet Something did thy mighty power command,
And from thy fruitful emptiness’s hand
Snatched men, beasts, birds, fire, water, air, and land.

………………………5
Matter, the wicked’st offspring of thy race,
By Form assisted, flew from thy embrace
And rebel Light obscured thy reverend dusky face.

………………………6
With Form and Matter, Time and Place did join,
Body, thy foe, with these did leagues combine
To spoil thy peaceful realm and ruin all thy line.

………………………7
But turncoat Time assists the foe in vain,
And bribed by thee destroys their short-lived reign,
And to thy hungry womb drives back thy slaves again.

………………………8
Though mysteries are barred from laic eyes,
And the divine alone with warrant pries
Into thy bosom, where thy truth in private lies;

………………………9
Yet this of thee the wise may truly say:
Thou from the virtuous nothing doest delay,
And to be part of thee the wicked wisely pray.

………………………10
Great Negative, how vainly would the wise
Enquire, define, distinguish, teach, devise,
Didst thou not stand to point their blind philosophies.

………………………11
Is or Is Not, the two great ends of Fate,
And true or false, the subject of debate,
That perfect or destroy the vast designs of state;

………………………12
When they have racked the politician’s breast,
Within thy bosom most securely rest,
And when reduced to thee are least unsafe, and best.

………………………13
But, Nothing, why does Something still permit
That sacred monarchs should at council sit
With persons highly thought, at best, for nothing fit;

………………………14
Whilst weighty something modestly abstains
From princes’ coffers, and from Statesmen’s brains,
And nothing there, like stately Nothing reigns?

………………………15
Nothing, who dwell’st with fools in grave disguise,
For whom they reverend shapes and forms devise,
Lawn-sleeves, and furs, and gowns, when they like thee look wise.

………………………16
French truth, Dutch prowess, British policy,
Hibernian learning, Scotch civility,
Spaniards’ dispatch, Danes’ wit, are mainly seen in thee.

………………………17
The great man’s gratitude to his best friend,
Kings’ promises, whores’ vows, towards thee they bend,
Flow swiftly into thee, and in thee ever end.

[div class=attrib]More from theSource here.[end-div]

Immaculate creation: birth of the first synthetic cell

[div class=attrib]From the New Scientist:[end-div]

For the first time, scientists have created life from scratch – well, sort of. Craig Venter’s team at the J. Craig Venter Institute in Rockville, Maryland, and San Diego, California, has made a bacterial genome from smaller DNA subunits and then transplanted the whole thing into another cell. So what exactly is the science behind the first synthetic cell, and what is its broader significance?

What did Venter’s team do?

The cell was created by stitching together the genome of a goat pathogen called Mycoplasma mycoides from smaller stretches of DNA synthesised in the lab, and inserting the genome into the empty cytoplasm of a related bacterium. The transplanted genome booted up in its host cell, and then divided over and over to make billions of M. mycoides cells.

Venter and his team have previously accomplished both feats – creating a synthetic genome and transplanting a genome from one bacterium into another – but this time they have combined the two.

“It’s the first self-replicating cell on the planet whose parent is a computer,” says Venter, referring to the fact that his team converted a cell’s genome that existed as data on a computer into a living organism.

How can they be sure that the new bacteria are what they intended?

Venter and his team introduced several distinctive markers into their synthesised genome. All of them were found in the synthetic cell when it was sequenced.

These markers do not make any proteins, but they contain the names of 46 scientists on the project and several quotations written out in a secret code. The markers also contain the key to the code.

Crack the code and you can read the messages, but as a hint, Venter revealed the quotations: “To live, to err, to fall, to triumph, to recreate life out of life,” from James Joyce’s A Portrait of the Artist as a Young Man; “See things not as they are but as they might be,” which comes from American Prometheus, a biography of nuclear physicist Robert Oppenheimer; and Richard Feynman’s famous words: “What I cannot build I cannot understand.”
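Venter’s actual cipher is left for readers to crack, but the general idea of a DNA watermark is straightforward: map text onto strings of the four bases A, C, G and T, then splice the result into a non-coding stretch of the genome. A toy encoding, not the scheme the institute used, might look like this:

```python
# Toy DNA "watermark": encode text as DNA bases and decode it back.
# This is an illustration only, not the cipher embedded in the synthetic
# M. mycoides genome.
BASES = "ACGT"

def encode(text: str) -> str:
    """Each byte becomes four bases (its base-4 digits, most significant first)."""
    out = []
    for byte in text.encode("ascii"):
        digits = [(byte >> shift) & 0b11 for shift in (6, 4, 2, 0)]
        out.append("".join(BASES[d] for d in digits))
    return "".join(out)

def decode(dna: str) -> str:
    """Inverse of encode: read the sequence back four bases at a time."""
    values = [BASES.index(base) for base in dna]
    chars = []
    for i in range(0, len(values), 4):
        byte = 0
        for value in values[i:i + 4]:
            byte = (byte << 2) | value
        chars.append(chr(byte))
    return "".join(chars)

if __name__ == "__main__":
    message = "WHAT I CANNOT BUILD I CANNOT UNDERSTAND"
    watermark = encode(message)
    print(watermark[:24] + "...")      # the message as a stretch of bases
    assert decode(watermark) == message
```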

Does this mean they created life?

It depends on how you define “created” and “life”. Venter’s team made the new genome out of DNA sequences that had initially been made by a machine, but bacteria and yeast cells were used to stitch together and duplicate the million base pairs that it contains. The cell into which the synthetic genome was then transplanted contained its own proteins, lipids and other molecules.

Venter himself maintains that he has not created life. “We’ve created the first synthetic cell,” he says. “We definitely have not created life from scratch because we used a recipient cell to boot up the synthetic chromosome.”

Whether you agree or not is a philosophical question, not a scientific one, as there is no biological difference between synthetic bacteria and the real thing, says Andy Ellington, a synthetic biologist at the University of Texas in Austin. “The bacteria didn’t have a soul, and there wasn’t some animistic property of the bacteria that changed,” he says.

What can you do with a synthetic cell?

Venter’s work was a proof of principle, but future synthetic cells could be used to create drugs, biofuels and other useful products. He is collaborating with Exxon Mobil to produce biofuels from algae and with Novartis to create vaccines.

“As soon as next year, the flu vaccine you get could be made synthetically,” Venter says.

Ellington also sees synthetic bacteria as having potential as a scientific tool. It would be interesting, he says, to create bacteria that produce a new amino acid – the chemical units that make up proteins – and see how these bacteria evolve, compared with bacteria that produce the usual suite of amino acids. “We can ask these questions about cyborg cells in ways we never could before.”

[div class=attrib]More from theSource here.[end-div]

The Search for Genes Leads to Unexpected Places

[div class=attrib]From The New York Times:[end-div]

Edward M. Marcotte is looking for drugs that can kill tumors by stopping blood vessel growth, and he and his colleagues at the University of Texas at Austin recently found some good targets — five human genes that are essential for that growth. Now they’re hunting for drugs that can stop those genes from working. Strangely, though, Dr. Marcotte did not discover the new genes in the human genome, nor in lab mice or even fruit flies. He and his colleagues found the genes in yeast.

“On the face of it, it’s just crazy,” Dr. Marcotte said. After all, these single-cell fungi don’t make blood vessels. They don’t even make blood. In yeast, it turns out, these five genes work together on a completely unrelated task: fixing cell walls.

Crazier still, Dr. Marcotte and his colleagues have discovered hundreds of other genes involved in human disorders by looking at distantly related species. They have found genes associated with deafness in plants, for example, and genes associated with breast cancer in nematode worms. The researchers reported their results recently in The Proceedings of the National Academy of Sciences.

The scientists took advantage of a peculiar feature of our evolutionary history. In our distant, amoeba-like ancestors, clusters of genes were already forming to work together on building cell walls and on other very basic tasks essential to life. Many of those genes still work together in those same clusters, over a billion years later, but on different tasks in different organisms.

[div class=attrib]More from theSource here.[end-div]

Why Athletes Are Geniuses

[div class=attrib]From Discover:[end-div]

The qualities that set a great athlete apart from the rest of us lie not just in the muscles and the lungs but also between the ears. That’s because athletes need to make complicated decisions in a flash. One of the most spectacular examples of the athletic brain operating at top speed came in 2001, when the Yankees were in an American League playoff game with the Oakland Athletics. Shortstop Derek Jeter managed to grab an errant throw coming in from right field and then gently tossed the ball to catcher Jorge Posada, who tagged the base runner at home plate. Jeter’s quick decision saved the game—and the series—for the Yankees. To make the play, Jeter had to master both conscious decisions, such as whether to intercept the throw, and unconscious ones. These are the kinds of unthinking thoughts he must make in every second of every game: how much weight to put on a foot, how fast to rotate his wrist as he releases a ball, and so on.

In recent years neuroscientists have begun to catalog some fascinating differences between average brains and the brains of great athletes. By understanding what goes on in athletic heads, researchers hope to understand more about the workings of all brains—those of sports legends and couch potatoes alike.

As Jeter’s example shows, an athlete’s actions are much more than a set of automatic responses; they are part of a dynamic strategy to deal with an ever-changing mix of intricate challenges. Even a sport as seemingly straightforward as pistol shooting is surprisingly complex. A marksman just points his weapon and fires, and yet each shot calls for many rapid decisions, such as how much to bend the elbow and how tightly to contract the shoulder muscles. Since the shooter doesn’t have perfect control over his body, a slight wobble in one part of the arm may require many quick adjustments in other parts. Each time he raises his gun, he has to make a new calculation of what movements are required for an accurate shot, combining previous experience with whatever variations he is experiencing at the moment.

To explain how brains make these on-the-fly decisions, Reza Shadmehr of Johns Hopkins University and John Krakauer of Columbia University two years ago reviewed studies in which the brains of healthy people and of brain-damaged patients who have trouble controlling their movements were scanned. They found that several regions of the brain collaborate to make the computations needed for detailed motor actions. The brain begins by setting a goal—pick up the fork, say, or deliver the tennis serve—and calculates the best course of action to reach it. As the brain starts issuing commands, it also begins to make predictions about what sort of sensations should come back from the body if it achieves the goal. If those predictions don’t match the actual sensations, the brain then revises its plan to reduce error. Shadmehr and Krakauer’s work demonstrates that the brain does not merely issue rigid commands; it also continually updates its solution to the problem of how to move the body. Athletes may perform better than the rest of us because their brains can find better solutions than ours do.
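The loop Shadmehr and Krakauer describe (issue a command, predict its sensory consequences, compare the prediction with what actually comes back, revise the plan) can be sketched in a few lines. The controller below is a deliberately simple stand-in, with proportional corrections toward a target and an unknown actuation bias that the internal model learns; it is not their model:

```python
# A deliberately simple predict-compare-correct loop in the spirit of the
# motor-control account described above (not Shadmehr and Krakauer's model).
import random

def reach(target: float, steps: int = 30, gain: float = 0.4,
          bias: float = 0.2, noise: float = 0.02) -> float:
    """Move toward `target` with a body that over-shoots every command by
    `bias`; the forward model learns that bias from prediction errors."""
    position = 0.0
    learned_bias = 0.0                                        # the brain's model of its own body
    for _ in range(steps):
        command = gain * (target - position) - learned_bias   # plan a movement
        predicted = position + command + learned_bias         # predict the resulting sensation
        position += command + bias + random.gauss(0, noise)   # the body actually moves
        error = position - predicted                          # compare prediction with sensation
        learned_bias += 0.5 * error                           # revise the internal model
    return position

if __name__ == "__main__":
    random.seed(0)
    print(f"final position: {reach(1.0):.3f} (target was 1.0)")
```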

[div class=attrib]More from theSource here.[end-div]

Forget Avatar, the real 3D revolution is coming to your front room

[div class=attrib]From The Guardian:[end-div]

Enjoy eating goulash? Fed up with needing three pieces of cutlery? It could be that I have a solution for you – and not just for you but for picnickers who like a bit of bread with their soup, too. Or indeed for anyone who has dreamed of seeing the spoon and the knife incorporated into one, easy to use, albeit potentially dangerous instrument. Ladies and gentlemen, I would like to introduce you to the Knoon.

The Knoon came to me in a dream – I had a vision of a soup spoon with a knife stuck to its top, blade pointing upwards. Given the potential for lacerating your mouth on the Knoon’s sharp edge, maybe my dream should have stayed just that. But thanks to a technological leap that is revolutionising manufacturing and, some hope, may even change the nature of our consumer society, I now have a Knoon sitting right in front of me. I had the idea, I drew it up and then I printed my cutlery out.

3D is this year’s buzzword in Hollywood. From Avatar to Clash of the Titans, it’s a new take on an old fad that’s coming to save the movie industry. But with less glitz and a degree less fanfare, 3D printing is changing our vision of the world too, and ultimately its effects might prove a degree more special.

Thinglab is a company that specialises in 3D printing. Based in a nondescript office building in east London, its team works mainly with commercial clients to print models that would previously have been assembled by hand. Architects design their buildings in 3D software packages and pass them to Thinglab to print scale models. When mobile phone companies come up with a new handset, they print prototypes first in order to test size, shape and feel. Jewellers not only make prototypes, they use them as a basis for moulds. Sculptors can scan in their original works, adjust the dimensions and rattle off a series of duplicates (signatures can be added later).

All this work is done in the Thinglab basement, a kind of temple to 3D where motion capture suits hang from the wall and a series of next generation TV screens (no need for 3D glasses) sit in the corner. In the middle of the room lurk two hulking 3D printers. Their facades give them the faces of miserable robots.

“We had David Hockney in here recently and he was gobsmacked,” says Robin Thomas, one of Thinglab’s directors, reeling off a list of intrigued celebrities who have made a pilgrimage to his basement. “Boy George came in and we took a scan of his face.” Above the printers sits a collection of the models they’ve produced: everything from a car’s suspension system to a rendering of John Cleese’s head. “If a creative person wakes up in the morning with an idea,” says Thomas, “they could have a model by the end of the day. People who would have spent days, weeks, months on these types of models can now do it with a printer. If they can think of it, we can make it.”

[div class=attrib]More from theSource here.[end-div]

A beautiful and dangerous idea: art that sells itself

Artist Caleb Larsen seems to have the right idea. Rather than relying on the subjective wants and needs of galleries and the dubious nature of the secondary art market (and some equally dubious auctioneers), his art sells itself.

His work, entitled “A Tool to Deceive and Slaughter”, is an 8-inch opaque, black acrylic cube. But while the exterior may be simplicity itself, the interior holds a fascinating premise. The cube is connected to the internet. In fact, it’s connected to eBay, where through some hidden hardware and custom programming it constantly auctions itself.

As Caleb Larsen describes,

Combining Robert Morris’ Box With the Sound of Its Own Making with Baudrillard’s writing on the art auction, this sculpture exists in eternal transactional flux. It is a physical sculpture that is perpetually attempting to auction itself on eBay.

Every ten minutes the black box pings a server on the internet via the ethernet connection to check if it is for sale on eBay. If its auction has ended or it has sold, it automatically creates a new auction of itself.

If a person buys it on eBay, the current owner is required to send it to the new owner. The new owner must then plug it into ethernet, and the cycle repeats itself.
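The behaviour described amounts to a simple polling loop. Here is a sketch of that logic in Python; the two callables are hypothetical stand-ins, since the cube’s actual hardware and eBay integration are not public:

```python
# Sketch of the cube's described behaviour: every ten minutes, check the
# auction's status and, if it has ended or sold, list the work again.
# The two callables are hypothetical stand-ins for whatever hidden hardware
# and custom eBay integration the piece actually contains.
import time
from typing import Callable

POLL_INTERVAL_SECONDS = 10 * 60   # "every ten minutes"

def run(check_listing: Callable[[], str],
        create_auction: Callable[[], None]) -> None:
    """check_listing() should return 'active', 'ended', or 'sold'."""
    while True:
        if check_listing() in ("ended", "sold"):
            create_auction()                  # the work re-auctions itself
        time.sleep(POLL_INTERVAL_SECONDS)
```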

The purchase agreement on eBay is quite rigorous, including stipulations such as: the buyer must keep the artwork connected to the internet at all times, with disconnections allowed only for transportation; upon purchase the artwork must be reauctioned; failure to follow all terms of the agreement forfeits the status of the artwork as a genuine work of art.

The artist was also smart enough to gain a slice of the secondary market, by requiring each buyer to return to the artist 15 percent of the appreciated value from each sale. Christie’s and Sotheby’s eat your hearts out.

Besides trying to put auctioneers out of work, the artist has broader intentions in mind, particularly when viewed alongside his larger body of work. The piece goes to the heart of the “how” and the “why” of the art market. By placing the artwork in a constant state of transactional fluidity – it’s never permanently in the hands of its new owner – it forces us to question the nature of art in relation to its market and the nature of collecting. The work can never truly be owned and collected, since it is always possible that someone else will come along, enter the auction and win. The first “owner” of the piece, though, states that this was part of the appeal. Terence Spies, a California collector, attests,

I had a really strong reaction right after I won the auction. I have this thing, and I really want to keep it, but the reason I want to keep it is that it might leave… The process of the piece really gets to some of the reasons why you might be collecting art in the first place.

Now of course, owning anything is transient. The Egyptian pharaohs tried taking their possessions into the “afterlife” but even to this day are being constantly thwarted by tomb-raiders and archeologists. Perhaps to some the chase, the process of collecting, is the goal, rather than owning the art itself. As I believe Caleb Larsen intended, he’s really given me something to ponder. How different, really, is it to own this self-selling art versus wandering through the world’s museums and galleries to “own” a Picasso or Warhol or Monet for 5 minutes? Ironically, our works live on, and it is we who are transient. So I think Caleb Larsen’s title for the work should be taken tongue in cheek, for it is we who are deceiving ourselves.

The Real Rules for Time Travelers

[div class=attrib]From Discover:[end-div]

People all have their own ideas of what a time machine would look like. If you are a fan of the 1960 movie version of H. G. Wells’s classic novel, it would be a steampunk sled with a red velvet chair, flashing lights, and a giant spinning wheel on the back. For those whose notions of time travel were formed in the 1980s, it would be a souped-up stainless steel sports car. Details of operation vary from model to model, but they all have one thing in common: When someone actually travels through time, the machine ostentatiously dematerializes, only to reappear many years in the past or future. And most people could tell you that such a time machine would never work, even if it looked like a DeLorean.

They would be half right: That is not how time travel might work, but time travel in some other form is not necessarily off the table. Since time is kind of like space (the four dimensions go hand in hand), a working time machine would zoom off like a rocket rather than disappearing in a puff of smoke. Einstein described our universe in four dimensions: the three dimensions of space and one of time. So traveling back in time is nothing more or less than the fourth-dimensional version of walking in a circle. All you would have to do is use an extremely strong gravitational field, like that of a black hole, to bend space-time. From this point of view, time travel seems quite difficult but not obviously impossible.

These days, most people feel comfortable with the notion of curved space-time. What they trip up on is actually a more difficult conceptual problem, the time travel paradox. This is the worry that someone could go back in time and change the course of history. What would happen if you traveled into the past, to a time before you were born, and murdered your parents? Put more broadly, how do we avoid changing the past as we think we have already experienced it? At the moment, scientists don’t know enough about the laws of physics to say whether these laws would permit the time equivalent of walking in a circle—or, in the parlance of time travelers, a “closed timelike curve.” If they don’t permit it, there is obviously no need to worry about paradoxes. If physics is not an obstacle, however, the problem could still be constrained by logic. Do closed timelike curves necessarily lead to paradoxes?

If they do, then they cannot exist, simple as that. Logical contradictions cannot occur. More specifically, there is only one correct answer to the question “What happened at the vicinity of this particular event in space-time?” Something happens: You walk through a door, you are all by yourself, you meet someone else, you somehow never showed up, whatever it may be. And that something is whatever it is, and was whatever it was, and will be whatever it will be, once and forever. If, at a certain event, your grandfather and grandmother were getting it on, that’s what happened at that event. There is nothing you can do to change it, because it happened. You can no more change events in your past in a space-time with closed timelike curves than you can change events that already happened in ordinary space-time, with no closed timelike curves.

[div class=attrib]More from theSource here.[end-div]

Human Culture, an Evolutionary Force

[div class=attrib]From The New York Times:[end-div]

As with any other species, human populations are shaped by the usual forces of natural selection, like famine, disease or climate. A new force is now coming into focus. It is one with a surprising implication — that for the last 20,000 years or so, people have inadvertently been shaping their own evolution.

The force is human culture, broadly defined as any learned behavior, including technology. The evidence of its activity is the more surprising because culture has long seemed to play just the opposite role. Biologists have seen it as a shield that protects people from the full force of other selective pressures, since clothes and shelter dull the bite of cold and farming helps build surpluses to ride out famine.

Because of this buffering action, culture was thought to have blunted the rate of human evolution, or even brought it to a halt, in the distant past. Many biologists are now seeing the role of culture in a quite different light.

Although it does shield people from other forces, culture itself seems to be a powerful force of natural selection. People adapt genetically to sustained cultural changes, like new diets. And this interaction works more quickly than other selective forces, “leading some practitioners to argue that gene-culture co-evolution could be the dominant mode of human evolution,” Kevin N. Laland and colleagues wrote in the February issue of Nature Reviews Genetics. Dr. Laland is an evolutionary biologist at the University of St. Andrews in Scotland.

The idea that genes and culture co-evolve has been around for several decades but has started to win converts only recently. Two leading proponents, Robert Boyd of the University of California, Los Angeles, and Peter J. Richerson of the University of California, Davis, have argued for years that genes and culture were intertwined in shaping human evolution. “It wasn’t like we were despised, just kind of ignored,” Dr. Boyd said. But in the last few years, references by other scientists to their writings have “gone up hugely,” he said.

The best evidence available to Dr. Boyd and Dr. Richerson for culture being a selective force was the lactose tolerance found in many northern Europeans. Most people switch off the gene that digests the lactose in milk shortly after they are weaned, but in northern Europeans — the descendants of an ancient cattle-rearing culture that emerged in the region some 6,000 years ago — the gene is kept switched on in adulthood.

Lactose tolerance is now well recognized as a case in which a cultural practice — drinking raw milk — has caused an evolutionary change in the human genome. Presumably the extra nutrition was of such great advantage that adults able to digest milk left more surviving offspring, and the genetic change swept through the population.
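The claim that the change “swept through the population” can be made concrete with a toy calculation. The sketch below iterates the standard one-locus selection recursion for a dominant advantageous allele; the selection coefficient and starting frequency are arbitrary illustrative values, not estimates for the lactase-persistence allele:

```python
# Toy selective sweep: frequency of a dominant advantageous allele A under the
# standard one-locus recursion. The parameters are illustrative only, not
# estimates for the lactase-persistence allele.
def sweep(p0: float = 0.01, s: float = 0.05, generations: int = 300) -> list[float]:
    freqs = [p0]
    p = p0
    for _ in range(generations):
        q = 1.0 - p
        # genotype fitnesses: AA and Aa = 1 + s (A dominant), aa = 1
        w_bar = (p * p + 2 * p * q) * (1 + s) + q * q
        # next-generation frequency of A
        p = (p * p * (1 + s) + p * q * (1 + s)) / w_bar
        freqs.append(p)
    return freqs

if __name__ == "__main__":
    trajectory = sweep()
    for gen in (0, 50, 100, 200, 300):
        print(f"generation {gen:3d}: frequency of A = {trajectory[gen]:.2f}")
```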

[div class=attrib]More from theSource here.[end-div]

Art world swoons over Romania’s homeless genius

[div class=attrib]From The Guardian:[end-div]

The guests were chic, the bordeaux was sipped with elegant restraint and the hostess was suitably glamorous in a canary yellow cocktail dress. To an outside observer who made it past the soirée privée sign on the door of the Anne de Villepoix gallery on Thursday night, it would have seemed the quintessential Parisian art viewing.

Yet that would have been leaving one crucial factor out of the equation: the man whose creations the crowd had come to see. In his black cowboy hat and pressed white collar, Ion Barladeanu looked every inch the established artist as he showed guests around the exhibition. But until 2007 no one had ever seen his work, and until mid-2008 he was living in the rubbish tip of a Bucharest tower block.

Today, in the culmination of a dream for a Romanian who grew up adoring Gallic film stars and treasures a miniature Eiffel Tower he once found in a bin, Barladeanu will see his first French exhibition open to the general public.

Dozens of collages he created from scraps of discarded magazines during and after the Communist regime of Nicolae Ceausescu are on sale for more than €1,000 (£895) each. They are being hailed as politically brave and culturally irreverent.

For the 63-year-old artist, the journey from the streets of Bucharest to the galleries of Europe has finally granted him recognition. “I feel as if I have been born again,” he said, as some of France’s leading collectors and curators jostled for position to see his collages. “Now I feel like a prince. A pauper can become a prince. But he can go back to being a pauper too.”

[div class=attrib]More from theSource here.[end-div]

The Chess Master and the Computer

[div class=attrib]By Garry Kasparov, From the New York Review of Books:[end-div]

In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.

It illustrates the state of computer chess at the time that it didn’t come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the “Kasparov” brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess.

Eleven years later I narrowly defeated the supercomputer Deep Blue in a match. Then, in 1997, IBM redoubled its efforts—and doubled Deep Blue’s processing power—and I lost the rematch in an event that made headlines around the world. The result was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer. (“The Brain’s Last Stand” read the Newsweek headline.) Others shrugged their shoulders, surprised that humans could still compete at all against the enormous calculating power that, by 1997, sat on just about every desk in the first world.

It was the specialists—the chess players and the programmers and the artificial intelligence enthusiasts—who had a more nuanced appreciation of the result. Grandmasters had already begun to see the implications of the existence of machines that could play—if only, at this point, in a select few types of board configurations—with godlike perfection. The computer chess people were delighted with the conquest of one of the earliest and holiest grails of computer science, in many cases matching the mainstream media’s hyperbole. The 2003 book Deep Blue by Monty Newborn was blurbed as follows: “a rare, pivotal watershed beyond all other triumphs: Orville Wright’s first flight, NASA’s landing on the moon….”

[div class=attrib]More from theSource here.[end-div]

The Man Who Builds Brains

[div class=attrib]From Discover:[end-div]

On the quarter-mile walk between his office at the École Polytechnique Fédérale de Lausanne in Switzerland and the nerve center of his research across campus, Henry Markram gets a brisk reminder of the rapidly narrowing gap between human and machine. At one point he passes a museumlike display filled with the relics of old supercomputers, a memorial to their technological limitations. At the end of his trip he confronts his IBM Blue Gene/P—shiny, black, and sloped on one side like a sports car. That new supercomputer is the centerpiece of the Blue Brain Project, tasked with simulating every aspect of the workings of a living brain.

Markram, the 47-year-old founder and codirector of the Brain Mind Institute at the EPFL, is the project’s leader and cheerleader. A South African neuroscientist, he received his doctorate from the Weizmann Institute of Science in Israel and studied as a Fulbright Scholar at the National Institutes of Health. For the past 15 years he and his team have been collecting data on the neocortex, the part of the brain that lets us think, speak, and remember. The plan is to use the data from these studies to create a comprehensive, three-dimensional simulation of a mammalian brain. Such a digital re-creation that matches all the behaviors and structures of a biological brain would provide an unprecedented opportunity to study the fundamental nature of cognition and of disorders such as depression and schizophrenia.

Until recently there was no computer powerful enough to take all our knowledge of the brain and apply it to a model. Blue Gene has changed that. It contains four monolithic, refrigerator-size machines, each of which processes data at a peak speed of 56 teraflops (teraflops being one trillion floating-point operations per second). At $2 million per rack, this Blue Gene is not cheap, but it is affordable enough to give Markram a shot with this ambitious project. Each of Blue Gene’s more than 16,000 processors is used to simulate approximately one thousand virtual neurons. By getting the neurons to interact with one another, Markram’s team makes the computer operate like a brain. In its trial runs Markram’s Blue Gene has emulated just a single neocortical column in a two-week-old rat. But in principle, the simulated brain will continue to get more and more powerful as it attempts to rival the one in its creator’s head. “We’ve reached the end of phase one, which for us is the proof of concept,” Markram says. “We can, I think, categorically say that it is possible to build a model of the brain.” In fact, he insists that a fully functioning model of a human brain can be built within a decade.
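
A quick back-of-envelope check of those figures, sketched in Python. The rack count, per-rack speed, cost, processor count and neurons-per-processor all come from the passage above; the roughly 10,000-neuron size assumed for a single rat neocortical column is an outside figure, used only for comparison.

# Back-of-envelope check of the Blue Gene/P figures quoted above.
# COLUMN_NEURONS is an assumption (a rat neocortical column is often
# quoted at roughly 10,000 neurons); everything else is from the article.

RACKS = 4                      # monolithic, refrigerator-size machines
PEAK_TFLOPS_PER_RACK = 56      # peak teraflops per rack
COST_PER_RACK = 2_000_000      # dollars per rack
PROCESSORS = 16_000            # "more than 16,000 processors"
NEURONS_PER_PROCESSOR = 1_000  # virtual neurons simulated per processor
COLUMN_NEURONS = 10_000        # assumed neurons in one rat neocortical column

total_peak_tflops = RACKS * PEAK_TFLOPS_PER_RACK
total_cost = RACKS * COST_PER_RACK
virtual_neurons = PROCESSORS * NEURONS_PER_PROCESSOR

print(f"Peak speed:          {total_peak_tflops} teraflops")
print(f"Hardware cost:       ${total_cost:,}")
print(f"Virtual neurons:     {virtual_neurons:,}")
print(f"Rat-column capacity: ~{virtual_neurons // COLUMN_NEURONS} columns")

On those numbers the machine tops out around 224 teraflops and some 16 million virtual neurons, which is why the trial runs stop at a single column while a full human brain, with its tens of billions of neurons, remains a decade-scale ambition.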

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Michelangelo’s Labor Pains

[div class=attrib]By Robert Pinsky for Slate:[end-div]

After a certain point, reverence can become automatic. Our admiration for great works of art can get a bit reflexive, then synthetic, then can harden into a pious coating that repels real attention. Michelangelo’s painted ceiling of the Sistine Chapel in the Vatican might be an example of such automatic reverence. Sometimes, a fresh look or a hosing-down is helpful—if only by restoring the meaning of “work” to the phrase “work of art.”

Michelangelo (1475-1564) himself provides a refreshing dose of reality. A gifted poet as well as a sculptor and painter, he wrote energetically about despair, detailing with relish the unpleasant side of his work on the famous ceiling. The poem, in Italian, is an extended (or “tailed”) sonnet, with a coda of six lines appended to the standard 14. The translation I like best is by the American poet Gail Mazur. Her lines are musical but informal, with a brio conveying that the Italian artist knew well enough that he and his work were great—but that he enjoyed vigorously lamenting his discomfort, pain, and inadequacy to the task. No wonder his artistic ideas are bizarre and no good, says Michelangelo: They must come through the medium of his body, that “crooked blowpipe” (Mazur’s version of “cerbottana torta”). Great artist, great depression, great imaginative expression of it. This is a vibrant, comic, but heartfelt account of the artist’s work:

Michelangelo: To Giovanni da Pistoia
“When the Author Was Painting the Vault of the Sistine Chapel” —1509

I’ve already grown a goiter from this torture,
hunched up here like a cat in Lombardy
(or anywhere else where the stagnant water’s poison).
My stomach’s squashed under my chin, my beard’s
pointing at heaven, my brain’s crushed in a casket,
my breast twists like a harpy’s. My brush,
above me all the time, dribbles paint
so my face makes a fine floor for droppings!

My haunches are grinding into my guts,
my poor ass strains to work as a counterweight,
every gesture I make is blind and aimless.
My skin hangs loose below me, my spine’s
all knotted from folding over itself.
I’m bent taut as a Syrian bow.

Because I’m stuck like this, my thoughts
are crazy, perfidious tripe:
anyone shoots badly through a crooked blowpipe.

My painting is dead.
Defend it for me, Giovanni, protect my honor.
I am not in the right place—I am not a painter.

[div class=attrib]More from theSource here.[end-div]

The Graphene Revolution

[div class=attrib]From Discover:[end-div]

Flexible, see-through, one-atom-thick sheets of carbon could be a key component for futuristic solar cells, batteries, and roll-up LCD screens—and perhaps even microchips.

Under a transmission electron microscope it looks deceptively simple: a grid of hexagons resembling a volleyball net or a section of chicken wire. But graphene, a form of carbon that can be produced in sheets only one atom thick, seems poised to shake up the world of electronics. Within five years, it could begin powering faster and better transistors, computer chips, and LCD screens, according to researchers who are smitten with this new supermaterial.

Graphene’s standout trait is its uncanny facility with electrons, which can travel much more quickly through it than they can through silicon. As a result, graphene-based computer chips could be thousands of times as efficient as existing ones. “What limits conductivity in a normal material is that electrons will scatter,” says Michael Strano, a chemical engineer at MIT. “But with graphene the electrons can travel very long distances without scattering. It’s like the thinnest, most stable electrical conducting framework you can think of.”

In 2009 another MIT researcher, Tomas Palacios, devised a graphene chip that doubles the frequency of an electromagnetic signal. Using multiple chips could make the outgoing signal many times higher in frequency than the original. Because frequency determines the clock speed of the chip, boosting it enables faster transfer of data through the chip. Graphene’s extreme thinness means that it is also practically transparent, making it ideal for transmitting signals in devices containing solar cells or LEDs.
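
The arithmetic behind “many times higher” is just repeated doubling: n frequency-doubling chips in series multiply the input frequency by 2^n. A minimal sketch, with an assumed 1 GHz input signal purely for illustration:

# Cascaded frequency doublers: each graphene chip doubles the signal
# frequency, so n chips in series multiply the input by 2**n.
# The 1 GHz input is an assumed example, not a figure from the article.

def cascade(f_in_hz: float, n_chips: int) -> float:
    """Output frequency after n frequency-doubling stages."""
    return f_in_hz * 2 ** n_chips

f_in = 1e9  # 1 GHz input signal (assumed)
for n in range(1, 5):
    print(f"{n} chip(s): {cascade(f_in, n) / 1e9:.0f} GHz")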

[div class=attrib]More from theSource here.[end-div]

J. Craig Venter

[div class=attrib]From Discover:[end-div]

J. Craig Venter keeps riding the cusp of each new wave in biology. When researchers started analyzing genes, he launched the Institute for Genomic Research (TIGR) in 1992 and went on to decode the genome of a bacterium for the first time in 1995. When the government announced its plan to map the human genome, he claimed he would do it first—and then he delivered results in 2001, years ahead of schedule. Armed with a deep understanding of how DNA works, Venter is now moving on to an even more extraordinary project. Starting with the stunning genetic diversity that exists in the wild, he is aiming to build custom-designed organisms that could produce clean energy, help feed the planet, and treat cancer. Venter has already transferred the genome of one species into the cell body of another. This past year he reached a major milestone, using the machinery of yeast to manufacture a genome from scratch. When he combines the steps—perhaps next year—he will have crafted a truly synthetic organism. Senior editor Pamela Weintraub discussed the implications of these efforts with Venter in DISCOVER’s editorial offices.

Here you are talking about constructing life, but you started out in deconstruction: charting the human genome, piece by piece.
Actually, I started out smaller, studying the adrenaline receptor. I was looking at one protein and its single gene for a decade. Then, in the late 1980s, I was drawn to the idea of the whole genome, and I stopped everything and switched my lab over. I had the first automatic DNA sequencer. It was the ultimate in reductionist biology—getting down to the genetic code, interpreting what it meant, including all 6 billion letters of my own genome. Only by understanding things at that level can we turn around and go the other way.

In your latest work you are trying to create “synthetic life.” What is that?
It’s a catchy phrase that people have begun using to replace “molecular biology.” The term has been overused, so we have defined a separate field that we call synthetic genomics—the digitization of biology using only DNA and RNA. You start by sequencing genomes and putting their digital code into a computer. Then you use the computer to take that information and design new life-forms.

How do you build a life-form? Throw in some mitochondria here and some ribosomes there, surround it all with a membrane—and voilà?
We started down that road, but now we are coming from the other end. We’re starting with the accomplishments of three and a half billion years of evolution by using what we call the software of life: DNA. Our software builds its own hardware. By writing new software, we can come up with totally new species. It would be as if once you put new software in your computer, somehow a whole new machine would materialize. We’re software engineers rather than construction workers.

[div class=attrib]More from theSource here[end-div]

Five Big Additions to Darwin’s Theory of Evolution

[div class=attrib]From Discover:[end-div]

Charles Darwin would have turned 200 in 2009, the same year his book On the Origin of Species celebrated its 150th anniversary. Today, with the perspective of time, Darwin’s theory of evolution by natural selection looks as impressive as ever. In fact, the double anniversary year saw progress on fronts that Darwin could never have anticipated, bringing new insights into the origin of life—a topic that contributed to his panic attacks, heart palpitations, and, as he wrote, “for 25 years extreme spasmodic daily and nightly flatulence.” One can only dream of what riches await in the biology textbooks of 2159.

1. Evolution happens on the inside, too. The battle for survival is waged not just between the big dogs but within the dog itself, as individual genes jockey for prominence. From the moment of conception, a father’s genes favor offspring that are large, strong, and aggressive (the better to court the ladies), while the mother’s genes incline toward smaller progeny that will be less of a burden, making it easier for her to live on and procreate. Genome-versus-genome warfare produces kids that are somewhere in between.

Not all genetic conflicts are resolved so neatly. In flour beetles, babies that do not inherit the selfish genetic element known as Medea succumb to a toxin while developing in the egg. Some unborn mice suffer the same fate. Such spiteful genes have become widespread not by helping flour beetles and mice survive but by eliminating individuals that do not carry the killer’s code. “There are two ways of winning a race,” says Caltech biologist Bruce Hay. “Either you can be better than everyone else, or you can whack the other guys on the legs.”

Hay is trying to harness the power of such genetic cheaters, enlisting them in the fight against malaria. He created a Medea-like DNA element that spreads through experimental fruit flies like wildfire, permeating an entire population within 10 generations. This year he and his team have been working on encoding immune-system boosters into those Medea genes, which could then be inserted into male mosquitoes. If it works, the modified mosquitoes should quickly replace competitors who do not carry the new genes; the enhanced immune systems of the new mosquitoes, in turn, would resist the spread of the malaria parasite.
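
The “whack the other guys” logic is easy to see in a toy simulation. The sketch below is not Hay’s model, just a minimal individual-based caricature of the Medea mechanism described above (offspring of a carrier mother survive only if they inherit a copy of the element), with a made-up population size and starting frequency:

import random

# Toy model of a Medea-like selfish element, per the mechanism described
# above: offspring of a carrier mother die unless they inherit a copy.
# Genotypes are stored as the number of Medea copies carried (0, 1 or 2).
# Population size, starting frequency and generation count are illustrative.

POP_SIZE = 1_000
START_FREQ = 0.25   # initial frequency of the Medea allele
GENERATIONS = 10

def make_population(size: int, freq: float) -> list[int]:
    return [(random.random() < freq) + (random.random() < freq) for _ in range(size)]

def next_generation(pop: list[int]) -> list[int]:
    offspring = []
    while len(offspring) < POP_SIZE:
        mother, father = random.choice(pop), random.choice(pop)
        # Each parent passes on one allele; the chance of passing Medea is copies/2.
        child = (random.random() < mother / 2) + (random.random() < father / 2)
        # Medea effect: a carrier mother's offspring die if they carry no copy.
        if mother > 0 and child == 0:
            continue
        offspring.append(child)
    return offspring

pop = make_population(POP_SIZE, START_FREQ)
for gen in range(GENERATIONS + 1):
    freq = sum(pop) / (2 * POP_SIZE)
    print(f"generation {gen:2d}: Medea allele frequency = {freq:.2f}")
    pop = next_generation(pop)

Even with no survival advantage of its own, the element climbs generation after generation, because each round it removes some of the individuals that lack it.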

2. Identity is not written just in the genes. According to modern evolutionary theory, there is no way that what we eat, do, and encounter can override the basic rules of inheritance: What is in the genes stays in the genes. That single rule secured Darwin’s place in the science books. But now biologists are finding that nature can break those rules. This year Eva Jablonka, a theoretical biologist at Tel Aviv University, published a compendium of more than 100 hereditary changes that are not carried in the DNA sequence. This “epigenetic” inheritance spans bacteria, fungi, plants, and animals.

[div class=attrib]More from theSource here.[end-div]

The meaning of network culture

[div class=attrib]From Eurozine:[end-div]

Whereas in postmodernism, being was left in a free-floating fabric of emotional intensities, in contemporary culture the existence of the self is affirmed through the network. Kazys Varnelis discusses what this means for the democratic public sphere.

Not all at once but rather slowly, in fits and starts, a new societal condition is emerging: network culture. As digital computing matures and meshes with increasingly mobile networking technology, society is also changing, undergoing a cultural shift. Just as modernism and postmodernism served as crucial heuristic devices in their day, studying network culture as a historical phenomenon allows us to better understand broader sociocultural trends and structures, to give duration and temporality to our own, ahistorical time.

If more subtle than the much-talked-about economic collapse of fall 2008, this shift in society is real and far more radical, underscoring even the logic of that collapse. During the space of a decade, the network has become the dominant cultural logic. Our economy, public sphere, culture, even our subjectivity are mutating rapidly and show little evidence of slowing down the pace of their evolution. The global economic crisis only demonstrated our faith in the network and its dangers. Over the last two decades, markets and regulators had increasingly placed their faith in the efficient market hypothesis, which posited that investors were fundamentally rational and, fed information by highly efficient data networks, would always make the right decision. The failure came when key parts of the network – the investors, regulators, and the finance industry – failed to think through the consequences of their actions and placed their trust in each other.

The collapse of the markets seems to have been sudden, but it was actually a long-term process, beginning with bad decisions made long before the collapse. Most of the changes in network culture are subtle and only appear radical in retrospect. Take our relationship with the press. One morning you noted with interest that your daily newspaper had established a website. Another day you decided to stop buying the paper and just read it online. Then you started reading it on a mobile Internet platform, or began listening to a podcast of your favourite column while riding a train. Perhaps you dispensed with official news entirely, preferring a collection of blogs and amateur content. Eventually the paper may well be distributed only on the net, directly incorporating user comments and feedback. Or take the way cell phones have changed our lives. When you first bought a mobile phone, were you aware of how profoundly it would alter your life? Soon, however, you found yourself abandoning the tedium of scheduling dinner plans with friends in advance, instead coordinating with them en route to a particular neighbourhood. Or if your friends or family moved away to university or a new career, you found that through a social networking site like Facebook and through the ever-present telematic links of the mobile phone, you did not lose touch with them.

If it is difficult to realize the radical impact of the contemporary, this is in part due to the hype about the near-future impact of computing on society in the 1990s. The failure of the near-future to be realized immediately, due to the limits of the technology of the day, made us jaded. The dot.com crash only reinforced that sense. But slowly, technology advanced and society changed, finding new uses for it, in turn spurring more change. Network culture crept up on us. Its impact on us today is radical and undeniable.

[div class=attrib]More from theSource here.[end-div]

The Madness of Crowds and an Internet Delusion

[div class=attrib]From The New York Times:[end-div]

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

[div class=attrib]More from theSource here.[end-div]

Your Digital Privacy? It May Already Be an Illusion

[div class=attrib]From Discover:[end-div]

As his friends flocked to social networks like Facebook and MySpace, Alessandro Acquisti, an associate professor of information technology at Carnegie Mellon University, worried about the downside of all this online sharing. “The personal information is not particularly sensitive, but what happens when you combine those pieces together?” he asks. “You can come up with something that is much more sensitive than the individual pieces.”

Acquisti tested his idea in a study, reported earlier this year in Proceedings of the National Academy of Sciences. He took seemingly innocuous pieces of personal data that many people put online (birthplace and date of birth, both frequently posted on social networking sites) and combined them with information from the Death Master File, a public database from the U.S. Social Security Administration. With a little clever analysis, he found he could determine, in as few as 1,000 tries, someone’s Social Security number 8.5 percent of the time. Data thieves could easily do the same thing: They could keep hitting the log-on page of a bank account until they got one right, then go on a spending spree. With an automated program, making thousands of attempts is no trouble at all.

The problem, Acquisti found, is that the way the Death Master File numbers are created is predictable. Typically the first three digits of a Social Security number, the “area number,” are based on the zip code of the person’s birthplace; the next two, the “group number,” are assigned in a predetermined order within a particular area-number group; and the final four, the “serial number,” are assigned consecutively within each group number. When Acquisti plotted the birth information and corresponding Social Security numbers on a graph, he found that the set of possible IDs that could be assigned to a person with a given date and place of birth fell within a restricted range, making it fairly simple to sift through all of the possibilities.
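
Some rough arithmetic shows why that predictability matters. The sketch below is only an illustration of how the search space collapses, not Acquisti’s actual method: it assumes the birth date and birthplace pin down the area number and a couple of plausible group numbers, and compares a fixed guessing budget against that pool versus the roughly one billion unstructured nine-digit possibilities.

# Rough arithmetic on how much the structured assignment shrinks the search space.
# PLAUSIBLE_GROUPS is an illustrative assumption, not a figure from the study;
# serial numbers run from 0001 to 9999 within each group.

UNSTRUCTURED_POOL = 10 ** 9   # all nine-digit numbers, for comparison
SERIALS_PER_GROUP = 9_999
PLAUSIBLE_GROUPS = 2          # assumed: group numbers in use around the birth date
GUESS_BUDGET = 1_000          # the guessing budget quoted in the article

structured_pool = PLAUSIBLE_GROUPS * SERIALS_PER_GROUP

def hit_rate(pool_size: int, guesses: int) -> float:
    """Chance of a hit with `guesses` distinct tries drawn from a pool of candidates."""
    return min(1.0, guesses / pool_size)

print(f"Unstructured pool: {UNSTRUCTURED_POOL:>13,} -> hit rate {hit_rate(UNSTRUCTURED_POOL, GUESS_BUDGET):.4%}")
print(f"Structured pool:   {structured_pool:>13,} -> hit rate {hit_rate(structured_pool, GUESS_BUDGET):.1%}")

With these made-up inputs the hit rate lands around five percent, the same order of magnitude as the 8.5 percent reported in the study; the point is not the exact figure but how far the structure shrinks the haystack.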

To check the accuracy of his guesses, Acquisti used a list of students who had posted their birth information on a social network and whose Social Security numbers were matched anonymously by the university they attended. His system worked—yet another reason why you should never use your Social Security number as a password for sensitive transactions.

Welcome to the unnerving world of data mining, the fine art (some might say black art) of extracting important or sensitive pieces from the growing cloud of information that surrounds almost all of us. Since data persist essentially forever online—just check out the Internet Archive Wayback Machine, the repository of almost everything that ever appeared on the Internet—some bit of seemingly harmless information that you post today could easily come back to haunt you years from now.

[div class=attrib]More from theSource here.[end-div]

For Expatriates in China, Creative Lives of Plenty

[div class=attrib]From The New York Times:[end-div]

THERE was a chill in the morning air in 2005 when dozens of artists from China, Europe and North America emerged from their red-brick studios here to find the police blocking the gates to Suojiacun, their compound on the city’s outskirts. They were told that the village of about 100 illegally built structures was to be demolished, and were given two hours to pack.

By noon bulldozers were smashing the walls of several studios, revealing ripped-apart canvases and half-glazed clay vases lying in the rubble. But then the machines ceased their pulverizing, and the police dispersed, leaving most of the buildings unscathed. It was not the first time the authorities had threatened to evict these artists, nor would it be the last. But it was still frightening.

“I had invested everything in my studio,” said Alessandro Rolandi, a sculptor and performance artist originally from Italy who had removed his belongings before the destruction commenced. “I was really worried about my work being destroyed.”

He eventually left Suojiacun, but he has remained in China. Like the artists’ colony, the country offers challenges, but expatriates here say that the rewards outweigh the hardships. Mr. Rolandi is one of many artists (five are profiled here) who have left the United States and Europe for China, seeking respite from tiny apartments, an insular art world and nagging doubts about whether it’s best to forgo art for a reliable office job. They have discovered a land of vast creative possibility, where scale is virtually limitless and costs are comically low. They can rent airy studios, hire assistants, experiment in costly mediums like bronze and fiberglass.

“Today China has become one of the most important places to create and invent,” said Jérôme Sans, director of the Ullens Center for Contemporary Art in Beijing. “A lot of Western artists are coming here to live the dynamism and make especially crazy work they could never do anywhere else in the world.”

Rania Ho

A major challenge for foreigners, no matter how fluent or familiar with life here, is that even if they look like locals, it is virtually impossible to feel truly of this culture. For seven years Rania Ho, the daughter of Chinese immigrants born and raised in San Francisco, has lived in Beijing, where she runs a small gallery in a hutong, or alley, near one of the city’s main temples. “Being Chinese-American makes it easier to be an observer of what’s really happening because I’m camouflaged,” she said. “But it doesn’t mean I understand any more what people are thinking.”

Still, Ms. Ho, 40, revels in her role as outsider in a society that she says is blindly enthusiastic about remaking itself. She creates and exhibits work by both foreign and Chinese artists that often plays with China’s fetishization of mechanized modernity.

Because she lives so close to military parades and futuristic architecture, she said that her own pieces — like a water fountain gushing on the roof of her gallery and a cardboard table that levitates a Ping-Pong ball — chuckle at the “hypnotic properties of unceasing labor.” She said they are futile responses to the absurd experiences she shares with her neighbors, who are constantly seeing their world transform before their eyes. “Being in China forces one to reassess everything,” she said, “which is at times difficult and exhausting, but for a majority of the time it’s all very amusing and enlightening.”

[div class=attrib]More from theSource here.[end-div]

Are Black Holes the Architects of the Universe?

[div class=attrib]From Discover:[end-div]

Black holes are finally winning some respect. After long regarding them as agents of destruction or dismissing them as mere by-products of galaxies and stars, scientists are recalibrating their thinking. Now it seems that black holes debuted in a constructive role and appeared unexpectedly soon after the Big Bang. “Several years ago, nobody imagined that there were such monsters in the early universe,” says Penn State astrophysicist Yuexing Li. “Now we see that black holes were essential in creating the universe’s modern structure.”

Black holes, tortured regions of space where the pull of gravity is so intense that not even light can escape, did not always have such a high profile. They were once thought to be very rare; in fact, Albert Einstein did not believe they existed at all. Over the past several decades, though, astronomers have realized that black holes are not so unusual after all: Supermassive ones, millions or billions of times as hefty as the sun, seem to reside at the center of most, if not all, galaxies. Still, many people were shocked in 2003 when a detailed sky survey found that giant black holes were already common nearly 13 billion years ago, when the universe was less than a billion years old. Since then, researchers have been trying to figure out where these primordial holes came from and how they influenced the cosmic events that followed.

In August, researchers at the Kavli Institute for Particle Astrophysics and Cosmology at Stanford University ran a supercomputer simulation of the early universe and provided a tantalizing glimpse into the lives of the first black holes. The story began 200 million years after the Big Bang, when the universe’s first stars formed. These beasts, about 100 times the mass of the sun, were so large and energetic that they burned all their hydrogen fuel in just a few million years. With no more energy from hydrogen fusion to counteract the enormous inward pull of their gravity, the stars collapsed until all of their mass was compressed into a point of infinite density.

The first-generation black holes were puny compared with the monsters we see at the centers of galaxies today. They grew only slowly at first—adding just 1 percent to their bulk in the next 200 million years—because the hyperactive stars that spawned them had blasted away most of the nearby gas that they could have devoured. Nevertheless, those modest-size black holes left a big mark by performing a form of stellar birth control: Radiation from the trickle of material falling into the holes heated surrounding clouds of gas to about 5,000 degrees Fahrenheit, so hot that the gas could no longer easily coalesce. “You couldn’t really form stars in that stuff,” says Marcelo Alvarez, lead author of the Kavli study.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of KIPAC/SLAC/M. Alvarez, T. Abel, and J. Wise.[end-div]

MondayPoem: Adam’s Curse

[div class=attrib]By Robert Pinsky for Slate:[end-div]

Poetry can resemble incantation, but sometimes it also resembles conversation. Certain poems combine the two—the cadences of speech intertwined with the forms of song in a varying way that heightens the feeling. As in a screenplay or in fiction, the things that people in a poem say can seem natural, even spontaneous, yet also work to propel the emotional action along its arc.

The casual surface of speech and the inward energy of art have a clear relation in “Adam’s Curse” by William Butler Yeats (1865-1939). A couple and their friend are together at the end of a summer day. In the poem, two of them speak, first about poetry and then about love. All of the poem’s distinct narrative parts—the setting, the dialogue, the stunning and unspoken conclusion—are conveyed in the strict form of rhymed couplets throughout. I have read the poem many times, for many years, and every time, something in me is hypnotized by the dance of sentence and rhyme. Always, in a certain way, the conclusion startles me. How can the familiar be somehow surprising? It seems to be a principle of art; and in this case, the masterful, unshowy rhyming seems to be a part of it. The couplet rhyme profoundly drives and tempers the gradually gathering emotional force of the poem in ways beyond analysis.

Yeats’ dialogue creates many nuances of tone. It is even a little funny at times: The poet’s self-conscious self-pity about how hard he works (he does most of the talking) is exaggerated with a smile, and his categories for the nonpoet or nonmartyr “world” have a similar, mildly absurd sweeping quality: bankers, schoolmasters, clergymen … This is not wit, exactly, but the slightly comical tone friends might use sitting together on a summer evening. I hear the same lightness of touch when the woman says, “Although they do not talk of it at school.” The smile comes closest to laughter when the poet in effect mocks himself gently, speaking of those lovers who “sigh and quote with learned looks/ Precedents out of beautiful old books.” The plain monosyllables of “old books” are droll in the context of these lovers. (Yeats may feel that he has been such a lover in his day.)

The plainest, most straightforward language in the poem, in some ways, comes at the very end—final words, not uttered in the conversation, are more private and more urgent than what has come before. After the almost florid, almost conventionally poetic description of the sunset, the courtly hint of a love triangle falls away. The descriptive language of the summer twilight falls away. The dialogue itself falls away—all yielding to the idea that this concluding thought is “only for your ears.” That closing passage of interior thoughts, what in fiction might be called “omniscient narration,” makes the poem feel, to me, as though not simply heard but overheard.

“Adam’s Curse”

We sat together at one summer’s end,
That beautiful mild woman, your close friend,
And you and I, and talked of poetry.
I said, “A line will take us hours maybe;
Yet if it does not seem a moment’s thought,
Our stitching and unstitching has been naught.
Better go down upon your marrow-bones
And scrub a kitchen pavement, or break stones
Like an old pauper, in all kinds of weather;
For to articulate sweet sounds together
Is to work harder than all these, and yet
Be thought an idler by the noisy set
Of bankers, schoolmasters, and clergymen
The martyrs call the world.”

And thereupon
That beautiful mild woman for whose sake
There’s many a one shall find out all heartache
On finding that her voice is sweet and low
Replied, “To be born woman is to know—
Although they do not talk of it at school—
That we must labour to be beautiful.”
I said, “It’s certain there is no fine thing
Since Adam’s fall but needs much labouring.
There have been lovers who thought love should be
So much compounded of high courtesy
That they would sigh and quote with learned looks
Precedents out of beautiful old books;
Yet now it seems an idle trade enough.”

We sat grown quiet at the name of love;
We saw the last embers of daylight die,
And in the trembling blue-green of the sky
A moon, worn as if it had been a shell
Washed by time’s waters as they rose and fell
About the stars and broke in days and years.

I had a thought for no one’s but your ears:
That you were beautiful, and that I strove
To love you in the old high way of love;
That it had all seemed happy, and yet we’d grown
As weary-hearted as that hollow moon.

[div class=attrib]More from theSource here.[end-div]

Will Our Universe Collide With a Neighboring One?

[div class=attrib]From Discover:[end-div]

Relaxing on an idyllic beach on Grand Cayman Island in the Caribbean, Anthony Aguirre vividly describes the worst natural disaster he can imagine. It is, in fact, probably the worst natural disaster that anyone could imagine. An asteroid impact would be small potatoes compared with this kind of event: a catastrophic encounter with an entire other universe.

As an alien cosmos came crashing into ours, its outer boundary would look like a wall racing forward at nearly the speed of light; behind that wall would lie a set of physical laws totally different from ours that would wreck everything they touched in our universe. “If we could see things in ultraslow motion, we’d see a big mirror in the sky rushing toward us because light would be reflected by the wall,” says Aguirre, a youthful physicist at the University of California at Santa Cruz. “After that we wouldn’t see anything—because we’d all be dead.”

There is a sober purpose behind this apocalyptic glee. Aguirre is one of a growing cadre of cosmologists who theorize that our universe is just one of many in a “multiverse” of universes. In their effort to grasp the implications of this idea, they have been calculating the odds that universes could interact with their neighbors or even smash into each other. While investigating what kind of gruesome end might result, they have stumbled upon a few surprises. There are tantalizing hints that our universe has already survived such a collision—and bears the scars to prove it.

Aguirre has organized a conference on Grand Cayman to address just such mind-boggling matters. The conversations here venture into multiverse mishaps and other matters of cosmological genesis and destruction. At first blush the setting seems incongruous: The tropical sun beats down dreamily, the smell of broken coconuts drifts from beneath the palm trees, and the ocean roars rhythmically in the background. But the locale is perhaps fitting. The winds are strong for this time of year, reminding the locals of Hurricane Ivan, which devastated the capital city of George Town in 2004, lifting whole apartment blocks and transporting buildings across streets. In nature, peace and violence are never far from each other.

Much of today’s interest in multiple universes stems from concepts developed in the early 1980s by the pioneering cosmologists Alan Guth at MIT and Andrei Linde, then at the Lebedev Physical Institute in Moscow. Guth proposed that our universe went through an incredibly rapid growth spurt, known as inflation, in the first 10^-30 second or so after the Big Bang. Such extreme expansion, driven by a powerful repulsive energy that quickly dissipated as the universe cooled, would solve many mysteries. Most notably, inflation could explain why the cosmos as we see it today is amazingly uniform in all directions. If space was stretched mightily during those first instants of existence, any extreme lumpiness or hot and cold spots would have immediately been smoothed out. This theory was modified by Linde, who had hit on a similar idea independently. Inflation made so much sense that it quickly became a part of the mainstream model of cosmology.

Soon after, Linde and Alex Vilenkin at Tufts University came to the startling realization that inflation may not have been a onetime event. If it could happen once, it could—and indeed should—happen again and again for eternity. Stranger still, every eruption of inflation would create a new bubble of space and energy. The result: an infinite progression of new universes, each bursting forth with its own laws of physics.

In such a bubbling multiverse of universes, it seems inevitable that universes would sometimes collide. But for decades cosmologists neglected this possibility, reckoning that the odds were small and that if it happened, the results would be irrelevant because anyone and anything near the collision would be annihilated.

[div class=attrib]More from theSource here.[end-div]