All posts by Mike

Small Change: Why the Revolution Will Not Be Tweeted

[div class=attrib]From The New Yorker:[end-div]

At four-thirty in the afternoon on Monday, February 1, 1960, four college students sat down at the lunch counter at the Woolworth’s in downtown Greensboro, North Carolina. They were freshmen at North Carolina A. & T., a black college a mile or so away.

“I’d like a cup of coffee, please,” one of the four, Ezell Blair, said to the waitress.

“We don’t serve Negroes here,” she replied.

The Woolworth’s lunch counter was a long L-shaped bar that could seat sixty-six people, with a standup snack bar at one end. The seats were for whites. The snack bar was for blacks. Another employee, a black woman who worked at the steam table, approached the students and tried to warn them away. “You’re acting stupid, ignorant!” she said. They didn’t move. Around five-thirty, the front doors to the store were locked. The four still didn’t move. Finally, they left by a side door. Outside, a small crowd had gathered, including a photographer from the Greensboro Record. “I’ll be back tomorrow with A. & T. College,” one of the students said.

By next morning, the protest had grown to twenty-seven men and four women, most from the same dormitory as the original four. The men were dressed in suits and ties. The students had brought their schoolwork, and studied as they sat at the counter. On Wednesday, students from Greensboro’s “Negro” secondary school, Dudley High, joined in, and the number of protesters swelled to eighty. By Thursday, the protesters numbered three hundred, including three white women, from the Greensboro campus of the University of North Carolina. By Saturday, the sit-in had reached six hundred. People spilled out onto the street. White teen-agers waved Confederate flags. Someone threw a firecracker. At noon, the A. & T. football team arrived. “Here comes the wrecking crew,” one of the white students shouted.

By the following Monday, sit-ins had spread to Winston-Salem, twenty-five miles away, and Durham, fifty miles away. The day after that, students at Fayetteville State Teachers College and at Johnson C. Smith College, in Charlotte, joined in, followed on Wednesday by students at St. Augustine’s College and Shaw University, in Raleigh. On Thursday and Friday, the protest crossed state lines, surfacing in Hampton and Portsmouth, Virginia, in Rock Hill, South Carolina, and in Chattanooga, Tennessee. By the end of the month, there were sit-ins throughout the South, as far west as Texas. “I asked every student I met what the first day of the sitdowns had been like on his campus,” the political theorist Michael Walzer wrote in Dissent. “The answer was always the same: ‘It was like a fever. Everyone wanted to go.’ ” Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

The world, we are told, is in the midst of a revolution. The new tools of social media have reinvented social activism. With Facebook and Twitter and the like, the traditional relationship between political authority and popular will has been upended, making it easier for the powerless to collaborate, coördinate, and give voice to their concerns. When ten thousand protesters took to the streets in Moldova in the spring of 2009 to protest against their country’s Communist government, the action was dubbed the Twitter Revolution, because of the means by which the demonstrators had been brought together. A few months after that, when student protests rocked Tehran, the State Department took the unusual step of asking Twitter to suspend scheduled maintenance of its Web site, because the Administration didn’t want such a critical organizing tool out of service at the height of the demonstrations. “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy,” Mark Pfeifle, a former national-security adviser, later wrote, calling for Twitter to be nominated for the Nobel Peace Prize. Where activists were once defined by their causes, they are now defined by their tools. Facebook warriors go online to push for change. “You are the best hope for us all,” James K. Glassman, a former senior State Department official, told a crowd of cyber activists at a recent conference sponsored by Facebook, A. T. & T., Howcast, MTV, and Google. Sites like Facebook, Glassman said, “give the U.S. a significant competitive advantage over terrorists. Some time ago, I said that Al Qaeda was ‘eating our lunch on the Internet.’ That is no longer the case. Al Qaeda is stuck in Web 1.0. The Internet is now about interactivity and conversation.”

These are strong, and puzzling, claims. Why does it matter who is eating whose lunch on the Internet? Are people who log on to their Facebook page really the best hope for us all? As for Moldova’s so-called Twitter Revolution, Evgeny Morozov, a scholar at Stanford who has been the most persistent of digital evangelism’s critics, points out that Twitter had scant internal significance in Moldova, a country where very few Twitter accounts exist. Nor does it seem to have been a revolution, not least because the protests—as Anne Applebaum suggested in the Washington Post—may well have been a bit of stagecraft cooked up by the government. (In a country paranoid about Romanian revanchism, the protesters flew a Romanian flag over the Parliament building.) In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets post with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

Some of this grandiosity is to be expected. Innovators tend to be solipsists. They often want to cram every stray fact and experience into their new model. As the historian Robert Darnton has written, “The marvels of communication technology in the present have produced a false consciousness about the past—even a sense that communication has no history, or had nothing of importance to consider before the days of television and the Internet.” But there is something else at work here, in the outsized enthusiasm for social media. Fifty years after one of the most extraordinary episodes of social upheaval in American history, we seem to have forgotten what activism is.

[div class=attrib]More from theSource here.[end-div]

Commonplaces of technology critique

[div class=attrib]From Eurozine:[end-div]

What is it good for? A passing fad! It makes you stupid! Today’s technology critique is tomorrow’s embarrassing error of judgement, as Katrin Passig shows. Her suggestion: one should try to avoid repeating the most commonplace critiques, particularly in public.

In a 1969 study of colour designations in different cultures, the anthropologist Brent Berlin and the linguist Paul Kay described how the observed progression always followed the same sequence of stages. Cultures with only two colour concepts distinguish between “light” and “dark” shades. If the culture recognizes three colours, the third will be red. If the language differentiates further, first come green and/or yellow, then blue. All languages with six colour designations distinguish between black, white, red, green, blue and yellow. The next level is brown, then, in varying sequences, orange, pink, purple and/or grey, with light blue appearing last of all.

The reaction to technical innovations, both in the media and in our private lives, follows similarly preconceived paths. The first, entirely knee-jerk dismissal is the “What the hell is it good for?” (Argument No.1) with which IBM engineer Robert Lloyd greeted the microprocessor in 1968. Even practices and techniques that only constitute a variation on the familiar – the electric typewriter as successor to the mechanical version, for instance – are met with distaste in the cultural criticism sector. Inventions like the telephone or the Internet, which open up a whole new world, have it even tougher. If cultural critics had existed at the dawn of life itself, they would have written grumpily in their magazines: “Life – what is it good for? Things were just fine before.”

Because the new throws into confusion processes that people have got used to, it is often perceived not only as useless but as a downright nuisance. The student Friedrich August Köhler wrote in 1790 after a journey on foot from Tübingen to Ulm: “[Signposts] had been put up everywhere following an edict of the local prince, but their existence proved short-lived, since they tended to be destroyed by a boisterous rabble in most places. This was most often the case in areas where the country folk live scattered about on farms, and when going on business to the next city or village more often than not come home inebriated and, knowing the way as they do, consider signposts unnecessary.”

The Parisians seem to have greeted the introduction of street lighting in 1667 under Louis XIV with a similar lack of enthusiasm. Dietmar Kammerer conjectured in the Süddeutsche Zeitung that the regular destruction of these street lamps represented a protest on the part of the citizens against the loss of their private sphere, since it seemed clear to them that here was “a measure introduced by the king to bring the streets under his control”. A simpler explanation would be that citizens tend in the main to react aggressively to unsupervised innovations in their midst. Recently, Deutsche Bahn explained that the initial vandalism of their “bikes for hire” had died down, now that locals had “grown accustomed to the sight of the bicycles”.

When it turns out that the novelty is not as useless as initially assumed, there follows the brief interregnum of Argument No.2: “Who wants it anyway?” “That’s an amazing invention,” gushed US President Rutherford B. Hayes of the telephone, “but who would ever want to use one of them?” And the film studio boss Harry M. Warner is quoted as asking in 1927, “Who the hell wants to hear actors talk?”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: The Chimney Sweeper

[div class=attrib]By Robert Pinsky for Slate:[end-div]

Here is a pair of poems more familiar than many I’ve presented here in the monthly “Classic Poem” feature—familiar, maybe, yet with an unsettling quality that seems inexhaustible. As in much of William Blake’s writing, what I may think I know, he manages to make me wonder if I really do know.

“Blake’s poetry has the unpleasantness of great poetry,” says T.S. Eliot (who has a way of parodying himself even while making wise observations). The truth in Eliot’s remark, for me, has to do not simply with Blake’s indictment of conventional churches, governments, artists but with his general, metaphysical defiance toward customary ways of understanding the universe.

The “unpleasantness of great poetry,” as exemplified by Blake, is rooted in a seductively beautiful process of unbalancing and disrupting. Great poetry gives us elaborately attractive constructions of architecture or music or landscape—while preventing us from settling comfortably into this new and engaging structure, cadence, or terrain. In his Songs of Innocence and Experience, Shewing the Two Contrary States of the Human Soul, Blake achieves a binary, deceptively simple version of that splendid “unpleasantness.”

In particular, the two poems both titled “The Chimney Sweeper” offer eloquent examples of Blake’s unsettling art. (One “Chimney Sweeper” poem comes from the Songs of Innocence; the other, from the Songs of Experience.) I can think to myself that the poem in Songs of Innocence is more powerful than the one in Songs of Experience, because the Innocence characters—both the “I” who speaks and “little Tom Dacre”—provide, in their heartbreaking extremes of acceptance, the more devastating indictment of social and economic arrangements that sell and buy children, sending them to do crippling, fatal labor.

By that light, the Experience poem entitled “The Chimney Sweeper,” explicit and accusatory, can seem a lesser work of art. The Innocence poem is implicit and ironic. Its delusional or deceptive Angel with a bright key exposes religion as exploiting the credulous children, rather than protecting them or rescuing them. The profoundly, utterly “innocent” speaker provides a subversive drama.

But that judgment is unsettled by second thoughts: Does the irony of the Innocence poem affect me all the more—does it penetrate without seeming heavy?—precisely because I am aware of the Experience poem? Do the explicit lines “They clothed me in the clothes of death,/ And taught me to sing the notes of woe” re-enforce the Innocence poem’s meanings—while pointedly differing from, maybe even criticizing, that counterpart-poem’s ironic method? And doesn’t that, too, bring another, significant note of dramatic outrage?

Or, to put the question more in terms of subject matter, both poems dramatize the way religion, government, and custom collaborate in social arrangements that impose cruel treatment on some people while enhancing the lives of others (for example, by cleaning their chimneys). Does the naked, declarative quality of the Experience poem sharpen my understanding of the Innocence poem? Does the pairing hold back or forbid my understanding’s tendency to become self-congratulatory or pleasantly resolved? It is in the nature of William Blake’s genius to make such questions not just literary but moral.

“The Chimney Sweeper,” from Songs of Innocence

When my mother died I was very young,
And my father sold me while yet my tongue
Could scarcely cry “ ’weep! ’weep! ’weep! ’weep!’ ”
So your chimneys I sweep & in soot I sleep.

There’s little Tom Dacre, who cried when his head
That curled like a lamb’s back, was shaved: so I said,
“Hush, Tom! never mind it, for when your head’s bare
You know that the soot cannot spoil your white hair.”

And so he was quiet, & that very night,
As Tom was a-sleeping he had such a sight!
That thousands of sweepers, Dick, Joe, Ned & Jack,
Were all of them locked up in coffins of black.

And by came an Angel who had a bright key,
And he opened the coffins & set them all free;
Then down a green plain, leaping, laughing, they run,
And wash in a river and shine in the Sun.

Then naked & white, all their bags left behind,
They rise upon clouds and sport in the wind.
And the Angel told Tom, if he’d be a good boy,
He’d have God for his father & never want joy.

And so Tom awoke; and we rose in the dark,
And got with our bags & our brushes to work.
Though the morning was cold, Tom was happy & warm;
So if all do their duty, they need not fear harm.

—William Blake

[div class=attrib]More from theSource here.[end-div]

Google’s Earth

[div class=attrib]From The New York Times:[end-div]

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

William Gibson is the author of the forthcoming novel “Zero History.”

[div class=attrib]More from theSource here.[end-div]

Social networking: Failure to connect

[div class=attrib]From the Guardian:[end-div]

The first time I joined Facebook, I had to quit again immediately. It was my first week of university. I was alone, along with thousands of other students, in a sea of club nights and quizzes and tedious conversations about other people’s A-levels. This was back when the site was exclusively for students. I had been told, in no uncertain terms, that joining was mandatory. Failure to do so was a form of social suicide worse even than refusing to drink alcohol. I had no choice. I signed up.

Users of Facebook will know the site has one immutable feature. You don’t have to post a profile picture, or share your likes and dislikes with the world, though both are encouraged. You can avoid the news feed, the apps, the tweet-like status updates. You don’t even have to choose a favourite quote. The one thing you cannot get away from is your friend count. It is how Facebook keeps score.

Five years ago, on probably the loneliest week of my life, my newly created Facebook page looked me square in the eye and announced: “You have 0 friends.” I closed the account.

Facebook is not a good place for a lonely person, and not just because of how precisely it quantifies your isolation. The news feed, the default point of entry to the site, is a constantly updated stream of your every friend’s every activity, opinion and photograph. It is a Twitter feed in glorious technicolour, complete with pictures, polls and videos. It exists to make sure you know exactly how much more popular everyone else is, casually informing you that 14 of your friends were tagged in the album “Fun without Tom Meltzer”. It can be, to say the least, disheartening. Without a real-world social network with which to interact, social networking sites act as proof of the old cliché: you’re never so alone as when you’re in a crowd.

The pressures put on teenagers by sites such as Facebook are well-known. Reports of cyber-bullying, happy-slapping, even self-harm and suicide attempts motivated by social networking sites have become increasingly common in the eight years since Friendster – and then MySpace, Bebo and Facebook – launched. But the subtler side-effects for a generation that has grown up with these sites are only now being felt. In March this year, the NSPCC published a detailed breakdown of calls made to ChildLine in the last five years. Though overall the number of calls from children and teenagers had risen by just 10%, calls about loneliness had nearly tripled, from 1,853 five years ago to 5,525 in 2009. Among boys, the number of calls about loneliness was more than five times higher than it had been in 2004.

This is not just a teenage problem. In May, the Mental Health Foundation released a report called The Lonely Society? Its survey found that 53% of 18-34-year-olds had felt depressed because of loneliness, compared with just 32% of people over 55. The question of why was, in part, answered by another of the report’s findings: nearly a third of young people said they spent too much time communicating online and not enough in person.

[div class=attrib]More from theSource here.[end-div]

What is HTML5?

There is much going on in the world of internet and web standards, including the gradual roll-out of IPv6 and HTML5. HTML5 is a much more functional markup language than its predecessors and is better suited for developing richer user interfaces and interactions. Major highlights of HTML5 are shown in the infographic below.
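
To make the “richer user interfaces and interactions” claim a little more concrete, here is a minimal browser-side sketch of my own (not taken from the Focus.com infographic) exercising two HTML5 additions: the canvas drawing surface and the localStorage API. The "sketchpad" element id is hypothetical and assumes a page containing a matching canvas element.

```typescript
// A minimal sketch of two HTML5 features, using standard DOM APIs.
// Assumes a page containing a hypothetical <canvas id="sketchpad"> element.

// 1. <canvas>: script-driven 2D drawing with no plugin required.
const canvas = document.querySelector<HTMLCanvasElement>("#sketchpad");
const ctx = canvas?.getContext("2d");
if (ctx) {
  ctx.fillStyle = "steelblue";
  ctx.fillRect(10, 10, 120, 80); // draw a rectangle directly from script
}

// 2. localStorage: simple client-side persistence across page reloads.
const visits = Number(localStorage.getItem("visits") ?? "0") + 1;
localStorage.setItem("visits", String(visits));
console.log(`This page has been opened ${visits} time(s) in this browser.`);
```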

[div class=attrib]From Focus.com:[end-div]

[div class=attrib]More from theSource here.[end-div]

Sergey Brin’s Search for a Parkinson’s Cure

[div class=attrib]From Wired:[end-div]

Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.

Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)

The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”

There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.

Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.

That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.

“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.

Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries a LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
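
For readers who want to follow the back-of-the-envelope arithmetic, here is a small sketch of the chain of estimates described above. The inputs are the article’s rough figures, not clinical ones, and the halving factors are Brin’s own off-the-cuff guesses.

```typescript
// A sketch of the off-the-cuff risk arithmetic described in the article.
// All inputs are the article's rough estimates, not clinical figures.

const averageRisk = 0.01;                // ~1% lifetime risk for an average American
const lrrk2Low = 0.30, lrrk2High = 0.75; // reported range for LRRK2 carriers

// Brin "splits the difference" on the carrier range: roughly 50-50 odds.
let risk = (lrrk2Low + lrrk2High) / 2;   // ~0.52

risk /= 2; // diet, exercise, caffeine: "get my risk down by half" -> ~25%
risk /= 2; // expected progress in neuroscience: "around another half" -> ~13%

console.log(`Estimated personal risk: ~${Math.round(risk * 100)}%`);
console.log(`Roughly ${Math.round(risk / averageRisk)}x the average risk.`);
// The article's final adjustment -- research Brin funds himself -- brings the
// figure to "somewhere under 10 percent", still about ten times the average.
```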

It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.

[div class=attrib]More from theSource here.[end-div]

The internet: Everything you ever need to know

[div class=attrib]From The Observer:[end-div]

In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.

A funny thing happened to us on the way to the future. The internet went from being something exotic to being a boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.

The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.

Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.

All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?

So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.

But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself by merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.

But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”

[div class=attrib]More from theSource here.[end-div]

What Is I.B.M.’s Watson?

[div class=attrib]From The New York Times:[end-div]

“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.
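
As a way of making the question-answering pattern concrete, here is a deliberately toy sketch of the general approach the article describes: search a local store of documents (Watson, remember, is not connected to the Internet), estimate a confidence score for the best candidate, buzz in only when that score clears a threshold, and phrase the response as a question. It illustrates the idea only, not I.B.M.’s actual system; the documents, scoring, and threshold are invented for the example.

```typescript
// A toy question-answering loop, illustrating the pattern only -- not Watson.
// A real system uses millions of documents and far more sophisticated scoring.

const documents: Record<string, string> = {
  Dubai: "Dubai is a city in the U.A.E. and home to the Burj Khalifa, the tallest tower in the world.",
  Toronto: "Toronto is the largest city in Canada and home to the CN Tower.",
};

function answer(clue: string, threshold = 0.5): string {
  const clueWords = new Set(clue.toLowerCase().match(/[a-z]+/g) ?? []);
  let best = { topic: "", score: 0 };

  for (const [topic, text] of Object.entries(documents)) {
    const words = text.toLowerCase().match(/[a-z]+/g) ?? [];
    const overlap = words.filter((w) => clueWords.has(w)).length;
    const score = overlap / words.length; // crude confidence estimate
    if (score > best.score) best = { topic, score };
  }

  // Buzz in only when confident, and phrase the response as a question.
  return best.score >= threshold ? `What is ${best.topic}?` : "(pass)";
}

console.log(answer("Toured the Burj in this U.A.E. city, the tallest tower in the world."));
// -> "What is Dubai?"
```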

[div class=attrib]More from theSource here.[end-div]

Mind Over Mass Media

[div class=attrib]From the New York Times:[end-div]

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your Blackberry at dinner time, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Upon Nothing

[div class=attrib]By Robert Pinsky for Slate:[end-div]

The quality of wit, like the Hindu god Shiva, both creates and destroys—sometimes, both at once: The flash of understanding negates a trite or complacent way of thinking, and that stroke of obliteration at the same time creates a new form of insight and a laugh of recognition.

Also like Shiva, wit dances. Leaping gracefully, balancing speed and poise, it can re-embody and refresh old material. Negation itself, for example—verbal play with words like nothing and nobody: In one of the oldest jokes in literature, when the menacing Polyphemus asks Odysseus for his name, Odysseus tricks the monster by giving his name as the Greek equivalent of Nobody.

Another, immensely moving version of that Homeric joke (it may have been old even when Homer used it) is central to the best-known song of the great American comic Bert Williams (1874-1922). You can hear Williams’ funny, heart-rending, subtle rendition of the song (music by Williams, lyrics by Alex Rogers) at the University of California’s Cylinder Preservation and Digitization site.

The lyricist Rogers, I suspect, was aided by Williams’ improvisations as well as his virtuoso delivery. The song’s language is sharp and plain. The plainness, an almost throw-away surface, allows Williams to weave the refrain-word “Nobody” into an intricate fabric of jaunty pathos, savage lament, sly endurance—all in three syllables, with the dialect bent and stretched and released:

When life seems full of clouds and rain,
And I am full of nothing and pain,
Who soothes my thumpin’, bumpin’ brain?
Nobody.

When winter comes with snow and sleet,
And me with hunger, and cold feet—
Who says, “Here’s twenty-five cents
Go ahead and get yourself somethin’ to eat”?
Nobody.

I ain’t never done nothin’ to Nobody.
I ain’t never got nothin’ from Nobody, no time.
And, until I get somethin’ from somebody sometime,
I’ll never do nothin’ for Nobody, no time.

In his poem “Upon Nothing,” John Wilmot (1647-80), also known as the earl of Rochester, deploys wit as a flashing blade of skepticism, slashing away not only at a variety of human behaviors and beliefs, not only at false authorities and hollow reverences, not only at language, but at knowledge—at thought itself:

“Upon Nothing”

………………………1
Nothing, thou elder brother ev’n to Shade
Thou hadst a being ere the world was made,
And, well fixed, art alone of ending not afraid.

………………………2
Ere Time and Place were, Time and Place were not,
When primitive Nothing Something straight begot,
Then all proceeded from the great united What.

………………………3
Something, the general attribute of all,
Severed from thee, its sole original,
Into thy boundless self must undistinguished fall.

………………………4
Yet Something did thy mighty power command,
And from thy fruitful emptiness’s hand
Snatched men, beasts, birds, fire, water, air, and land.

………………………5
Matter, the wicked’st offspring of thy race,
By Form assisted, flew from thy embrace
And rebel Light obscured thy reverend dusky face.

………………………6
With Form and Matter, Time and Place did join,
Body, thy foe, with these did leagues combine
To spoil thy peaceful realm and ruin all thy line.

………………………7
But turncoat Time assists the foe in vain,
And bribed by thee destroys their short-lived reign,
And to thy hungry womb drives back thy slaves again.

………………………8
Though mysteries are barred from laic eyes,
And the divine alone with warrant pries
Into thy bosom, where thy truth in private lies;

………………………9
Yet this of thee the wise may truly say:
Thou from the virtuous nothing doest delay,
And to be part of thee the wicked wisely pray.

………………………10
Great Negative, how vainly would the wise
Enquire, define, distinguish, teach, devise,
Didst thou not stand to point their blind philosophies.

………………………11
Is or Is Not, the two great ends of Fate,
And true or false, the subject of debate,
That perfect or destroy the vast designs of state;

………………………12
When they have racked the politician’s breast,
Within thy bosom most securely rest,
And when reduced to thee are least unsafe, and best.

………………………13
But, Nothing, why does Something still permit
That sacred monarchs should at council sit
With persons highly thought, at best, for nothing fit;

………………………14
Whilst weighty something modestly abstains
From princes’ coffers, and from Statesmen’s brains,
And nothing there, like stately Nothing reigns?

………………………15
Nothing, who dwell’st with fools in grave disguise,
For whom they reverend shapes and forms devise,
Lawn-sleeves, and furs, and gowns, when they like thee look wise.

………………………16
French truth, Dutch prowess, British policy,
Hibernian learning, Scotch civility,
Spaniards’ dispatch, Danes’ wit, are mainly seen in thee.

………………………17
The great man’s gratitude to his best friend,
Kings’ promises, whores’ vows, towards thee they bend,
Flow swiftly into thee, and in thee ever end.

[div class=attrib]More from theSource here.[end-div]

Forget Avatar, the real 3D revolution is coming to your front room

[div class=attrib]From The Guardian:[end-div]

Enjoy eating goulash? Fed up with needing three pieces of cutlery? It could be that I have a solution for you – and not just for you but for picnickers who like a bit of bread with their soup, too. Or indeed for anyone who has dreamed of seeing the spoon and the knife incorporated into one, easy to use, albeit potentially dangerous instrument. Ladies and gentlemen, I would like to introduce you to the Knoon.

The Knoon came to me in a dream – I had a vision of a soup spoon with a knife stuck to its top, blade pointing upwards. Given the potential for lacerating your mouth on the Knoon’s sharp edge, maybe my dream should have stayed just that. But thanks to a technological leap that is revolutionising manufacturing and, some hope, may even change the nature of our consumer society, I now have a Knoon sitting right in front of me. I had the idea, I drew it up and then I printed my cutlery out.

3D is this year’s buzzword in Hollywood. From Avatar to Clash of the Titans, it’s a new take on an old fad that’s coming to save the movie industry. But with less glitz and a degree less fanfare, 3D printing is changing our vision of the world too, and ultimately its effects might prove a degree more special.

Thinglab is a company that specialises in 3D printing. Based in a nondescript office building in east London, its team works mainly with commercial clients to print models that would previously have been assembled by hand. Architects design their buildings in 3D software packages and pass them to Thinglab to print scale models. When mobile phone companies come up with a new handset, they print prototypes first in order to test size, shape and feel. Jewellers not only make prototypes, they use them as a basis for moulds. Sculptors can scan in their original works, adjust the dimensions and rattle off a series of duplicates (signatures can be added later).
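
As a rough illustration of what handing a design to a printer involves, the sketch below builds a tiny triangle mesh in code, scales it (much as the sculptors above adjust dimensions), and writes it out as an ASCII STL file, a plain-text mesh format that most printing workflows can ingest. It is a hypothetical, minimal example of the hand-off, not Thinglab’s actual toolchain; the shape and file name are invented.

```typescript
// A minimal sketch: describe a shape as triangles, scale it, save it as ASCII STL.
import { writeFileSync } from "node:fs";

type Vec3 = [number, number, number];
type Facet = [Vec3, Vec3, Vec3];

// The four triangular faces of a unit tetrahedron -- a stand-in for a real design.
const tetrahedron: Facet[] = [
  [[0, 0, 0], [1, 0, 0], [0, 1, 0]],
  [[0, 0, 0], [1, 0, 0], [0, 0, 1]],
  [[0, 0, 0], [0, 1, 0], [0, 0, 1]],
  [[1, 0, 0], [0, 1, 0], [0, 0, 1]],
];

function toAsciiStl(name: string, facets: Facet[], scale = 1): string {
  const lines = [`solid ${name}`];
  for (const facet of facets) {
    lines.push("  facet normal 0 0 0"); // normals omitted; slicers recompute them
    lines.push("    outer loop");
    for (const [x, y, z] of facet) {
      lines.push(`      vertex ${x * scale} ${y * scale} ${z * scale}`);
    }
    lines.push("    endloop");
    lines.push("  endfacet");
  }
  lines.push(`endsolid ${name}`);
  return lines.join("\n");
}

// Scale the model up twentyfold and write the file, ready to hand to a printer.
writeFileSync("model.stl", toAsciiStl("demo_part", tetrahedron, 20));
```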

All this work is done in the Thinglab basement, a kind of temple to 3D where motion capture suits hang from the wall and a series of next generation TV screens (no need for 3D glasses) sit in the corner. In the middle of the room lurk two hulking 3D printers. Their facades give them the faces of miserable robots.

“We had David Hockney in here recently and he was gobsmacked,” says Robin Thomas, one of Thinglab’s directors, reeling off a list of intrigued celebrities who have made a pilgrimage to his basement. “Boy George came in and we took a scan of his face.” Above the printers sits a collection of the models they’ve produced: everything from a car’s suspension system to a rendering of John Cleese’s head. “If a creative person wakes up in the morning with an idea,” says Thomas, “they could have a model by the end of the day. People who would have spent days, weeks, months on these type of models can now do it with a printer. If they can think of it, we can make it.”

[div class=attrib]More from theSource here.[end-div]

The Chess Master and the Computer

[div class=attrib]By Garry Kasparov, From the New York Review of Books:[end-div]

In 1985, in Hamburg, I played against thirty-two different chess computers at the same time in what is known as a simultaneous exhibition. I walked from one machine to the next, making my moves over a period of more than five hours. The four leading chess computer manufacturers had sent their top models, including eight named after me from the electronics firm Saitek.

It illustrates the state of computer chess at the time that it didn’t come as much of a surprise when I achieved a perfect 32–0 score, winning every game, although there was an uncomfortable moment. At one point I realized that I was drifting into trouble in a game against one of the “Kasparov” brand models. If this machine scored a win or even a draw, people would be quick to say that I had thrown the game to get PR for the company, so I had to intensify my efforts. Eventually I found a way to trick the machine with a sacrifice it should have refused. From the human perspective, or at least from my perspective, those were the good old days of man vs. machine chess.

Eleven years later I narrowly defeated the supercomputer Deep Blue in a match. Then, in 1997, IBM redoubled its efforts—and doubled Deep Blue’s processing power—and I lost the rematch in an event that made headlines around the world. The result was met with astonishment and grief by those who took it as a symbol of mankind’s submission before the almighty computer. (“The Brain’s Last Stand” read the Newsweek headline.) Others shrugged their shoulders, surprised that humans could still compete at all against the enormous calculating power that, by 1997, sat on just about every desk in the first world.

It was the specialists—the chess players and the programmers and the artificial intelligence enthusiasts—who had a more nuanced appreciation of the result. Grandmasters had already begun to see the implications of the existence of machines that could play—if only, at this point, in a select few types of board configurations—with godlike perfection. The computer chess people were delighted with the conquest of one of the earliest and holiest grails of computer science, in many cases matching the mainstream media’s hyperbole. The 2003 book Deep Blue by Monty Newborn was blurbed as follows: “a rare, pivotal watershed beyond all other triumphs: Orville Wright’s first flight, NASA’s landing on the moon….”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Michelangelo’s Labor Pains

[div class=attrib]By Robert Pinsky for Slate:[end-div]

After a certain point, reverence can become automatic. Our admiration for great works of art can get a bit reflexive, then synthetic, then can harden into a pious coating that repels real attention. Michelangelo’s painted ceiling of the Sistine Chapel in the Vatican might be an example of such automatic reverence. Sometimes, a fresh look or a hosing-down is helpful—if only by restoring the meaning of “work” to the phrase “work of art.”

Michelangelo (1475-1564) himself provides a refreshing dose of reality. A gifted poet as well as a sculptor and painter, he wrote energetically about despair, detailing with relish the unpleasant side of his work on the famous ceiling. The poem, in Italian, is an extended (or “tailed”) sonnet, with a coda of six lines appended to the standard 14. The translation I like best is by the American poet Gail Mazur. Her lines are musical but informal, with a brio conveying that the Italian artist knew well enough that he and his work were great—but that he enjoyed vigorously lamenting his discomfort, pain, and inadequacy to the task. No wonder his artistic ideas are bizarre and no good, says Michelangelo: They must come through the medium of his body, that “crooked blowpipe” (Mazur’s version of “cerbottana torta”). Great artist, great depression, great imaginative expression of it. This is a vibrant, comic, but heartfelt account of the artist’s work:

Michelangelo: To Giovanni da Pistoia
“When the Author Was Painting the Vault of the Sistine Chapel” —1509

I’ve already grown a goiter from this torture,
hunched up here like a cat in Lombardy
(or anywhere else where the stagnant water’s poison).
My stomach’s squashed under my chin, my beard’s
pointing at heaven, my brain’s crushed in a casket,
my breast twists like a harpy’s. My brush,
above me all the time, dribbles paint
so my face makes a fine floor for droppings!

My haunches are grinding into my guts,
my poor ass strains to work as a counterweight,
every gesture I make is blind and aimless.
My skin hangs loose below me, my spine’s
all knotted from folding over itself.
I’m bent taut as a Syrian bow.

Because I’m stuck like this, my thoughts
are crazy, perfidious tripe:
anyone shoots badly through a crooked blowpipe.

My painting is dead.
Defend it for me, Giovanni, protect my honor.
I am not in the right place—I am not a painter.

[div class=attrib]More from theSource here.[end-div]

The meaning of network culture

[div class=attrib]From Eurozine:[end-div]

Whereas in postmodernism, being was left in a free-floating fabric of emotional intensities, in contemporary culture the existence of the self is affirmed through the network. Kazys Varnelis discusses what this means for the democratic public sphere.

Not all at once but rather slowly, in fits and starts, a new societal condition is emerging: network culture. As digital computing matures and meshes with increasingly mobile networking technology, society is also changing, undergoing a cultural shift. Just as modernism and postmodernism served as crucial heuristic devices in their day, studying network culture as a historical phenomenon allows us to better understand broader sociocultural trends and structures, to give duration and temporality to our own, ahistorical time.

If more subtle than the much-talked about economic collapse of fall 2008, this shift in society is real and far more radical, underscoring even the logic of that collapse. During the space of a decade, the network has become the dominant cultural logic. Our economy, public sphere, culture, even our subjectivity are mutating rapidly and show little evidence of slowing down the pace of their evolution. The global economic crisis only demonstrated our faith in the network and its dangers. Over the last two decades, markets and regulators had increasingly placed their faith in the efficient market hypothesis, which posited that investors were fundamentally rational and, fed information by highly efficient data networks, would always make the right decision. The failure came when key parts of the network – the investors, regulators, and the finance industry – failed to think through the consequences of their actions and placed their trust in each other.

The collapse of the markets seems to have been sudden, but it was actually a long-term process, beginning with bad decisions made long before the collapse. Most of the changes in network culture are subtle and only appear radical in retrospect. Take our relationship with the press. One morning you noted with interest that your daily newspaper had established a website. Another day you decided to stop buying the paper and just read it online. Then you started reading it on a mobile Internet platform, or began listening to a podcast of your favourite column while riding a train. Perhaps you dispensed with official news entirely, preferring a collection of blogs and amateur content. Eventually the paper may well be distributed only on the net, directly incorporating user comments and feedback. Or take the way cell phones have changed our lives. When you first bought a mobile phone, were you aware of how profoundly it would alter your life? Soon, however, you found yourself abandoning the tedium of scheduling dinner plans with friends in advance, instead coordinating with them en route to a particular neighbourhood. Or if your friends or family moved away to university or a new career, you found that through a social networking site like Facebook and through the ever-present telematic links of the mobile phone, you did not lose touch with them.

If it is difficult to realize the radical impact of the contemporary, this is in part due to the hype about the near-future impact of computing on society in the 1990s. The failure of the near-future to be realized immediately, due to the limits of the technology of the day, made us jaded. The dot.com crash only reinforced that sense. But slowly, technology advanced and society changed, finding new uses for it, in turn spurring more change. Network culture crept up on us. Its impact on us today is radical and undeniable.

[div class=attrib]More from theSource here.[end-div]

The Madness of Crowds and an Internet Delusion

[div class=attrib]From The New York Times:[end-div]

Rethinking the Web: Jaron Lanier, pictured here in 1999, was an early proponent of the Internet’s open culture. His new book examines the downsides.

In the 1990s, Jaron Lanier was one of the digital pioneers hailing the wonderful possibilities that would be realized once the Internet allowed musicians, artists, scientists and engineers around the world to instantly share their work. Now, like a lot of us, he is having second thoughts.

Mr. Lanier, a musician and avant-garde computer scientist — he popularized the term “virtual reality” — wonders if the Web’s structure and ideology are fostering nasty group dynamics and mediocre collaborations. His new book, “You Are Not a Gadget,” is a manifesto against “hive thinking” and “digital Maoism,” by which he means the glorification of open-source software, free information and collective work at the expense of individual creativity.

He blames the Web’s tradition of “drive-by anonymity” for fostering vicious pack behavior on blogs, forums and social networks. He acknowledges the examples of generous collaboration, like Wikipedia, but argues that the mantras of “open culture” and “information wants to be free” have produced a destructive new social contract.

“The basic idea of this contract,” he writes, “is that authors, journalists, musicians and artists are encouraged to treat the fruits of their intellects and imaginations as fragments to be given without pay to the hive mind. Reciprocity takes the form of self-promotion. Culture is to become precisely nothing but advertising.”

I find his critique intriguing, partly because Mr. Lanier isn’t your ordinary Luddite crank, and partly because I’ve felt the same kind of disappointment with the Web. In the 1990s, when I was writing paeans to the dawning spirit of digital collaboration, it didn’t occur to me that the Web’s “gift culture,” as anthropologists called it, could turn into a mandatory potlatch for so many professions — including my own.

So I have selfish reasons for appreciating Mr. Lanier’s complaints about masses of “digital peasants” being forced to provide free material to a few “lords of the clouds” like Google and YouTube. But I’m not sure Mr. Lanier has correctly diagnosed the causes of our discontent, particularly when he blames software design for leading to what he calls exploitative monopolies on the Web like Google.

He argues that old — and bad — digital systems tend to get locked in place because it’s too difficult and expensive for everyone to switch to a new one. That basic problem, known to economists as lock-in, has long been blamed for stifling the rise of superior technologies like the Dvorak typewriter keyboard and Betamax videotapes, and for perpetuating duds like the Windows operating system.

It can sound plausible enough in theory — particularly if your Windows computer has just crashed. In practice, though, better products win out, according to the economists Stan Liebowitz and Stephen Margolis. After reviewing battles like Dvorak-qwerty and Betamax-VHS, they concluded that consumers had good reasons for preferring qwerty keyboards and VHS tapes, and that sellers of superior technologies generally don’t get locked out. “Although software is often brought up as locking in people,” Dr. Liebowitz told me, “we have made a careful examination of that issue and find that the winning products are almost always the ones thought to be better by reviewers.” When a better new product appears, he said, the challenger can take over the software market relatively quickly by comparison with other industries.

Dr. Liebowitz, a professor at the University of Texas at Dallas, said the problem on the Web today has less to do with monopolies or software design than with intellectual piracy, which he has also studied extensively. In fact, Dr. Liebowitz used to be a favorite of the “information-wants-to-be-free” faction.

In the 1980s he asserted that photocopying actually helped copyright owners by exposing more people to their work, and he later reported that audio and video taping technologies offered large benefits to consumers without causing much harm to copyright owners in Hollywood and the music and television industries.

But when Napster and other music-sharing Web sites started becoming popular, Dr. Liebowitz correctly predicted that the music industry would be seriously hurt because it was so cheap and easy to make perfect copies and distribute them. Today he sees similar harm to other industries like publishing and television (and he is serving as a paid adviser to Viacom in its lawsuit seeking damages from Google for allowing Viacom’s videos to be posted on YouTube).

Trying to charge for songs and other digital content is sometimes dismissed as a losing cause because hackers can crack any copy-protection technology. But as Mr. Lanier notes in his book, any lock on a car or a home can be broken, yet few people do so — or condone break-ins.

“An intelligent person feels guilty for downloading music without paying the musician, but they use this free-open-culture ideology to cover it,” Mr. Lanier told me. In the book he disputes the assertion that there’s no harm in copying a digital music file because you haven’t damaged the original file.

“The same thing could be said if you hacked into a bank and just added money to your online account,” he writes. “The problem in each case is not that you stole from a specific person but that you undermined the artificial scarcities that allow the economy to function.”

Mr. Lanier was once an advocate himself for piracy, arguing that his fellow musicians would make up for the lost revenue in other ways. Sure enough, some musicians have done well selling T-shirts and concert tickets, but it is striking how many of the top-grossing acts began in the predigital era, and how much of today’s music is a mash-up of the old.

“It’s as if culture froze just before it became digitally open, and all we can do now is mine the past like salvagers picking over a garbage dump,” Mr. Lanier writes. Or, to use another of his grim metaphors: “Creative people — the new peasants — come to resemble animals converging on shrinking oases of old media in a depleted desert.”

To save those endangered species, Mr. Lanier proposes rethinking the Web’s ideology, revising its software structure and introducing innovations like a universal system of micropayments. (To debate reforms, go to Tierney Lab at nytimes.com/tierneylab.)

Dr. Liebowitz suggests a more traditional reform for cyberspace: punishing thieves. The big difference between Web piracy and house burglary, he says, is that the penalties for piracy are tiny and rarely enforced. He expects people to keep pilfering (and rationalizing their thefts) as long as the benefits of piracy greatly exceed the costs.

In theory, public officials could deter piracy by stiffening the penalties, but they’re aware of another crucial distinction between online piracy and house burglary: There are a lot more homeowners than burglars, but there are a lot more consumers of digital content than producers of it.

The result is a problem a bit like trying to stop a mob of looters. When the majority of people feel entitled to someone’s property, who’s going to stand in their way?

[div class=attrib]More from theSource here.[end-div]

For Expatriates in China, Creative Lives of Plenty

[div class=attrib]From The New York Times:[end-div]

There was a chill in the morning air in 2005 when dozens of artists from China, Europe and North America emerged from their red-brick studios here to find the police blocking the gates to Suojiacun, their compound on the city’s outskirts. They were told that the village of about 100 illegally built structures was to be demolished, and were given two hours to pack.

By noon bulldozers were smashing the walls of several studios, revealing ripped-apart canvases and half-glazed clay vases lying in the rubble. But then the machines ceased their pulverizing, and the police dispersed, leaving most of the buildings unscathed. It was not the first time the authorities had threatened to evict these artists, nor would it be the last. But it was still frightening.

“I had invested everything in my studio,” said Alessandro Rolandi, a sculptor and performance artist originally from Italy who had removed his belongings before the destruction commenced. “I was really worried about my work being destroyed.”

He eventually left Suojiacun, but he has remained in China. Like the artists’ colony, the country offers challenges, but expatriates here say that the rewards outweigh the hardships. Mr. Rolandi is one of many artists (five are profiled here) who have left the United States and Europe for China, seeking respite from tiny apartments, an insular art world and nagging doubts about whether it’s best to forgo art for a reliable office job. They have discovered a land of vast creative possibility, where scale is virtually limitless and costs are comically low. They can rent airy studios, hire assistants, experiment in costly mediums like bronze and fiberglass.

“Today China has become one of the most important places to create and invent,” said Jérôme Sans, director of the Ullens Center for Contemporary Art in Beijing. “A lot of Western artists are coming here to live the dynamism and make especially crazy work they could never do anywhere else in the world.”

Rania Ho

A major challenge for foreigners, no matter how fluent or familiar with life here, is that even if they look like locals, it is virtually impossible to feel truly of this culture. For seven years Rania Ho, the daughter of Chinese immigrants born and raised in San Francisco, has lived in Beijing, where she runs a small gallery in a hutong, or alley, near one of the city’s main temples. “Being Chinese-American makes it easier to be an observer of what’s really happening because I’m camouflaged,” she said. “But it doesn’t mean I understand any more what people are thinking.”

Still, Ms. Ho, 40, revels in her role as outsider in a society that she says is blindly enthusiastic about remaking itself. She creates and exhibits work by both foreign and Chinese artists that often plays with China’s fetishization of mechanized modernity.

Because she lives so close to military parades and futuristic architecture, she said that her own pieces — like a water fountain gushing on the roof of her gallery and a cardboard table that levitates a Ping-Pong ball — chuckle at the “hypnotic properties of unceasing labor.” She said they are futile responses to the absurd experiences she shares with her neighbors, who are constantly seeing their world transform before their eyes. “Being in China forces one to reassess everything,” she said, “which is at times difficult and exhausting, but for a majority of the time it’s all very amusing and enlightening.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Adam’s Curse

[div class=attrib]By Robert Pinsky for Slate:[end-div]

Poetry can resemble incantation, but sometimes it also resembles conversation. Certain poems combine the two—the cadences of speech intertwined with the forms of song in a varying way that heightens the feeling. As in a screenplay or in fiction, the things that people in a poem say can seem natural, even spontaneous, yet also work to propel the emotional action along its arc.

The casual surface of speech and the inward energy of art have a clear relation in “Adam’s Curse” by William Butler Yeats (1865-1939). A couple and their friend are together at the end of a summer day. In the poem, two of them speak, first about poetry and then about love. All of the poem’s distinct narrative parts—the setting, the dialogue, the stunning and unspoken conclusion—are conveyed in the strict form of rhymed couplets throughout. I have read the poem many times, for many years, and every time, something in me is hypnotized by the dance of sentence and rhyme. Always, in a certain way, the conclusion startles me. How can the familiar be somehow surprising? It seems to be a principle of art; and in this case, the masterful, unshowy rhyming seems to be a part of it. The couplet rhyme profoundly drives and tempers the gradually gathering emotional force of the poem in ways beyond analysis.

Yeats’ dialogue creates many nuances of tone. It is even a little funny at times: The poet’s self-conscious self-pity about how hard he works (he does most of the talking) is exaggerated with a smile, and his categories for the nonpoet or nonmartyr “world” have a similar, mildly absurd sweeping quality: bankers, schoolmasters, clergymen … This is not wit, exactly, but the slightly comical tone friends might use sitting together on a summer evening. I hear the same lightness of touch when the woman says, “Although they do not talk of it at school.” The smile comes closest to laughter when the poet in effect mocks himself gently, speaking of those lovers who “sigh and quote with learned looks/ Precedents out of beautiful old books.” The plain monosyllables of “old books” are droll in the context of these lovers. (Yeats may feel that he has been such a lover in his day.)

The plainest, most straightforward language in the poem, in some ways, comes at the very end—final words, not uttered in the conversation, are more private and more urgent than what has come before. After the almost florid, almost conventionally poetic description of the sunset, the courtly hint of a love triangle falls away. The descriptive language of the summer twilight falls away. The dialogue itself falls away—all yielding to the idea that this concluding thought is “only for your ears.” That closing passage of interior thoughts, what in fiction might be called “omniscient narration,” makes the poem feel, to me, as though not simply heard but overheard.

“Adam’s Curse”

We sat together at one summer’s end,
That beautiful mild woman, your close friend,
And you and I, and talked of poetry.
I said, “A line will take us hours maybe;
Yet if it does not seem a moment’s thought,
Our stitching and unstitching has been naught.
Better go down upon your marrow-bones
And scrub a kitchen pavement, or break stones
Like an old pauper, in all kinds of weather;
For to articulate sweet sounds together
Is to work harder than all these, and yet
Be thought an idler by the noisy set
Of bankers, schoolmasters, and clergymen
The martyrs call the world.”

And thereupon
That beautiful mild woman for whose sake
There’s many a one shall find out all heartache
On finding that her voice is sweet and low
Replied, “To be born woman is to know—
Although they do not talk of it at school—
That we must labour to be beautiful.”
I said, “It’s certain there is no fine thing
Since Adam’s fall but needs much labouring.
There have been lovers who thought love should be
So much compounded of high courtesy
That they would sigh and quote with learned looks
Precedents out of beautiful old books;
Yet now it seems an idle trade enough.”

We sat grown quiet at the name of love;
We saw the last embers of daylight die,
And in the trembling blue-green of the sky
A moon, worn as if it had been a shell
Washed by time’s waters as they rose and fell
About the stars and broke in days and years.

I had a thought for no one’s but your ears:
That you were beautiful, and that I strove
To love you in the old high way of love;
That it had all seemed happy, and yet we’d grown
As weary-hearted as that hollow moon.

[div class=attrib]More from theSource here.[end-div]

CERN celebrates 20th anniversary of World Wide Web

theDiagonal doesn’t normally post “newsy” items, but we are making an exception in this case for two reasons: first, the “web” wasn’t around in 1989, so we couldn’t have posted a news release on our blog announcing its birth; second, in 1989 Tim Berners-Lee’s then manager waved off his proposal with a “Vague, but exciting” annotation, so without the benefit of the hindsight we now have, and lacking the foresight we so desire, we might well have dismissed it too. The rest, as they say, is history.

[div class=attrib]From Interactions.org:[end-div]

Web inventor Tim Berners-Lee today returned to the birthplace of his brainchild, 20 years after submitting his paper ‘Information Management: A Proposal’ to his manager Mike Sendall in March 1989. By writing the words ‘Vague, but exciting’ on the document’s cover, and giving Berners-Lee the go-ahead to continue, Sendall signed into existence the information revolution of our time: the World Wide Web. In September the following year, Berners-Lee took delivery of a computer called a NeXT cube, and by December 1990 the Web was up and running, albeit between just a couple of computers at CERN*.

Today’s event takes a look back at some of the early history, and pre-history, of the World Wide Web at CERN, includes a keynote speech from Tim Berners-Lee, and concludes with a series of talks from some of today’s Web pioneers.

“It’s a pleasure to be back at CERN today,” said Berners-Lee. “CERN has come a long way since 1989, and so has the Web, but its roots will always be here.”

The World Wide Web is undoubtedly the most well known spin-off from CERN, but it’s not the only one. Technologies developed at CERN have found applications in domains as varied as solar energy collection and medical imaging.

“When CERN scientists find a technological hurdle in the way of their ambitions, they have a tendency to solve it,” said CERN Director General Rolf Heuer. “I’m pleased to say that the spirit of innovation that allowed Tim Berners-Lee to invent the Web at CERN, and allowed CERN to nurture it, is alive and well today.”

[div class=attrib]More from theSource here.[end-div]

Why has manga become a global cultural product?

[div class=attrib]From Eurozine:[end-div]

In the West, manga has become a key part of the cultural accompaniment to economic globalization. No mere side-effect of Japan’s economic power, writes Jean-Marie Bouissou, manga is ideally suited to the cultural obsessions of the early twenty-first century.

Multiple paradoxes

Paradox surrounds the growth of manga in western countries such as France, Italy and the USA since the 1970s, and of genres descended from it: anime (cartoons), television serials and video games. The first paradox is that, whereas western countries have always imagined their culture and values as universal and sought to spread them (if only as cover for their imperial ambitions), Japan has historically been sceptical about sharing its culture with the world. The Shinto religion, for example, is perhaps unique in being strictly “national”: the very idea of a “Shintoist” foreigner would strike the Japanese as absurd.

The second paradox is that manga, in the form it has taken since 1945, is shot through with a uniquely Japanese historical experience. It depicts the trauma of a nation opened at gunpoint in 1853 by the “black ships” of Commodore Matthew Perry, frog-marched into modernity, and dragged into a contest with the West which ended in the holocaust of Hiroshima. It was this nation’s children – call them “Generation Tezuka” – who became the first generation of mangaka [manga creators]. They had seen their towns flattened by US bombers, their fathers defeated, their emperor stripped of his divinity, and their schoolbooks and the value-system they contained cast into the dustbin of history.

This defeated nation rebuilt itself through self-sacrificing effort and scarcely twenty years later had become the second economic power of the free world. Yet it received neither recognition (the 1980s were the years of “Japan-bashing” in the West), nor the security to which it aspired, before its newly-regained pride was crushed once more by the long crisis of the 1990s. Such a trajectory – unique, convulsive, dramatic, overshadowed by racial discrimination – differs radically from that of the old European powers, or that of young, triumphant America. Hence, it is all the more stunning that its collective imagination has spawned a popular culture capable of attaining “universality”.

At the start of the twenty-first century, Japan has become the world’s second largest exporter of cultural products. Manga has conquered 45 per cent of the French comic market, and Shonen Jump – the most important manga weekly for Japanese teenagers, whose circulation reached 6 million during the mid-1990s – has begun appearing in an American version. Manga, long considered fit only for children or poorly-educated youths, is starting to seduce a sophisticated generation of French thirty-somethings. This deserves an explanation.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of readbestmanga.[end-div]

Sex appeal

[div class=attrib]From Eurozine:[end-div]

Having condemned hyper-sexualized culture, the American religious Right is now wildly pro-sex, as long as it is marital sex. With the language of morality replaced by the secular notion of self-esteem, repression has found its way back onto school curricula – to the detriment of girls and women in particular. “We are living through an assault on female sexual independence”, writes Dagmar Herzog.

“Waves of pleasure flow over me; it feels like sliding down a mountain waterfall,” rhapsodises one delighted woman. Another recalls: “It’s like having a million tiny pleasure balloons explode inside of me all at once.”

These descriptions come not from Cosmopolitan, not from an erotic website, not from a Black Lace novel and certainly not from a porn channel. They are, believe it or not, part of the new philosophy of the Religious Right in America. We’ve always known that sex sells. Well, now it’s being used to sell both God and the Republicans in one extremely suggestive package. And in dressing up the old repressive values in fishnet stockings and flouncy lingerie, the forces of conservatism have beaten the liberals at their own game.

Choose almost any sex-related issue. From pornography and sex education to reproductive rights and treatment for sexually transmitted diseases, Americans have allowed a conservative religious movement not only to dictate the terms of conversation but also to change the nation’s laws and public health policies. And meanwhile American liberals have remained defensive and tongue-tied.

So how did the Religious Right – that avid and vocal movement of politicised conservative evangelical Protestants (joined also by a growing number of conservative Catholics) – manage so effectively to harness what has traditionally been the province of the permissive left?

Quite simply, it has changed tactics and is now going out of its way to assert, loudly and enthusiastically, that, in contrast to what is generally believed, it is far from being sexually uptight. On the contrary, it is wildly pro-sex, provided it’s marital sex. Evangelical conservatives in particular have begun not only to rail against the evils of sexual misery within marriage (and the way far too many wives feel like not much more than sperm depots for insensitive, emotionally absent husbands), but also, in the most graphically detailed, explicit terms, to eulogise about the prospect of ecstasy.

[div class=attrib]More from theSource here.[end-div]

The society of the query and the Googlization of our lives

[div class=attrib]From Eurozine:[end-div]

“There is only one way to turn signals into information, through interpretation”, wrote the computer critic Joseph Weizenbaum. As Google’s hegemony over online content increases, argues Geert Lovink, we should stop searching and start questioning.

A spectre haunts the world’s intellectual elites: information overload. Ordinary people have hijacked strategic resources and are clogging up once carefully policed media channels. Before the Internet, the mandarin classes rested on the idea that they could separate “idle talk” from “knowledge”. With the rise of Internet search engines it is no longer possible to distinguish between patrician insights and plebeian gossip. The distinction between high and low, and their co-mingling on occasions of carnival, belong to a bygone era and should no longer concern us. Nowadays an altogether new phenomenon is causing alarm: search engines rank according to popularity, not truth. Search is the way we now live. With the dramatic increase of accessed information, we have become hooked on retrieval tools. We look for telephone numbers, addresses, opening times, a person’s name, flight details, best deals and in a frantic mood declare the ever-growing pile of grey matter “data trash”. Soon we will search and only get lost. Old hierarchies of communication have not only imploded; communication itself has assumed the status of cerebral assault. Not only has popular noise risen to unbearable levels; we can no longer stand yet another request from colleagues, and even a benign greeting from friends and family has acquired the status of a chore carrying the expectation of a reply. The educated class deplores the fact that chatter has entered the hitherto protected domain of science and philosophy, when instead they should be worrying about who is going to control the increasingly centralized computing grid.

What today’s administrators of noble simplicity and quiet grandeur cannot express, we should say for them: there is a growing discontent with Google and the way the Internet organizes information retrieval. The scientific establishment has lost control over one of its key research projects – the design and ownership of computer networks, now used by billions of people. How did so many people end up being that dependent on a single search engine? Why are we repeating the Microsoft saga once again? It seems boring to complain about a monopoly in the making when average Internet users have such a multitude of tools at their disposal to distribute power. One possible way to overcome this predicament would be to positively redefine Heidegger’s Gerede. Instead of a culture of complaint that dreams of an undisturbed offline life and radical measures to filter out the noise, it is time to openly confront the trivial forms of Dasein today found in blogs, text messages and computer games. Intellectuals should no longer portray Internet users as secondary amateurs, cut off from a primary and primordial relationship with the world. There is a greater issue at stake and it requires venturing into the politics of informatic life. It is time to address the emergence of a new type of corporation that is rapidly transcending the Internet: Google.

The World Wide Web, which should have realized the infinite library Borges described in his short story The Library of Babel (1941), is seen by many of its critics as nothing but a variation of Orwell’s Big Brother (1948). The ruler, in this case, has turned from an evil monster into a collection of cool youngsters whose corporate responsibility slogan is “Don’t be evil”. Guided by a much older and more experienced generation of IT gurus (Eric Schmidt), Internet pioneers (Vint Cerf) and economists (Hal Varian), Google has expanded so fast, and in such a wide variety of fields, that there is virtually no critic, academic or business journalist who has been able to keep up with the scope and speed with which Google developed in recent years. New applications and services pile up like unwanted Christmas presents. Just add Google’s free email service Gmail, the video-sharing platform YouTube, the social networking site Orkut, Google Maps and Google Earth, its main revenue service AdWords with its pay-per-click advertisements, and office applications such as Calendar, Talk and Docs. Google not only competes with Microsoft and Yahoo, but also with entertainment firms, public libraries (through its massive book scanning program) and even telecom firms. Believe it or not, the Google Phone is coming soon. I recently heard a less geeky family member saying that she had heard that Google was much better and easier to use than the Internet. It sounded cute, but she was right. Not only has Google become the better Internet, it is taking over software tasks from your own computer so that you can access these data from any terminal or handheld device. Apple’s MacBook Air is a further indication of the migration of data to privately controlled storage bunkers. Security and privacy of information are rapidly becoming the new economy and technology of control. And the majority of users, and indeed companies, are happily abandoning the power to self-govern their informational resources.

[div class=attrib]More from theSource here.[end-div]

Manufactured scarcity

[div class=attrib]From Eurozine:[end-div]

“Manufacturing scarcity” is the new watchword in “Green capitalism”. James Heartfield explains how, for the energy sector, it has become a license to print money. Increasing profits by cutting output was pioneered by Enron in the 1990s; now the model of restricted supply, together with domestic energy generation, is promoted worldwide.

The corporate raiders of the 1980s first worked out that you might be able to make more money by downsizing, or even breaking up, industry than by building it up. It is a perverse result of the profit motive that private gain should grow out of public decay. But even the corporate raiders never dreamt of making deindustrialisation into an avowed policy goal that the rest of us would pay for.

What some of the cannier Green Capitalists realised is that scarcity increases price, and manufacturing scarcity can increase returns. What could be more old hat, they said, than trying to make money by making things cheaper? Entrepreneurs disdained the “fast-moving consumer goods” market.

Of course there is a point to all this. If labour gets too efficient, the chances of wringing more profits from industry get slimmer. The more productive labour is, the lower, in the end, will be the rate of return on investments. That is because the source of new value is living labour; but greater investment in new technologies tends to replace living labour with machines, which produce no additional value of their own.[2] Over time the rate of return must fall. Business theory calls this the diminishing rate of return.[3] Businessmen know it as the “race for the bottom” – the competitive pressure to make goods cheaper and cheaper, making it that much harder to sell enough to make a profit. Super-efficient labour would make the capitalistic organisation of industry redundant. Manufacturing scarcity – restricting output and so driving up prices – is one short-term way to secure profits and maybe even the profit-system. Of course that would also mean abandoning the historic justification for capitalism, that it increased output and living standards. Environmentalism might turn out to be the way to save capitalism, just at the point when industrial development had shown it to be redundant.
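
To make the arithmetic behind this claim concrete, here is a minimal sketch in the classical political-economy notation the argument draws on; the symbols are our own illustration, not Heartfield’s. Write c for investment in machines and materials, v for wages paid to living labour, and s for the new value that labour adds, so that the rate of return is

\[
r \;=\; \frac{s}{c+v} \;=\; \frac{s/v}{c/v+1}.
\]

Holding the rate of surplus value s/v fixed at, say, 1, r falls from 0.5 when c/v = 1 to 0.1 when c/v = 9: the more production is mechanized, the lower the return on the total investment, which is the squeeze the passage describes.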

[div class=attrib]More from theSource here.[end-div]

Shopping town USA

[div class=attrib]From Eurozine:[end-div]

In the course of his life, Victor Gruen completed major urban interventions in the US and western Europe that fundamentally altered the course of western urban development. Anette Baldauf describes how Gruen’s fame rests mostly on the insertion of commercial machines into the decentred US suburbs. These so-called “shopping towns” were supposed to strengthen civic life and structure the amorphous, mono-functional agglomerations of suburban sprawl. Yet within a decade, Gruen’s designs had become the architectural extension of the policies of racial and gender segregation underlying the US postwar consumer utopia.

In 1943, the American magazine Architectural Forum invited Victor Gruen and his wife Elsie Krummeck to take part in an exchange of visions for the architectonic shaping of the postwar period. The editors of the issue, entitled Architecture 194x, appealed to recognised modernists such as Mies van der Rohe and Charles Eames to design parts of a model town for the year “194x”, in other words for an unspecified year, by which time the Second World War would have ended. The Gruen & Krummeck partnership was to design a prototype for a “regional shopping centre”. The editors specified that the shopping centre was to be situated on the outskirts of the city, on a traffic island between two highways, and would supplement the downtown pedestrian zone. “How can shopping be made more inviting?”, the editors asked Gruen & Krummeck, who, at the time of the competition, were famous for their spectacular glass designs for boutiques on Fifth Avenue and for national department store chains on the outskirts of US cities.

The two architects responded to the commission to build a “small neighbourhood shopping centre” with a design that far exceeded the specified size and function of the centre. Gruen later explained that the project reflected the couple’s dissatisfaction with Los Angeles, where long distances between shops, regular traffic jams, and an absence of pedestrian zones made shopping tiresome work. Gruen and Krummeck saw in Los Angeles the blueprint of “an automotive-rich postwar America”. Their counter-design was oriented towards the traditional main squares of European cities. Hence, they suggested two central structural interventions: first, the automobile and the shopper were to be assigned two distinct spatial units, and second, space for consumption and civic space were to be merged. Working to this premise, Gruen and Krummeck designed a centre that was organised around a spacious green square – with garden restaurants, milk bars, and music stands. The design integrated 28 shops and 13 public facilities; among the latter were a library, a post office, a theatre, a lecture hall, a night club, a nursery, a play room, and a pony stable.

The editors of Architectural Forum rejected Gruen’s and Krummeck’s design. They insisted upon a reduced “regional shopping centre” and urged the architects to rework their submission along these lines. Gruen and Krummeck responded with an adjustment that would later prove crucial: they abandoned the idea of a green square in the centre of the complex and suggested building a closed, round building made of glass. They surrounded the inwardly directed shopping complex with two rings. The first ring was to serve as a pedestrian zone, the second as a car park. This design also failed to please. George Nelson, the editor-in-chief, was scandalised and argued that by removing the central square, the space for sitting around and strolling was lost. For him, the shopping centre as closed space was inconceivable. Eventually, Gruen and Krummeck submitted a design for a conventional shopping centre with shops arranged in a “U” shape around a courtyard. Clearly, those who would celebrate the closed shopping centre a few years later were not yet active. It was only a decade later that Gruen was able to convince two leading department-store owners of the profitability of a self-enclosed shopping centre. Excluding cars, street traders, animals, and other potential disturbances, and supported by surveillance technology, the shopping mall would embody the ideal-typical values of suburban lifestyles – order, cleanliness, and safety. Public judgement of Gruen’s “architecture of introversion” fundamentally changed, then, in the course of the 1950s. What was it, exactly, that led to this revised evaluation of a closed, inwardly directed space of consumption?

[div class=attrib]More from theSource here.[end-div]

A Solar Grand Plan

[div class=attrib]From Scientific American:[end-div]

By 2050 solar power could end U.S. dependence on foreign oil and slash greenhouse gas emissions.

High prices for gasoline and home heating oil are here to stay. The U.S. is at war in the Middle East at least in part to protect its foreign oil interests. And as China, India and other nations rapidly increase their demand for fossil fuels, future fighting over energy looms large. In the meantime, power plants that burn coal, oil and natural gas, as well as vehicles everywhere, continue to pour millions of tons of pollutants and greenhouse gases into the atmosphere annually, threatening the planet.

Well-meaning scientists, engineers, economists and politicians have proposed various steps that could slightly reduce fossil-fuel use and emissions. These steps are not enough. The U.S. needs a bold plan to free itself from fossil fuels. Our analysis convinces us that a massive switch to solar power is the logical answer.

  • A massive switch from coal, oil, natural gas and nuclear power plants to solar power plants could supply 69 percent of the U.S.’s electricity and 35 percent of its total energy by 2050.
  • A vast area of photovoltaic cells would have to be erected in the Southwest. Excess daytime energy would be stored as compressed air in underground caverns to be tapped during nighttime hours.
  • Large solar concentrator power plants would be built as well.
  • A new direct-current power transmission backbone would deliver solar electricity across the country.
  • But $420 billion in subsidies from 2011 to 2050 would be required to fund the infrastructure and make it cost-competitive.

[div class=attrib]More from theSource here.[end-div]

France: return to Babel

[div class=attrib]From Eurozine:[end-div]

Each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language, says Marc Hatzfeld. The language is then guarded by men of letters, by strict rules, not allowing for variety of expression. Against this backdrop, immigrants from ever more distant shores have arrived in France, bringing with them a different style of expression and another, more fluid, concept of language.

Today more than ever, the language issue, which might at one time have segued gracefully between pleasure in sense and sensual pleasure, is being seized on and exploited for political ends. Much of this we can put down to the concept of the nation-state, that symbolic and once radical item that was assigned the task of consolidating the fragmented political power of the time. During the long centuries from the end of the Middle Ages to the close of the Ancien Régime, this triumphant political logic sought to bind together nation, language and religion. East of the Rhine, for instance, this was particularly true of the links between nation and religion; west of the Rhine, it focused more on language. From Villers-Cotterêts[1] on, language – operating almost coercively – served as an instrument of political unification. The periodic alternation between an imperial style that was both permissive and varied when it came to customary practice, and the homogeneous and monolithic style adopted on the national front, led to constant comings and goings in the relationship between language and political power.

In France, the revocation of the Edict of Nantes by Louis XIV in 1685 resolved the relationship between nation and religion and gave language a more prominent role in defining nationality. Not long after, the language itself – by now regarded as public property – became a ward of state entitled to public protection. Taking things one step further, the eighteenth century philosophers of the Enlightenment conceived the idea of a coherent body of subject people and skilfully exploited this to clip the wings of a fabled absolute monarch in the name of another, equally mythical, form of sovereignty. All that remained was to organise the country institutionally. Henceforth, the idea that the allied forces of people, nation and language together made up the same collective history was pursued with zeal.

What we see as a result is this curious emergence of language itself as a concept. Making use of a fiction that reached down from a great height to penetrate a cultural reality that was infinitely more subtle and flexible, each nation establishes its borders, sometimes defines itself, certainly organises itself, and always affirms itself around its language. While we in Europe enjoy as many ways of speaking as there are localities and occupations, there are administrative and symbolic demands to fabricate the fantasy of a language that clerics and men of letters would appropriate to themselves. It is these who, in the wake of the politicians, help to eliminate the variety of ways people have of expressing themselves and of understanding one another. Some scholars, falling into what they fail to see is a highly politicised trap, complete this process by coming up with a scientific construct heavily dependent on the influence of mathematical theories such as those of de Saussure and, above all, of Jakobson. Paradoxically, this body of work relies on a highly malleable, mobile, elastic reality to develop the tight, highly structured concept that is “language” (Jacques Lacan). And from that point, language itself becomes a prisoner of Lacan’s own system – linguistics.

[div class=attrib]More from theSource here.[end-div]

The Great Cosmic Roller-Coaster Ride

[div class=attrib]From Scientific American:[end-div]

Could cosmic inflation be a sign that our universe is embedded in a far vaster realm?

You might not think that cosmologists could feel claustrophobic in a universe that is 46 billion light-years in radius and filled with sextillions of stars. But one of the emerging themes of 21st-century cosmology is that the known universe, the sum of all we can see, may just be a tiny region in the full extent of space. Various types of parallel universes that make up a grand “multiverse” often arise as side effects of cosmological theories. We have little hope of ever directly observing those other universes, though, because they are either too far away or somehow detached from our own universe.

Some parallel universes, however, could be separate from but still able to interact with ours, in which case we could detect their direct effects. The possibility of these worlds came to cosmologists’ attention by way of string theory, the leading candidate for the foundational laws of nature. Although the eponymous strings of string theory are extremely small, the principles governing their properties also predict new kinds of larger membranelike objects—“branes,” for short. In particular, our universe may be a three-dimensional brane in its own right, living inside a nine-dimensional space. The reshaping of higher-dimensional space and collisions between different universes may have led to some of the features that astronomers observe today.

[div class=attrib]More from theSource here.[end-div]

Windows on the Mind

[div class=attrib]From Scientific American:[end-div]

Once scorned as nervous tics, certain tiny, unconscious flicks of the eyes now turn out to underpin much of our ability to see. These movements may even reveal subliminal thoughts.

As you read this, your eyes are rapidly flicking from left to right in small hops, bringing each word sequentially into focus. When you stare at a person’s face, your eyes will similarly dart here and there, resting momentarily on one eye, the other eye, nose, mouth and other features. With a little introspection, you can detect this frequent flexing of your eye muscles as you scan a page, face or scene.

But these large voluntary eye movements, called saccades, turn out to be just a small part of the daily workout your eye muscles get. Your eyes never stop moving, even when they are apparently settled, say, on a person’s nose or a sailboat bobbing on the horizon. When the eyes fixate on something, as they do for 80 percent of your waking hours, they still jump and jiggle imperceptibly in ways that turn out to be essential for seeing. If you could somehow halt these miniature motions while fixing your gaze, a static scene would simply fade from view.

[div class=attrib]More from theSource here.[end-div]