A radical pessimist’s guide to the next 10 years

The Globe and Mail:

The iconic writer reveals the shape of things to come, with 45 tips for survival and a matching glossary of the new words you’ll need to talk about your messed-up future.

1) It’s going to get worse

No silver linings and no lemonade. The elevator only goes down. The bright note is that the elevator will, at some point, stop.

2) The future isn’t going to feel futuristic

It’s simply going to feel weird and out-of-control-ish, the way it does now, because too many things are changing too quickly. The future feels odd because it is unpredictable. If the future didn’t feel weirdly unexpected, something would be wrong.

3) The future is going to happen no matter what we do. The future will feel even faster than it does now

The next sets of triumphing technologies are going to happen, no matter who invents them or where or how. Not that technology alone dictates the future, but in the end it always leaves its mark. The only unknown factor is the pace at which new technologies will appear. This technological determinism, with its sense of constantly awaiting a new era-changing technology every day, is one of the hallmarks of the next decade.

4) Move to Vancouver, San Diego, Shannon or Liverpool

There’ll be just as much freaky extreme weather in these west-coast cities, but at least the west coasts won’t be broiling hot and cryogenically cold.

5) You’ll spend a lot of your time feeling like a dog leashed to a pole outside the grocery store – separation anxiety will become your permanent state

6) The middle class is over. It’s not coming back

Remember travel agents? Remember how they just kind of vanished one day?

That’s where all the other jobs that once made us middle-class are going – to that same, magical, class-killing, job-sucking wormhole into which travel-agency jobs vanished, never to return. However, this won’t stop people from self-identifying as middle-class, and as the years pass we’ll be entering a replay of the antebellum South, when people defined themselves by the social status of their ancestors three generations back. Enjoy the new monoclass!

7) Retail will start to resemble Mexican drugstores

In Mexico, if one wishes to buy a toothbrush, one goes to a drugstore where one of every item for sale is displayed inside a glass case that circles the store. One selects the toothbrush, and one of an obvious surplus of staff runs to the back to fetch it. It’s not very efficient, but it does offer otherwise unemployed people something to do during the day.

8) Try to live near a subway entrance

In a world of crazy-expensive oil, it’s the only real estate that will hold its value, if not increase.

9) The suburbs are doomed, especially those E.T., California-style suburbs

This is a no-brainer, but the former homes will make amazing hangouts for gangs, weirdoes and people performing illegal activities. The pretend gates at the entranceways to gated communities will become real, and the charred stubs of previous white-collar homes will serve only to make the still-standing structures creepier and more exotic.

10) In the same way you can never go backward to a slower computer, you can never go backward to a lessened state of connectedness

11) Old people won’t be quite so clueless

No more “the Google,” because they’ll be just that little bit younger.

12) Expect less

Not zero, just less.

13) Enjoy lettuce while you still can

And anything else that arrives in your life from a truck, for that matter. For vegetables, get used to whatever it is they served in railway hotels in the 1890s. Jams. Preserves. Pickled everything.

14) Something smarter than us is going to emerge

Thank you, algorithms and cloud computing.

15) Make sure you’ve got someone to change your diaper

Sponsor a Class of 2112 med student. Adopt up a storm around the age of 50.

16) “You” will be turning into a cloud of data that circles the planet like a thin gauze

While it’s already hard enough to tell how others perceive us physically, your global, phantom, information-self will prove equally vexing to you: your shopping trends, blog residues, CCTV appearances – it all works in tandem to create a virtual being that you may neither like nor recognize.

17) You may well burn out on the effort of being an individual

You’ve become a notch in the Internet’s belt. Don’t try to delude yourself that you’re a romantic lone individual. To the new order, you’re just a node. There is no escape.

18) Untombed landfills will glut the market with 20th-century artifacts

19) The Arctic will become like Antarctica – an everyone/no one space

Who owns Antarctica? Everyone and no one. It’s pie-sliced into unenforceable wedges. And before getting huffy, ask yourself, if you’re a Canadian: Could you draw an even remotely convincing map of all those islands in Nunavut and the Northwest Territories? Quick, draw Ellesmere Island.

20) North America can easily fragment quickly, as did the Eastern Bloc in 1989

Quebec will decide to quietly and quite pleasantly leave Canada. California contemplates splitting into two states, fiscal and non-fiscal. Cuba becomes a Club Med with weapons. The Hate States will form a coalition.

21) We will still be annoyed by people who pun, but we will be able to show them mercy because punning will be revealed to be some sort of connectopathic glitch: The punner, like someone with Tourette’s, has no medical ability not to pun

22) Your sense of time will continue to shred. Years will feel like hours

23) Everyone will be feeling the same way as you

There’s some comfort to be found there.

24) It is going to become much easier to explain why you are the way you are

Much of what we now consider “personality” will be explained away as structural and chemical functions of the brain.

25) Dreams will get better

26) Being alone will become easier

27) Hooking up will become ever more mechanical and binary

28) It will become harder to view your life as “a story”

The way we define our sense of self will continue to morph via new ways of socializing. The notion of your life needing to be a story will seem slightly corny and dated. Your life becomes however many friends you have online.

29) You will have more say in how long or short you wish your life to feel

Time perception is very much about how you sequence your activities, how many activities you layer overtop of others, and the types of gaps, if any, you leave in between activities.

30) Some existing medical conditions will be seen as sequencing malfunctions

The ability to create and remember sequences is an almost entirely human ability (some crows have been shown to sequence). Dogs, while highly intelligent, still cannot form sequences; it’s the reason why well-trained dogs at shows are still led from station to station by handlers instead of completing the course themselves.

Dysfunctional mental states stem from malfunctions in the brain’s sequencing capacity. One commonly known short-term sequencing dysfunction is dyslexia. People unable to sequence over a slightly longer term might be “not good with directions.” The ultimate sequencing dysfunction is the inability to look at one’s life as a meaningful sequence or story.

31) The built world will continue looking more and more like Microsoft packaging

“We were flying over Phoenix, and it looked like the crumpled-up packaging from a 2006 MS Digital Image Suite.”

32) Musical appreciation will shed all age barriers

33) People who shun new technologies will be viewed as passive-aggressive control freaks trying to rope people into their world, much like vegetarian teenage girls in the early 1980s

1980: “We can’t go to that restaurant. Karen’s vegetarian and it doesn’t have anything for her.”

2010: “What restaurant are we going to? I don’t know. Karen was supposed to tell me, but she doesn’t have a cell, so I can’t ask her. I’m sick of her crazy control-freak behaviour. Let’s go someplace else and not tell her where.”

34) You’re going to miss the 1990s more than you ever thought

35) Stupid people will be in charge, only to be replaced by ever-stupider people. You will live in a world without kings, only princes in whom our faith is shattered

36) Metaphor drift will become pandemic

Words adopted by technology will increasingly drift into new realms to the point where they observe different grammatical laws, e.g., “one mouse”/“three mouses;” “memory hog”/“delete the spam.”

37) People will stop caring how they appear to others

The number of tribal categories one can belong to will become infinite. To use a high-school analogy, 40 years ago you had jocks and nerds. Nowadays, there are Goths, emos, punks, metal-heads, geeks and so forth.

38) Knowing everything will become dull

It all started out so graciously: At a dinner for six, a question arises about, say, that Japanese movie you saw in 1997 (Tampopo), or whether or not Joey Bishop is still alive (no). And before long, you know the answer to everything.

39) IKEA will become an ever-more-spiritual sanctuary

40) We will become more matter-of-fact, in general, about our bodies

41) The future of politics is the careful and effective implanting into the minds of voters images that can never be removed

42) You’ll spend a lot of time shopping online from your jail cell

Over-criminalization of the populace, paired with the triumph of shopping as a dominant cultural activity, will create a world where the two poles of society are shopping and jail.

43) Getting to work will provide vibrant and fun new challenges

Gravel roads, potholes, outhouses, overcrowded buses, short-term hired bodyguards, highwaymen, kidnapping, overnight camping in fields, snaggle-toothed crazy ladies casting spells on you, frightened villagers, organ thieves, exhibitionists and lots of healthy fresh air.

44) Your dream life will increasingly look like Google Street View

45) We will accept the obvious truth that we brought this upon ourselves

Douglas Coupland is a writer and artist based in Vancouver, where he will deliver the first of five CBC Massey Lectures – a ‘novel in five hours’ about the future – on Tuesday.

More from theSource here.


Contain this!

From Eurozine:

WikiLeaks’ series of exposés is causing a very different news and informational landscape to emerge. Whilst acknowledging the structural leakiness of networked organisations, Felix Stalder finds deeper reasons for the crisis of information security and the new distribution of investigative journalism.

WikiLeaks is one of the defining stories of the Internet, which means by now, one of the defining stories of the present, period. At least four large-scale trends which permeate our societies as a whole are fused here into an explosive mixture whose fall-out is far from clear. First is a change in the materiality of communication. Communication becomes more extensive, more recorded, and the records become more mobile. Second is a crisis of institutions, particularly in western democracies, where moralistic rhetoric and the ugliness of daily practice are diverging ever more at the very moment when institutional personnel are being encouraged to think more for themselves. Third is the rise of new actors, “super-empowered” individuals, capable of intervening into historical developments at a systemic level. Finally, fourth is a structural transformation of the public sphere (through media consolidation at one pole, and the explosion of non-institutional publishers at the other), to an extent that rivals the one described by Habermas with the rise of mass media at the turn of the twentieth century.

Leaky containers

Imagine dumping nearly 400 000 paper documents into a dead drop located discreetly on the hard shoulder of a road. Impossible. Now imagine the same thing with digital records on a USB stick, or as an upload from any networked computer. No problem at all. Yet the material differences between paper and digital records go much further than mere bulk. Digital records are the impulses travelling through the nervous systems of dynamic, distributed organisations of all sizes. They are intended, from the beginning, to circulate with ease. Otherwise such organisations would fall apart and dynamism would grind to a halt. The more flexible and distributed organisations become, the more records they need to produce and the faster these need to circulate. Due to their distributed nature and the pressure for cross-organisational cooperation, it is increasingly difficult to keep records within particular organisations, whose boundaries are blurring anyway. Surveillance researchers such as David Lyon have long been writing about the leakiness of “containers”, meaning the tendency for sensitive digital records to cross the boundaries of the institutions which produce them. This leakiness is often driven by commercial considerations (private data being sold), but it also happens out of incompetence (systems being secured insufficiently), or because insiders deliberately violate organisational policies for their own purposes. Either they are whistle-blowers motivated by conscience, as in the case of WikiLeaks, or individuals selling information for private gain, as in the case of the numerous employees of Swiss banks who recently copied the details of private accounts and sold them to tax authorities across Europe. Within certain organisations, such as banks and the military, virtually everything is classified and large numbers of people have access to this data, not least mid-level staff who handle the streams of raw data, such as individuals’ records produced as part of daily procedure.

More from theSource here.


Map of the World’s Countries Rearranged by Population

From Frank Jacobs / BigThink:

What if the world were rearranged so that the inhabitants of the country with the largest population would move to the country with the largest area? And the second-largest population would migrate to the second-largest country, and so on?
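The rearrangement rule described above is a simple rank-matching: sort countries by area, sort them by population, and pair the two lists index by index. A minimal sketch in Python, using a handful of approximate circa-2010 figures for illustration only:

```python
# Approximate figures for illustration; not a full dataset.
areas_km2 = {
    "Russia": 17_098_246,
    "Canada": 9_984_670,
    "USA": 9_629_091,
    "China": 9_596_961,
}
populations = {
    "China": 1_341_000_000,
    "India": 1_210_000_000,
    "USA": 310_000_000,
    "Indonesia": 238_000_000,
}

# Rank both lists from largest to smallest.
by_area = sorted(areas_km2, key=areas_km2.get, reverse=True)
by_pop = sorted(populations, key=populations.get, reverse=True)

# Pair the i-th largest territory with the i-th largest population.
assignment = dict(zip(by_area, by_pop))
print(assignment)
```

With even this toy subset the article’s headline swaps fall out: China takes Russia’s territory, India takes Canada’s, and the USA, third on both lists, stays put.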

The result would be this disconcerting, disorienting map. In the world described by it, the differences in population density between countries would be less extreme than they are today. The world’s most densely populated country currently is Monaco, with 43,830 inhabitants/mi² (16,923 per km²) (1). On the other end of the scale is Mongolia, which is less densely populated by a factor of almost exactly 10,000, with a mere 4.4 inhabitants/mi² (1.7 per km²).
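The unit conversions in these density figures can be sanity-checked in a couple of lines, assuming the standard factor of 2.58999 km² per mi²:

```python
KM2_PER_MI2 = 2.58999  # square kilometres in one square mile

monaco_per_km2 = 16_923
mongolia_per_km2 = 1.7

# Converting per-km2 densities to per-mi2 reproduces the quoted figures.
monaco_per_mi2 = monaco_per_km2 * KM2_PER_MI2      # ≈ 43,830
mongolia_per_mi2 = mongolia_per_km2 * KM2_PER_MI2  # ≈ 4.4

print(round(monaco_per_mi2))                       # 43830
print(round(monaco_per_km2 / mongolia_per_km2))    # ≈ 9955, "almost exactly 10,000"
```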

The averages per country would more closely resemble the global average of 34 per mi² (13 per km²). But those evened-out statistics would describe a very strange world indeed. The global population realignment would involve massive migrations, lead to a heap of painful demotions and triumphant promotions, and produce a few very weird new neighbourhoods.

Take the world’s largest country: Russia. It would be taken over by its Asian neighbour and rival China, the country with the world’s largest population. Overcrowded China would not just occupy underpopulated Siberia – a long-time Russian fear – but also fan out all the way across the Urals to Russia’s westernmost borders. China would thus become a major European power. Russia itself would be relegated to Kazakhstan, which still is the largest landlocked country in the world, but with few hopes of a role on the world stage commensurate with Russia’s clout, which in no small part derives from its sheer size.

Canada, the world’s second-largest country, would be transformed into an Arctic, or at least quite chilly version of India, the country with the world’s second-largest population. The country would no longer be a thinly populated northern afterthought of the US. The billion Indians north of the Great Lakes would make Canada a very distinct, very powerful global player.

Strangely enough, the US itself would not have to swap its population with another country. With 310 million inhabitants, it is the third most populous nation in the world. And with an area of just over 3.7 million mi² (slightly more than 9.6 million km²), it is also the world’s third largest country (2). Brazil, at number five in both lists, is in the same situation. Other non-movers are Yemen and Ireland. Every other country moves house. A few interesting swaps:

  • Countries with relatively high population densities move to more spacious environments. This increases their visibility. Look at those 94 million Filipinos, for example, no longer confined to that small archipelago just south of China. They now occupy the sprawling Democratic Republic of the Congo, the 12th largest country in the world, and slap bang in the middle of Africa too.
  • The reverse is also true. Mongolia, that large, sparsely populated chunk of a country between Russia and China, is relegated to tiny Belgium, whose even tinier neighbour Luxembourg is populated by 320,000 Icelanders, no longer enjoying the instant recognition provided by their distinctly shaped North Atlantic island home.
  • Australia’s 22.5 million inhabitants would move to Spain, the world’s 51st largest country. This would probably be the furthest migration, as both countries are almost exactly antipodean to each other. But Australians would not have to adapt too much to the mainly hot and dry Spanish climate.
  • But spare a thought for those unfortunate Vietnamese. Used to a lush, tropical climate, the 85 million inhabitants of Vietnam would be shipped off to icy Greenland. Even though that Arctic dependency of Denmark has warmed up a bit due to recent climate changes, it would still be mainly snowy, empty and freezing. One imagines a giant group huddle, just to keep warm.
  • Jamaica would still be island-shaped – but landlocked, as the Jamaicans would move to Lesotho, an independent enclave completely surrounded by South Africa – or rather, in this strange new world, South Korea. Those South Koreans probably couldn’t believe their bad luck. Of all the potential new friends in the world, who gets to be their northern neighbour but their wacky cousin, North Korea? It seems the heavily militarised DMZ will move from the Korean peninsula to the South African-Botswanan border.
  • The UK migrates from its strategically advantageous island position off Europe’s western edge to a place smack in the middle of the Sahara desert, to one of those countries the name of which one always has to look up (3). No longer splendidly isolated, it will have to share the neighbourhood with such upstarts as Mexico, Myanmar, Thailand and – good heavens – Iran. Back home, its sceptered isles are taken over by the Tunisians. Even Enoch Powell didn’t see that one coming.
  • Some countries only move a few doors down, so to speak. El Salvador gets Guatemala, Honduras takes over Nicaragua, Nepal occupies Burma/Myanmar and Turkey sets up house in Iran. Others wake up in a whole new environment. Dusty, landlocked Central African Republic is moving to the luscious island of Sri Lanka, with its pristine, ocean-lapped shores. The mountain-dwelling Swiss will have to adapt to life in the flood-prone river delta of Bangladesh.
  • Geography, they say, is destiny (4). Some countries are plagued or blessed by their present location. How would they fare elsewhere? Take Iraq, brought down by wars both of the civil and the other kind, and burdened with enough oil to finance lavish dictatorships and arouse the avidity of superpowers. What if the 31.5 million Iraqis moved to the somewhat larger, equally sunny country of Zambia – getting a lot of nice, non-threatening neighbours in the process?

Rearranged maps that switch the labels of the countries depicted, as if in some parlour game, to represent some type of statistical data, are an interesting subcategory of curious cartography. The most popular example discussed on this blog is the map of the US, with the states’ names replaced by that of countries with an equivalent GDP (see #131). Somewhat related, if by topic rather than technique, is the cartogram discussed in blog post #96, showing the world’s countries shrunk or inflated to reflect the size of their population.

Many thanks to all who sent in this map: Matt Chisholm, Criggie, Roel Damiaans, Sebastian Dinjens, Irwin Hébert, Allard H., Olivier Muzerelle, Rodrigo Oliva, Rich Sturges, and John Thorne. The map is referenced on half a dozen websites where it can be seen in full resolution (this one among them), but it is unclear where it first originated, and who produced it (the map is signed, in the bottom right hand corner, by JPALMZ).

—–

(1) Most (dependent) territories and countries in the top 20 of Wikipedia’s population density ranking have tiny areas, with populations that are, in relation to those of other countries, quite negligible. The first country on the list with both a substantial surface and population is Bangladesh, in 9th place with a total population of over 162 million and a density of 2,916 inhabitants/mi² (1,126 per km²).

(2) Actually, the US contends for third place with China. Both countries have almost the same size, and varying definitions of how large they are. Depending on whether or not you include Taiwan and (other) disputed areas in China, and overseas territories in the US, either country can be third or fourth on the list.

(3) Niger, not to be confused with nearby Nigeria. Nor with neighbouring Burkina Faso, which used to be Upper Volta (even though there never was a Lower Volta except, perhaps, Niger. Or Nigeria).

(4) The same is said of demography. And of a bunch of other stuff.

More from theSource here.


The Top Ten Daily Consequences of Having Evolved

From Smithsonian.com:

Natural selection acts by winnowing the individuals of each generation, sometimes clumsily, as old parts and genes are co-opted for new roles. As a result, all species inhabit bodies imperfect for the lives they live. Our own bodies are worse off than most simply because of the many differences between the wilderness in which we evolved and the modern world in which we live. We feel the consequences every day. Here are ten.

1. Our cells are weird chimeras
Perhaps a billion years ago, a single-celled organism arose that would ultimately give rise to all of the plants and animals on Earth, including us. This ancestor was the result of a merging: one cell swallowed, imperfectly, another cell. The predator provided the outsides, the nucleus and most of the rest of the chimera. The prey became the mitochondrion, the cellular organ that produces energy. Most of the time, this ancient symbiosis proceeds amicably. But every so often, our mitochondria and their surrounding cells fight. The result is diseases, such as mitochondrial myopathies (a range of muscle diseases) or Leigh’s disease (which affects the central nervous system).

2. Hiccups
The first air-breathing fish and amphibians extracted oxygen using gills when in the water and primitive lungs when on land—and to do so, they had to be able to close the glottis, or entryway to the lungs, when underwater. When underwater, the animals pushed water past their gills while simultaneously pushing the glottis down. We descendants of these animals were left with vestiges of their history, including the hiccup. In hiccupping, we use ancient muscles to quickly close the glottis while sucking in (albeit air, not water). Hiccups no longer serve a function, but they persist without causing us harm—aside from frustration and occasional embarrassment. One of the reasons it is so difficult to stop hiccupping is that the entire process is controlled by a part of our brain that evolved long before consciousness, and so, try as you might, you cannot think hiccups away.

3. Backaches
The backs of vertebrates evolved as a kind of horizontal pole under which guts were slung. It was arched in the way a bridge might be arched, to support weight. Then, for reasons anthropologists debate long into the night, our hominid ancestors stood upright, which was the bodily equivalent of tipping a bridge on end. Standing on hind legs offered advantages—seeing long distances, for one, or freeing the hands to do other things—but it also turned our backs from an arched bridge to an S shape. The letter S, for all its beauty, is not meant to support weight and so our backs fail, consistently and painfully.

More from theSource here.


MondayPoem: The Lie

By Robert Pinsky for Slate:

Denunciation abounds, in its many forms: snark (was that word invented or fostered in a poem, Lewis Carroll’s “The Hunting of the Snark”?), ranking-out, calling-out, bringing-down, blowing-up, flaming, scorching, trashing, negative campaigning, skepticism, exposure, nailing, shafting, finishing, diminishing, down-blogging. Aggressive moral denunciation—performed with varying degrees of justice and skill in life, in print, on the Web, in politics, on television and radio, in book-reviewing, in sports, in courtrooms and committee meetings—generates dismay and glee in its audience. Sometimes, for many of us, dismay and glee simultaneously, in an uneasy combination.

A basic form of denunciation is indicated by the slightly archaic but useful expression giving the lie.

No one has ever given the lie more memorably, explicitly, and universally than Sir Walter Raleigh (1552-1618) in “The Lie.” The poem, among other things, demonstrates the power of repetition and refrain. The power, too, of plain rather than fancy or arcane words—for example, blabbing.

I remember being enchanted—a bit excessively, I now think—when I first read “The Lie” by a single wonderful image early on: “Say to the court it glows/ And shines like rotten wood.” The mental picture of an opalescent, greenish glow on a moldy softwood plank—that phosphorescent decay—knocked me out (to use an expression from those student days). It was a period when images were highly prized, and my teachers encouraged me to prize images, the deeper the better. Well, though I may have been unreflectingly guided by fashion, at least I had the brains to appreciate this great image of Raleigh’s.

But now that superb rotten wood feels like an incidental or ancillary beauty to me, one moment in a larger force. What propels this poem is not its images but its masterful breaking down of an idea into social and moral components: the brilliant, considered division into hammer-blows of example and refrain while the pace and content vary around that central pulse. “Driving home the point” could not have a more apt demonstration.

Raleigh’s manic, extended thoroughness; his resourceful rhyming; his relentless, wide gaze that takes in love and zeal, wit and wisdom, and, ultimately, also includes his own soul’s “blabbing”—this is form as audible conviction: conviction of a degree and kind attainable only by a poem.

“The Lie”

Go, soul, the body’s guest,
….Upon a thankless arrant;
Fear not to touch the best;
….The truth shall be thy warrant:
….….Go, since I needs must die,
….….And give the world the lie.

Say to the court it glows
….And shines like rotten wood,
Say to the church it shows
….What’s good, and doth no good:
….….If church and court reply,
….….Then give them both the lie.

Tell potentates, they live
….Acting, by others’ action;
Not lov’d unless they give;
….Not strong, but by affection.
….….If potentates reply,
….….Give potentates the lie.

Tell men of high condition,
….That manage the estate,
Their purpose is ambition;
….Their practice only hate.
….….And if they once reply,
….….Then give them all the lie.

Tell them that brave it most,
….They beg for more by spending,
Who in their greatest cost
….Like nothing but commending.
….….And if they make reply,
….….Then give them all the lie.

Tell zeal it wants devotion;
….Tell love it is but lust;
Tell time it meets but motion;
….Tell flesh it is but dust:
….….And wish them not reply,
….….For thou must give the lie.

Tell age it daily wasteth;
….Tell honour how it alters;
Tell beauty how she blasteth;
….Tell favour how it falters:
….….And as they shall reply,
….….Give every one the lie.

Tell wit how much it wrangles
….In tickle points of niceness;
Tell wisdom she entangles
….Herself in over-wiseness:
….….And when they do reply,
….….Straight give them both the lie.

Tell physic of her boldness;
….Tell skill it is prevention;
Tell charity of coldness;
….Tell law it is contention:
….….And as they do reply,
….….So give them still the lie.

Tell fortune of her blindness;
….Tell nature of decay;
Tell friendship of unkindness;
….Tell justice of delay:
….….And if they will reply,
….….Then give them all the lie.

Tell arts they have no soundness,
….But vary by esteeming;
Tell schools they want profoundness,
….And stand too much on seeming.
….….If arts and schools reply,
….….Give arts and schools the lie.

Tell faith it’s fled the city;
….Tell how the country erreth;
Tell manhood, shakes off pity;
….Tell virtue, least preferreth.
….….And if they do reply,
….….Spare not to give the lie.

So when thou hast, as I
….Commanded thee, done blabbing;
Because to give the lie
….Deserves no less than stabbing:
….….Stab at thee, he that will,
….….No stab thy soul can kill!

—Sir Walter Raleigh

More from theSource here.


Search Engine History

It’s hard to believe that internet-based search engines have been in the mainstream consciousness for around twenty years now. It seems like not long ago that we were all playing Pong and searching index cards at the local library. Infographic Labs summarizes the last twenty years of search for us below.

From Infographic Labs:


Infographic: Search Engine History by Infographiclabs


Andre Geim: in praise of graphene

From Nature:

Nobel laureate explains why the carbon sheets deserved to win this year’s prize.

This year’s Nobel Prize in Physics went to the discoverers of the one-atom-thick sheets of carbon known as graphene. Andre Geim of the University of Manchester, UK, who shared the award with his colleague Konstantin Novoselov, tells Nature why graphene deserves the prize, and why he hasn’t patented it.

In one sentence, what is graphene?

Graphene is a single plane of graphite that has to be pulled out of bulk graphite to show its amazing properties.

What are these properties?

It’s the thinnest possible material you can imagine. It also has the largest surface-to-weight ratio: with one gram of graphene you can cover several football pitches (in Manchester, you know, we measure surface area in football pitches). It’s also the strongest material ever measured; it’s the stiffest material we know; it’s the most stretchable crystal. That’s not the full list of superlatives, but it’s pretty impressive.

A lot of people expected you to win, but not so soon after the discovery in 2004. Were you expecting it?

I didn’t think it would happen this year. I was thinking about next year or maybe 2014. I slept quite soundly without much expectation. Yeah, it’s good, it’s good.

Graphene has won, but not that much has actually been done with it yet. Do you think it was too soon?

No. The prize, if you read the citation, was given for the properties of graphene; it wasn’t given for expectations that have not yet been realized. Ernest Rutherford’s 1908 Nobel Prize in Chemistry wasn’t given for the nuclear power station — he wouldn’t have survived that long — it was given for showing how interesting atomic physics could be. I believe the Nobel prize committee did a good job.

Do you think that carbon nanotubes were unfairly overlooked?

It’s difficult to judge; I’m a little afraid of being biased. If the prize had been given for bringing graphene to the attention of the community, then it would have been unfair to take it away from carbon nanotubes. But it was given for graphene’s properties, and I think carbon nanotubes did not deliver that range of properties. Everyone knows that — in terms of physics, not applications — carbon nanotubes were not as successful as graphene.

Why do you think graphene has become so popular in the physics community?

I would say there are three important things about graphene. It’s two-dimensional, which is the best possible number for studying fundamental physics. The second thing is the quality of graphene, which stems from its extremely strong carbon–carbon bonds. And finally, the system is also metallic.

What do you think graphene will be used for first?

Two or three months ago, I was in South Korea, and I was shown a graphene roadmap, compiled by Samsung. On this roadmap were approximately 50 dots, corresponding to particular applications. One of the closest applications with a reasonable market value was a flexible touch screen. Samsung expects something within two to three years.

More from theSource here.

Small Change. Why the Revolution will Not be Tweeted

From The New Yorker:

At four-thirty in the afternoon on Monday, February 1, 1960, four college students sat down at the lunch counter at the Woolworth’s in downtown Greensboro, North Carolina. They were freshmen at North Carolina A. & T., a black college a mile or so away.

“I’d like a cup of coffee, please,” one of the four, Ezell Blair, said to the waitress.

“We don’t serve Negroes here,” she replied.

The Woolworth’s lunch counter was a long L-shaped bar that could seat sixty-six people, with a standup snack bar at one end. The seats were for whites. The snack bar was for blacks. Another employee, a black woman who worked at the steam table, approached the students and tried to warn them away. “You’re acting stupid, ignorant!” she said. They didn’t move. Around five-thirty, the front doors to the store were locked. The four still didn’t move. Finally, they left by a side door. Outside, a small crowd had gathered, including a photographer from the Greensboro Record. “I’ll be back tomorrow with A. & T. College,” one of the students said.

By next morning, the protest had grown to twenty-seven men and four women, most from the same dormitory as the original four. The men were dressed in suits and ties. The students had brought their schoolwork, and studied as they sat at the counter. On Wednesday, students from Greensboro’s “Negro” secondary school, Dudley High, joined in, and the number of protesters swelled to eighty. By Thursday, the protesters numbered three hundred, including three white women, from the Greensboro campus of the University of North Carolina. By Saturday, the sit-in had reached six hundred. People spilled out onto the street. White teen-agers waved Confederate flags. Someone threw a firecracker. At noon, the A. & T. football team arrived. “Here comes the wrecking crew,” one of the white students shouted.

By the following Monday, sit-ins had spread to Winston-Salem, twenty-five miles away, and Durham, fifty miles away. The day after that, students at Fayetteville State Teachers College and at Johnson C. Smith College, in Charlotte, joined in, followed on Wednesday by students at St. Augustine’s College and Shaw University, in Raleigh. On Thursday and Friday, the protest crossed state lines, surfacing in Hampton and Portsmouth, Virginia, in Rock Hill, South Carolina, and in Chattanooga, Tennessee. By the end of the month, there were sit-ins throughout the South, as far west as Texas. “I asked every student I met what the first day of the sitdowns had been like on his campus,” the political theorist Michael Walzer wrote in Dissent. “The answer was always the same: ‘It was like a fever. Everyone wanted to go.’ ” Some seventy thousand students eventually took part. Thousands were arrested and untold thousands more radicalized. These events in the early sixties became a civil-rights war that engulfed the South for the rest of the decade—and it happened without e-mail, texting, Facebook, or Twitter.

The world, we are told, is in the midst of a revolution. The new tools of social media have reinvented social activism. With Facebook and Twitter and the like, the traditional relationship between political authority and popular will has been upended, making it easier for the powerless to collaborate, coördinate, and give voice to their concerns. When ten thousand protesters took to the streets in Moldova in the spring of 2009 to protest against their country’s Communist government, the action was dubbed the Twitter Revolution, because of the means by which the demonstrators had been brought together. A few months after that, when student protests rocked Tehran, the State Department took the unusual step of asking Twitter to suspend scheduled maintenance of its Web site, because the Administration didn’t want such a critical organizing tool out of service at the height of the demonstrations. “Without Twitter the people of Iran would not have felt empowered and confident to stand up for freedom and democracy,” Mark Pfeifle, a former national-security adviser, later wrote, calling for Twitter to be nominated for the Nobel Peace Prize. Where activists were once defined by their causes, they are now defined by their tools. Facebook warriors go online to push for change. “You are the best hope for us all,” James K. Glassman, a former senior State Department official, told a crowd of cyber activists at a recent conference sponsored by Facebook, A. T. & T., Howcast, MTV, and Google. Sites like Facebook, Glassman said, “give the U.S. a significant competitive advantage over terrorists. Some time ago, I said that Al Qaeda was ‘eating our lunch on the Internet.’ That is no longer the case. Al Qaeda is stuck in Web 1.0. The Internet is now about interactivity and conversation.”

These are strong, and puzzling, claims. Why does it matter who is eating whose lunch on the Internet? Are people who log on to their Facebook page really the best hope for us all? As for Moldova’s so-called Twitter Revolution, Evgeny Morozov, a scholar at Stanford who has been the most persistent of digital evangelism’s critics, points out that Twitter had scant internal significance in Moldova, a country where very few Twitter accounts exist. Nor does it seem to have been a revolution, not least because the protests—as Anne Applebaum suggested in the Washington Post—may well have been a bit of stagecraft cooked up by the government. (In a country paranoid about Romanian revanchism, the protesters flew a Romanian flag over the Parliament building.) In the Iranian case, meanwhile, the people tweeting about the demonstrations were almost all in the West. “It is time to get Twitter’s role in the events in Iran right,” Golnaz Esfandiari wrote, this past summer, in Foreign Policy. “Simply put: There was no Twitter Revolution inside Iran.” The cadre of prominent bloggers, like Andrew Sullivan, who championed the role of social media in Iran, Esfandiari continued, misunderstood the situation. “Western journalists who couldn’t reach—or didn’t bother reaching?—people on the ground in Iran simply scrolled through the English-language tweets posted with tag #iranelection,” she wrote. “Through it all, no one seemed to wonder why people trying to coordinate protests in Iran would be writing in any language other than Farsi.”

Some of this grandiosity is to be expected. Innovators tend to be solipsists. They often want to cram every stray fact and experience into their new model. As the historian Robert Darnton has written, “The marvels of communication technology in the present have produced a false consciousness about the past—even a sense that communication has no history, or had nothing of importance to consider before the days of television and the Internet.” But there is something else at work here, in the outsized enthusiasm for social media. Fifty years after one of the most extraordinary episodes of social upheaval in American history, we seem to have forgotten what activism is.

More from theSource here.

Art. Does it have to be BOLD to be good?

The lengthy corridors of art history over the last five hundred years are decorated with numerous bold and monumental works. Name just a handful of memorable favorites and you’ll see a pattern emerge: Guernica (Pablo Picasso), The Persistence of Memory (Salvador Dali), The Dance (Henri Matisse), The Garden of Earthly Delights (Hieronymus Bosch). Yes, these works are bold. They’re bold in the sense that they represented a fundamental shift from the artistic sensibilities and ideas of their times. These works stirred the salons and caused commotion among the “cognoscenti” and the chattering classes. They implored (or decried) the establishment to take notice of new forms, new messages, new perspectives.

And, now here we are in the 21st century, floating in a bottomless bowl of a bold media soup; 24-hour opinion and hyperbole; oversized interactive billboards, explosive 3D movies, voyeuristic reality TV, garish commercials, sexually charged headlines and suggestive mainstream magazines. The provocative images, the loudness, the vividness, the anger – it’s all bold and it’s vying for your increasingly fragmented and desensitized attention. But, this contemporary boldness seems more aligned with surface brightness and bigness than it is with depth of meaning. The boldness of works by earlier artists such as Picasso, Dali, Bosch came from depth of meaning rather than use of neon paints or other bold visual noise.

So, what of contemporary art over the last couple of decades? Well, a pseudo-scientific tour of half-a-dozen art galleries featuring the in-the-moment works of art may well tell you the same story – it’s mostly bold as well. What’s been selling at the top art auction houses? Bold. What’s been making headlines in the art world? Bold.

The trend is and has been set for a while: it has to be brighter, louder, bigger. Indeed, a recent feature article in the New York Times on the 25th Paris Biennale seems to confirm this trend in Western art. (Background: The Biennale is home to around a hundred of the world’s most exclusive art galleries, those that purport to set the art world’s trends, make or break emerging artists and most importantly (for them) set “market” prices.) The article’s author, Souren Melikian, states:

Perception is changing. Interest in subtle nuances is receding as our attention span shortens. Awareness of this trend probably accounts for the recent art trade emphasis on clarity and monumentality and the striking progression of 20th-century modernity.

Well, I certainly take no issue with the observation that “commercial” art has become much more monumental and less subtle, especially over the last 40 years. In a market overflowing with noise, distraction and mediocrity, most art, to be successful, must first grab someone’s fragmented and limited attention, and sadly, it does this by being bold, bright or big! However, I strongly disagree that “clarity” is a direct result of this new trend in boldness. I could recite a list as long as my arm of paintings and other art works that show remarkable clarity even though they are subtle.

Perhaps paradoxically, brokers and buyers of bold seem exclusively to associate boldness with a statement of modernity, compositional complexity, and layered meaning. The galleries at the Biennale seem to be confusing subtlety with dullness, simplicity and shallowness. Yet the world holds just as many works that exhibit just as much richness, depth and emotion as their bolder counterparts, despite their surface subtlety. There is room for reflection and nuanced mood; there is room for complexity and depth of meaning from simple composition; there is room for pastels in this over-saturated, bold neon world.

As Bob Duggan eloquently states, at BigThink:

The meek, such as 2009 Turner Prize winner Richard Wright (reviewed recently by me here) may yet inherit the earth, but only in a characteristically quiet way. Hirst’s jewel-encrusted skulls will always grab headlines, but Wright’s simpler, pensive work can engage hearts and minds in a more fulfilling way. And why is it important that the right thing happens and the Wrights win out over the Hirsts? Because art remains one of the few havens for thought in our noise- and light-polluted world.

So, I’m encouraged to see that I am not yet a lost and lone voice in this noisy wilderness of bold brashness. Oh, and in case you’re wondering what a meaningfully complex yet subtle painting looks like, gaze at Half Light by Dana Blanchard above.

Commonplaces of technology critique

From Eurozine:

What is it good for? A passing fad! It makes you stupid! Today’s technology critique is tomorrow’s embarrassing error of judgement, as Katrin Passig shows. Her suggestion: one should try to avoid repeating the most commonplace critiques, particularly in public.

In a 1969 study of colour designations in different cultures, the anthropologist Brent Berlin and the linguist Paul Kay described how the observed progression of colour terms always follows the same sequence. Cultures with only two colour concepts distinguish between “light” and “dark” shades. If the culture recognizes three colours, the third will be red. If the language differentiates further, first come green and/or yellow, then blue. All languages with six colour designations distinguish between black, white, red, green, blue and yellow. The next level is brown, then, in varying sequences, orange, pink, purple and/or grey, with light blue appearing last of all.

The reaction to technical innovations, both in the media and in our private lives, follows similarly preconceived paths. The first, entirely knee-jerk dismissal is the “What the hell is it good for?” (Argument No.1) with which IBM engineer Robert Lloyd greeted the microprocessor in 1968. Even practices and techniques that only constitute a variation on the familiar – the electric typewriter as successor to the mechanical version, for instance – are met with distaste in the cultural criticism sector. Inventions like the telephone or the Internet, which open up a whole new world, have it even tougher. If cultural critics had existed at the dawn of life itself, they would have written grumpily in their magazines: “Life – what is it good for? Things were just fine before.”

Because the new throws into confusion processes that people have got used to, it is often perceived not only as useless but as a downright nuisance. The student Friedrich August Köhler wrote in 1790 after a journey on foot from Tübingen to Ulm: “[Signposts] had been put up everywhere following an edict of the local prince, but their existence proved short-lived, since they tended to be destroyed by a boisterous rabble in most places. This was most often the case in areas where the country folk live scattered about on farms, and when going on business to the next city or village more often than not come home inebriated and, knowing the way as they do, consider signposts unnecessary.”

The Parisians seem to have greeted the introduction of street lighting in 1667 under Louis XIV with a similar lack of enthusiasm. Dietmar Kammerer conjectured in the Süddeutsche Zeitung that the regular destruction of these street lamps represented a protest on the part of the citizens against the loss of their private sphere, since it seemed clear to them that here was “a measure introduced by the king to bring the streets under his control”. A simpler explanation would be that citizens tend in the main to react aggressively to unsupervised innovations in their midst. Recently, Deutsche Bahn explained that the initial vandalism of their “bikes for hire” had died down, now that locals had “grown accustomed to the sight of the bicycles”.

When it turns out that the novelty is not as useless as initially assumed, there follows the brief interregnum of Argument No.2: “Who wants it anyway?” “That’s an amazing invention,” gushed US President Rutherford B. Hayes of the telephone, “but who would ever want to use one of them?” And the film studio boss Harry M. Warner is quoted as asking in 1927, “Who the hell wants to hear actors talk?”.

More from theSource here.

MondayPoem: The Chimney Sweeper

By Robert Pinsky for Slate:

Here is a pair of poems more familiar than many I’ve presented here in the monthly “Classic Poem” feature—familiar, maybe, yet with an unsettling quality that seems inexhaustible. As in much of William Blake’s writing, what I may think I know, he manages to make me wonder if I really do know.

“Blake’s poetry has the unpleasantness of great poetry,” says T.S. Eliot (who has a way of parodying himself even while making wise observations). The truth in Eliot’s remark, for me, has to do not simply with Blake’s indictment of conventional churches, governments, artists but with his general, metaphysical defiance toward customary ways of understanding the universe.

The “unpleasantness of great poetry,” as exemplified by Blake, is rooted in a seductively beautiful process of unbalancing and disrupting. Great poetry gives us elaborately attractive constructions of architecture or music or landscape—while preventing us from settling comfortably into this new and engaging structure, cadence, or terrain. In his Songs of Innocence and Experience, Shewing the Two Contrary States of the Human Soul, Blake achieves a binary, deceptively simple version of that splendid “unpleasantness.”

In particular, the two poems both titled “The Chimney Sweeper” offer eloquent examples of Blake’s unsettling art. (One “Chimney Sweeper” poem comes from the Songs of Innocence; the other, from the Songs of Experience.) I can think to myself that the poem in Songs of Innocence is more powerful than the one in Songs of Experience, because the Innocence characters—both the “I” who speaks and “little Tom Dacre”—provide, in their heartbreaking extremes of acceptance, the more devastating indictment of social and economic arrangements that sell and buy children, sending them to do crippling, fatal labor.

By that light, the Experience poem entitled “The Chimney Sweeper,” explicit and accusatory, can seem a lesser work of art. The Innocence poem is implicit and ironic. Its delusional or deceptive Angel with a bright key exposes religion as exploiting the credulous children, rather than protecting them or rescuing them. The profoundly, utterly “innocent” speaker provides a subversive drama.

But that judgment is unsettled by second thoughts: Does the irony of the Innocence poem affect me all the more—does it penetrate without seeming heavy?—precisely because I am aware of the Experience poem? Do the explicit lines “They clothed me in the clothes of death,/ And taught me to sing the notes of woe” re-enforce the Innocence poem’s meanings—while pointedly differing from, maybe even criticizing, that counterpart-poem’s ironic method? And doesn’t that, too, bring another, significant note of dramatic outrage?

Or, to put the question more in terms of subject matter: both poems dramatize the way religion, government, and custom collaborate in social arrangements that impose cruel treatment on some people while enhancing the lives of others (for example, by cleaning their chimneys). Does the naked, declarative quality of the Experience poem sharpen my understanding of the Innocence poem? Does the pairing hold back or forbid my understanding’s tendency to become self-congratulatory or pleasantly resolved? It is in the nature of William Blake’s genius to make such questions not just literary but moral.

“The Chimney Sweeper,” from Songs of Innocence

When my mother died I was very young,
And my father sold me while yet my tongue
Could scarcely cry “ ’weep! ’weep! ’weep! ’weep!”
So your chimneys I sweep & in soot I sleep.

There’s little Tom Dacre, who cried when his head
That curled like a lamb’s back, was shaved: so I said,
“Hush, Tom! never mind it, for when your head’s bare
You know that the soot cannot spoil your white hair.”

And so he was quiet, & that very night,
As Tom was a-sleeping he had such a sight!
That thousands of sweepers, Dick, Joe, Ned & Jack,
Were all of them locked up in coffins of black.

And by came an Angel who had a bright key,
And he opened the coffins & set them all free;
Then down a green plain, leaping, laughing, they run,
And wash in a river and shine in the Sun.

Then naked & white, all their bags left behind,
They rise upon clouds and sport in the wind.
And the Angel told Tom, if he’d be a good boy,
He’d have God for his father & never want joy.

And so Tom awoke; and we rose in the dark,
And got with our bags & our brushes to work.
Though the morning was cold, Tom was happy & warm;
So if all do their duty, they need not fear harm.

—William Blake

More from theSource here.

Google’s Earth

From The New York Times:

“I ACTUALLY think most people don’t want Google to answer their questions,” said the search giant’s chief executive, Eric Schmidt, in a recent and controversial interview. “They want Google to tell them what they should be doing next.” Do we really desire Google to tell us what we should be doing next? I believe that we do, though with some rather complicated qualifiers.

Science fiction never imagined Google, but it certainly imagined computers that would advise us what to do. HAL 9000, in “2001: A Space Odyssey,” will forever come to mind, his advice, we assume, eminently reliable — before his malfunction. But HAL was a discrete entity, a genie in a bottle, something we imagined owning or being assigned. Google is a distributed entity, a two-way membrane, a game-changing tool on the order of the equally handy flint hand ax, with which we chop our way through the very densest thickets of information. Google is all of those things, and a very large and powerful corporation to boot.

We have yet to take Google’s measure. We’ve seen nothing like it before, and we already perceive much of our world through it. We would all very much like to be sagely and reliably advised by our own private genie; we would like the genie to make the world more transparent, more easily navigable. Google does that for us: it makes everything in the world accessible to everyone, and everyone accessible to the world. But we see everyone looking in, and blame Google.

Google is not ours. Which feels confusing, because we are its unpaid content-providers, in one way or another. We generate product for Google, our every search a minuscule contribution. Google is made of us, a sort of coral reef of human minds and their products. And still we balk at Mr. Schmidt’s claim that we want Google to tell us what to do next. Is he saying that when we search for dinner recommendations, Google might recommend a movie instead? If our genie recommended the movie, I imagine we’d go, intrigued. If Google did that, I imagine, we’d bridle, then begin our next search.

We never imagined that artificial intelligence would be like this. We imagined discrete entities. Genies. We also seldom imagined (in spite of ample evidence) that emergent technologies would leave legislation in the dust, yet they do. In a world characterized by technologically driven change, we necessarily legislate after the fact, perpetually scrambling to catch up, while the core architectures of the future, increasingly, are erected by entities like Google.

William Gibson is the author of the forthcoming novel “Zero History.”

More from theSource here.

Social networking: Failure to connect

From the Guardian:

The first time I joined Facebook, I had to quit again immediately. It was my first week of university. I was alone, along with thousands of other students, in a sea of club nights and quizzes and tedious conversations about other people’s A-levels. This was back when the site was exclusively for students. I had been told, in no uncertain terms, that joining was mandatory. Failure to do so was a form of social suicide worse even than refusing to drink alcohol. I had no choice. I signed up.

Users of Facebook will know the site has one immutable feature. You don’t have to post a profile picture, or share your likes and dislikes with the world, though both are encouraged. You can avoid the news feed, the apps, the tweet-like status updates. You don’t even have to choose a favourite quote. The one thing you cannot get away from is your friend count. It is how Facebook keeps score.

Five years ago, on probably the loneliest week of my life, my newly created Facebook page looked me square in the eye and announced: “You have 0 friends.” I closed the account.

Facebook is not a good place for a lonely person, and not just because of how precisely it quantifies your isolation. The news feed, the default point of entry to the site, is a constantly updated stream of your every friend’s every activity, opinion and photograph. It is a Twitter feed in glorious technicolour, complete with pictures, polls and videos. It exists to make sure you know exactly how much more popular everyone else is, casually informing you that 14 of your friends were tagged in the album “Fun without Tom Meltzer”. It can be, to say the least, disheartening. Without a real-world social network with which to interact, social networking sites act as proof of the old cliché: you’re never so alone as when you’re in a crowd.

The pressures put on teenagers by sites such as Facebook are well-known. Reports of cyber-bullying, happy-slapping, even self-harm and suicide attempts motivated by social networking sites have become increasingly common in the eight years since Friendster – and then MySpace, Bebo and Facebook – launched. But the subtler side-effects for a generation that has grown up with these sites are only now being felt. In March this year, the NSPCC published a detailed breakdown of calls made to ChildLine in the last five years. Though overall the number of calls from children and teenagers had risen by just 10%, calls about loneliness had nearly tripled, from 1,853 five years ago to 5,525 in 2009. Among boys, the number of calls about loneliness was more than five times higher than it had been in 2004.

This is not just a teenage problem. In May, the Mental Health Foundation released a report called The Lonely Society? Its survey found that 53% of 18-34-year-olds had felt depressed because of loneliness, compared with just 32% of people over 55. The question of why was, in part, answered by another of the report’s findings: nearly a third of young people said they spent too much time communicating online and not enough in person.

More from theSource here.

Sergey Brin’s Search for a Parkinson’s Cure

From Wired:

Several evenings a week, after a day’s work at Google headquarters in Mountain View, California, Sergey Brin drives up the road to a local pool. There, he changes into swim trunks, steps out on a 3-meter springboard, looks at the water below, and dives.

Brin is competent at all four types of springboard diving—forward, back, reverse, and inward. Recently, he’s been working on his twists, which have been something of a struggle. But overall, he’s not bad; in 2006 he competed in the master’s division world championships. (He’s quick to point out he placed sixth out of six in his event.)

The diving is the sort of challenge that Brin, who has also dabbled in yoga, gymnastics, and acrobatics, is drawn to: equal parts physical and mental exertion. “The dive itself is brief but intense,” he says. “You push off really hard and then have to twist right away. It does get your heart rate going.”

There’s another benefit as well: With every dive, Brin gains a little bit of leverage—leverage against a risk, looming somewhere out there, that someday he may develop the neurodegenerative disorder Parkinson’s disease. Buried deep within each cell in Brin’s body—in a gene called LRRK2, which sits on the 12th chromosome—is a genetic mutation that has been associated with higher rates of Parkinson’s.

Not everyone with Parkinson’s has an LRRK2 mutation; nor will everyone with the mutation get the disease. But it does increase the chance that Parkinson’s will emerge sometime in the carrier’s life to between 30 and 75 percent. (By comparison, the risk for an average American is about 1 percent.) Brin himself splits the difference and figures his DNA gives him about 50-50 odds.

That’s where exercise comes in. Parkinson’s is a poorly understood disease, but research has associated a handful of behaviors with lower rates of disease, starting with exercise. One study found that young men who work out have a 60 percent lower risk. Coffee, likewise, has been linked to a reduced risk. For a time, Brin drank a cup or two a day, but he can’t stand the taste of the stuff, so he switched to green tea. (“Most researchers think it’s the caffeine, though they don’t know for sure,” he says.) Cigarette smokers also seem to have a lower chance of developing Parkinson’s, but Brin has not opted to take up the habit. With every pool workout and every cup of tea, he hopes to diminish his odds, to adjust his algorithm by counteracting his DNA with environmental factors.

“This is all off the cuff,” he says, “but let’s say that based on diet, exercise, and so forth, I can get my risk down by half, to about 25 percent.” The steady progress of neuroscience, Brin figures, will cut his risk by around another half—bringing his overall chance of getting Parkinson’s to about 13 percent. It’s all guesswork, mind you, but the way he delivers the numbers and explains his rationale, he is utterly convincing.

Brin, of course, is no ordinary 36-year-old. As half of the duo that founded Google, he’s worth about $15 billion. That bounty provides additional leverage: Since learning that he carries an LRRK2 mutation, Brin has contributed some $50 million to Parkinson’s research, enough, he figures, to “really move the needle.” In light of the uptick in research into drug treatments and possible cures, Brin adjusts his overall risk again, down to “somewhere under 10 percent.” That’s still 10 times the average, but it goes a long way to counterbalancing his genetic predisposition.
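
Brin’s back-of-the-envelope reasoning amounts to a chain of multiplicative risk reductions. A minimal sketch of that arithmetic (the function is illustrative, and every figure is just one of his off-the-cuff guesses restated from the article, not a medical estimate):

```python
def adjusted_risk(base_risk, reductions):
    """Apply successive fractional risk reductions to a baseline probability."""
    risk = base_risk
    for r in reductions:
        risk *= (1 - r)
    return risk

carrier_risk = 0.50  # Brin splits the 30-75% range quoted for LRRK2 carriers
average_risk = 0.01  # roughly 1% for an average American

# Diet and exercise cut risk "by half"; research progress by "around another half"
risk = adjusted_risk(carrier_risk, [0.5, 0.5])
print(f"{risk:.1%}")  # 12.5% -- the article rounds this to "about 13 percent"
print(f"{risk / average_risk:.1f}x the average risk")
```

The point of the sketch is only that each factor multiplies, which is why two halvings of a 50 percent baseline land near the “about 13 percent” figure he quotes.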

It sounds so pragmatic, so obvious, that you can almost miss a striking fact: Many philanthropists have funded research into diseases they themselves have been diagnosed with. But Brin is likely the first who, based on a genetic test, began funding scientific research in the hope of escaping a disease in the first place.

More from theSource here.

The internet: Everything you ever need to know

From The Observer:

In spite of all the answers the internet has given us, its full potential to transform our lives remains the great unknown. Here are the nine key steps to understanding the most powerful tool of our age – and where it’s taking us.

A funny thing happened to us on the way to the future. The internet went from being something exotic to being a boring utility, like mains electricity or running water – and we never really noticed. So we wound up being totally dependent on a system about which we are terminally incurious. You think I exaggerate about the dependence? Well, just ask Estonia, one of the most internet-dependent countries on the planet, which in 2007 was more or less shut down for two weeks by a sustained attack on its network infrastructure. Or imagine what it would be like if, one day, you suddenly found yourself unable to book flights, transfer funds from your bank account, check bus timetables, send email, search Google, call your family using Skype, buy music from Apple or books from Amazon, buy or sell stuff on eBay, watch clips on YouTube or BBC programmes on the iPlayer – or do the 1,001 other things that have become as natural as breathing.

The internet has quietly infiltrated our lives, and yet we seem to be remarkably unreflective about it. That’s not because we’re short of information about the network; on the contrary, we’re awash with the stuff. It’s just that we don’t know what it all means. We’re in the state once described by that great scholar of cyberspace, Manuel Castells, as “informed bewilderment”.

Mainstream media don’t exactly help here, because much – if not most – media coverage of the net is negative. It may be essential for our kids’ education, they concede, but it’s riddled with online predators, seeking children to “groom” for abuse. Google is supposedly “making us stupid” and shattering our concentration into the bargain. It’s also allegedly leading to an epidemic of plagiarism. File sharing is destroying music, online news is killing newspapers, and Amazon is killing bookshops. The network is making a mockery of legal injunctions and the web is full of lies, distortions and half-truths. Social networking fuels the growth of vindictive “flash mobs” which ambush innocent columnists such as Jan Moir. And so on.

All of which might lead a detached observer to ask: if the internet is such a disaster, how come 27% of the world’s population (or about 1.8 billion people) use it happily every day, while billions more are desperate to get access to it?

So how might we go about getting a more balanced view of the net? What would you really need to know to understand the internet phenomenon? Having thought about it for a while, my conclusion is that all you need is a smallish number of big ideas, which, taken together, sharply reduce the bewilderment of which Castells writes so eloquently.

But how many ideas? In 1956, the psychologist George Miller published a famous paper in the journal Psychological Review. Its title was “The Magical Number Seven, Plus or Minus Two: Some Limits on Our Capacity for Processing Information” and in it Miller set out to summarise some earlier experiments which attempted to measure the limits of people’s short-term memory. In each case he reported that the effective “channel capacity” lay between five and nine choices. Miller did not draw any firm conclusions from this, however, and contented himself with merely conjecturing that “the recurring sevens might represent something deep and profound or be just coincidence”. And that, he probably thought, was that.

But Miller had underestimated the appetite of popular culture for anything with the word “magical” in the title. Instead of being known as a mere aggregator of research results, Miller found himself identified as a kind of sage — a discoverer of a profound truth about human nature. “My problem,” he wrote, “is that I have been persecuted by an integer. For seven years this number has followed me around, has intruded in my most private data, and has assaulted me from the pages of our most public journals… Either there really is something unusual about the number or else I am suffering from delusions of persecution.”

More from theSource here.


The Evolution of the Physicist’s Picture of Nature

From Scientific American:

Editor’s Note: We are republishing this article by Paul Dirac from the May 1963 issue of Scientific American, as it might be of interest to listeners to the June 24, 2010, and June 25, 2010 Science Talk podcasts, featuring award-winning writer and physicist Graham Farmelo discussing The Strangest Man, his biography of the Nobel Prize-winning British theoretical physicist.

In this article I should like to discuss the development of general physical theory: how it developed in the past and how one may expect it to develop in the future. One can look on this continual development as a process of evolution, a process that has been going on for several centuries.

The first main step in this process of evolution was brought about by Newton. Before Newton, people looked on the world as being essentially two-dimensional (the two dimensions in which one can walk about), and the up-and-down dimension seemed to be something essentially different. Newton showed how one can look on the up-and-down direction as being symmetrical with the other two directions, by bringing in gravitational forces and showing how they take their place in physical theory. One can say that Newton enabled us to pass from a picture with two-dimensional symmetry to a picture with three-dimensional symmetry.

Einstein made another step in the same direction, showing how one can pass from a picture with three-dimensional symmetry to a picture with four-dimensional symmetry. Einstein brought in time and showed how it plays a role that is in many ways symmetrical with the three space dimensions. However, this symmetry is not quite perfect. With Einstein’s picture one is led to think of the world from a four-dimensional point of view, but the four dimensions are not completely symmetrical. There are some directions in the four-dimensional picture that are different from others: directions that are called null directions, along which a ray of light can move; hence the four-dimensional picture is not completely symmetrical. Still, there is a great deal of symmetry among the four dimensions. The only lack of symmetry, so far as concerns the equations of physics, is in the appearance of a minus sign in the equations with respect to the time dimension as compared with the three space dimensions [see top equation in diagram].
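The diagram itself is not reproduced here, but the minus sign Dirac refers to is the one that appears in the standard expression for the invariant interval of special relativity (a conventional form, supplied for reference rather than copied from Dirac’s diagram):

```latex
ds^2 = c^2\,dt^2 - dx^2 - dy^2 - dz^2
```

The time term enters with the opposite sign to the three space terms, which is precisely the residual asymmetry among the four dimensions that the text describes.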

[Diagram: four-dimensional symmetry equation and Schrödinger’s equations]

We have, then, the development from the three-dimensional picture of the world to the four-dimensional picture. The reader will probably not be happy with this situation, because the world still appears three-dimensional to his consciousness. How can one bring this appearance into the four-dimensional picture that Einstein requires the physicist to have?

What appears to our consciousness is really a three-dimensional section of the four-dimensional picture. We must take a three-dimensional section to give us what appears to our consciousness at one time; at a later time we shall have a different three-dimensional section. The task of the physicist consists largely of relating events in one of these sections to events in another section referring to a later time. Thus the picture with four-dimensional symmetry does not give us the whole situation. This becomes particularly important when one takes into account the developments that have been brought about by quantum theory. Quantum theory has taught us that we have to take the process of observation into account, and observations usually require us to bring in the three-dimensional sections of the four-dimensional picture of the universe.

The special theory of relativity, which Einstein introduced, requires us to put all the laws of physics into a form that displays four-dimensional symmetry. But when we use these laws to get results about observations, we have to bring in something additional to the four-dimensional symmetry, namely the three-dimensional sections that describe our consciousness of the universe at a certain time.

Einstein made another most important contribution to the development of our physical picture: he put forward the general theory of relativity, which requires us to suppose that the space of physics is curved. Before this, physicists had always worked with a flat space, the three-dimensional flat space of Newton, which was then extended to the four-dimensional flat space of special relativity. General relativity made a really important contribution to the evolution of our physical picture by requiring us to go over to curved space. The general requirements of this theory mean that all the laws of physics can be formulated in curved four-dimensional space, and that they show symmetry among the four dimensions. But again, when we want to bring in observations, as we must if we look at things from the point of view of quantum theory, we have to refer to a section of this four-dimensional space. With the four-dimensional space curved, any section that we make in it also has to be curved, because in general we cannot give a meaning to a flat section in a curved space. This leads us to a picture in which we have to take curved three-dimensional sections in the curved four-dimensional space and discuss observations in these sections.

During the past few years people have been trying to apply quantum ideas to gravitation as well as to the other phenomena of physics, and this has led to a rather unexpected development, namely that when one looks at gravitational theory from the point of view of the sections, one finds that there are some degrees of freedom that drop out of the theory. The gravitational field is a tensor field with 10 components. One finds that six of the components are adequate for describing everything of physical importance and the other four can be dropped out of the equations. One cannot, however, pick out the six important components from the complete set of 10 in any way that does not destroy the four-dimensional symmetry. Thus if one insists on preserving four-dimensional symmetry in the equations, one cannot adapt the theory of gravitation to a discussion of measurements in the way quantum theory requires without being forced to a more complicated description than is needed by the physical situation. This result has led me to doubt how fundamental the four-dimensional requirement in physics is. A few decades ago it seemed quite certain that one had to express the whole of physics in four-dimensional form. But now it seems that four-dimensional symmetry is not of such overriding importance, since the description of nature sometimes gets simplified when one departs from it.
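Dirac’s count of 10 components can be checked with a standard tally (not part of the original article): the gravitational field is described by a symmetric tensor, and a symmetric tensor in n dimensions has n(n+1)/2 independent entries:

```latex
g_{\mu\nu} = g_{\nu\mu}
\quad\Rightarrow\quad
\frac{n(n+1)}{2}\bigg|_{n=4} = \frac{4 \cdot 5}{2} = 10 .
```

Four coordinate (gauge) conditions can then be imposed, removing four components and leaving the six that, as the passage says, carry the physical content.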

Now I should like to proceed to the developments that have been brought about by quantum theory. Quantum theory is the discussion of very small things, and it has formed the main subject of physics for the past 60 years. During this period physicists have been amassing quite a lot of experimental information and developing a theory to correspond to it, and this combination of theory and experiment has led to important developments in the physicist’s picture of the world.

More from theSource here.


What Is I.B.M.’s Watson?

From The New York Times:

“Toured the Burj in this U.A.E. city. They say it’s the tallest tower in the world; looked over the ledge and lost my lunch.”

This is the quintessential sort of clue you hear on the TV game show “Jeopardy!” It’s witty (the clue’s category is “Postcards From the Edge”), demands a large store of trivia and requires contestants to make confident, split-second decisions. This particular clue appeared in a mock version of the game in December, held in Hawthorne, N.Y. at one of I.B.M.’s research labs. Two contestants — Dorothy Gilmartin, a health teacher with her hair tied back in a ponytail, and Alison Kolani, a copy editor — furrowed their brows in concentration. Who would be the first to answer?

Neither, as it turned out. Both were beaten to the buzzer by the third combatant: Watson, a supercomputer.

For the last three years, I.B.M. scientists have been developing what they expect will be the world’s most advanced “question answering” machine, able to understand a question posed in everyday human elocution — “natural language,” as computer scientists call it — and respond with a precise, factual answer. In other words, it must do more than what search engines like Google and Bing do, which is merely point to a document where you might find the answer. It has to pluck out the correct answer itself. Technologists have long regarded this sort of artificial intelligence as a holy grail, because it would allow machines to converse more naturally with people, letting us ask questions instead of typing keywords. Software firms and university scientists have produced question-answering systems for years, but these have mostly been limited to simply phrased questions. Nobody ever tackled “Jeopardy!” because experts assumed that even for the latest artificial intelligence, the game was simply too hard: the clues are too puzzling and allusive, and the breadth of trivia is too wide.

With Watson, I.B.M. claims it has cracked the problem — and aims to prove as much on national TV. The producers of “Jeopardy!” have agreed to pit Watson against some of the game’s best former players as early as this fall. To test Watson’s capabilities against actual humans, I.B.M.’s scientists began holding live matches last winter. They mocked up a conference room to resemble the actual “Jeopardy!” set, including buzzers and stations for the human contestants, brought in former contestants from the show and even hired a host for the occasion: Todd Alan Crain, who plays a newscaster on the satirical Onion News Network.

Technically speaking, Watson wasn’t in the room. It was one floor up and consisted of a roomful of servers working at speeds thousands of times faster than most ordinary desktops. Over its three-year life, Watson stored the content of tens of millions of documents, which it now accessed to answer questions about almost anything. (Watson is not connected to the Internet; like all “Jeopardy!” competitors, it knows only what is already in its “brain.”) During the sparring matches, Watson received the questions as electronic texts at the same moment they were made visible to the human players; to answer a question, Watson spoke in a machine-synthesized voice through a small black speaker on the game-show set. When it answered the Burj clue — “What is Dubai?” (“Jeopardy!” answers must be phrased as questions) — it sounded like a perkier cousin of the computer in the movie “WarGames” that nearly destroyed the world by trying to start a nuclear war.

More from theSource here.


Mind Over Mass Media

From the New York Times:

NEW forms of media have always caused moral panics: the printing press, newspapers, paperbacks and television were all once denounced as threats to their consumers’ brainpower and moral fiber.

So too with electronic technologies. PowerPoint, we’re told, is reducing discourse to bullet points. Search engines lower our intelligence, encouraging us to skim on the surface of knowledge rather than dive to its depths. Twitter is shrinking our attention spans.

But such panics often fail basic reality checks. When comic books were accused of turning juveniles into delinquents in the 1950s, crime was falling to record lows, just as the denunciations of video games in the 1990s coincided with the great American crime decline. The decades of television, transistor radios and rock videos were also decades in which I.Q. scores rose continuously.

For a reality check today, take the state of science, which demands high levels of brainwork and is measured by clear benchmarks of discovery. These days scientists are never far from their e-mail, rarely touch paper and cannot lecture without PowerPoint. If electronic media were hazardous to intelligence, the quality of science would be plummeting. Yet discoveries are multiplying like fruit flies, and progress is dizzying. Other activities in the life of the mind, like philosophy, history and cultural criticism, are likewise flourishing, as anyone who has lost a morning of work to the Web site Arts & Letters Daily can attest.

Critics of new media sometimes use science itself to press their case, citing research that shows how “experience can change the brain.” But cognitive neuroscientists roll their eyes at such talk. Yes, every time we learn a fact or skill the wiring of the brain changes; it’s not as if the information is stored in the pancreas. But the existence of neural plasticity does not mean the brain is a blob of clay pounded into shape by experience.

Experience does not revamp the basic information-processing capacities of the brain. Speed-reading programs have long claimed to do just that, but the verdict was rendered by Woody Allen after he read “War and Peace” in one sitting: “It was about Russia.” Genuine multitasking, too, has been exposed as a myth, not just by laboratory studies but by the familiar sight of an S.U.V. undulating between lanes as the driver cuts deals on his cellphone.

Moreover, as the psychologists Christopher Chabris and Daniel Simons show in their new book “The Invisible Gorilla: And Other Ways Our Intuitions Deceive Us,” the effects of experience are highly specific to the experiences themselves. If you train people to do one thing (recognize shapes, solve math puzzles, find hidden words), they get better at doing that thing, but almost nothing else. Music doesn’t make you better at math, conjugating Latin doesn’t make you more logical, brain-training games don’t make you smarter. Accomplished people don’t bulk up their brains with intellectual calisthenics; they immerse themselves in their fields. Novelists read lots of novels, scientists read lots of science.

The effects of consuming electronic media are also likely to be far more limited than the panic implies. Media critics write as if the brain takes on the qualities of whatever it consumes, the informational equivalent of “you are what you eat.” As with primitive peoples who believe that eating fierce animals will make them fierce, they assume that watching quick cuts in rock videos turns your mental life into quick cuts or that reading bullet points and Twitter postings turns your thoughts into bullet points and Twitter postings.

Yes, the constant arrival of information packets can be distracting or addictive, especially to people with attention deficit disorder. But distraction is not a new phenomenon. The solution is not to bemoan technology but to develop strategies of self-control, as we do with every other temptation in life. Turn off e-mail or Twitter when you work, put away your BlackBerry at dinnertime, ask your spouse to call you to bed at a designated hour.

And to encourage intellectual depth, don’t rail at PowerPoint or Google. It’s not as if habits of deep reflection, thorough research and rigorous reasoning ever came naturally to people. They must be acquired in special institutions, which we call universities, and maintained with constant upkeep, which we call analysis, criticism and debate. They are not granted by propping a heavy encyclopedia on your lap, nor are they taken away by efficient access to information on the Internet.

The new media have caught on for a reason. Knowledge is increasing exponentially; human brainpower and waking hours are not. Fortunately, the Internet and information technologies are helping us manage, search and retrieve our collective intellectual output at different scales, from Twitter and previews to e-books and online encyclopedias. Far from making us stupid, these technologies are the only things that will keep us smart.

Steven Pinker, a professor of psychology at Harvard, is the author of “The Stuff of Thought.”

More from theSource here.


MondayPoem: Upon Nothing

By Robert Pinsky for Slate:

The quality of wit, like the Hindu god Shiva, both creates and destroys—sometimes, both at once: The flash of understanding negates a trite or complacent way of thinking, and that stroke of obliteration at the same time creates a new form of insight and a laugh of recognition.

Also like Shiva, wit dances. Leaping gracefully, balancing speed and poise, it can re-embody and refresh old material. Negation itself, for example—verbal play with words like nothing and nobody: In one of the oldest jokes in literature, when the menacing Polyphemus asks Odysseus for his name, Odysseus tricks the monster by giving his name as the Greek equivalent of Nobody.

Another, immensely moving version of that Homeric joke (it may have been old even when Homer used it) is central to the best-known song of the great American comic Bert Williams (1874-1922). You can hear Williams’ funny, heart-rending, subtle rendition of the song (music by Williams, lyrics by Alex Rogers) at the University of California’s Cylinder Preservation and Digitization site.

The lyricist Rogers, I suspect, was aided by Williams’ improvisations as well as his virtuoso delivery. The song’s language is sharp and plain. The plainness, an almost throw-away surface, allows Williams to weave the refrain-word “Nobody” into an intricate fabric of jaunty pathos, savage lament, sly endurance—all in three syllables, with the dialect bent and stretched and released:

When life seems full of clouds and rain,
And I am full of nothing and pain,
Who soothes my thumpin’, bumpin’ brain?
Nobody.

When winter comes with snow and sleet,
And me with hunger, and cold feet—
Who says, “Here’s twenty-five cents
Go ahead and get yourself somethin’ to eat”?
Nobody.

I ain’t never done nothin’ to Nobody.
I ain’t never got nothin’ from Nobody, no time.
And, until I get somethin’ from somebody sometime,
I’ll never do nothin’ for Nobody, no time.

In his poem “Upon Nothing,” John Wilmot (1647-80), also known as the earl of Rochester, deploys wit as a flashing blade of skepticism, slashing away not only at a variety of human behaviors and beliefs, not only at false authorities and hollow reverences, not only at language, but at knowledge—at thought itself:

“Upon Nothing”

1
Nothing, thou elder brother ev’n to Shade
Thou hadst a being ere the world was made,
And, well fixed, art alone of ending not afraid.

2
Ere Time and Place were, Time and Place were not,
When primitive Nothing Something straight begot,
Then all proceeded from the great united What.

3
Something, the general attribute of all,
Severed from thee, its sole original,
Into thy boundless self must undistinguished fall.

4
Yet Something did thy mighty power command,
And from thy fruitful emptiness’s hand
Snatched men, beasts, birds, fire, water, air, and land.

5
Matter, the wicked’st offspring of thy race,
By Form assisted, flew from thy embrace
And rebel Light obscured thy reverend dusky face.

6
With Form and Matter, Time and Place did join,
Body, thy foe, with these did leagues combine
To spoil thy peaceful realm and ruin all thy line.

7
But turncoat Time assists the foe in vain,
And bribed by thee destroys their short-lived reign,
And to thy hungry womb drives back thy slaves again.

8
Though mysteries are barred from laic eyes,
And the divine alone with warrant pries
Into thy bosom, where thy truth in private lies;

9
Yet this of thee the wise may truly say:
Thou from the virtuous nothing doest delay,
And to be part of thee the wicked wisely pray.

10
Great Negative, how vainly would the wise
Enquire, define, distinguish, teach, devise,
Didst thou not stand to point their blind philosophies.

11
Is or Is Not, the two great ends of Fate,
And true or false, the subject of debate,
That perfect or destroy the vast designs of state;

12
When they have racked the politician’s breast,
Within thy bosom most securely rest,
And when reduced to thee are least unsafe, and best.

13
But, Nothing, why does Something still permit
That sacred monarchs should at council sit
With persons highly thought, at best, for nothing fit;

14
Whilst weighty something modestly abstains
From princes’ coffers, and from Statesmen’s brains,
And nothing there, like stately Nothing reigns?

15
Nothing, who dwell’st with fools in grave disguise,
For whom they reverend shapes and forms devise,
Lawn-sleeves, and furs, and gowns, when they like thee look wise.

16
French truth, Dutch prowess, British policy,
Hibernian learning, Scotch civility,
Spaniards’ dispatch, Danes’ wit, are mainly seen in thee.

17
The great man’s gratitude to his best friend,
Kings’ promises, whores’ vows, towards thee they bend,
Flow swiftly into thee, and in thee ever end.

More from theSource here.


Immaculate creation: birth of the first synthetic cell

From the New Scientist:

For the first time, scientists have created life from scratch – well, sort of. Craig Venter’s team at the J. Craig Venter Institute in Rockville, Maryland, and San Diego, California, has made a bacterial genome from smaller DNA subunits and then transplanted the whole thing into another cell. So what exactly is the science behind the first synthetic cell, and what is its broader significance?

What did Venter’s team do?

The cell was created by stitching together the genome of a goat pathogen called Mycoplasma mycoides from smaller stretches of DNA synthesised in the lab, and inserting the genome into the empty cytoplasm of a related bacterium. The transplanted genome booted up in its host cell, and then divided over and over to make billions of M. mycoides cells.

Venter and his team have previously accomplished both feats – creating a synthetic genome and transplanting a genome from one bacterium into another – but this time they have combined the two.

“It’s the first self-replicating cell on the planet whose parent is a computer,” says Venter, referring to the fact that his team converted a cell’s genome that existed as data on a computer into a living organism.

How can they be sure that the new bacteria are what they intended?

Venter and his team introduced several distinctive markers into their synthesised genome. All of them were found in the synthetic cell when it was sequenced.

These markers do not make any proteins, but they contain the names of 46 scientists on the project and several quotations written out in a secret code. The markers also contain the key to the code.

Crack the code and you can read the messages, but as a hint, Venter revealed the quotations: “To live, to err, to fall, to triumph, to recreate life out of life,” from James Joyce’s A Portrait of the Artist as a Young Man; “See things not as they are but as they might be,” which comes from American Prometheus, a biography of nuclear physicist Robert Oppenheimer; and Richard Feynman’s famous words: “What I cannot build I cannot understand.”

Does this mean they created life?

It depends on how you define “created” and “life”. Venter’s team made the new genome out of DNA sequences that had initially been made by a machine, but bacteria and yeast cells were used to stitch together and duplicate the million base pairs that it contains. The cell into which the synthetic genome was then transplanted contained its own proteins, lipids and other molecules.

Venter himself maintains that he has not created life. “We’ve created the first synthetic cell,” he says. “We definitely have not created life from scratch because we used a recipient cell to boot up the synthetic chromosome.”

Whether you agree or not is a philosophical question, not a scientific one, as there is no biological difference between synthetic bacteria and the real thing, says Andy Ellington, a synthetic biologist at the University of Texas in Austin. “The bacteria didn’t have a soul, and there wasn’t some animistic property of the bacteria that changed,” he says.

What can you do with a synthetic cell?

Venter’s work was a proof of principle, but future synthetic cells could be used to create drugs, biofuels and other useful products. He is collaborating with Exxon Mobil to produce biofuels from algae and with Novartis to create vaccines.

“As soon as next year, the flu vaccine you get could be made synthetically,” Venter says.

Ellington also sees synthetic bacteria as having potential as a scientific tool. It would be interesting, he says, to create bacteria that produce a new amino acid – the chemical units that make up proteins – and see how these bacteria evolve, compared with bacteria that produce the usual suite of amino acids. “We can ask these questions about cyborg cells in ways we never could before.”

More from theSource here.


The Search for Genes Leads to Unexpected Places

From The New York Times:

Edward M. Marcotte is looking for drugs that can kill tumors by stopping blood vessel growth, and he and his colleagues at the University of Texas at Austin recently found some good targets — five human genes that are essential for that growth. Now they’re hunting for drugs that can stop those genes from working. Strangely, though, Dr. Marcotte did not discover the new genes in the human genome, nor in lab mice or even fruit flies. He and his colleagues found the genes in yeast.

“On the face of it, it’s just crazy,” Dr. Marcotte said. After all, these single-cell fungi don’t make blood vessels. They don’t even make blood. In yeast, it turns out, these five genes work together on a completely unrelated task: fixing cell walls.

Crazier still, Dr. Marcotte and his colleagues have discovered hundreds of other genes involved in human disorders by looking at distantly related species. They have found genes associated with deafness in plants, for example, and genes associated with breast cancer in nematode worms. The researchers reported their results recently in The Proceedings of the National Academy of Sciences.

The scientists took advantage of a peculiar feature of our evolutionary history. In our distant, amoeba-like ancestors, clusters of genes were already forming to work together on building cell walls and on other very basic tasks essential to life. Many of those genes still work together in those same clusters, over a billion years later, but on different tasks in different organisms.

More from theSource here.


Why Athletes Are Geniuses

From Discover:

The qualities that set a great athlete apart from the rest of us lie not just in the muscles and the lungs but also between the ears. That’s because athletes need to make complicated decisions in a flash. One of the most spectacular examples of the athletic brain operating at top speed came in 2001, when the Yankees were in an American League playoff game with the Oakland Athletics. Shortstop Derek Jeter managed to grab an errant throw coming in from right field and then gently tossed the ball to catcher Jorge Posada, who tagged the base runner at home plate. Jeter’s quick decision saved the game—and the series—for the Yankees. To make the play, Jeter had to master both conscious decisions, such as whether to intercept the throw, and unconscious ones. These are the kinds of unthinking thoughts he must make in every second of every game: how much weight to put on a foot, how fast to rotate his wrist as he releases a ball, and so on.

In recent years neuroscientists have begun to catalog some fascinating differences between average brains and the brains of great athletes. By understanding what goes on in athletic heads, researchers hope to understand more about the workings of all brains—those of sports legends and couch potatoes alike.

As Jeter’s example shows, an athlete’s actions are much more than a set of automatic responses; they are part of a dynamic strategy to deal with an ever-changing mix of intricate challenges. Even a sport as seemingly straightforward as pistol shooting is surprisingly complex. A marksman just points his weapon and fires, and yet each shot calls for many rapid decisions, such as how much to bend the elbow and how tightly to contract the shoulder muscles. Since the shooter doesn’t have perfect control over his body, a slight wobble in one part of the arm may require many quick adjustments in other parts. Each time he raises his gun, he has to make a new calculation of what movements are required for an accurate shot, combining previous experience with whatever variations he is experiencing at the moment.

To explain how brains make these on-the-fly decisions, Reza Shadmehr of Johns Hopkins University and John Krakauer of Columbia University two years ago reviewed studies in which the brains of healthy people and of brain-damaged patients who have trouble controlling their movements were scanned. They found that several regions of the brain collaborate to make the computations needed for detailed motor actions. The brain begins by setting a goal—pick up the fork, say, or deliver the tennis serve—and calculates the best course of action to reach it. As the brain starts issuing commands, it also begins to make predictions about what sort of sensations should come back from the body if it achieves the goal. If those predictions don’t match the actual sensations, the brain then revises its plan to reduce error. Shadmehr and Krakauer’s work demonstrates that the brain does not merely issue rigid commands; it also continually updates its solution to the problem of how to move the body. Athletes may perform better than the rest of us because their brains can find better solutions than ours do.
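The goal-predict-compare-revise cycle described here can be caricatured in a few lines of code. The sketch below is purely illustrative (the function name, the one-dimensional “arm”, its gain, and the learning rate are all invented for this example; it is not Shadmehr and Krakauer’s model): a controller issues commands, predicts the outcome with an internal forward model, and uses the prediction error to revise that model.

```python
# Toy illustration of the goal -> command -> prediction -> error -> revision
# loop described above. Everything here is invented for illustration.

def reach(target, true_gain=0.8, steps=20):
    """Move a 1-D 'arm' toward `target` while learning how the arm responds."""
    position = 0.0
    estimated_gain = 1.0  # the controller's initial forward model of the arm
    for _ in range(steps):
        # Plan: choose a command the forward model says will reach the goal.
        command = (target - position) / estimated_gain
        # Predict the sensation that should come back if the model is right.
        predicted = position + estimated_gain * command
        # The arm actually responds with a different gain than predicted.
        position += true_gain * command
        # Compare prediction with actual feedback and revise the model.
        error = position - predicted
        estimated_gain += 0.5 * error * command / (1.0 + command * command)
    return position, estimated_gain

final_position, learned_gain = reach(1.0)
```

After a handful of iterations the position converges on the goal even though the initial model of the arm was wrong, which is the point of the passage: the controller does not issue rigid commands but continually updates its solution.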

More from theSource here.

Send to Kindle

Forget Avatar, the real 3D revolution is coming to your front room

From The Guardian:

Enjoy eating goulash? Fed up with needing three pieces of cutlery? It could be that I have a solution for you – and not just for you but for picnickers who like a bit of bread with their soup, too. Or indeed for anyone who has dreamed of seeing the spoon and the knife incorporated into one, easy to use, albeit potentially dangerous instrument. Ladies and gentlemen, I would like to introduce you to the Knoon.

The Knoon came to me in a dream – I had a vision of a soup spoon with a knife stuck to its top, blade pointing upwards. Given the potential for lacerating your mouth on the Knoon’s sharp edge, maybe my dream should have stayed just that. But thanks to a technological leap that is revolutionising manufacturing and, some hope, may even change the nature of our consumer society, I now have a Knoon sitting right in front of me. I had the idea, I drew it up and then I printed my cutlery out.

3D is this year’s buzzword in Hollywood. From Avatar to Clash of the Titans, it’s a new take on an old fad that’s coming to save the movie industry. But with less glitz and a degree less fanfare, 3D printing is changing our vision of the world too, and ultimately its effects might prove a degree more special.

Thinglab is a company that specialises in 3D printing. Based in a nondescript office building in east London, its team works mainly with commercial clients to print models that would previously have been assembled by hand. Architects design their buildings in 3D software packages and pass them to Thinglab to print scale models. When mobile phone companies come up with a new handset, they print prototypes first in order to test size, shape and feel. Jewellers not only make prototypes, they use them as a basis for moulds. Sculptors can scan in their original works, adjust the dimensions and rattle off a series of duplicates (signatures can be added later).

All this work is done in the Thinglab basement, a kind of temple to 3D where motion capture suits hang from the wall and a series of next generation TV screens (no need for 3D glasses) sit in the corner. In the middle of the room lurk two hulking 3D printers. Their facades give them the faces of miserable robots.

“We had David Hockney in here recently and he was gobsmacked,” says Robin Thomas, one of Thinglab’s directors, reeling off a list of intrigued celebrities who have made a pilgrimage to his basement. “Boy George came in and we took a scan of his face.” Above the printers sit a collection of the models they’ve produced: everything from a car’s suspension system to a rendering of John Cleese’s head. “If a creative person wakes up in the morning with an idea,” says Thomas, “they could have a model by the end of the day. People who would have spent days, weeks, months on this type of model can now do it with a printer. If they can think of it, we can make it.”

More from theSource here.

A beautiful and dangerous idea: art that sells itself

Artist Caleb Larsen seems to have the right idea. Rather than relying on the subjective wants and needs of galleries and the dubious nature of the secondary art market (and some equally dubious auctioneers), his art sells itself.

His work, entitled “A Tool to Deceive and Slaughter”, is an 8-inch opaque, black acrylic cube. But while the exterior may be simplicity itself, the interior holds a fascinating premise. The cube is connected to the internet. In fact, it’s connected to eBay, where through some hidden hardware and custom programming it constantly auctions itself.

As Caleb Larsen describes,

Combining Robert Morris’ Box With the Sound of Its Own Making with Baudrillard’s writing on the art auction this sculpture exists in eternal transactional flux. It is a physical sculpture that is perpetually attempting to auction itself on eBay.

Every ten minutes the black box pings a server on the internet via its ethernet connection to check if it is for sale on eBay. If its auction has ended or it has sold, it automatically creates a new auction of itself.

If a person buys it on eBay, the current owner is required to send it to the new owner. The new owner must then plug it into ethernet, and the cycle repeats itself.
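As described, the box’s behavior is essentially a polling loop: wake every ten minutes, check whether its auction has ended or sold, and if so relist itself. A minimal sketch of that logic in Python, where `check_auction` and `create_auction` are hypothetical stand-ins for whatever hidden hardware and eBay integration Larsen actually built:

```python
import time

POLL_INTERVAL = 10 * 60   # the box checks in every ten minutes

def poll_once(check_auction, create_auction):
    """One polling step: relist if the current auction has ended or sold."""
    status = check_auction()          # e.g. "active", "ended", or "sold"
    if status in ("ended", "sold"):
        create_auction()              # the sculpture puts itself back up for sale
        return True                   # a new auction was created
    return False

def run_forever(check_auction, create_auction):
    """The box's whole life: poll, maybe relist, sleep, repeat."""
    while True:
        poll_once(check_auction, create_auction)
        time.sleep(POLL_INTERVAL)
```

Separating the single step from the endless loop also makes the step easy to exercise with fake auction states, which is presumably how you would test a sculpture that otherwise never stops.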

The purchase agreement on eBay is quite rigorous, including stipulations such as: the buyer must keep the artwork connected to the internet at all times, with disconnections allowed only for transportation; upon purchase the artwork must be reauctioned; failure to follow all terms of the agreement forfeits the status of the artwork as a genuine work of art.

The artist was also smart enough to gain a slice of the secondary market, by requiring each buyer to return to the artist 15 percent of the appreciated value from each sale. Christie’s and Sotheby’s eat your hearts out.
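In other words, each resale works like an artist’s resale royalty on the gain rather than on the gross price. A quick sketch of the arithmetic, with the 15 percent rate from the piece and prices that are purely hypothetical:

```python
def artist_cut(previous_price, sale_price, rate=0.15):
    """The artist's share of a resale: 15% of the appreciation, if any."""
    appreciation = max(sale_price - previous_price, 0.0)
    return rate * appreciation

# Hypothetical resale: bought for $5,000, later resold for $8,000.
fee = artist_cut(5000.0, 8000.0)   # 15% of the $3,000 gain = $450
```

If the work resells at a loss there is no appreciation, so on this reading the artist’s cut is simply zero.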

Besides trying to put auctioneers out of work, the artist has broader intentions in mind, particularly when viewed alongside his larger body of work. The piece goes to the heart of the “how” and the “why” of the art market. By placing the artwork in a constant state of transactional fluidity – it’s never permanently in the hands of its new owner – it forces us to question the nature of art in relation to its market and the nature of collecting. The work can never be owned and collected beyond question, since it is always possible that someone else will come along, enter the auction and win. For the first “owner” of the piece, though, this was part of the appeal. Terence Spies, a California collector, attests:

I had a really strong reaction right after I won the auction. I have this thing, and I really want to keep it, but the reason I want to keep it is that it might leave… The process of the piece really gets to some of the reasons why you might be collecting art in the first place.

Now, of course, owning anything is transient. The Egyptian pharaohs tried taking their possessions into the “afterlife”, yet even to this day they are being thwarted by tomb-raiders and archeologists. Perhaps to some the chase, the process of collecting, is the goal, rather than owning the art itself. Whether or not Caleb Larsen intended it, he has given me something to ponder. How different, really, is it to own this self-selling art versus wandering through the world’s museums and galleries to “own” a Picasso or Warhol or Monet for five minutes? Ironically, our works live on, and it is we who are transient. So I think Caleb Larsen’s title for the work should be taken tongue in cheek, for it is we who are deceiving ourselves.

The Real Rules for Time Travelers

From Discover:

People all have their own ideas of what a time machine would look like. If you are a fan of the 1960 movie version of H. G. Wells’s classic novel, it would be a steampunk sled with a red velvet chair, flashing lights, and a giant spinning wheel on the back. For those whose notions of time travel were formed in the 1980s, it would be a souped-up stainless steel sports car. Details of operation vary from model to model, but they all have one thing in common: When someone actually travels through time, the machine ostentatiously dematerializes, only to reappear many years in the past or future. And most people could tell you that such a time machine would never work, even if it looked like a DeLorean.

They would be half right: That is not how time travel would work, but time travel in some other form is not necessarily off the table. Einstein described our universe in four dimensions: the three dimensions of space and one of time. Since time is much like space (the four dimensions go hand in hand), a working time machine would zoom off like a rocket rather than disappearing in a puff of smoke. Traveling back in time is nothing more or less than the fourth-dimensional version of walking in a circle. All you would have to do is use an extremely strong gravitational field, like that of a black hole, to bend space-time. From this point of view, time travel seems quite difficult but not obviously impossible.

These days, most people feel comfortable with the notion of curved space-time. What they trip up on is actually a more difficult conceptual problem, the time travel paradox. This is the worry that someone could go back in time and change the course of history. What would happen if you traveled into the past, to a time before you were born, and murdered your parents? Put more broadly, how do we avoid changing the past as we think we have already experienced it? At the moment, scientists don’t know enough about the laws of physics to say whether these laws would permit the time equivalent of walking in a circle—or, in the parlance of time travelers, a “closed timelike curve.” If they don’t permit it, there is obviously no need to worry about paradoxes. If physics is not an obstacle, however, the problem could still be constrained by logic. Do closed timelike curves necessarily lead to paradoxes?

If they do, then they cannot exist, simple as that. Logical contradictions cannot occur. More specifically, there is only one correct answer to the question “What happened at the vicinity of this particular event in space-time?” Something happens: You walk through a door, you are all by yourself, you meet someone else, you somehow never showed up, whatever it may be. And that something is whatever it is, and was whatever it was, and will be whatever it will be, once and forever. If, at a certain event, your grandfather and grandmother were getting it on, that’s what happened at that event. There is nothing you can do to change it, because it happened. You can no more change events in your past in a space-time with closed timelike curves than you can change events that already happened in ordinary space-time, with no closed timelike curves.

More from theSource here.

Human Culture, an Evolutionary Force

From The New York Times:

As with any other species, human populations are shaped by the usual forces of natural selection, like famine, disease or climate. A new force is now coming into focus. It is one with a surprising implication — that for the last 20,000 years or so, people have inadvertently been shaping their own evolution.

The force is human culture, broadly defined as any learned behavior, including technology. The evidence of its activity is the more surprising because culture has long seemed to play just the opposite role. Biologists have seen it as a shield that protects people from the full force of other selective pressures, since clothes and shelter dull the bite of cold and farming helps build surpluses to ride out famine.

Because of this buffering action, culture was thought to have blunted the rate of human evolution, or even brought it to a halt, in the distant past. Many biologists are now seeing the role of culture in a quite different light.

Although it does shield people from other forces, culture itself seems to be a powerful force of natural selection. People adapt genetically to sustained cultural changes, like new diets. And this interaction works more quickly than other selective forces, “leading some practitioners to argue that gene-culture co-evolution could be the dominant mode of human evolution,” Kevin N. Laland and colleagues wrote in the February issue of Nature Reviews Genetics. Dr. Laland is an evolutionary biologist at the University of St. Andrews in Scotland.

The idea that genes and culture co-evolve has been around for several decades but has started to win converts only recently. Two leading proponents, Robert Boyd of the University of California, Los Angeles, and Peter J. Richerson of the University of California, Davis, have argued for years that genes and culture were intertwined in shaping human evolution. “It wasn’t like we were despised, just kind of ignored,” Dr. Boyd said. But in the last few years, references by other scientists to their writings have “gone up hugely,” he said.

The best evidence available to Dr. Boyd and Dr. Richerson for culture being a selective force was the lactose tolerance found in many northern Europeans. Most people switch off the gene that digests the lactose in milk shortly after they are weaned, but in northern Europeans — the descendants of an ancient cattle-rearing culture that emerged in the region some 6,000 years ago — the gene is kept switched on in adulthood.

Lactose tolerance is now well recognized as a case in which a cultural practice — drinking raw milk — has caused an evolutionary change in the human genome. Presumably the extra nutrition was of such great advantage that adults able to digest milk left more surviving offspring, and the genetic change swept through the population.

More from theSource here.

Art world swoons over Romania’s homeless genius

From The Guardian:

The guests were chic, the Bordeaux was sipped with elegant restraint and the hostess was suitably glamorous in a canary yellow cocktail dress. To an outside observer who made it past the soirée privée sign on the door of the Anne de Villepoix gallery on Thursday night, it would have seemed the quintessential Parisian art viewing.

Yet that would have been to leave one crucial factor out of the equation: the man whose creations the crowd had come to see. In his black cowboy hat and pressed white collar, Ion Barladeanu looked every inch the established artist as he showed guests around the exhibition. But until 2007 no one had ever seen his work, and until mid-2008 he was living in the rubbish tip of a Bucharest tower block.

Today, in the culmination of a dream for a Romanian who grew up adoring Gallic film stars and treasures a miniature Eiffel Tower he once found in a bin, Barladeanu will see his first French exhibition open to the general public.

Dozens of collages he created from scraps of discarded magazines during and after the Communist regime of Nicolae Ceausescu are on sale for more than €1,000 (£895) each. They are being hailed as politically brave and culturally irreverent.

For the 63-year-old artist, the journey from the streets of Bucharest to the galleries of Europe has finally granted him recognition. “I feel as if I have been born again,” he said, as some of France’s leading collectors and curators jostled for position to see his collages. “Now I feel like a prince. A pauper can become a prince. But he can go back to being a pauper too.”

More from theSource here.
