Ugliness Behind the Beautiful Game

Qatar hosts the World Cup in 2022. This gives the emirate another 8 years to finish construction of the various football venues, hotels and infrastructure required to support the world’s biggest single sporting event.

Perhaps it will also give the emirate some time to clean up its appalling record of worker abuse and human rights violations. Numerous laborers have died during the construction process, while others are paid minimal wages or not at all. To top it off, most employees live in atrocious conditions and cannot move freely, change jobs, or even repatriate; many come from the Indian subcontinent or East Asia. You could be forgiven for labeling these people indentured servants rather than workers.

From the Guardian:

Migrant workers who built luxury offices used by Qatar’s 2022 football World Cup organisers have told the Guardian they have not been paid for more than a year and are now working illegally from cockroach-infested lodgings.

Officials in Qatar’s Supreme Committee for Delivery and Legacy have been using offices on the 38th and 39th floors of Doha’s landmark al-Bidda skyscraper – known as the Tower of Football – which were fitted out by men from Nepal, Sri Lanka and India who say they have not been paid for up to 13 months’ work.

The project, a Guardian investigation shows, was directly commissioned by the Qatar government and the workers’ plight is set to raise fresh doubts over the autocratic emirate’s commitment to labour rights as construction starts this year on five new stadiums for the World Cup.

The offices, which cost £2.5m to fit, feature expensive etched glass, handmade Italian furniture, and even a heated executive toilet, project sources said. Yet some of the workers have not been paid, despite complaining to the Qatari authorities months ago and being owed wages as modest as £6 a day.

By the end of this year, several hundred thousand extra migrant workers from some of the world’s poorest countries are scheduled to have travelled to Qatar to build World Cup facilities and infrastructure. The acceleration in the building programme comes amid international concern over a rising death toll among migrant workers and the use of forced labour.

“We don’t know how much they are spending on the World Cup, but we just need our salary,” said one worker who had lost a year’s pay on the project. “We were working, but not getting the salary. The government, the company: just provide the money.”

The migrants are squeezed seven to a room, sleeping on thin, dirty mattresses on the floor and on bunk beds, in breach of Qatar’s own labour standards. They live in constant fear of imprisonment because they have been left without paperwork after the contractor on the project, Lee Trading and Contracting, collapsed. They say they are now being exploited on wages as low as 50p an hour.

Their case was raised with Qatar’s prime minister by Amnesty International last November, but the workers have said 13 of them remain stranded in Qatar. Despite having done nothing wrong, five have even been arrested and imprisoned by Qatari police because they did not have ID papers. Legal claims lodged against the former employer at the labour court in November have proved fruitless. They are so poor they can no longer afford the taxi to court to pursue their cases, they say.

A 35-year-old Nepalese worker and father of three said he too had lost a year’s pay: “If I had money to buy a ticket, I would go home.”

Qatar’s World Cup organising committee confirmed that it had been granted use of temporary offices on the floors fitted out by the unpaid workers. It said it was “heavily dismayed to learn of the behaviour of Lee Trading with regard to the timely payment of its workers”. The committee stressed it did not commission the firm. “We strongly disapprove and will continue to press for a speedy and fair conclusion to all cases,” it said.

Jim Murphy, the shadow international development secretary, said the revelation added to the pressure on the World Cup organising committee. “They work out of this building, but so far they can’t even deliver justice for the men who toiled at their own HQ,” he said.

Sharan Burrow, secretary general of the International Trade Union Confederation, said the workers’ treatment was criminal. “It is an appalling abuse of fundamental rights, yet there is no concern from the Qatar government unless they are found out,” she said. “In any other country you could prosecute this behaviour.”

Read the entire article here.

Image: Qatar. Courtesy of Google Maps.


MondayMap: Drought Mapping

The NYT has a fascinating and detailed article bursting with charts and statistics that shows the pervasive grip of the drought in the United States. The desert Southwest and West continue to be parched and scorching. This is not a pretty picture for farmers and increasingly for those (sub-)urban dwellers who rely upon a fragile and dwindling water supply.

From the NYT:

Droughts appear to be intensifying over much of the West and Southwest as a result of global warming. Over the past decade, droughts in some regions have rivaled the epic dry spells of the 1930s and 1950s. About 34 percent of the contiguous United States was in at least a moderate drought as of July 22.

Things have been particularly bad in California, where state officials have approved drastic measures to reduce water consumption. California farmers, without water from reservoirs in the Central Valley, are left to choose which of their crops to water. Parts of Texas, Oklahoma and surrounding states are also suffering from drought conditions.

The relationship between the climate and droughts is complicated. Parts of the country are becoming wetter: East of the Mississippi, rainfall has been rising. But global warming also appears to be causing moisture to evaporate faster in places that were already dry. Researchers believe drought conditions in these places are likely to intensify in coming years.

There has been little relief for some places since the summer of 2012. At the recent peak this May, about 40 percent of the country was abnormally dry or in at least a moderate drought.

Read the entire story and see the statistics for yourself here.

Image courtesy of Drought Monitor / NYT.


Computer Generated Reality

Computer games have come a very long way since the pioneering days of Pong and Pacman. Games are now so realistic that many are indistinguishable from the real-world characters and scenarios they emulate. It is a testament to the skill and ingenuity of hardware and software engineers and the creativity of developers who bring all the diverse underlying elements of a game together. Now, however, they have a match in the form of a computer system that is able to generate richly imagined and rendered worlds for use in the games themselves. It’s all done through algorithms.
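
To make “algorithms” a little more concrete, here is a minimal, purely illustrative sketch of the core idea behind procedural generation: derive an entire world deterministically from a single seed, so that nothing needs to be hand-built or stored. The function name, tile set and seed are invented for this example; real engines, such as the one behind No Man’s Sky, use far more sophisticated noise functions and content rules.

import random

def generate_world(seed, width=16, height=6):
    """Deterministically derive a tiny tile map from one integer seed."""
    rng = random.Random(seed)          # seeded PRNG: same seed, same world
    tiles = "~~..^^*"                  # water, land, mountains, a resource
    return ["".join(rng.choice(tiles) for _ in range(width))
            for _ in range(height)]

# The world never needs to be saved: re-running with seed 42 recreates it exactly.
for row in generate_world(42):
    print(row)

The design point is that storage is traded for computation: a game can ship a small generator and a seed rather than gigabytes of hand-made terrain.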

From Technology Review:

Read the entire story here.

Video: No Man’s Sky. Courtesy of Hello Games.

Gun Love

Infographic: Gun Violence in America

The second amendment remains ever strong in the U.S. And, of course, so does the number of homicides and child deaths at the hands of guns. Sigh!

From the Guardian:

In February, a nine-year-old Arkansas boy called Hank asked his uncle if he could head off on his own from their remote camp to hunt a rabbit with his .22 calibre rifle. “I said all right,” recalled his uncle Brent later. “It wasn’t a concern. Some people are like, ‘a nine year old shouldn’t be off by himself,’ but he wasn’t an average nine year old.”

Hank was steeped in hunting: when he was two, his father, Brad, would put him in a rucksack on his back when he went turkey hunting. Brad regularly took Hank hunting and said that his son often went off hunting by himself. On this particular day, Hank and his uncle Brent had gone squirrel hunting together as his father was too sick to go.

When Hank didn’t return from hunting the rabbit, his uncle raised the alarm. His mother, Kelli, didn’t learn about his disappearance for seven hours. “They didn’t want to bother me unduly,” she says.

The following morning, though, after police, family and hundreds of locals searched around the camp, Hank’s body was found by a creek with a single bullet wound to the forehead. The cause of death was, according to the police, most likely a hunting accident.

“He slipped and the butt of the gun hit the ground and the gun fired,” says Kelli.

Kelli had recently bought the gun for Hank. “It was the first gun I had purchased for my son, just a youth .22 rifle. I never thought it would be a gun that would take his life.”

Both Kelli and Brad, from whom she is separated, believe that the gun was faulty – it shouldn’t have gone off unless the trigger was pulled, they claim. Since Hank’s death, she’s been posting warnings on her Facebook page about the gun her son used: “I wish someone else had posted warnings about it before what happened,” she says.

Had Kelli not bought the gun and had Brad not trained his son to use it, Hank would have celebrated his 10th birthday on 6 June, which his mother commemorated by posting Hank’s picture on her Facebook page with the message: “Happy Birthday Hank! Mommy loves you!”

Little Hank thus became one in a tally of what the makers of a Channel 4 documentary called Kids and Guns claim to be 3,000 American children who die each year from gun-related accidents. A recent Yale University study found that more than 7,000 US children and adolescents are hospitalised or killed by guns each year and estimates that about 20 children a day are treated in US emergency rooms following incidents involving guns.

Hank’s story is striking, certainly for British readers, for two reasons. One, it dramatises how hunting is for many Americans not the privileged pursuit it is overwhelmingly here, but a traditional family activity as much to do with foraging for food as it is a sport.

Francine Shaw, who directed Kids and Guns, says: “In rural America … people hunt to eat.”

Kelli has a fond memory of her son coming home with what he’d shot. “He’d come in and say: “Momma – I’ve got some squirrel to cook.” And I’d say ‘Gee, thanks.’ That child was happy to bring home meat. He was the happiest child when he came in from shooting.”

But Hank’s story is also striking because it shows how raising kids to hunt and shoot is seen as good parenting, perhaps even as an essential part of bringing up children in America – a society rife with guns and temperamentally incapable of overturning the second amendment that confers the right to bear arms, no matter how many innocent Americans die or get maimed as a result.

“People know I was a good mother and loved him dearly,” says Kelli. “We were both really good parents and no one has said anything hateful to us. The only thing that has been said is in a news report about a nine year old being allowed to hunt alone.”

Does Kelli regret that Hank was allowed to hunt alone at that young age? “Obviously I do, because I’ve lost my son,” she tells me. But she doesn’t blame Brent for letting him go off from camp unsupervised with a gun.

“We’re sure not anti-gun here, but do I wish I could go back in time and not buy that gun? Yes I do. I know you in England don’t have guns. I wish I could go back and have my son back. I would live in England, away from the guns.”

Read the entire article here.

Infographic courtesy of Care2 via visua.ly


The Best

The United States is home to many firsts and superlatives: first in democracy, wealth, openness, innovation, and industry. The nation also takes great pride in its personal and cultural freedoms. Yet it is also home to another superlative: first in rates of incarceration. In fact, the US leads other nations by such a wide margin that questions continue to be asked. In the land of the free, something must be wrong.

From the Atlantic:

On Friday, the U.S. Sentencing Commission voted unanimously to allow nearly 50,000 nonviolent federal drug offenders to seek lower sentences. The commission’s decision retroactively applied an earlier change in sentencing guidelines to now cover roughly half of those serving federal drug sentences. Endorsed by both the Department of Justice and prison-reform advocates, the move is a significant step forward in reversing decades of mass incarceration, though in a global context it is still a modest one.

How large is America’s prison problem? More than 2.4 million people are behind bars in the United States today, either awaiting trial or serving a sentence. That’s more than the combined population of 15 states, all but three U.S. cities, and the U.S. armed forces. They’re scattered throughout a constellation of 102 federal prisons, 1,719 state prisons, 2,259 juvenile facilities, 3,283 local jails, and many more military, immigration, territorial, and Indian Country facilities.

Compared to the rest of the world, these numbers are staggering. Here’s how the United States’ incarceration rate compares with those of other modern liberal democracies like Britain and Canada:

That graph is from a recent report by Prison Policy Initiative, an invaluable resource on mass incarceration. (PPI also has a disturbing graph comparing state incarceration rates with those of other countries around the world, which I highly recommend looking at here.) “Although our level of crime is comparable to those of other stable, internally secure, industrialized nations,” the report says, “the United States has an incarceration rate far higher than any other country.”

Some individual states like Louisiana contribute disproportionately, but no state is free from mass incarceration. Disturbingly, many states’ prison populations outrank even those of dictatorships and illiberal democracies around the world. New York jails more people per capita than Rwanda, where tens of thousands await trial for their roles in the 1994 genocide. California, Illinois, and Ohio each have a higher incarceration rate than Cuba and Russia. Even Maine and Vermont imprison a greater share of people than Saudi Arabia, Venezuela, or Egypt.

But mass incarceration is more than just an international anomaly; it’s also a relatively recent phenomenon in American criminal justice. Starting in the 1970s with the rise of tough-on-crime politicians and the War on Drugs, America’s prison population jumped eightfold between 1970 and 2010.

These two metrics—the international and the historical—have to be seen together to understand how aberrant mass incarceration is. In time or in space, the warehousing of millions of Americans knows no parallels. In keeping with American history, however, it also disproportionately harms the non-white and the non-wealthy. “For a great many poor people in America, particularly poor black men, prison is a destination that braids through an ordinary life, much as high school and college do for rich white ones,” wrote Adam Gopnik in his seminal 2012 article.

Mass incarceration on a scale almost unexampled in human history is a fundamental fact of our country today—perhaps the fundamental fact, as slavery was the fundamental fact of 1850. In truth, there are more black men in the grip of the criminal-justice system—in prison, on probation, or on parole—than were in slavery then. Over all, there are now more people under “correctional supervision” in America—more than six million—than were in the Gulag Archipelago under Stalin at its height.

Mass incarceration’s effects are not confined to the cell block. Through the inescapable stigma it imposes, a brush with the criminal-justice system can hamstring a former inmate’s employment and financial opportunities for life. The effect is magnified for those who already come from disadvantaged backgrounds. Black men, for example, made substantial economic progress between 1940 and 1980 thanks to the post-war economic boom and the dismantling of de jure racial segregation. But mass incarceration has all but ground that progress to a halt: A new University of Chicago study found that black men are no better off in 2014 than they were when Congress passed the Civil Rights Act 50 years earlier.

Read the entire article here.


Climate Change Denial: English Only

It’s official. Native English-speakers are more likely to be in denial over climate change than non-English speakers. In fact, many who do not see a human hand in our planet’s environmental and climatic troubles are located in the United States, Britain,  Australia and Canada. Enough said, in English.

Sacre bleu!

Now, the Guardian would have you believe that media monopolist — Rupert Murdoch — is behind the climate change skeptics and deniers. After all, he is well known for his views on climate and his empire controls large swathes of the media that most English-speaking people consume.  However, it’s probably a little more complicated.

From the Guardian:

Here in the United States, we fret a lot about global warming denial. Not only is it a dangerous delusion, it’s an incredibly prevalent one. Depending on your survey instrument of choice, we regularly learn that substantial minorities of Americans deny, or are sceptical of, the science of climate change.

The global picture, however, is quite different. For instance, recently the UK-based market research firm Ipsos MORI released its “Global Trends 2014” report, which included a number of survey questions on the environment asked across 20 countries. (h/t Leo Hickman). And when it came to climate change, the result was very telling.

Note that these results are not perfectly comparable across countries, because the data were gathered online, and Ipsos MORI cautions that for developing countries like India and China, “the results should be viewed as representative of a more affluent and ‘connected’ population.”

Nonetheless, some pretty significant patterns are apparent. Perhaps most notably: Not only is the United States clearly the worst in its climate denial, but Great Britain and Australia are second and third worst, respectively. Canada, meanwhile, is the seventh worst.

What do these four nations have in common? They all speak the language of Shakespeare.

Why would that be? After all, presumably there is nothing about English, in and of itself, that predisposes you to climate change denial. Words and phrases like “doubt,” “natural causes,” “climate models,” and other sceptic mots are readily available in other languages. So what’s the real cause?

One possible answer is that it’s all about the political ideologies prevalent in these four countries.


“I do not find these results surprising,” says Riley Dunlap, a sociologist at Oklahoma State University who has extensively studied the climate denial movement. “It’s the countries where neo-liberalism is most hegemonic and with strong neo-liberal regimes (both in power and lurking on the sidelines to retake power) that have bred the most active denial campaigns—US, UK, Australia and now Canada. And the messages employed by these campaigns filter via the media and political elites to the public, especially the ideologically receptive portions.” (Neoliberalism is an economic philosophy centered on the importance of free markets and broadly opposed to big government interventions.)

Indeed, the English language media in three of these four countries are linked together by a single individual: Rupert Murdoch. An apparent climate sceptic or lukewarmer, Murdoch is the chairman of News Corp and 21st Century Fox. (You can watch him express his climate views here.) Some of the media outlets subsumed by the two conglomerates that he heads are responsible for quite a lot of English language climate scepticism and denial.

In the US, Fox News and the Wall Street Journal lead the way; research shows that Fox watching increases distrust of climate scientists. (You can also catch Fox News in Canada.) In Australia, a recent study found that slightly under a third of climate-related articles in 10 top Australian newspapers “did not accept” the scientific consensus on climate change, and that News Corp papers — the Australian, the Herald Sun, and the Daily Telegraph — were particular hotbeds of scepticism. “The Australian represents climate science as matter of opinion or debate rather than as a field for inquiry and investigation like all scientific fields,” noted the study.

And then there’s the UK. A 2010 academic study found that while News Corp outlets in this country from 1997 to 2007 did not produce as much strident climate scepticism as did their counterparts in the US and Australia, “the Sun newspaper offered a place for scornful sceptics on its opinion pages as did The Times and Sunday Times to a lesser extent.” (There are also other outlets in the UK, such as the Daily Mail, that feature plenty of scepticism but aren’t owned by News Corp.)

Thus, while there may not be anything inherent to the English language that impels climate denial, the fact that English language media are such a major source of that denial may in effect create a language barrier.

And media aren’t the only reason that denialist arguments are more readily available in the English language. There’s also the Anglophone nations’ concentration of climate “sceptic” think tanks, which provide the arguments and rationalisations necessary to feed this anti-science position.

According to a study in the journal Climatic Change earlier this year, the US is home to 91 different organisations (think tanks, advocacy groups, and trade associations) that collectively comprise a “climate change counter-movement.” The annual funding of these organisations, collectively, is “just over $900 million.” That is a truly massive amount of English-speaking climate “sceptic” activity, and while the study was limited to the US, it is hard to imagine that anything comparable exists in non-English speaking countries.

Read the entire article here.


A Godless Universe: Mind or Mathematics

In his science column for the NYT, George Johnson reviews several recent books by noted thinkers who, for different reasons, believe science needs to expand its borders. Philosopher Thomas Nagel and physicist Max Tegmark both agree that our current understanding of the universe is rather limited and that science needs to turn to new or alternate explanations. Nagel, still an atheist, suggests in his book Mind and Cosmos that the mind somehow needs to be considered a fundamental structure of the universe. Tegmark, in his book Our Mathematical Universe: My Quest for the Ultimate Nature of Reality, suggests that mathematics is the core, irreducible framework of the cosmos. Two radically different ideas — yet both are correct in one respect: we still know so very little about ourselves and our surroundings.

From the NYT:

Though he probably didn’t intend anything so jarring, Nicolaus Copernicus, in a 16th-century treatise, gave rise to the idea that human beings do not occupy a special place in the heavens. Nearly 500 years after replacing the Earth with the sun as the center of the cosmic swirl, we’ve come to see ourselves as just another species on a planet orbiting a star in the boondocks of a galaxy in the universe we call home. And this may be just one of many universes — what cosmologists, some more skeptically than others, have named the multiverse.

Despite the long string of demotions, we remain confident, out here on the edge of nowhere, that our band of primates has what it takes to figure out the cosmos — what the writer Timothy Ferris called “the whole shebang.” New particles may yet be discovered, and even new laws. But it is almost taken for granted that everything from physics to biology, including the mind, ultimately comes down to four fundamental concepts: matter and energy interacting in an arena of space and time.

There are skeptics who suspect we may be missing a crucial piece of the puzzle. Recently, I’ve been struck by two books exploring that possibility in very different ways. There is no reason why, in this particular century, Homo sapiens should have gathered all the pieces needed for a theory of everything. In displacing humanity from a privileged position, the Copernican principle applies not just to where we are in space but to when we are in time.

Since it was published in 2012, “Mind and Cosmos,” by the philosopher Thomas Nagel, is the book that has caused the most consternation. With his taunting subtitle — “Why the Materialist Neo-Darwinian Conception of Nature Is Almost Certainly False” — Dr. Nagel was rejecting the idea that there was nothing more to the universe than matter and physical forces. He also doubted that the laws of evolution, as currently conceived, could have produced something as remarkable as sentient life. That idea borders on anathema, and the book quickly met with a blistering counterattack. Steven Pinker, a Harvard psychologist, denounced it as “the shoddy reasoning of a once-great thinker.”

What makes “Mind and Cosmos” worth reading is that Dr. Nagel is an atheist, who rejects the creationist idea of an intelligent designer. The answers, he believes, may still be found through science, but only by expanding it further than it may be willing to go.

“Humans are addicted to the hope for a final reckoning,” he wrote, “but intellectual humility requires that we resist the temptation to assume that the tools of the kind we now have are in principle sufficient to understand the universe as a whole.”

Dr. Nagel finds it astonishing that the human brain — this biological organ that evolved on the third rock from the sun — has developed a science and a mathematics so in tune with the cosmos that it can predict and explain so many things.

Neuroscientists assume that these mental powers somehow emerge from the electrical signaling of neurons — the circuitry of the brain. But no one has come close to explaining how that occurs.


That, Dr. Nagel proposes, might require another revolution: showing that mind, along with matter and energy, is “a fundamental principle of nature” — and that we live in a universe primed “to generate beings capable of comprehending it.” Rather than being a blind series of random mutations and adaptations, evolution would have a direction, maybe even a purpose.

“Above all,” he wrote, “I would like to extend the boundaries of what is not regarded as unthinkable, in light of how little we really understand about the world.”

Dr. Nagel is not alone in entertaining such ideas. While rejecting anything mystical, the biologist Stuart Kauffman has suggested that Darwinian theory must somehow be expanded to explain the emergence of complex, intelligent creatures. And David J. Chalmers, a philosopher, has called on scientists to seriously consider “panpsychism” — the idea that some kind of consciousness, however rudimentary, pervades the stuff of the universe.

Some of this is a matter of scientific taste. It can be just as exhilarating, as Stephen Jay Gould proposed in “Wonderful Life,” to consider the conscious mind as simply a fluke, no more inevitable than the human appendix or a starfish’s five legs. But it doesn’t seem so crazy to consider alternate explanations.

Heading off in another direction, a new book by the physicist Max Tegmark suggests that a different ingredient — mathematics — needs to be admitted into science as one of nature’s irreducible parts. In fact, he believes, it may be the most fundamental of all.

In a well-known 1960 essay, the physicist Eugene Wigner marveled at “the unreasonable effectiveness of mathematics” in explaining the world. It is “something bordering on the mysterious,” he wrote, for which “there is no rational explanation.”

The best he could offer was that mathematics is “a wonderful gift which we neither understand nor deserve.”

Dr. Tegmark, in his new book, “Our Mathematical Universe: My Quest for the Ultimate Nature of Reality,” turns the idea on its head: The reason mathematics serves as such a forceful tool is that the universe is a mathematical structure. Going beyond Pythagoras and Plato, he sets out to show how matter, energy, space and time might emerge from numbers.

Read the entire article here.


Non-Spooky Action at a Distance

Albert Einstein famously called quantum entanglement “spooky action at a distance”. It refers to the notion that measuring the state of one of two entangled particles makes the state of the second particle known instantaneously, regardless of the distance separating the two particles. Entanglement seems to link these particles and make them behave as one system. This peculiar characteristic has been a core element of the counterintuitive world of quantum theory. Yet while experiments have verified this spookiness, other theorists maintain that both theory and experiment are flawed, and that a different interpretation is required. However, one such competing theory — the many worlds interpretation — makes equally spooky predictions.

From ars technica:

Quantum nonlocality, perhaps one of the most mysterious features of quantum mechanics, may not be a real phenomenon. Or at least that’s what a new paper in the journal PNAS asserts. Its author claims that nonlocality is nothing more than an artifact of the Copenhagen interpretation, the most widely accepted interpretation of quantum mechanics.

Nonlocality is a feature of quantum mechanics where particles are able to influence each other instantaneously regardless of the distance between them, an impossibility in classical physics. Counterintuitive as it may be, nonlocality is currently an accepted feature of the quantum world, apparently verified by many experiments. It’s achieved such wide acceptance that even if our understandings of quantum physics turn out to be completely wrong, physicists think some form of nonlocality would be a feature of whatever replaced it.

The term “nonlocality” comes from the fact that this “spooky action at a distance,” as Einstein famously called it, seems to put an end to our intuitive ideas about location. Nothing can travel faster than the speed of light, so if two quantum particles can influence each other faster than light could travel between the two, then on some level, they act as a single system—there must be no real distance between them.

The concept of location is a bit strange in quantum mechanics anyway. Each particle is described by a mathematical quantity known as the “wave function.” The wave function describes a probability distribution for the particle’s location, but not a definite location. These probable locations are not just scientists’ guesses at the particle’s whereabouts; they’re actual, physical presences. That is to say, the particles exist in a swarm of locations at the same time, with some locations more probable than others.

A measurement collapses the wave function so that the particle is no longer spread out over a variety of locations. It begins to act just like objects we’re familiar with—existing in one specific location.

The experiments that would measure nonlocality, however, usually involve two particles that are entangled, which means that both are described by a shared wave function. The wave function doesn’t just deal with the particle’s location, but with other aspects of its state as well, such as the direction of the particle’s spin. So if scientists can measure the spin of one of the two entangled particles, the shared wave function collapses and the spins of both particles become certain. This happens regardless of the distance between the particles.
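
For readers who want the textbook notation behind the last few paragraphs, here is a minimal sketch in standard quantum mechanics (an editor’s illustration, not taken from the ars technica piece). The Born rule turns a wave function into measurement probabilities, and a spin-singlet pair shows how a single shared wave function fixes both outcomes:

P(x) = |\psi(x)|^{2}, \qquad |\Psi\rangle_{AB} = \frac{1}{\sqrt{2}}\left( |\uparrow\rangle_{A}|\downarrow\rangle_{B} - |\downarrow\rangle_{A}|\uparrow\rangle_{B} \right)

If particle A is measured and found spin-up, then on the Copenhagen reading the shared state collapses to |\uparrow\rangle_{A}|\downarrow\rangle_{B}, so a later measurement of B must give spin-down, however far apart the particles are; that guaranteed correlation is exactly what the nonlocality experiments probe.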

The new paper calls all this into question.

The paper’s sole author, Frank Tipler, argues that the reason previous studies apparently confirmed quantum nonlocality is that they were relying on an oversimplified understanding of quantum physics in which the quantum world and the macroscopic world we’re familiar with are treated as distinct from one another. Even large structures obey the laws of quantum physics, Tipler points out, so the scientists making the measurements must be considered part of the system being studied.

It is intuitively easy to separate the quantum world from our everyday world, as they appear to behave so differently. However, the equations of quantum mechanics can be applied to large objects like human beings, and they essentially predict that you’ll behave just as classical physics—and as observation—says you will. (Physics students who have tried calculating their own wave functions can attest to this). The laws of quantum physics do govern the entire Universe, even if distinctly quantum effects are hard to notice at a macroscopic level.

When this is taken into account, according to Tipler, the results of familiar nonlocality experiments are altered. Typically, such experiments are thought to involve only two measurements: one on each of two entangled particles. But Tipler argues that in such experiments, there’s really a third measurement taking place when the scientists compare the results of the two.

This third measurement is crucial, Tipler argues, as without it, the first two measurements are essentially meaningless. Without comparing the first two, there’s no way to know that one particle’s behavior is actually linked to the other’s. And crucially, in order for the first two measurements to be compared, information must be exchanged between the particles, via the scientists, at a speed less than that of light. In other words, when the third measurement is taken into account, the two particles are not communicating faster than light. There is no “spooky action at a distance.”

Tipler has harsh criticism for the reasoning that led to nonlocality. “The standard argument that quantum phenomena are nonlocal goes like this,” he says in the paper. “(i) Let us add an unmotivated, inconsistent, unobservable, nonlocal process (collapse) to local quantum mechanics; (ii) note that the resulting theory is nonlocal; and (iii) conclude that quantum mechanics is [nonlocal].”

He’s essentially saying that scientists are arbitrarily adding nonlocality, which they can’t observe, and then claiming they have discovered nonlocality. Quite an accusation, especially for the science world. (The “collapse” he mentions is the collapse of the particle’s wave function, which he asserts is not a real phenomenon.) Instead, he claims that the experiments thought to confirm nonlocality are in fact confirming an alternative to the Copenhagen interpretation called the many-worlds interpretation (MWI). As its name implies, the MWI predicts the existence of other universes.

The Copenhagen interpretation has been summarized as “shut up and measure.” Even though the consequences of a wave function-based world don’t make much intuitive sense, it works. The MWI tries to keep particles concrete at the cost of making our world a bit fuzzy. It posits that rather than becoming a wave function, particles remain distinct objects but enter one of a number of alternative universes, which recombine to a single one when the particle is measured.

Scientists who thought they were measuring nonlocality, Tipler claims, were in fact observing the effects of alternate universe versions of themselves, also measuring the same particles.

Part of the significance of Tipler’s claim is that he’s able to mathematically derive the same experimental results from the MWI without use of nonlocality. But this does not necessarily make for evidence that the MWI is correct; either interpretation remains consistent with the data. Until the two can be distinguished experimentally, it all comes down to whether you personally like or dislike nonlocality.

Read the entire article here.


Isolation Fractures the Mind

Through the lens of extreme isolation, Michael Bond shows us in this fascinating article how we really are social animals. Remove a person from all meaningful social contact — even for a short while — and her mind will begin to play tricks and eventually break. Michael Bond is author of The Power of Others.

From the BBC:

When people are isolated from human contact, their mind can do some truly bizarre things, says Michael Bond. Why does this happen?

Sarah Shourd’s mind began to slip after about two months into her incarceration. She heard phantom footsteps and flashing lights, and spent most of her day crouched on all fours, listening through a gap in the door.

That summer, the 32-year-old had been hiking with two friends in the mountains of Iraqi Kurdistan when they were arrested by Iranian troops after straying onto the border with Iran. Accused of spying, they were kept in solitary confinement in Evin prison in Tehran, each in their own tiny cell. She endured almost 10,000 hours with little human contact before she was freed. One of the most disturbing effects was the hallucinations.

“In the periphery of my vision, I began to see flashing lights, only to jerk my head around to find that nothing was there,” she wrote in the New York Times in 2011. “At one point, I heard someone screaming, and it wasn’t until I felt the hands of one of the friendlier guards on my face, trying to revive me, that I realised the screams were my own.”

We all want to be alone from time to time, to escape the demands of our colleagues or the hassle of crowds. But not alone alone. For most people, prolonged social isolation is all bad, particularly mentally. We know this not only from reports by people like Shourd who have experienced it first-hand, but also from psychological experiments on the effects of isolation and sensory deprivation, some of which had to be called off due to the extreme and bizarre reactions of those involved. Why does the mind unravel so spectacularly when we’re truly on our own, and is there any way to stop it?

We’ve known for a while that isolation is physically bad for us. Chronically lonely people have higher blood pressure, are more vulnerable to infection, and are also more likely to develop Alzheimer’s disease and dementia. Loneliness also interferes with a whole range of everyday functioning, such as sleep patterns, attention and logical and verbal reasoning. The mechanisms behind these effects are still unclear, though what is known is that social isolation unleashes an extreme immune response – a cascade of stress hormones and inflammation. This may have been appropriate in our early ancestors, when being isolated from the group carried big physical risks, but for us the outcome is mostly harmful.

Yet some of the most profound effects of loneliness are on the mind. For starters, isolation messes with our sense of time. One of the strangest effects is the ‘time-shifting’ reported by those who have spent long periods living underground without daylight. In 1961, French geologist Michel Siffre led a two-week expedition to study an underground glacier beneath the French Alps and ended up staying two months, fascinated by how the darkness affected human biology. He decided to abandon his watch and “live like an animal”. While conducting tests with his team on the surface, they discovered it took him five minutes to count to what he thought was 120 seconds.

A similar pattern of ‘slowing time’ was reported by Maurizio Montalbini, a sociologist and caving enthusiast. In 1993, Montalbini spent 366 days in an underground cavern near Pesaro in Italy that had been designed with Nasa to simulate space missions, breaking his own world record for time spent underground. When he emerged, he was convinced only 219 days had passed. His sleep-wake cycles had almost doubled in length. Since then, researchers have found that in darkness most people eventually adjust to a 48-hour cycle: 36 hours of activity followed by 12 hours of sleep. The reasons are still unclear.

As well as their time-shifts, Siffre and Montalbini reported periods of mental instability too. But these experiences were nothing compared with the extreme reactions seen in notorious sensory deprivation experiments in the mid-20th Century.

In the 1950s and 1960s, China was rumoured to be using solitary confinement to “brainwash” American prisoners captured during the Korean War, and the US and Canadian governments were all too keen to try it out. Their defence departments funded a series of research programmes that might be considered ethically dubious today.

The most extensive took place at McGill University Medical Center in Montreal, led by the psychologist Donald Hebb. The McGill researchers invited paid volunteers – mainly college students – to spend days or weeks by themselves in sound-proof cubicles, deprived of meaningful human contact. Their aim was to reduce perceptual stimulation to a minimum, to see how their subjects would behave when almost nothing was happening. They minimised what they could feel, see, hear and touch, fitting them with translucent visors, cotton gloves and cardboard cuffs extending beyond the fingertips. As Scientific American magazine reported at the time, they had them lie on U-shaped foam pillows to restrict noise, and set up a continuous hum of air-conditioning units to mask small sounds.

After only a few hours, the students became acutely restless. They started to crave stimulation, talking, singing or reciting poetry to themselves to break the monotony. Later, many of them became anxious or highly emotional. Their mental performance suffered too, struggling with arithmetic and word association tests.

But the most alarming effects were the hallucinations. They would start with points of light, lines or shapes, eventually evolving into bizarre scenes, such as squirrels marching with sacks over their shoulders or processions of eyeglasses filing down a street. They had no control over what they saw: one man saw only dogs; another, babies.

Some of them experienced sound hallucinations as well: a music box or a choir, for instance. Others imagined sensations of touch: one man had the sense he had been hit in the arm by pellets fired from guns. Another, reaching out to touch a doorknob, felt an electric shock.

When they emerged from the experiment they found it hard to shake this altered sense of reality, convinced that the whole room was in motion, or that objects were constantly changing shape and size.

Read the entire article here.

The Art of Annoyance


Our favorite voyeurs and provocateurs of contemporary British culture are at it again. Artists Gilbert & George have resurfaced with a new and thoroughly annoying collection — Scapegoating Pictures. You can catch their latest treatise on the state of their city (London) and nation at White Cube in London from July 18 – September 28.

From the Guardian.

The world of art is overwhelmingly liberal and forward looking. Unless you start following the money into Charles Saatchi’s bank account, the mood, content and operating assumptions of contemporary art are strikingly leftwing, from Bob and Roberta Smith’s cute posters to Jeremy Deller’s people’s art. The consensus is so progressive it does not need saying.

Gilbert & George have never signed up to that consensus. I am not saying they are rightwing. I am definitely not saying they are “racist”. But throughout their long careers, from a nostalgia for Edwardian music-hall songs to a more unsettling affinity for skinheads, they have delighted in provoking … us, dear Guardian reader.

Their new exhibition of grand, relentless photomontages restates their defiant desire to offend on a colossal scale. I could almost hear them at my shoulder asking: “Are you annoyed yet?”

Then suddenly they were at my shoulder, as I wrote down choice quotes from Scapegoating Pictures, the scabrous triptych of slogan-spattered pictures that climaxes this exhibition. When I confessed I was wondering which ones I could quote in a newspaper they insisted it’s all quotable: “We have a free press.” So here goes: “Fuck the Vicar.” “Get Frotting.” “Be candid with christians.” “Jerk off a judge.” “Crucify a curator.” “Molest a mullah.”

This wall of insults, mostly directed at religion, is the manifesto of Gilbert & George’s new pictures – and yet you discover it only at the end of the show. Before revealing where they are really coming from in this dirty-mouthed atheist onslaught, they have teased you with all kinds of dubious paranoias. What are these old men – Gilbert & George are 70 and 72, and the self-portraits that warp and gyrate through this kaleidoscopic digital-age profusion of images make no attempt to conceal their ageing process – so scared of?

At times this exhibition is like going on a tour of east London with one of Ukip’s less presentable candidates. Just look at that woman veiling her face. And here is a poster calling for an Islamic state in Britain.

Far from being scared, these artists are bold as brass. No one is asking Gilbert & George to go over the top one more time and plumb the psychic depths of Britain. They’re respectable now; they could just sit back in their suits. But, in these turbulent and estranging works, they give voice to the divided reality of a country at one and the same time gloriously plural and savagely bigoted.

In reality, nothing could be further from the mentality of racists and little Englanders than the polymorphically playful world of Gilbert & George. Their images merge with the faces of young men of all races who have caught their eye. Bullet-like metal canisters pulse through the pictures like threats of violence. Yet these menacing forms are actually empty containers for the drug nitrous oxide found by the artists outside their home, things that look evil but are residues of ecstatic nights.

No other artists today portray their own time and place with the curiosity that Gilbert & George display here. Their own lives are starkly visible, as they walk around their local streets in Spitalfields, collecting the evidence of drug-fuelled mayhem and looking at the latest graffiti.

Read the entire story and see more of G & G’s works here.

Image: Clad, Gilbert & George, 2013. Courtesy of Gilbert & George / Guardian.


You Are a Neural Computation

Since the days of Aristotle, and later Descartes, thinkers have sought to explain consciousness and free will. Several thousand years on, we are still pondering the notion; science has made great strides, and yet fundamentally we still have little idea.

Many neuroscientists, now armed with new and very precise research tools, are aiming to change this. Yet, increasingly it seems that free will may indeed be a cognitive illusion. Evidence suggests that our subconscious decides and initiates action for us long before we are aware of making a conscious decision. There seems to be no god or ghost in the machine.

From Technology Review:

It was an expedition seeking something never caught before: a single human neuron lighting up to create an urge, albeit for the minor task of moving an index finger, before the subject was even aware of feeling anything. Four years ago, Itzhak Fried, a neurosurgeon at the University of California, Los Angeles, slipped several probes, each with eight hairlike electrodes able to record from single neurons, into the brains of epilepsy patients. (The patients were undergoing surgery to diagnose the source of severe seizures and had agreed to participate in experiments during the process.) Probes in place, the patients—who were conscious—were given instructions to press a button at any time of their choosing, but also to report when they’d first felt the urge to do so.

Later, Gabriel Kreiman, a neuroscientist at Harvard Medical School and Children’s Hospital in Boston, captured the quarry. Poring over data after surgeries in 12 patients, he found telltale flashes of individual neurons in the pre-­supplementary motor area (associated with movement) and the anterior cingulate (associated with motivation and attention), preceding the reported urges by anywhere from hundreds of milliseconds to several seconds. It was a direct neural measurement of the unconscious brain at work—caught in the act of formulating a volitional, or freely willed, decision. Now Kreiman and his colleagues are planning to repeat the feat, but this time they aim to detect pre-urge signatures in real time and stop the subject from performing the action—or see if that’s even possible.

A variety of imaging studies in humans have revealed that brain activity related to decision-making tends to precede conscious action. Implants in macaques and other animals have examined brain circuits involved in perception and action. But Kreiman broke ground by directly measuring a preconscious decision in humans at the level of single neurons. To be sure, the readouts came from an average of just 20 neurons in each patient. (The human brain has about 86 billion of them, each with thousands of connections.) And ultimately, those neurons fired only in response to a chain of even earlier events. But as more such experiments peer deeper into the labyrinth of neural activity behind decisions—whether they involve moving a finger or opting to buy, eat, or kill something—science could eventually tease out the full circuitry of decision-making and perhaps point to behavioral therapies or treatments. “We need to understand the neuronal basis of voluntary decision-making—or ‘freely willed’ decision-­making—and its pathological counterparts if we want to help people such as drug, sex, food, and gambling addicts, or patients with obsessive-compulsive disorder,” says Christof Koch, chief scientist at the Allen Institute of Brain Science in Seattle (see “Cracking the Brain’s Codes”). “Many of these people perfectly well know that what they are doing is dysfunctional but feel powerless to prevent themselves from engaging in these behaviors.”

Kreiman, 42, believes his work challenges important Western philosophical ideas about free will. The Argentine-born neuroscientist, an associate professor at Harvard Medical School, specializes in visual object recognition and memory formation, which draw partly on unconscious processes. He has a thick mop of black hair and a tendency to pause and think a long moment before reframing a question and replying to it expansively. At the wheel of his Jeep as we drove down Broadway in Cambridge, Massachusetts, Kreiman leaned over to adjust the MP3 player—toggling between Vivaldi, Lady Gaga, and Bach. As he did so, his left hand, the one on the steering wheel, slipped to let the Jeep drift a bit over the double yellow lines. Kreiman’s view is that his neurons made him do it, and they also made him correct his small error an instant later; in short, all actions are the result of neural computations and nothing more. “I am interested in a basic age-old question,” he says. “Are decisions really free? I have a somewhat extreme view of this—that there is nothing really free about free will. Ultimately, there are neurons that obey the laws of physics and mathematics. It’s fine if you say ‘I decided’—that’s the language we use. But there is no god in the machine—only neurons that are firing.”

Our philosophical ideas about free will date back to Aristotle and were systematized by René Descartes, who argued that humans possess a God-given “mind,” separate from our material bodies, that endows us with the capacity to freely choose one thing rather than another. Kreiman takes this as his departure point. But he’s not arguing that we lack any control over ourselves. He doesn’t say that our decisions aren’t influenced by evolution, experiences, societal norms, sensations, and perceived consequences. “All of these external influences are fundamental to the way we decide what we do,” he says. “We do have experiences, we do learn, we can change our behavior.”

But the firing of a neuron that guides us one way or another is ultimately like the toss of a coin, Kreiman insists. “The rules that govern our decisions are similar to the rules that govern whether a coin will land one way or the other. Ultimately there is physics; it is chaotic in both cases, but at the end of the day, nobody will argue the coin ‘wanted’ to land heads or tails. There is no real volition to the coin.”

Testing Free Will

It’s only in the past three to four decades that imaging tools and probes have been able to measure what actually happens in the brain. A key research milestone was reached in the early 1980s when Benjamin Libet, a researcher in the physiology department at the University of California, San Francisco, made a remarkable study that tested the idea of conscious free will with actual data.

Libet fitted subjects with EEGs—gadgets that measure aggregate electrical brain activity through the scalp—and had them look at a clock dial that spun around every 2.8 seconds. The subjects were asked to press a button whenever they chose to do so—but told they should also take note of where the time hand was when they first felt the “wish or urge.” It turns out that the actual brain activity involved in the action began 300 milliseconds, on average, before the subject was conscious of wanting to press the button. While some scientists criticized the methods—questioning, among other things, the accuracy of the subjects’ self-reporting—the study set others thinking about how to investigate the same questions. Since then, functional magnetic resonance imaging (fMRI) has been used to map brain activity by measuring blood flow, and other studies have also measured brain activity processes that take place before decisions are made. But while fMRI transformed brain science, it was still only an indirect tool, providing very low spatial resolution and averaging data from millions of neurons. Kreiman’s own study design was the same as Libet’s, with the important addition of the direct single-neuron measurement.
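
As a rough, purely illustrative sketch of the arithmetic behind Libet’s claim (the numbers below are invented, not Libet’s data): each trial yields a readiness-potential onset time and a reported “urge” time, both measured relative to the button press, and the finding is simply that the average onset leads the average report.

# Hypothetical per-trial times in milliseconds relative to the button press
# (negative means "before the press"); these values are made up for illustration.
rp_onset_ms    = [-550, -600, -520, -580, -610]   # EEG readiness-potential onset
urge_report_ms = [-210, -180, -230, -200, -190]   # subject's reported "urge" time

lead_ms = [urge - onset for onset, urge in zip(rp_onset_ms, urge_report_ms)]
print(f"Brain activity led the reported urge by {sum(lead_ms) / len(lead_ms):.0f} ms on average")

With Libet’s actual data, that lead averaged roughly 300 milliseconds, which is the figure quoted above.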

When Libet was in his prime, ­Kreiman was a boy. As a student of physical chemistry at the University of Buenos Aires, he was interested in neurons and brains. When he went for his PhD at Caltech, his passion solidified under his advisor, Koch. Koch was deep in collaboration with Francis Crick, co-discoverer of DNA’s structure, to look for evidence of how consciousness was represented by neurons. For the star-struck kid from Argentina, “it was really life-changing,” he recalls. “Several decades ago, people said this was not a question serious scientists should be thinking about; they either had to be smoking something or have a Nobel Prize”—and Crick, of course, was a Nobelist. Crick hypothesized that studying how the brain processed visual information was one way to study consciousness (we tap unconscious processes to quickly decipher scenes and objects), and he collaborated with Koch on a number of important studies. Kreiman was inspired by the work. “I was very excited about the possibility of asking what seems to be the most fundamental aspect of cognition, consciousness, and free will in a reductionist way—in terms of neurons and circuits of neurons,” he says.

One thing was in short supply: humans willing to have scientists cut open their skulls and poke at their brains. One day in the late 1990s, Kreiman attended a journal club—a kind of book club for scientists reviewing the latest literature—and came across a paper by Fried on how to do brain science in people getting electrodes implanted in their brains to identify the source of severe epileptic seizures. Before he’d heard of Fried, “I thought examining the activity of neurons was the domain of monkeys and rats and cats, not humans,” Kreiman says. Crick introduced Koch to Fried, and soon Koch, Fried, and Kreiman were collaborating on studies that investigated human neural activity, including the experiment that made the direct neural measurement of the urge to move a finger. “This was the opening shot in a new phase of the investigation of questions of voluntary action and free will,” Koch says.

Read the entire article here.


Go Forth And Declutter


Having only just recently relocated to Colorado’s wondrous Front Range of the Rocky Mountains, your friendly editor now finds himself surrounded by figurative, less-inspiring mountains: moving boxes, bins, bags, more boxes. It’s floor-to-ceiling clutter as far as the eye can see.

Some of these boxes contain essentials, yet probably around 80 percent hold stuff. Yes, just stuff — aging items that hold some kind of sentimental meaning or future promise: old CDs, baby clothes, used ticket stubs, toys from an attic three moves ago, too many socks, ill-fitting clothing, 13 Allen wrenches and screwdrivers, first-grade school projects, photo negatives, fading National Geographic magazines, gummed-up fountain pens, European postcards…

So, here’s a very timely story on the psychology of clutter and hoarding.

From the WSJ:

Jennifer James and her husband don’t have a lot of clutter—but they do find it hard to part with their children’s things. The guest cottage behind their home in Oklahoma City is half-filled with old toys, outgrown clothing, artwork, school papers, two baby beds, a bassinet and a rocking horse.

“Every time I think about getting rid of it, I want to cry,” says Ms. James, a 46-year-old public-relations consultant. She fears her children, ages 6, 8 and 16, will grow up and think she didn’t love them if she doesn’t save it all. “In keeping all this stuff, I think someday I’ll be able to say to my children, ‘See—I treasured your innocence. I treasured you!’”

Many powerful emotions are lurking amid stuff we keep. Whether it’s piles of unread newspapers, clothes that don’t fit, outdated electronics, even empty margarine tubs, the things we accumulate reflect some of our deepest thoughts and feelings.

Now there’s growing recognition among professional organizers that to come to grips with their clutter, clients need to understand why they save what they save, or things will inevitably pile up again. In some cases, therapists are working along with organizers to help clients confront their psychological demons.

“The work we do with clients goes so much beyond making their closets look pretty,” says Collette Shine, president of the New York chapter of the National Association of Professional Organizers. “It involves getting into their hearts and their heads.”

For some people—especially those with big basements—hanging onto old and unused things doesn’t present a problem. But many others say they’re drowning in clutter.

“I have clients who say they are distressed at all the clutter they have, and distressed at the thought of getting rid of things,” says Simon Rego, director of psychology training at Montefiore Medical Center in Bronx, N.Y., who makes house calls, in extreme cases, to help hoarders.

In some cases, chronic disorganization can be a symptom of Attention Deficit Hyperactivity Disorder, Obsessive-Compulsive Disorder and dementia—all of which involve difficulty with planning, focusing and making decisions.

The extreme form, hoarding, is now a distinct psychiatric disorder, defined in the new Diagnostic and Statistical Manual-5 as “persistent difficulty discarding possessions, regardless of their value” such that living areas cannot be used. Despite all the media attention, only 2% to 5% of people fit the criteria—although many more joke, or fear, they are headed that way.

Difficulty letting go of your stuff can also go hand in hand with separation anxiety, compulsive shopping, perfectionism, procrastination and body-image issues. And the reluctance to cope can create a vicious cycle of avoidance, anxiety and guilt.

In most cases, however, psychologists say that clutter can be traced to what they call cognitive errors—flawed thinking that drives dysfunctional behaviors that can get out of hand.

Among the most common clutter-generating bits of logic: “I might need these someday.” “These might be valuable.” “These might fit again if I lose (or gain) weight.”

“We all have these dysfunctional thoughts. It’s perfectly normal,” Dr. Rego says. The trick, he says, is to recognize the irrational thought that makes you cling to an item and substitute one that helps you let go, such as, “Somebody else could use this, so I’ll give it away.”

He concedes he has saved “maybe 600” disposable Allen wrenches that came with IKEA furniture over the years.

The biggest sources of clutter and the hardest to discard are things that hold sentimental meaning. Dr. Rego says it’s natural to want to hang onto objects that trigger memories, but some people confuse letting go of the object with letting go of the person.

Linda Samuels, president of the Institute for Challenging Disorganization, an education and research group, says there’s no reason to get rid of things just for the sake of doing it.

“Figure out what’s important to you and create an environment that supports that,” she says.

Robert McCollum, a state tax auditor and Ms. James’s husband, says he treasures items like the broken fairy wand one daughter carried around for months.

“I don’t want to lose my memories, and I don’t need a professional organizer,” he says. “I’ve already organized it all in bins.” The only problem would be if they ever move to a place that doesn’t have 1,000 square feet of storage, he adds.

Sometimes the memories people cling to are images of themselves in different roles or happier times. “Our closets are windows into our internal selves,” says Jennifer Baumgartner, a Baltimore psychologist and author of “You Are What You Wear.”

“Say you’re holding on to your team uniforms from college,” she says. “Ask yourself, what about that experience did you like? What can you do in your life now to recapture that?”

Somebody-might-need-this thinking is often what drives people to save stacks of newspapers, magazines, outdated electronic equipment, decades of financial records and craft supplies. With a little imagination, anything could be fodder for scrapbooks or Halloween costumes.

For people afraid to toss things they might want in the future, Dr. Baumgartner says it helps to have a worst-case scenario plan. “What if you do need that tutu you’ve given away for a Halloween costume? What would you do? You can find almost anything on eBay.”

Read the entire story here.

Image courtesy of Google search.

Send to Kindle

Questioning Quantum Orthodoxy

de-Broglie

Physics works very well in explaining our world, yet it is also broken — it cannot, at the moment, reconcile our views of the very small (quantum theory) with those of the very large (relativity theory).

So although the probabilistic underpinnings of quantum theory have done wonders in allowing physicists to construct the Standard Model, gaps remain.

Back in the mid-1920s, the probabilistic worldview proposed by Niels Bohr and others gained favor and took hold. A competing theory, the pilot-wave theory proposed by a young Louis de Broglie, was given short shrift. Yet some theorists have maintained that it may do a better job of closing this core gap in our understanding — so it is time to revisit and breathe fresh life into pilot-wave theory.

From Wired / Quanta:

For nearly a century, “reality” has been a murky concept. The laws of quantum physics seem to suggest that particles spend much of their time in a ghostly state, lacking even basic properties such as a definite location and instead existing everywhere and nowhere at once. Only when a particle is measured does it suddenly materialize, appearing to pick its position as if by a roll of the dice.

This idea that nature is inherently probabilistic — that particles have no hard properties, only likelihoods, until they are observed — is directly implied by the standard equations of quantum mechanics. But now a set of surprising experiments with fluids has revived old skepticism about that worldview. The bizarre results are fueling interest in an almost forgotten version of quantum mechanics, one that never gave up the idea of a single, concrete reality.

The experiments involve an oil droplet that bounces along the surface of a liquid. The droplet gently sloshes the liquid with every bounce. At the same time, ripples from past bounces affect its course. The droplet’s interaction with its own ripples, which form what’s known as a pilot wave, causes it to exhibit behaviors previously thought to be peculiar to elementary particles — including behaviors seen as evidence that these particles are spread through space like waves, without any specific location, until they are measured.

Particles at the quantum scale seem to do things that human-scale objects do not do. They can tunnel through barriers, spontaneously arise or annihilate, and occupy discrete energy levels. This new body of research reveals that oil droplets, when guided by pilot waves, also exhibit these quantum-like features.

To some researchers, the experiments suggest that quantum objects are as definite as droplets, and that they too are guided by pilot waves — in this case, fluid-like undulations in space and time. These arguments have injected new life into a deterministic (as opposed to probabilistic) theory of the microscopic world first proposed, and rejected, at the birth of quantum mechanics.

“This is a classical system that exhibits behavior that people previously thought was exclusive to the quantum realm, and we can say why,” said John Bush, a professor of applied mathematics at the Massachusetts Institute of Technology who has led several recent bouncing-droplet experiments. “The more things we understand and can provide a physical rationale for, the more difficult it will be to defend the ‘quantum mechanics is magic’ perspective.”

Magical Measurements

The orthodox view of quantum mechanics, known as the “Copenhagen interpretation” after the home city of Danish physicist Niels Bohr, one of its architects, holds that particles play out all possible realities simultaneously. Each particle is represented by a “probability wave” weighting these various possibilities, and the wave collapses to a definite state only when the particle is measured. The equations of quantum mechanics do not address how a particle’s properties solidify at the moment of measurement, or how, at such moments, reality picks which form to take. But the calculations work. As Seth Lloyd, a quantum physicist at MIT, put it, “Quantum mechanics is just counterintuitive and we just have to suck it up.”

A classic experiment in quantum mechanics that seems to demonstrate the probabilistic nature of reality involves a beam of particles (such as electrons) propelled one by one toward a pair of slits in a screen. When no one keeps track of each electron’s trajectory, it seems to pass through both slits simultaneously. In time, the electron beam creates a wavelike interference pattern of bright and dark stripes on the other side of the screen. But when a detector is placed in front of one of the slits, its measurement causes the particles to lose their wavelike omnipresence, collapse into definite states, and travel through one slit or the other. The interference pattern vanishes. The great 20th-century physicist Richard Feynman said that this double-slit experiment “has in it the heart of quantum mechanics,” and “is impossible, absolutely impossible, to explain in any classical way.”

Some physicists now disagree. “Quantum mechanics is very successful; nobody’s claiming that it’s wrong,” said Paul Milewski, a professor of mathematics at the University of Bath in England who has devised computer models of bouncing-droplet dynamics. “What we believe is that there may be, in fact, some more fundamental reason why [quantum mechanics] looks the way it does.”

Riding Waves

The idea that pilot waves might explain the peculiarities of particles dates back to the early days of quantum mechanics. The French physicist Louis de Broglie presented the earliest version of pilot-wave theory at the 1927 Solvay Conference in Brussels, a famous gathering of the founders of the field. As de Broglie explained that day to Bohr, Albert Einstein, Erwin Schrödinger, Werner Heisenberg and two dozen other celebrated physicists, pilot-wave theory made all the same predictions as the probabilistic formulation of quantum mechanics (which wouldn’t be referred to as the “Copenhagen” interpretation until the 1950s), but without the ghostliness or mysterious collapse.

The probabilistic version, championed by Bohr, involves a single equation that represents likely and unlikely locations of particles as peaks and troughs of a wave. Bohr interpreted this probability-wave equation as a complete definition of the particle. But de Broglie urged his colleagues to use two equations: one describing a real, physical wave, and another tying the trajectory of an actual, concrete particle to the variables in that wave equation, as if the particle interacts with and is propelled by the wave rather than being defined by it.
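
The article does not write the two equations out. In the standard modern (de Broglie-Bohm) formulation they take roughly this form, with the familiar Schrödinger equation governing the wave and a separate “guidance” equation steering the particle (a sketch of the textbook form, not a quotation from de Broglie’s 1927 presentation):

    i\hbar \frac{\partial \psi}{\partial t} = -\frac{\hbar^2}{2m}\nabla^2\psi + V\psi,
    \qquad
    \frac{d\mathbf{x}}{dt} = \frac{\hbar}{m}\,\operatorname{Im}\left(\frac{\nabla\psi}{\psi}\right)

Here ψ is the pilot wave and x is the particle’s position; the particle rides the wave rather than being identified with it.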

For example, consider the double-slit experiment. In de Broglie’s pilot-wave picture, each electron passes through just one of the two slits, but is influenced by a pilot wave that splits and travels through both slits. Like flotsam in a current, the particle is drawn to the places where the two wavefronts cooperate, and does not go where they cancel out.

De Broglie could not predict the exact place where an individual particle would end up — just like Bohr’s version of events, pilot-wave theory predicts only the statistical distribution of outcomes, or the bright and dark stripes — but the two men interpreted this shortcoming differently. Bohr claimed that particles don’t have definite trajectories; de Broglie argued that they do, but that we can’t measure each particle’s initial position well enough to deduce its exact path.

In principle, however, the pilot-wave theory is deterministic: The future evolves dynamically from the past, so that, if the exact state of all the particles in the universe were known at a given instant, their states at all future times could be calculated.

At the Solvay conference, Einstein objected to a probabilistic universe, quipping, “God does not play dice,” but he seemed ambivalent about de Broglie’s alternative. Bohr told Einstein to “stop telling God what to do,” and (for reasons that remain in dispute) he won the day. By 1932, when the Hungarian-American mathematician John von Neumann claimed to have proven that the probabilistic wave equation in quantum mechanics could have no “hidden variables” (that is, missing components, such as de Broglie’s particle with its well-defined trajectory), pilot-wave theory was so poorly regarded that most physicists believed von Neumann’s proof without even reading a translation.

More than 30 years would pass before von Neumann’s proof was shown to be false, but by then the damage was done. The physicist David Bohm resurrected pilot-wave theory in a modified form in 1952, with Einstein’s encouragement, and made clear that it did work, but it never caught on. (The theory is also known as de Broglie-Bohm theory, or Bohmian mechanics.)

Later, the Northern Irish physicist John Stewart Bell went on to prove a seminal theorem that many physicists today misinterpret as rendering hidden variables impossible. But Bell supported pilot-wave theory. He was the one who pointed out the flaws in von Neumann’s original proof. And in 1986 he wrote that pilot-wave theory “seems to me so natural and simple, to resolve the wave-particle dilemma in such a clear and ordinary way, that it is a great mystery to me that it was so generally ignored.”

The neglect continues. A century down the line, the standard, probabilistic formulation of quantum mechanics has been combined with Einstein’s theory of special relativity and developed into the Standard Model, an elaborate and precise description of most of the particles and forces in the universe. Acclimating to the weirdness of quantum mechanics has become a physicists’ rite of passage. The old, deterministic alternative is not mentioned in most textbooks; most people in the field haven’t heard of it. Sheldon Goldstein, a professor of mathematics, physics and philosophy at Rutgers University and a supporter of pilot-wave theory, blames the “preposterous” neglect of the theory on “decades of indoctrination.” At this stage, Goldstein and several others noted, researchers risk their careers by questioning quantum orthodoxy.

A Quantum Drop

Now at last, pilot-wave theory may be experiencing a minor comeback — at least, among fluid dynamicists. “I wish that the people who were developing quantum mechanics at the beginning of last century had access to these experiments,” Milewski said. “Because then the whole history of quantum mechanics might be different.”

The experiments began a decade ago, when Yves Couder and colleagues at Paris Diderot University discovered that vibrating a silicone oil bath up and down at a particular frequency can induce a droplet to bounce along the surface. The droplet’s path, they found, was guided by the slanted contours of the liquid’s surface generated from the droplet’s own bounces — a mutual particle-wave interaction analogous to de Broglie’s pilot-wave concept.

Read the entire article here.

Image: Louis de Broglie. Courtesy of Wikipedia.

Send to Kindle

Defying Enemy Number One

Sir_Isaac_Newton

Enemy number one in this case is not your favorite team’s arch-rival or your political nemesis or your neighbor’s nocturnal barking dog. It is not sugar, nor is it trans-fat. Enemy number one is not North Korea (close), nor is it the latest group of murderous terrorists (closer).

The real enemy is gravity. Not the movie, that is, but the natural phenomenon.

Gravity is constricting: it anchors us to our measly home planet, making extra-terrestrial exploration rather difficult. Gravity is painful: it drags us down, it makes us fall — and when we’re down, it helps other things fall on top of us. Gravity is an enigma.

But help may not be too distant; enter the Gravity Research Foundation. While the foundation’s mission may no longer be to counteract gravity, it still aims to help us better understand it.

From the NYT:

Not long after the bombings of Hiroshima and Nagasaki, while the world was reckoning with the specter of nuclear energy, a businessman named Roger Babson was worrying about another of nature’s forces: gravity.

It had been 55 years since his sister Edith drowned in the Annisquam River, in Gloucester, Mass., when gravity, as Babson later described it, “came up and seized her like a dragon and brought her to the bottom.” Later on, the dragon took his grandson, too, as he tried to save a friend during a boating mishap.

Something had to be done.

“It seems as if there must be discovered some partial insulator of gravity which could be used to save millions of lives and prevent accidents,” Babson wrote in a manifesto, “Gravity — Our Enemy Number One.” In 1949, drawing on his considerable wealth, he started the Gravity Research Foundation and began awarding annual cash prizes for the best new ideas for furthering his cause.

It turned out to be a hopeless one. By the time the 2014 awards were announced last month, the foundation was no longer hoping to counteract gravity — it forms the very architecture of space-time — but to better understand it. What began as a crank endeavor has become mainstream. Over the years, winners of the prizes have included the likes of Stephen Hawking, Freeman Dyson, Roger Penrose and Martin Rees.

With his theory of general relativity, Einstein described gravity with an elegance that has not been surpassed. A mass like the sun makes the universe bend, causing smaller masses like planets to move toward it.

The problem is that nature’s other three forces are described in an entirely different way, by quantum mechanics. In this system forces are conveyed by particles. Photons, the most familiar example, are the carriers of light. For many scientists, the ultimate prize would be proof that gravity is carried by gravitons, allowing it to mesh neatly with the rest of the machine.

So far that has been as insurmountable as Babson’s old dream. After nearly a century of trying, the best that physicists have come up with is superstring theory, a self-consistent but possibly hollow body of mathematics that depends on the existence of extra dimensions and implies that our universe is one of a multitude, each unknowable to the rest.

With all the accomplishments our species has achieved, we could be forgiven for concluding that we have reached a dead end. But human nature compels us to go on.

This year’s top gravity prize of $4,000 went to Lawrence Krauss and Frank Wilczek. Dr. Wilczek shared a Nobel Prize in 2004 for his part in developing the theory of the strong nuclear force, the one that holds quarks together and forms the cores of atoms.

So far gravitons have eluded science’s best detectors, like LIGO, the Laser Interferometer Gravitational-Wave Observatory. Mr. Dyson suggested at a recent talk that the search might be futile, requiring an instrument with mirrors so massive that they would collapse to form a black hole — gravity defeating its own understanding. But in their paper Dr. Krauss and Dr. Wilczek suggest how gravitons might leave their mark on cosmic background radiation, the afterglow of the Big Bang.

There are other mysteries to contend with. Despite the toll it took on Babson’s family, theorists remain puzzled over why gravity is so much weaker than electromagnetism. Hold a refrigerator magnet over a paper clip, and it will fly upward and away from Earth’s pull.
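
To put a rough number on “so much weaker” (my arithmetic, not the article’s): for a proton and an electron, the electrostatic pull beats the gravitational pull by roughly 39 orders of magnitude, and because both forces fall off as the inverse square of distance, the separation cancels out of the comparison:

    # Back-of-envelope comparison using standard physical constants (assumed values, not from the article).
    k = 8.99e9       # Coulomb constant, N m^2 / C^2
    e = 1.602e-19    # elementary charge, C
    G = 6.674e-11    # gravitational constant, N m^2 / kg^2
    m_e = 9.109e-31  # electron mass, kg
    m_p = 1.673e-27  # proton mass, kg

    # Both forces scale as 1/r^2, so their ratio is independent of separation.
    ratio = (k * e**2) / (G * m_e * m_p)
    print(f"{ratio:.1e}")   # roughly 2.3e39 in favour of electromagnetism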

Reaching for an explanation, the physicists Lisa Randall and Raman Sundrum once proposed that gravity is diluted because it leaks into a parallel universe. Striking off in a different direction, Dr. Randall and another colleague, Matthew Reece, recently speculated that the pull of a disk of dark matter might be responsible for jostling the solar system and unleashing periodic comet storms like one that might have killed off the dinosaurs.

It was a young theorist named Bryce DeWitt who helped disabuse Babson of his dream of stopping such a mighty force. In “The Perfect Theory,” a new book about general relativity, the Oxford astrophysicist Pedro G. Ferreira tells how DeWitt, in need of a down payment for a house, entered the Gravity Research Foundation’s competition in 1953 with a paper showing why the attempt to make any kind of antigravity device was “a waste of time.”

He won the prize, the foundation became more respectable, and DeWitt went on to become one of the most prominent theorists of general relativity. Babson, however, was not entirely deterred. In 1962 after more than 100 prominent Atlantans were killed in a plane crash in Paris, he donated $5,000 to Emory University along with a marble monument “to remind students of the blessings forthcoming” once gravity is counteracted.

He paid for similar antigravity monuments at more than a dozen campuses, including one at Tufts University, where newly minted doctoral students in cosmology kneel before it in a ceremony in which an apple is dropped on their heads.

I thought of Babson recently during a poignant scene in the movie “Gravity,” in which two astronauts are floating high above Earth, stranded from home. During a moment of calm, one of them, Lt. Matt Kowalski (played by George Clooney), asks the other, Dr. Ryan Stone (Sandra Bullock), “What do you miss down there?”

She tells him about her daughter:

“She was 4. She was at school playing tag, slipped and hit her head, and that was it. The stupidest thing.” It was gravity that did her in.

Read the entire article here.

Image: Portrait of Isaac Newton (1642-1727) by Sir Godfrey Kneller (1646–1723). Courtesy of Wikipedia.

Send to Kindle

Iran, Women, Clothes

hajib_Jeune_femme

A fascinating essay by Haleh Anvari, Iranian writer and artist, provides an insightful view of the role that fashion plays in shaping many of our perceptions — some right, many wrong — of women.

Quite rightly, she argues that the measures our cultures place on women, whether through the lens of Western fashion or of Muslim tradition, are misleading. In both cases there remains a fundamental, ongoing need to address women’s rights relative to those of men. Fashion stereotypes may differ vastly across continents, but the underlying issues remain much the same whether a woman wears a hijab on the street or lingerie on a catwalk.

From the NYT:

I took a series of photographs of myself in 2007 that show me sitting on the toilet, weighing myself, and shaving my legs in the bath. I shot them as an angry response to an encounter with a gallery owner in London’s artsy Brick Lane. I had offered him photos of colorful chadors — an attempt to question the black chador as the icon of Iran by showing the world that Iranian women were more than this piece of black cloth. The gallery owner wasn’t impressed. “Do you have any photos of Iranian women in their private moments?” he asked.

As an Iranian with a reinforced sense of the private-public divide we navigate daily in our country, I found his curiosity offensive. So I shot my “Private Moments” in a sardonic spirit, to show that Iranian women are like all women around the world if you get past the visual hurdle of the hijab. But I never shared those, not just because I would never get a permit to show them publicly in Iran, but also because I am prepared to go only so far to prove a point. Call me old-fashioned.

Ever since the hijab, a generic term for every Islamic modesty covering, became mandatory after the 1979 revolution, Iranian women have been used to represent the country visually. For the new Islamic republic, the all-covering cloak called a chador became a badge of honor, a trademark of fundamental change. To Western visitors, it dropped a pin on their travel maps, where the bodies of Iranian women became a stand-in for the character of Iranian society. When I worked with foreign journalists for six years, I helped produce reports that were illustrated invariably with a woman in a black chador. I once asked a photojournalist why. He said, “How else can we show where we are?”

How wonderful. We had become Iran’s Eiffel Tower or Big Ben.

Next came the manteau-and-head scarf combo — less traditional, and more relaxed, but keeping the lens on the women. Serious reports about elections used a “hair poking out of scarf” standard as an exit poll, or images of scarf-clad women lounging in coffee shops, to register change. One London newspaper illustrated a report on the rise of gasoline prices with a woman in a head scarf, photographed in a gas station, holding a pump nozzle with gasoline suggestively dripping from its tip. A visitor from Mars or a senior editor from New York might have been forgiven for imagining Iran as a strange land devoid of men, where fundamentalist chador-clad harridans vie for space with heathen babes guzzling cappuccinos. (Incidentally, women hardly ever step out of the car to pump gas here; attendants do it for us.)

The disputed 2009 elections, followed by demonstrations and a violent backlash, brought a brief respite. The foreign press was ejected, leaving the reporting to citizen journalists not bound by the West’s conventions. They depicted a politically mature citizenry, male and female, demanding civic acknowledgment together.

We are now witnessing another shift in Iran’s image. It shows Iran “unveiled” — a tired euphemism now being used to literally undress Iranian women or show them off as clotheshorses. An Iranian fashion designer in Paris receives more plaudits in the Western media for his blog’s street snapshots of stylish, affluent young women in North Tehran than he gets for his own designs. In this very publication, a male Iranian photographer depicted Iranian women through flimsy fabrics under the title “Veiled Truths”; one is shown in a one-piece pink swimsuit so minimal it could pass for underwear; others are made more sensual behind sheer “veils,” reinforcing a sense of peeking at them. Search the Internet and you can get an eyeful of nubile limbs in opposition to the country’s official image, shot by Iranian photographers of both sexes, keen to show the hidden, supposedly true, other side of Iran.

Young Iranians rightly desire to show the world the unseen sides of their lives. But their need to show themselves as like their peers in the West takes them into dangerous territory. Professional photographers and artists, encouraged by Western curators and seeking fast-track careers, are creating a new wave of homegrown neo-Orientalism. A favorite reworking of an old cliché is the thin, beautiful young woman reclining while smoking a hookah, dancing, or otherwise at leisure in her private spaces. Ingres could sue for plagiarism.

In a country where the word feminism is pejorative, there is no inkling that the values of both fundamentalism and Western consumerism are two sides of the same coin — the female body as an icon defining Iranian culture.

It is true that we Iranians live dual lives, and so it is true that to see us in focus, you must enter our inner sanctum. But the inner sanctum includes women who believe in the hijab, fat women, old women and, most important, women in professions from doctor to shopkeeper. It also includes men, not all of whom are below 30 years of age. If you wish to see Iran as it is, you need go no further than Facebook and Instagram. Here, Iran is neither fully veiled nor longing to undress itself. Its complex variety is shown through the lens of its own people, in both private and public spaces.

Read the entire essay here.

Image: Young woman from Naplouse in a hijab, c1867-1885. Courtesy of Wikipedia.

Send to Kindle

Dinosaurs of Retail

moa

Shopping malls in the United States were in their prime in the 1970s and ’80s. Many positioned themselves as a bright, clean, utopian alternative to inner-city blight and decay. A quarter of a century on, while the mega-malls may be thriving, their numerous smaller suburban brethren are seeing declining sales. As internet shopping pervades all reaches of our society, many midsize malls are decaying or shutting down completely. Documentary photographer Seph Lawless captures this fascinating transition in a new book, Black Friday: The Collapse of the American Shopping Mall.

From the Guardian:

It is hard to believe there has ever been any life in this place. Shattered glass crunches under Seph Lawless’s feet as he strides through its dreary corridors. Overhead lights attached to ripped-out electrical wires hang suspended in the stale air and fading wallpaper peels off the walls like dead skin.

Lawless sidesteps debris as he passes from plot to plot in this retail graveyard called Rolling Acres Mall in Akron, Ohio. The shopping centre closed in 2008, and its largest retailers, which had tried to make it as standalone stores, emptied out by the end of last year. When Lawless stops to overlook a two-storey opening near the mall’s once-bustling core, only an occasional drop of water, dribbling through missing ceiling tiles, breaks the silence.

“You came, you shopped, you dressed nice – you went to the mall. That’s what people did,” says Lawless, a pseudonymous photographer who grew up in a suburb of nearby Cleveland. “It was very consumer-driven and kind of had an ugly side, but there was something beautiful about it. There was something there.”

Gazing down at the motionless escalators, dead plants and empty benches below, he adds: “It’s still beautiful, though. It’s almost like ancient ruins.”

Dying shopping malls are speckled across the United States, often in middle-class suburbs wrestling with socioeconomic shifts. Some, like Rolling Acres, have already succumbed. Estimates on the share that might close or be repurposed in coming decades range from 15 to 50%. Americans are returning downtown; online shopping is taking a 6% bite out of brick-and-mortar sales; and to many iPhone-clutching, city-dwelling and frequently jobless young people, the culture that spawned satire like Mallrats seems increasingly dated, even cartoonish.

According to longtime retail consultant Howard Davidowitz, numerous midmarket malls, many of them born during the country’s suburban explosion after the second world war, could very well share Rolling Acres’ fate. “They’re going, going, gone,” Davidowitz says. “They’re trying to change; they’re trying to get different kinds of anchors, discount stores … [But] what’s going on is the customers don’t have the fucking money. That’s it. This isn’t rocket science.”

Shopping culture follows housing culture. Sprawling malls were therefore a natural product of the postwar era, as Americans with cars and fat wallets sprawled to the suburbs. They were thrown up at a furious pace as shoppers fled cities, peaking at a few hundred per year at one point in the 1980s, according to Paco Underhill, an environmental psychologist and author of Call of the Mall: The Geography of Shopping. Though construction has since tapered off, developers left a mall overstock in their wake.

Currently, the US contains around 1,500 of the expansive “malls” of suburban consumer lore. Most share a handful of bland features. Brick exoskeletons usually contain two storeys of inward-facing stores separated by tile walkways. Food courts serve mediocre pizza. Parking lots are big enough to easily misplace a car. And to anchor them economically, malls typically depend on department stores: huge vendors offering a variety of products across interconnected sections.

For mid-century Americans, these gleaming marketplaces provided an almost utopian alternative to the urban commercial district, an artificial downtown with less crime and fewer vermin. As Joan Didion wrote in 1979, malls became “cities in which no one lives but everyone consumes”. Peppered throughout disconnected suburbs, they were a place to see and be seen, something shoppers have craved since the days of the Greek agora. And they quickly matured into a self-contained ecosystem, with their own species – mall rats, mall cops, mall walkers – and an annual feeding frenzy known as Black Friday.

“Local governments had never dealt with this sort of development and were basically bamboozled [by developers],” Underhill says of the mall planning process. “In contrast to Europe, where shopping malls are much more a product of public-private negotiation and funding, here in the US most were built under what I call ‘cowboy conditions’.”

Shopping centres in Europe might contain grocery stores or childcare centres, while those in Japan are often built around mass transit. But the suburban American variety is hard to get to and sells “apparel and gifts and damn little else”, Underhill says.

Nearly 700 shopping centres are “super-regional” megamalls, retail leviathans usually of at least 1 million square feet and upward of 80 stores. Megamalls typically outperform their 800 slightly smaller, “regional” counterparts, though size and financial health don’t overlap entirely. It’s clearer, however, that luxury malls in affluent areas are increasingly forcing the others to fight for scraps. Strip malls – up to a few dozen tenants conveniently lined along a major traffic artery – are retail’s bottom feeders and so well-suited to the new environment. But midmarket shopping centres have begun dying off alongside the middle class that once supported them. Regional malls have suffered at least three straight years of declining profit per square foot, according to the International Council of Shopping Centres (ICSC).

Read the entire story here.

Image: Mall of America. Courtesy of Wikipedia.

Send to Kindle

Your Tax Dollars At Work — Leetspeak

US-FBI-ShadedSeal

It’s fascinating to see what our government agencies are doing with some of our hard-earned tax dollars.

In this head-scratching example, the FBI — its Intelligence Research Support Unit, no less — has just completed an 83-page glossary of Internet slang, or “leetspeak”. LOL and Ugh! (the latter is not an acronym).

Check out the document via MuckRock here — they obtained the “secret” document through the Freedom of Information Act.

From the Washington Post:

The Internet is full of strange and bewildering neologisms, which anyone but a text-addled teen would struggle to understand. So the fine, taxpayer-funded people of the FBI — apparently not content to trawl Urban Dictionary, like the rest of us — compiled a glossary of Internet slang.

An 83-page glossary. Containing nearly 3,000 terms.

The glossary was recently made public through a Freedom of Information request by the group MuckRock, which posted the PDF, called “Twitter shorthand,” online. Despite its name, this isn’t just Twitter slang: As the FBI’s Intelligence Research Support Unit explains in the introduction, it’s a primer on shorthand used across the Internet, including in “instant messages, Facebook and Myspace.” As if that Myspace reference wasn’t proof enough that the FBI’s a tad out of touch, the IRSU then promises the list will prove useful both professionally and “for keeping up with your children and/or grandchildren.” (Your tax dollars at work!)

All of these minor gaffes could be forgiven, however, if the glossary itself was actually good. Obviously, FBI operatives and researchers need to understand Internet slang — the Internet is, increasingly, where crime goes down these days. But then we get things like ALOTBSOL (“always look on the bright side of life”) and AMOG (“alpha male of group”) … within the first 10 entries.

ALOTBSOL has, for the record, been tweeted fewer than 500 times in the entire eight-year history of Twitter. AMOG has been tweeted far more often, but usually in Spanish … as a misspelling, it would appear, of “amor” and “amigo.”

Among the other head-scratching terms the FBI considers can’t-miss Internet slang:

  1. AYFKMWTS (“are you f—— kidding me with this s—?”) — 990 tweets
  2. BFFLTDDUP (“best friends for life until death do us part”) — 414 tweets
  3. BOGSAT (“bunch of guys sitting around talking”) — 144 tweets
  4. BTDTGTTSAWIO (“been there, done that, got the T-shirt and wore it out”) — 47 tweets
  5. BTWITIAILWY (“by the way, I think I am in love with you”) — 535 tweets
  6. DILLIGAD (“does it look like I give a damn?”) — 289 tweets
  7. DITYID (“did I tell you I’m depressed?”) — 69 tweets
  8. E2EG (“ear-to-ear grin”) — 125 tweets
  9. GIWIST (“gee, I wish I said that”) — 56 tweets
  10. HCDAJFU (“he could do a job for us”) — 25 tweets
  11. IAWTCSM (“I agree with this comment so much”) — 20 tweets
  12. IITYWIMWYBMAD (“if I tell you what it means will you buy me a drink?”) — 250 tweets
  13. LLTA (“lots and lots of thunderous applause”) — 855 tweets
  14. NIFOC (“naked in front of computer”) — 1,065 tweets, most of them referring to acronym guides like this one.
  15. PMYMHMMFSWGAD (“pardon me, you must have mistaken me for someone who gives a damn”) — 128 tweets
  16. SOMSW (“someone over my shoulder watching”) — 170 tweets
  17. WAPCE (“women are pure concentrated evil”) — 233 tweets, few relating to women
  18. YKWRGMG (“you know what really grinds my gears?”) — 1,204 tweets

In all fairness to the FBI, they do get some things right: “crunk” is helpfully defined as “crazy and drunk,” FF is “a recommendation to follow someone referenced in the tweet,” and a whole range of online patois is translated to its proper English equivalent: hafta is “have to,” ima is “I’m going to,” kewt is “cute.”

Read the entire article here.

Image: FBI Seal. Courtesy of U.S. Government.

Send to Kindle

Goostman Versus Turing

eugene-goostman

Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing, by passing the eponymous Turing test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange, Eugene managed to convince enough of his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they point out that it’s far easier to construct an artificial intelligence that converses in good but limited English — Eugene is a Ukrainian teenager, after all — than one that can pass for a native anglophone adult. So the Turing test barrier may yet stand.

From the Guardian:

From 2001: a Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and second world war codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree :-) Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question :-(

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.

Send to Kindle

Nuts About Nutella

nutella

Connoisseurs of Nutella — that wonderful concoction of hazelnuts and chocolate — are celebrating 50 years of the iconic Italian spread. Here’s to the next 50 bites, sorry years! Say no more.

From the Guardian:

In Piedmont they have been making gianduiotto, a confectionery combining hazelnuts and cocoa sold in a pretty tinfoil wrapper, since the mid-18th century. They realised long ago that the nuts, which are plentiful in the surrounding hills, are a perfect match for chocolate. But no one had any idea that their union would prove so harmonious, lasting and fruitful. Only after the second world war was this historic marriage finally sealed.

Cocoa beans are harder to come by and, consequently, more expensive. Pietro Ferrero, an Alba-based pastry cook, decided to turn the problem upside down. Chocolate should not be allowed to dictate its terms. By using more nuts and less cocoa, one could obtain a product that was just as good and not as costly. What is more, it would be spread.

Nutella, one of the world’s best-known brands, celebrated its 50th anniversary in Alba last month. In telling the story of this chocolate spread, it’s difficult to avoid cliches: a success story emblematic of Italy’s postwar recovery, the tale of a visionary entrepreneur and his perseverance, a business model driven by a single product.

The early years were spectacular. In 1946 the Ferrero brothers produced and sold 300kg of their speciality; nine months later output had reached 10 tonnes. Pietro stayed at home making the spread. Giovanni went to market across Italy in his little Fiat. In 1948 Ferrero, now a limited company, moved into a 5,000 sq metre factory equipped to produce 50 tonnes of gianduiotto a month.

By 1949 the process was nearing perfection, with the launch of the “supercrema” version, which was smoother and stuck more to the bread than the knife. It was also the year Pietro died. He did not live long enough to savour his triumph.

His son Michele was driven by the same obsession with greater spreadability. Under his leadership Ferrero became an empire. But it would take another 15 years of hard work and endless experiments before finally, in 1964, Nutella was born.

The firm now sells 365,000 tonnes of Nutella a year worldwide, the biggest consumers being the Germans, French, Italians and Americans. The anniversary was, of course, the occasion for a big promotional operation. At a gathering in Rome last month, attended by two government ministers, journalists received a 1kg jar marked with the date and a commemorative Italian postage stamp. It is an ideal opportunity for Ferrero – which also owns the Tic Tac, Ferrero Rocher, Kinder and Estathé brands, among others – to affirm its values and rehearse its well-established narrative.

There are no recent pictures of the patriarch Michele, who divides his time between Belgium and Monaco. According to Forbes magazine he was worth $9.5bn in 2009, making him the richest person in Italy. He avoids the media and making public appearances, even eschewing the boards of leading Italian firms.

His son Giovanni, who has managed the company on his own after the early death of his brother Pietro in 2011, only agreed to a short interview on Italy’s main public TV channel. He abides by the same rule as his father: “Only on two occasions should the papers mention one’s name – birth and death.”

In contrast, Ferrero executives have plenty to say about both products and the company, with its 30,000-strong workforce at 14 locations, its €8bn ($10bn) revenue, 72% share of the chocolate-spreads market, 5 million friends on Facebook, 40m Google references, its hazelnut plantations in both hemispheres securing it a round-the-year supply of fresh ingredients and, of course, its knowhow.

“The recipe for Nutella is not a secret like Coca-Cola,” says marketing manager Laurent Cremona. “Everyone can find out the ingredients. We simply know how to combine them better than other people.”

Be that as it may, the factory in Alba is as closely guarded as Fort Knox and visits are not allowed. “It’s not a company, it’s an oasis of happiness,” says Francesco Paolo Fulci, a former ambassador and president of the Ferrero foundation. “In 70 years, we haven’t had a single day of industrial action.”

Read the entire article here.

Image: Never enough Nutella. Courtesy of secret Nutella fans the world over / Ferrero, S.P.A

Send to Kindle

theDiagonal is Dislocating to The Diagonal

Flatirons_Winter_Sunrise

Dear readers, theDiagonal is in the midst of a major dislocation in May-June 2014. Thus, your friendly editor would like to apologize for the recent intermittent service. While theDiagonal lives online, its human-powered (currently) editor is physically relocating with family to Boulder, CO. Normal daily service from theDiagonal will resume in July.

The city of Boulder intersects Colorado State Highway 119, as it sweeps on a SW to NE track from the Front Range towards the Central Plains. Coincidentally, or not, highway 119 is more affectionately known as The Diagonal.

Image: The Flatirons, mountain formations, in Boulder, Colorado. Courtesy of Jesse Varner / AzaToth / Wikipedia.

Send to Kindle

Images: Go Directly To Jail or…

open-door

If you live online and write or share images, it’s likely that you’ve been, or will soon be, pursued by the predatory Getty Images. Your kindly editor at theDiagonal uses images found to be in the public domain, or references them as fair use in this blog, and yet has fallen prey to this extortionate nuisance of a company.

Getty, with its army of fee collectors — many of them neither legally trained nor accredited — will find reason to send you numerous legalistic and threatening letters demanding hundreds of dollars in compensation and damages. It will do this without sound proof, relying on threats to cajole unwary citizens into parting with significant sums. This is such a big market for Getty that numerous services, such as this one, have sprung up over the years to help writers and bloggers combat the Getty extortion.

With that in mind, it’s refreshing to see the Metropolitan Museum of Art in New York taking a rather different stance: the venerable institution is doing us all a wonderful service by making many hundreds of thousands of classic images available online for free. Getty, take that!

From WSJ:

This month, the Metropolitan Museum of Art released for download about 400,000 digital images of works that are in the public domain. The images, which are free for non-commercial use without permission or fees, may now be downloaded from the museum’s website. The museum will continue to add images to the collection as it digitizes files as part of the Open Access for Scholarly Content (OASC) initiative.

When asked about the impact of the initiative, Sree Sreenivasan, Chief Digital Officer, said the new program would provide increased access and streamline the process of obtaining these images. “In keeping with the Museum’s mission, we hope the new image policy will stimulate new scholarship in a variety of media, provide greater access to our vast collection, and broaden the reach of the Museum to researchers world-wide. By providing open access, museums and scholars will no longer have to request permission to use our public domain images, they can download the images directly from our website.”

Thomas P. Campbell, director and chief executive of the Metropolitan Museum of Art, said the Met joins a growing number of museums using an open-access policy to make available digital images of public domain works. “I am delighted that digital technology can open the doors to this trove of images from our encyclopedic collection,” Mr. Campbell said in his May 16 announcement. Other New York institutions that have initiated similar programs include the New York Public Library (map collection),  the Brooklyn Academy of Music and the New York Philharmonic. 

See more images here.

Image: “The Open Door,” earlier than May 1844. Courtesy of William Henry Fox Talbot/The Metropolitan Museum of Art, New York.

Send to Kindle

I Think, Therefore I am, Not Robot

Robbie_the_Robot_2006

A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.
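
A minimal sketch (mine, not Maguire’s) makes the lost information concrete: two bits go in, one bit comes out, and the output alone can never tell you which pair of inputs produced it.

    # Illustrative only: a plain XOR gate in Python.
    def xor(a, b):
        # Output is 0 when the inputs agree, 1 when they differ.
        return 0 if a == b else 1

    for a in (0, 1):
        for b in (0, 1):
            print(a, b, "->", xor(a, b))
    # 0 0 -> 0
    # 0 1 -> 1
    # 1 0 -> 1
    # 1 1 -> 0
    # Each output value corresponds to two distinct input pairs, so the mapping
    # cannot be inverted: one bit of information is irretrievably lost.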

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.

Send to Kindle

c2=e/m

Feynmann_Diagram_Gluon_Radiation

Particle physicists will soon attempt to reverse the direction of Einstein’s famous equation delineating energy-matter equivalence, e=mc2. Next year, they plan to crash quanta of light into each other to create matter. Cool or what!

From the Guardian:

Researchers have worked out how to make matter from pure light and are drawing up plans to demonstrate the feat within the next 12 months.

The theory underpinning the idea was first described 80 years ago by two physicists who later worked on the first atomic bomb. At the time they considered the conversion of light into matter impossible in a laboratory.

But in a report published on Sunday, physicists at Imperial College London claim to have cracked the problem using high-powered lasers and other equipment now available to scientists.

“We have shown in principle how you can make matter from light,” said Steven Rose at Imperial. “If you do this experiment, you will be taking light and turning it into matter.”

The scientists are not on the verge of a machine that can create everyday objects from a sudden blast of laser energy. The kind of matter they aim to make comes in the form of subatomic particles invisible to the naked eye.

The original idea was written down by two US physicists, Gregory Breit and John Wheeler, in 1934. They worked out that – very rarely – two particles of light, or photons, could combine to produce an electron and its antimatter equivalent, a positron. Electrons are particles of matter that form the outer shells of atoms in the everyday objects around us.

But Breit and Wheeler had no expectations that their theory would be proved any time soon. In their study, the physicists noted that the process was so rare and hard to produce that it would be “hopeless to try to observe the pair formation in laboratory experiments”.

Oliver Pike, the lead researcher on the study, said the process was one of the most elegant demonstrations of Einstein’s famous relationship that shows matter and energy are interchangeable currencies. “The Breit-Wheeler process is the simplest way matter can be made from light and one of the purest demonstrations of E=mc2,” he said.
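The arithmetic behind that claim is straightforward relativistic kinematics. The back-of-the-envelope sketch below is an illustration rather than anything from the Nature Photonics paper: it assumes a head-on collision, in which pair production becomes possible only once the product of the two photon energies reaches the square of the electron's rest energy (about 0.511 MeV); the 1 keV figure for the hohlraum's thermal X-rays is just a representative value.

```python
# Back-of-the-envelope Breit-Wheeler threshold for two photons colliding head-on:
# pair production requires E1 * E2 >= (m_e * c^2)^2, i.e. the collision must
# supply at least the rest energy of an electron-positron pair.

ELECTRON_REST_ENERGY_MEV = 0.511  # m_e * c^2 in MeV

def min_partner_energy_mev(photon_energy_mev: float) -> float:
    """Minimum energy of the second photon, assuming a head-on collision."""
    return ELECTRON_REST_ENERGY_MEV ** 2 / photon_energy_mev

# Illustrative figure for a thermal X-ray photon inside the hohlraum (~1 keV).
thermal_xray_mev = 0.001
print(f"Gamma-ray energy needed against a 1 keV photon: "
      f"{min_partner_energy_mev(thermal_xray_mev):.0f} MeV")
```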

Writing in the journal Nature Photonics, the scientists describe how they could turn light into matter through a number of separate steps. The first step fires electrons at a slab of gold to produce a beam of high-energy photons. Next, they fire a high-energy laser into a tiny gold capsule called a hohlraum, from the German for “empty room”. This produces light as bright as that emitted from stars. In the final stage, they send the first beam of photons into the hohlraum where the two streams of photons collide.

The scientists’ calculations show that the setup squeezes enough particles of light with high enough energies into a small enough volume to create around 100,000 electron-positron pairs.

The process is one of the most spectacular predictions of a theory called quantum electrodynamics (QED) that was developed in the run-up to the second world war. “You might call it the most dramatic consequence of QED and it clearly shows that light and matter are interchangeable,” Rose told the Guardian.

The scientists hope to demonstrate the process in the next 12 months. There are a number of sites around the world that have the technology. One is the huge Omega laser in Rochester, New York. But another is the Orion laser at Aldermaston, the atomic weapons facility in Berkshire.

A successful demonstration will encourage physicists who have been eyeing the prospect of a photon-photon collider as a tool to study how subatomic particles behave. “Such a collider could be used to study fundamental physics with a very clean experimental setup: pure light goes in, matter comes out. The experiment would be the first demonstration of this,” Pike said.

Read the entire story here.

Image: Feynman diagram for gluon radiation. Courtesy of Wikipedia.

95.5 Percent is Made Up and It’s Dark


Physicists and astronomers observe the very small and the very big. Although they are focused on very different areas of scientific endeavor and discovery, they tend to agree on one key observation: 95.5 percent of the cosmos is currently invisible to us. That is, only around 4.5 percent of our physical universe is made up of matter or energy that we can see or sense directly through experimental interaction. The rest, well, it’s all dark — so-called dark matter and dark energy. But nobody really knows what or how or why. Effectively, despite tremendous progress in our understanding of our world, we are still in a global “Dark Age”.

From the New Scientist:

To our eyes, stars define the universe. To cosmologists they are just a dusting of glitter, an insignificant decoration on the true face of space. Far outweighing ordinary stars and gas are two elusive entities: dark matter and dark energy. We don’t know what they are… except that they appear to be almost everything.

These twin apparitions might be enough to give us pause, and make us wonder whether all is right with the model universe we have spent the past century so carefully constructing. And they are not the only thing. Our standard cosmology also says that space was stretched into shape just a split second after the big bang by a third dark and unknown entity called the inflaton field. That might imply the existence of a multiverse of countless other universes hidden from our view, most of them unimaginably alien – just to make models of our own universe work.

Are these weighty phantoms too great a burden for our observations to bear – a wholesale return of conjecture out of a trifling investment of fact, as Mark Twain put it?

The physical foundation of our standard cosmology is Einstein’s general theory of relativity. Einstein began with a simple observation: that any object’s gravitational mass is exactly equal to its resistance to acceleration, or inertial mass. From that he deduced equations that showed how space is warped by mass and motion, and how we see that bending as gravity. Apples fall to Earth because Earth’s mass bends space-time.

In a relatively low-gravity environment such as Earth, general relativity’s effects look very like those predicted by Newton’s earlier theory, which treats gravity as a force that travels instantaneously between objects. With stronger gravitational fields, however, the predictions diverge considerably. One extra prediction of general relativity is that large accelerating masses send out tiny ripples in the weave of space-time called gravitational waves. While these waves have never yet been observed directly, a pair of dense stars called pulsars, discovered in 1974, are spiralling in towards each other just as they should if they are losing energy by emitting gravitational waves.

Gravity is the dominant force of nature on cosmic scales, so general relativity is our best tool for modelling how the universe as a whole moves and behaves. But its equations are fiendishly complicated, with a frightening array of levers to pull. If you then give them a complex input, such as the details of the real universe’s messy distribution of mass and energy, they become effectively impossible to solve. To make a working cosmological model, we make simplifying assumptions.

The main assumption, called the Copernican principle, is that we are not in a special place. The cosmos should look pretty much the same everywhere – as indeed it seems to, with stuff distributed pretty evenly when we look at large enough scales. This means there’s just one number to put into Einstein’s equations: the universal density of matter.

Einstein’s own first pared-down model universe, which he filled with an inert dust of uniform density, turned up a cosmos that contracted under its own gravity. He saw that as a problem, and circumvented it by adding a new term into the equations by which empty space itself gains a constant energy density. Its gravity turns out to be repulsive, so adding the right amount of this “cosmological constant” ensured the universe neither expanded nor contracted. When observations in the 1920s showed it was actually expanding, Einstein described this move as his greatest blunder.

It was left to others to apply the equations of relativity to an expanding universe. They arrived at a model cosmos that grows from an initial point of unimaginable density, and whose expansion is gradually slowed down by matter’s gravity.

This was the birth of big bang cosmology. Back then, the main question was whether the expansion would ever come to a halt. The answer seemed to be no; there was just too little matter for gravity to rein in the fleeing galaxies. The universe would coast outwards forever.

Then the cosmic spectres began to materialise. The first emissary of darkness put a foot in the door as long ago as the 1930s, but was only fully seen in the late 1970s when astronomers found that galaxies are spinning too fast. The gravity of the visible matter would be too weak to hold these galaxies together according to general relativity, or indeed plain old Newtonian physics. Astronomers concluded that there must be a lot of invisible matter to provide extra gravitational glue.

The existence of dark matter is backed up by other lines of evidence, such as how groups of galaxies move, and the way they bend light on its way to us. It is also needed to pull things together to begin galaxy-building in the first place. Overall, there seems to be about five times as much dark matter as visible gas and stars.

Dark matter’s identity is unknown. It seems to be something beyond the standard model of particle physics, and despite our best efforts we have yet to see or create a dark matter particle on Earth (see “Trouble with physics: Smashing into a dead end”). But it changed cosmology’s standard model only slightly: its gravitational effect in general relativity is identical to that of ordinary matter, and even such an abundance of gravitating stuff is too little to halt the universe’s expansion.

The second form of darkness required a more profound change. In the 1990s, astronomers traced the expansion of the universe more precisely than ever before, using measurements of explosions called type Ia supernovae. They showed that the cosmic expansion is accelerating. It seems some repulsive force, acting throughout the universe, is now comprehensively trouncing matter’s attractive gravity.

This could be Einstein’s cosmological constant resurrected, an energy in the vacuum that generates a repulsive force, although particle physics struggles to explain why space should have the rather small implied energy density. So imaginative theorists have devised other ideas, including energy fields created by as-yet-unseen particles, and forces from beyond the visible universe or emanating from other dimensions.

Whatever it might be, dark energy seems real enough. The cosmic microwave background radiation, released when the first atoms formed just 370,000 years after the big bang, bears a faint pattern of hotter and cooler spots that reveals where the young cosmos was a little more or less dense. The typical spot sizes can be used to work out to what extent space as a whole is warped by the matter and motions within it. It appears to be almost exactly flat, meaning all these bending influences must cancel out. This, again, requires some extra, repulsive energy to balance the bending due to expansion and the gravity of matter. A similar story is told by the pattern of galaxies in space.

All of this leaves us with a precise recipe for the universe. The average density of ordinary matter in space is 0.426 yoctograms per cubic metre (a yoctogram is 10⁻²⁴ grams, so 0.426 of one is about a quarter of a proton’s mass), making up 4.5 per cent of the total energy density of the universe. Dark matter makes up 22.5 per cent, and dark energy 73 per cent. Our model of a big-bang universe based on general relativity fits our observations very nicely – as long as we are happy to make 95.5 per cent of it up.
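Those numbers are easy to sanity-check. The quick sketch below (mine, not the article's) confirms the budget sums to 100 per cent, that dark matter outweighs ordinary matter roughly five to one, and that the quoted baryon density amounts to about a quarter of a proton mass in every cubic metre.

```python
# Sanity-checking the cosmic recipe quoted above.
PROTON_MASS_YOCTOGRAMS = 1.6726  # proton mass is ~1.6726e-24 g, i.e. 1.6726 yoctograms

ordinary_matter = 4.5   # per cent of the total energy density
dark_matter = 22.5
dark_energy = 73.0

print(f"Budget sums to {ordinary_matter + dark_matter + dark_energy:.1f} per cent")
print(f"Dark matter outweighs ordinary matter {dark_matter / ordinary_matter:.0f} to 1")

# 0.426 yoctograms of ordinary matter per cubic metre, expressed in proton masses:
print(f"{0.426 / PROTON_MASS_YOCTOGRAMS:.2f} proton masses per cubic metre")
```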

Arguably, we must invent even more than that. To explain why the universe looks so extraordinarily uniform in all directions, today’s consensus cosmology contains a third exotic element. When the universe was just 10⁻³⁶ seconds old, an overwhelming force took over. Called the inflaton field, it was repulsive like dark energy, but far more powerful, causing the universe to expand explosively by a factor of more than 10²⁵, flattening space and smoothing out any gross irregularities.

When this period of inflation ended, the inflaton field transformed into matter and radiation. Quantum fluctuations in the field became slight variations in density, which eventually became the spots in the cosmic microwave background, and today’s galaxies. Again, this fantastic story seems to fit the observational facts. And again it comes with conceptual baggage. Inflation is no trouble for general relativity – mathematically it just requires an add-on term identical to the cosmological constant. But at one time this inflaton field must have made up 100 per cent of the contents of the universe, and its origin poses as much of a puzzle as either dark matter or dark energy. What’s more, once inflation has started it proves tricky to stop: it goes on to create a further legion of universes divorced from our own. For some cosmologists, the apparent prediction of this multiverse is an urgent reason to revisit the underlying assumptions of our standard cosmology (see “Trouble with physics: Time to rethink cosmic inflation?”).

The model faces a few observational niggles, too. The big bang makes much more lithium-7 in theory than the universe contains in practice. The model does not explain the possible alignment in some features in the cosmic background radiation, or why galaxies along certain lines of sight seem biased to spin left-handedly. A newly discovered supergalactic structure 4 billion light years long calls into question the assumption that the universe is smooth on large scales.

Read the entire story here.

Image: Petrarch, who first conceived the idea of a European “Dark Age”, by Andrea di Bartolo di Bargilla, c1450. Courtesy of Galleria degli Uffizi, Florence, Italy / Wikipedia.


Building a Memory Palace

Feats of memory have long been a staple of human endeavor — for instance, memorizing and recalling pi to hundreds of decimal places. Nowadays, however, memorization is a competitive sport replete with grand prizes, worthy of a place in an X-Games tournament.

From the NYT:

The last match of the tournament had all the elements of a classic showdown, pitting style versus stealth, quickness versus deliberation, and the world’s foremost card virtuoso against its premier numbers wizard.

If not quite Ali-Frazier or Williams-Sharapova, the duel was all the audience of about 100 could ask for. They had come to the first Extreme Memory Tournament, or XMT, to see a fast-paced, digitally enhanced memory contest, and that’s what they got.

The contest, an unusual collaboration between industry and academic scientists, featured one-minute matches between 16 world-class “memory athletes” from all over the world as they met in a World Cup-like elimination format. The grand prize was $20,000; the potential scientific payoff was large, too.

One of the tournament’s sponsors, the company Dart NeuroScience, is working to develop drugs for improved cognition. The other, Washington University in St. Louis, sent a research team with a battery of cognitive tests to determine what, if anything, sets memory athletes apart. Previous research was sparse and inconclusive.

Yet as the two finalists, both Germans, prepared to face off — Simon Reinhard, 35, a lawyer who holds the world record in card memorization (a deck in 21.19 seconds), and Johannes Mallow, 32, a teacher with the record for memorizing digits (501 in five minutes) — the Washington group had one preliminary finding that wasn’t obvious.

“We found that one of the biggest differences between memory athletes and the rest of us,” said Henry L. Roediger III, the psychologist who led the research team, “is in a cognitive ability that’s not a direct measure of memory at all but of attention.”

The Memory Palace

The technique the competitors use is no mystery.

People have been performing feats of memory for ages, scrolling out pi to hundreds of digits, or phenomenally long verses, or word pairs. Most store the studied material in a so-called memory palace, associating the numbers, words or cards with specific images they have already memorized; then they mentally place the associated pairs in a familiar location, like the rooms of a childhood home or the stops on a subway line.
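Stripped of the imagery, the method is just an associative mapping onto an ordered, well-rehearsed route. The toy sketch below uses invented loci and items purely for illustration:

```python
# A toy model of a memory palace: bind each item to the next stop on a
# familiar route, then recall by "walking" the route in order.
loci = ["front door", "hallway mirror", "kitchen table", "back garden"]   # invented route
items = ["eight of diamonds", "queen of spades", "ace of clubs", "two of hearts"]

palace = dict(zip(loci, items))   # memorisation: one vivid image per location

for place in loci:                # recall: revisit the locations in order
    print(f"At the {place}, I picture the {palace[place]}")
```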

The Greek poet Simonides of Ceos is credited with first describing the method, in the fifth century B.C., and it has been vividly described in popular books, most recently “Moonwalking With Einstein,” by Joshua Foer.

Each competitor has his or her own variation. “When I see the eight of diamonds and the queen of spades, I picture a toilet, and my friend Guy Plowman,” said Ben Pridmore, 37, an accountant in Derby, England, and a former champion. “Then I put those pictures on High Street in Cambridge, which is a street I know very well.”

As these images accumulate during memorization, they tell an increasingly bizarre but memorable story. “I often use movie scenes as locations,” said James Paterson, 32, a high school psychology teacher in Ascot, near London, who competes in world events. “In the movie ‘Gladiator,’ which I use, there’s a scene where Russell Crowe is in a field, passing soldiers, inspecting weapons.”

Mr. Paterson uses superheroes to represent combinations of letters or numbers: “I might have Batman — one of my images — playing Russell Crowe, and something else playing the horse, and so on.”

The material that competitors attempt to memorize falls into several standard categories. Shuffled decks of cards. Random words. Names matched with faces. And numbers, either binary (ones and zeros) or integers. They are given a set amount of time to study — up to one minute in this tournament, an hour or more in others — before trying to reproduce as many cards, words or digits as possible in the order presented.

Now and then, a challenger boasts online of having discovered an entirely new method, and shows up at competitions to demonstrate it.

“Those people are easy to find, because they come in last, or close to it,” said another world-class competitor, Boris Konrad, 29, a German postdoctoral student in neuroscience. “Everyone here uses this same type of technique.”

Anyone can learn to construct a memory palace, researchers say, and with practice remember far more detail of a particular subject than before. The technique is accessible enough that preteens pick it up quickly, and Mr. Paterson has integrated it into his teaching.

“I’ve got one boy, for instance, he has no interest in academics really, but he knows the Premier League, every team, every player,” he said. “I’m working with him, and he’s using that knowledge as scaffolding to help remember what he’s learning in class.”

Experts in Forgetting

The competitors gathered here for the XMT are not just anyone, however. This is the all-world team, an elite club of laser-smart types who take a nerdy interest in stockpiling facts and pushing themselves hard.

In his doctoral study of 30 world-class performers (most from Germany, which has by far the highest concentration because there are more competitions), Mr. Konrad has found as much. The average I.Q.: 130. Average study time: 1,000 to 2,000 hours and counting. The top competitors all use some variation of the memory-palace system and test, retest and tweak it.

“I started with my own system, but now I use his,” said Annalena Fischer, 20, pointing to her boyfriend, Christian Schäfer, 22, whom she met at a 2010 memory competition in Germany. “Except I don’t use the distance runners he uses; I don’t know anything about the distance runners.” Both are advanced science students and participants in Mr. Konrad’s study.

One of the Washington University findings is predictable, if still preliminary: Memory athletes score very highly on tests of working memory, the mental sketchpad that serves as a shopping list of information we can hold in mind despite distractions.

One way to measure working memory is to have subjects solve a list of equations (5 + 4 = x; 8 + 9 = y; 7 + 2 = z; and so on) while keeping the middle numbers in mind (4, 9 and 2 in the above example). Elite memory athletes can usually store seven items, the top score on the test the researchers used; the average for college students is around two.
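A rough sketch of how such an operation-span trial might be scored appears below; the materials and the strict serial-position scoring rule are assumptions for illustration, not the researchers' actual protocol.

```python
# One operation-span trial: verify each equation while holding the second
# operand of each in mind, then report those operands in order.
trial = [(5, 4), (8, 9), (7, 2)]        # the equations 5+4, 8+9, 7+2
to_remember = [b for _, b in trial]     # the "middle numbers": 4, 9, 2

def score_recall(reported: list) -> int:
    """Count items recalled in the correct serial position."""
    return sum(1 for got, want in zip(reported, to_remember) if got == want)

print(score_recall([4, 9, 2]))  # perfect recall -> 3
print(score_recall([4, 2, 9]))  # right items, wrong order -> 1
```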

“And college students tend to be good at this task,” said Dr. Roediger, a co-author of the new book “Make It Stick: The Science of Successful Learning.” “What I’d like to do is extend the scoring up to, say, 21, just to see how far the memory athletes can go.”

Yet this finding raises another question: Why don’t the competitors’ memory palaces ever fill up? Players usually have many favored locations to store studied facts, but they practice and compete repeatedly. They use and reuse the same blueprints hundreds of times, and the new images seem to overwrite the old ones — virtually without error.

“Once you’ve remembered the words or cards or whatever it is, and reported them, they’re just gone,” Mr. Paterson said.

Many competitors say the same: Once any given competition is over, the numbers or words or facts are gone. But this is one area in which they have less than precise insight.

In its testing, which began last year, the Washington University team has given memory athletes surprise tests on “old” material — lists of words they’d been tested on the day before. On Day 2, they recalled an average of about three-quarters of the words they memorized on Day 1 (college students remembered fewer than 5 percent). That is, despite what competitors say, the material is not gone; far from it.

Yet to install a fresh image-laden “story” in any given memory palace, a memory athlete must clear away the old one in its entirety. The same process occurs when we change a password: The old one must be suppressed, so it doesn’t interfere with the new one.

One term for that skill is “attentional control,” and psychologists have been measuring it for years with standardized tests. In the best known, the Stroop test, people see words flash by on a computer screen and name the color in which a word is presented. Answering is nearly instantaneous when the color and the word match — “red” displayed in red — but slower when there’s a mismatch, like “red” displayed in blue.
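A bare-bones console version of such a trial might look like the sketch below. It is illustrative only: ANSI escape codes stand in for the proper colour display, and it is nothing like the standardised instrument psychologists administer.

```python
# A single Stroop-style trial: show a colour word printed in a random ink
# colour (via ANSI escape codes) and time how long naming the ink takes.
import random
import time

ANSI = {"red": "31", "green": "32", "blue": "34"}

def run_trial() -> None:
    word = random.choice(list(ANSI))
    ink = random.choice(list(ANSI))
    print(f"\033[{ANSI[ink]}m{word.upper()}\033[0m")   # the word, displayed in the ink colour
    start = time.perf_counter()
    answer = input("Name the ink colour: ").strip().lower()
    elapsed = time.perf_counter() - start
    kind = "congruent" if word == ink else "incongruent"
    verdict = "correct" if answer == ink else "wrong"
    print(f"{kind} trial, {verdict}, {elapsed:.2f}s")

if __name__ == "__main__":
    run_trial()
```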

Read the entire article here.


Life and Death: Sharing Startups

The great cycle of re-invention spawned by the Internet and mobile technologies continues apace. This time it’s the entrepreneurial businesses laying the foundation for the sharing economy — whether that be beds, rooms, clothes, tuition, bicycles or cars. A few succeed and become great new businesses; most fail.

From the WSJ:

A few high-profile “sharing-economy” startups are gaining quick traction with users, including those that let consumers rent apartments and homes like Airbnb Inc., or get car rides, such as Uber Technologies Inc.

Both Airbnb and Uber are valued in the billions of dollars, a sign that investors believe the segment is hot—and a big reason why more entrepreneurs are embracing the business model.

At MassChallenge, a Boston-based program to help early-stage entrepreneurs, about 9% of participants in 2013 were starting companies to connect consumers or businesses with products and services that would otherwise go unused. That compares with about 5% in 2010, for instance.

“We’re bullish on the sharing economy, and we’ll definitely make more investments in it,” said Sam Altman, president of Y Combinator, a startup accelerator in Mountain View, Calif., and one of Airbnb’s first investors.

Yet at least a few dozen sharing-economy startups have failed since 2012, including BlackJet, a Florida-based service that touted itself as the “Uber for jet travel,” and Tutorspree, a New York service dubbed the “Airbnb for tutors.” Most ran out of money, following struggles that ranged from difficulties building a critical mass of supply and demand, to higher-than-expected operating costs.

“We ended up being unable to consistently produce a level of demand on par with what we needed to scale rapidly,” said Aaron Harris, co-founder of Tutorspree, which launched in January 2011 and shuttered in August 2013.

“If you have to reacquire the customer every six months, they’ll forget you,” said Howard Morgan, co-founder of First Round Capital, which was an investor in BlackJet. “A private jet ride isn’t something you do every day. If you’re very wealthy, you have your own plane.” By comparison, he added that he recently used Uber’s ride-sharing service three times in one day.

Consider carpooling startup Ridejoy, for example. During its first year in 2011, its user base was growing by about 30% a month, with more than 25,000 riders and drivers signed up, and an estimated 10,000 rides completed, said Kalvin Wang, one of its three founders. But by the spring of 2013, Ridejoy, which had raised $1.3 million from early-stage investors like Freestyle Capital, was facing ferocious competition from free alternatives, such as carpooling forums on college websites.

Also, some riders could—and did—begin to sidestep the middleman. Many skipped paying its 10% transaction fee by handing their drivers cash instead of paying by credit card on Ridejoy’s website or mobile app. Others just didn’t get it, and even 25,000 users wasn’t sufficient to sustain the business. “You never really have enough inventory,” said Mr. Wang.

After it folded in the summer of 2013, Ridejoy returned about half of its funding to investors, according to Mr. Wang. Alexis Ohanian, an entrepreneur in Brooklyn, N.Y., who was an investor in Ridejoy, said it “could just be the timing or execution that was off.” He cited the success so far of Lyft Inc., the two-year-old San Francisco company that is valued at more than $700 million and offers a short-distance ride-sharing service. “It turned out the short rides are what the market really wanted,” Mr. Ohanian said.

One drawback is that because much of the revenue a sharing business generates goes directly back to the suppliers—of bedrooms, parking spots, vehicles or other “shared” assets—the underlying business may be continuously strapped for cash.

Read the entire article here.


The (Space) Explorers Club


Thirteen private companies recently met in New York City to present their plans and ideas for their commercial space operations. With ventures ranging from space tourism to private exploration of the Moon and asteroid mining, the companies gathered at the Explorers Club to herald a new phase of human exploration.

From Technology Review:

It was a rare meeting of minds. Representatives from 13 commercial space companies gathered on May 1 at a place dedicated to going where few have gone before: the Explorers Club in New York.

Amid the mansions and high-end apartment buildings just off Central Park, executives from space-tourism companies, rocket-making startups, and even a business that hopes to make money by mining asteroids for useful materials showed off displays and gave presentations.

The Explorers Club event provided a snapshot of what may be a new industry in the making. In an era when NASA no longer operates manned space missions and government funding for unmanned missions is tight, a host of startups—most funded by space enthusiasts with very deep pockets—have stepped up in hope of filling the gap. In the past few years, several have proved themselves. Elon Musk’s SpaceX, for example, delivers cargo to the International Space Station for NASA. Both Richard Branson’s Virgin Galactic and rocket-plane builder XCOR Aerospace plan to perform demonstrations this year that will help catapult commercial spaceflight from the fringe into the mainstream.

The advancements being made by space companies could matter to more than the few who can afford tickets to space. SpaceX has already shaken incumbents in the $190 billion satellite launch industry by offering cheaper rides into space for communications, mapping, and research satellites.

However, space tourism also looks set to become significantly cheaper. “People don’t have to actually go up for it to impact them,” says David Mindell, an MIT professor of aeronautics and astronautics and a specialist in the history of engineering. “At $200,000 you’ll have a lot more ‘space people’ running around, and over time that could have a big impact.” One direct result, says Mindell, may be increased public support for human spaceflight, especially “when everyone knows someone who’s been into space.”

Along with reporters, Explorers Club members, and members of the public who had paid the $75 to $150 entry fee, several former NASA astronauts were in attendance to lend their endorsements—including the MC for the evening, Michael López-Alegría, veteran of the space shuttle and the ISS. Also on hand, highlighting the changing times with his very presence, was the world’s first second-generation astronaut, Richard Garriott. Garriott’s father flew missions on Skylab and the space shuttle in the 1970s and 1980s, respectively. However, Garriott paid his own way to the International Space Station in 2008 as a private citizen.

The evening was a whirlwind of activity, with customer testimonials and rapid-fire displays of rocket launches, spacecraft in orbit, and space ships under construction and being tested. It all painted a picture of an industry on the move, with multiple companies offering services from suborbital experiences and research opportunities to flights to Earth orbit and beyond.

The event also offered a glimpse at the plans of several key players.

Lauren De Niro Pipher, head of astronaut relations at Virgin Galactic, revealed that the company’s founder plans to fly with his family aboard the Virgin Galactic SpaceShipTwo rocket plane in November or December of this year. The flight will launch the company’s suborbital spaceflight business, for which De Niro Pipher said more than 700 customers have so far put down deposits on tickets costing $200,000 to $250,000.

The director of business development for Blue Origin, Bretton Alexander, announced his company’s intention to begin test flights of its first full-scale vehicle within the next year. “We have not publicly started selling rides in space as others have,” said Alexander during his question-and-answer session. “But that is our plan to do that, and we look forward to doing that, hopefully soon.”

Blue Origin is perhaps the most secretive of the commercial spaceflight companies, typically revealing little of its progress toward the services it plans to offer: suborbital manned spaceflight and, later, orbital flight. Like Virgin, it was founded by a wealthy entrepreneur, in this case Amazon founder Jeff Bezos. The company, which is headquartered in Kent, Washington, has so far conducted at least one supersonic test flight and a test of its escape rocket system, both at its West Texas test center.

Also on hand was the head of Planetary Resources, Chris Lewicki, a former spacecraft engineer and manager for Mars programs at NASA. He showed off a prototype of his company’s Arkyd 100, an asteroid-hunting space telescope the size of a toaster oven. If all goes according to plan, a fleet of Arkyd 100s will first scan the skies from Earth orbit in search of nearby asteroids that might be rich in mineral wealth and water, to be visited by the next generation of Arkyd probes. Water is potentially valuable for future space-based enterprises as rocket fuel (split into its constituent elements of hydrogen and oxygen) and for use in life support systems. Planetary Resources plans to “launch early, launch often,” Lewicki told me after his presentation. To that end, the company is building a series of CubeSat-size spacecraft dubbed Arkyd 3s, to be launched from the International Space Station by the end of this year.

Andrew Antonio, experience manager at a relatively new company, World View Enterprises, showed a computer-generated video of his company’s planned balloon flights to the edge of space. A manned capsule will ascend to 100,000 feet, or about 20 miles up, from which the curvature of Earth and the black sky of space are visible. At $75,000 per ticket (reduced to $65,000 for Explorers Club members), the flight will be more affordable than competing rocket-powered suborbital experiences but won’t go as high. Antonio said his company plans to launch a small test vehicle “in about a month.”

XCOR’s director of payload sales and operations, Khaki Rodway, showed video clips of the company’s Lynx suborbital rocket plane coming together in Mojave, California, as well as a profile of an XCOR spaceflight customer. Hangared just down the flight line at the same air and space port where Virgin Galactic’s SpaceShipTwo is undergoing flight testing, the Lynx offers seating for one paying customer per flight at $95,000. XCOR hopes the Lynx will begin flying by the end of this year.

Read the entire article here.

Image: Still from the Clangers TV show. Courtesy of BBC / Smallfilms.
