All posts by Mike

Extending Moore’s Law Through Evolution

[div class=attrib]From Smithsonian:[end-div]

In 1965, Intel co-founder Gordon Moore made a prediction about computing that has held true to this day. Moore’s law, as it came to be known, forecasted that the number of transistors we’d be able to cram onto a circuit—and thereby, the effective processing speed of our computers—would double roughly every two years. Remarkably enough, this rule has been accurate for nearly 50 years, but most experts now predict that this growth will slow by the end of the decade.
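
As a back-of-the-envelope aside (ours, not the Smithsonian piece's), here is what doubling every two years looks like in practice; the 1971 starting point below is an assumption added purely for illustration:

```python
# Rough illustration of Moore's law: transistor counts doubling every two years.
# The starting figure (~2,300 transistors, Intel 4004, 1971) is our own assumption
# for this sketch; the article gives only the doubling rule.
start_year, start_count = 1971, 2300

for year in range(1971, 2022, 10):
    doublings = (year - start_year) / 2
    print(f"{year}: ~{start_count * 2 ** doublings:,.0f} transistors")
```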

Someday, though, a radical new approach to creating silicon semiconductors might enable this rate to continue—and could even accelerate it. As detailed in a study published in this month’s Proceedings of the National Academy of Sciences, a team of researchers from the University of California at Santa Barbara and elsewhere has harnessed the process of evolution to produce enzymes that create novel semiconductor structures.

“It’s like natural selection, but here, it’s artificial selection,” Daniel Morse, professor emeritus at UCSB and a co-author of the study, said in an interview. After taking an enzyme found in marine sponges and mutating it into many different forms, “we’ve selected the one in a million mutant DNAs capable of making a semiconductor.”

In an earlier study, Morse and other members of the research team had discovered silicatein—a natural enzyme used by marine sponges to construct their silica skeletons. The mineral, as it happens, also serves as the building block of semiconductor computer chips. “We then asked the question—could we genetically engineer the structure of the enzyme to make it possible to produce other minerals and semiconductors not normally produced by living organisms?” Morse said.

To make this possible, the researchers isolated and made many copies of the part of the sponge’s DNA that codes for silicatein, then intentionally introduced millions of different mutations in the DNA. By chance, some of these would likely lead to mutant forms of silicatein that would produce different semiconductors, rather than silica—a process that mirrors natural selection, albeit on a much shorter time scale, and directed by human choice rather than survival of the fittest.

[div class=attrib]Read the entire article after the jump.[end-div]

Empathy and Touch

[div class=attrib]From Scientific American:[end-div]

When a friend hits her thumb with a hammer, you don’t have to put much effort into imagining how this feels. You know it immediately. You will probably tense up, your “Ouch!” may arise even quicker than your friend’s, and chances are that you will feel a little pain yourself. Of course, you will then thoughtfully offer consolation and bandages, but your initial reaction seems just about automatic. Why?

Neuroscience now offers you an answer: A recent line of research has demonstrated that seeing other people being touched activates primary sensory areas of your brain, much like experiencing the same touch yourself would do. What these findings suggest is beautiful in its simplicity—that you literally “feel with” others.

There is no denying that the exceptional interpersonal understanding we humans show is by and large a product of our emotional responsiveness. We are automatically affected by other people’s feelings, even without explicit communication. Our involvement is sometimes so powerful that we have to flee it, turning our heads away when we see someone get hurt in a movie. Researchers hold that this capacity emerged long before humans evolved. However, only quite recently has it been given a name: A mere hundred years ago, the word “Empathy”—a combination of the Greek “in” (em-) and “feeling” (pathos)—was coined by the British psychologist E. B. Titchener during his endeavor to translate the German Einfühlungsvermögen (“the ability to feel into”).

Despite the lack of a universally agreed-upon definition of empathy, the mechanisms of sharing and understanding another’s experience have always been of scientific and public interest—and particularly so since the introduction of “mirror neurons.” This important discovery was made two decades ago by  Giacomo Rizzolatti and his co-workers at the University of Parma, who were studying motor neuron properties in macaque monkeys. To compensate for the tedious electrophysiological recordings required, the monkey was occasionally given food rewards. During these incidental actions something unexpected happened: When the monkey, remaining perfectly still, saw the food being grasped by an experimenter in a specific way, some of its motor neurons discharged. Remarkably, these neurons normally fired when the monkey itself grasped the food in this way. It was as if the monkey’s brain was directly mirroring the actions it observed. This “neural resonance,” which was later also demonstrated in humans, suggested the existence of a special type of “mirror” neurons that help us understand other people’s actions.

Do you find yourself wondering, now, whether a similar mirror mechanism could have caused your pungent empathic reaction to your friend maltreating herself with a hammer? A group of scientists led by Christian Keysers believed so. The researchers had their participants watch short movie clips of people being touched, while using functional magnetic resonance imaging (fMRI) to record their brain activity. The brain scans revealed that the somatosensory cortex, a complex of brain regions processing touch information, was highly active during the movie presentations—although participants were not being touched at all. As was later confirmed by other studies, this activity strongly resembled the somatosensory response participants showed when they were actually touched in the same way. A recent study by Esther Kuehn and colleagues even found that, during the observation of a human hand being touched, parts of the somatosensory cortex were particularly active when (judging by perspective) the hand clearly belonged to another person.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Media Consolidation

The age of the rambunctious and megalomaniacal newspaper baron has passed, excepting, of course, Rupert Murdoch. While the colorful personalities of the late 19th and early 20th centuries have mostly disappeared, the 21st century has replaced these aging white men with faceless international corporations, all of which are, of course, run by aging white men.

The infographic below puts the current media landscape in perspective; one trend is clear: more and more people are consuming news and entertainment from fewer and fewer sources.

[div class=attrib]Infographic courtesy of Frugal Dad.[end-div]

A View on Innovation

Joi Ito, Director of the MIT Media Lab, muses on the subject of innovation in this article excerpted from the Edge.

[div class=attrib]From the Edge:[end-div]

I grew up in Japan part of my life, and we were surrounded by Buddhists. If you read some of the interesting books from the Dalai Lama talking about happiness, there’s definitely a difference in the way that Buddhists think about happiness, the world and how it works, versus the West. I think that a lot of science and technology has this somewhat Western view, which is how do you control nature, how do you triumph over nature? Even if you look at the gardens in Europe, a lot of it is about look at what we made this hedge do.

What’s really interesting and important to think about is, as we start to realize that the world is complex, and as the science that we use starts to become complex and, Timothy Leary used this quote, “Newton’s laws work well when things are normal sized, when they’re moving at a normal speed.” You can predict the motion of objects using Newton’s laws in most circumstances, but when things start to get really fast, really big, and really complex, you find out that Newton’s laws are actually local ordinances, and there’s a bunch of other stuff that comes into play.

One of the things that we haven’t done very well is we’ve been looking at science and technology as trying to make things more efficient, more effective on a local scale, without looking at the system around it. We were looking at objects rather than the system, or looking at the nodes rather than the network. When we talk about big data, when we talk about networks, we understand this.

I’m an Internet guy, and I divide the world into my life before the Internet and after the Internet. I helped build one of the first commercial Internet service providers in Japan, and when we were building that, there was a tremendous amount of resistance. There were lawyers who wrote these big articles about how the Internet was illegal because there was no one in charge. There was a competing standard back then called X.25, which was being built by the telephone companies and the government. It was centrally-planned, huge specifications; it was very much under control.

The Internet was completely distributed. David Weinberger would use the term ‘small pieces loosely joined.’ But it was really a decentralized innovation that was somewhat of a kind of working anarchy. As we all know, the Internet won. What the Internet winning was, was the triumph of distributed innovation over centralized innovation. It was a triumph of chaos over control. There were a bunch of different reasons. Moore’s law, lowering the cost of innovation—it was this kind of complexity that was going on, the fact that you could change things later, that made this kind of distributed innovation work. What happened when the Internet happened is that the Internet combined with Moore’s law, kept on driving the cost of innovation lower and lower and lower and lower. When you think about the Googles or the Yahoos or the Facebooks of the world, those products, those services were created not in big, huge R&D labs with hundreds of millions of dollars of funding; they were created by kids in dorm rooms.

In the old days, you’d have to have an idea and then you’d write a proposal for a grant or a VC, and then you’d raise the money, you’d plan the thing, you would hire the people and build it. Today, what you do is you build the thing, you raise the money and then you figure out the plan and then you figure out the business model. It’s completely the opposite, you don’t have to ask permission to innovate anymore. What’s really important is, imagine if somebody came up to you and said, “I’m going to build the most popular encyclopedia in the world, and the trick is anyone can edit it.” You wouldn’t have given the guy a desk, you wouldn’t have given the guy five bucks. But the fact that he can just try that, and in retrospect it works, it’s fine, what we’re realizing is that a lot of the greatest innovations that we see today are things that wouldn’t have gotten approval, right?

The Internet, the DNA and the philosophy of the Internet is all about freedom to connect, freedom to hack, and freedom to innovate. It’s really lowering the cost of distribution and innovation. What’s really important about that is that when you started thinking about how we used to innovate was we used to raise money and we would make plans. Well, it’s an interesting coincidence because the world is now so complex, so fast, so unpredictable, that you can’t. Your plans don’t really work that well. Every single major thing that’s happened, both good and bad, was probably unpredicted, and most of our plans failed.

Today, what you want is you want to have resilience and agility, and you want to be able to participate in, and interact with the disruptive things. Everybody loves the word ‘disruptive innovation.’ Well, how does, and where does disruptive innovation happen? It doesn’t happen in the big planned R&D labs; it happens on the edges of the network. Many important ideas, especially in the consumer Internet space, but more and more now in other things like hardware and biotech, you’re finding it happening around the edges.

What does it mean, innovation on the edges? If you sit there and you write a grant proposal, basically what you’re doing is you’re saying, okay, I’m going to build this, so give me money. By definition it’s incremental because first of all, you’ve got to be able to explain what it is you’re going to make, and you’ve got to say it in a way that’s dumbed-down enough that the person who’s giving you money can understand it. By definition, incremental research isn’t going to be very disruptive. Scholarship is somewhat incremental. The fact that if you have a peer review journal, it means five other people have to believe that what you’re doing is an interesting thing. Some of the most interesting innovations that happen, happen when the person doing it doesn’t even know what’s going on. True discovery, I think, happens in a very undirected way, when you figure it out as you go along.

Look at YouTube. First version of YouTube, if you saw it in 2005, it’s a dating site with video. It obviously didn’t work. The default was I am male, looking for anyone between 18 and 35, upload video. That didn’t work. They pivoted, and it became Flickr for video. That didn’t work. Then eventually they latched onto Myspace and it took off like crazy. But they figured it out as they went along. This sort of discovery as you go along is a really, really important mode of innovation. The problem is, whether you’re talking about departments in academia or you’re talking about traditional sort of R&D, anything under control is not going to exhibit that behavior.

If you apply that to what I’m trying to do at the Media Lab, the key thing about the Media Lab is that we have undirected funds. So if a kid wants to try something, he doesn’t have to write me a proposal. He doesn’t have to explain to me what he wants to do. He can just go, or she can just go, and do whatever they want, and that’s really important, this undirected research.

The other part that’s really important, as you start to look for opportunities is what I would call pattern recognition or peripheral vision. There’s a really interesting study, if you put a dot on a screen and you put images like colors around it. If you tell the person to look at the dot, they’ll see the stuff on the first reading, but the minute you give somebody a financial incentive to watch it, I’ll give you ten bucks to watch the dot, those peripheral images disappear. If you’ve ever gone mushroom hunting, it’s a very similar phenomenon. If you are trying to find mushrooms in a forest, the whole thing is you have to stop looking, and then suddenly your pattern recognition kicks in and the mushrooms pop out. Hunters do this same thing, archers looking for animals.

When you focus on something, what you’re actually doing is only seeing really one percent of your field of vision. Your brain is filling everything else in with what you think is there, but it’s actually usually wrong, right? So what’s really important is how you discover those disruptive things that are happening in your periphery. If you are a newspaper and you’re trying to figure out what is the world like without printing presses, well, if you’re staring at your printing press, you’re not looking at the stuff around you. So what’s really important is how do you start to look around you?

[div class=attrib]Read the entire article following the jump.[end-div]

Happiness for Pessimists

Pessimists can take heart from Oliver Burkeman’s latest book “The Antidote”. His research shows that there are valid alternatives to the commonly held belief that positive thinking and goal visualization lead inevitably to happiness. He shows that there is “a long tradition in philosophical and spiritual thought which embraces negativity and bathes in insecurity and failure.” Glass half-empty types, you may have been right all along.

[tube]bOJL7WkaadY[/tube]

King Canute or Mother Nature in North Carolina, Virginia, Texas?

Legislators in North Carolina recently went one better than King Cnut (Canute). The king of Denmark, England, Norway and parts of Sweden during various periods between 1018 and 1035 famously and unsuccessfully tried to hold back the incoming tide. The now mythic story tells of Canute’s arrogance. Not to be outdone, North Carolina’s state legislature recently passed a law that bans state agencies from reporting that sea-level rise is accelerating.

The bill from North Carolina states:

“… rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly to estimate future rates of rise but shall not include scenarios of accelerated rates of sea-level rise.”

This comes hot on the heels of the recent revisionist push in Virginia, where references to phrases such as “sea level rise” and “climate change” are forbidden in official state communications. Last year, of course, Texas led the way for other states following the climate science denial program when the Texas Commission on Environmental Quality, which had commissioned a scientific study of Galveston Bay, removed all references to “rising sea levels”.

For more detailed reporting on this unsurprising and laughable state of affairs check out this article at Skeptical Science.

[div class=attrib]From Scientific American:[end-div]

Less than two weeks after the state’s senate passed a climate science-squelching bill, research shows that sea level along the coast between N.C. and Massachusetts is rising faster than anywhere on Earth.

Could nature be mocking North Carolina’s law-makers? Less than two weeks after the state’s senate passed a bill banning state agencies from reporting that sea-level rise is accelerating, research has shown that the coast between North Carolina and Massachusetts is experiencing the fastest sea-level rise in the world.

Asbury Sallenger, an oceanographer at the US Geological Survey in St Petersburg, Florida, and his colleagues analysed tide-gauge records from around North America. On 24 June, they reported in Nature Climate Change that since 1980, sea-level rise between Cape Hatteras, North Carolina, and Boston, Massachusetts, has accelerated to between 2 and 3.7 millimetres per year. That is three to four times the global average, and it means the coast could see 20–29 centimetres of sea-level rise on top of the metre predicted for the world as a whole by 2100 (A. H. Sallenger Jr et al. Nature Clim. Change http://doi.org/hz4; 2012).

“Many people mistakenly think that the rate of sea-level rise is the same everywhere as glaciers and ice caps melt,” says Marcia McNutt, director of the US Geological Survey. But variations in currents and land movements can cause large regional differences. The hotspot is consistent with the slowing measured in Atlantic Ocean circulation, which may be tied to changes in water temperature, salinity and density.

North Carolina’s senators, however, have tried to stop state-funded researchers from releasing similar reports. The law approved by the senate on 12 June banned scientists in state agencies from using exponential extrapolation to predict sea-level rise, requiring instead that they stick to linear projections based on historical data.
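
As an aside, the difference between the two approaches is easy to see in a few lines of code. The sketch below is ours, with invented numbers rather than the actual tide-gauge data, and simply contrasts a constant-rate projection with one in which the rate itself grows:

```python
# Contrast of the two projection methods named in the bill.
# All numbers below are illustrative only, not the USGS or tide-gauge data.

def linear_rise(rate_mm_per_yr, years):
    # Linear projection: the historical rate is assumed to stay constant.
    return rate_mm_per_yr * years

def accelerated_rise(rate_mm_per_yr, accel_mm_per_yr2, years):
    # Accelerating projection: the rate itself grows over time (constant acceleration).
    return rate_mm_per_yr * years + 0.5 * accel_mm_per_yr2 * years ** 2

years = 88  # roughly 2012 to 2100
print("Linear, 2 mm/yr:", linear_rise(2.0, years), "mm")
print("Accelerating, 2 mm/yr + 0.05 mm/yr^2:", round(accelerated_rise(2.0, 0.05, years)), "mm")
```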

Following international opprobrium, the state’s House of Representatives rejected the bill on 19 June. However, a compromise between the house and the senate forbids state agencies from basing any laws or plans on exponential extrapolations for the next three to four years, while the state conducts a new sea-level study.

According to local media, the bill was the handiwork of industry lobbyists and coastal municipalities who feared that investors and property developers would be scared off by predictions of high sea-level rises. The lobbyists invoked a paper published in the Journal of Coastal Research last year by James Houston, retired director of the US Army Corps of Engineers’ research centre in Vicksburg, Mississippi, and Robert Dean, emeritus professor of coastal engineering at the University of Florida in Gainesville. They reported that global sea-level rise has slowed since 1930 (J. R. Houston and R. G. Dean J. Coastal Res. 27, 409–417; 2011) — a contention that climate sceptics around the world have seized on.

Speaking to Nature, Dean accused the oceanographic community of ideological bias. “In the United States, there is an overemphasis on unrealistically high sea-level rise,” he says. “The reason is budgets. I am retired, so I have the freedom to report what I find without any bias or need to chase funding.” But Sallenger says that Houston and Dean’s choice of data sets masks acceleration in the sea-level-rise hotspot.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Policymic.[end-div]

The Inevitability of Life: A Tale of Protons and Mitochondria

A fascinating article by Nick Lane, a leading researcher into the origins of life. Lane is a Research Fellow at University College London.

He suggests that it would be surprising if simple, bacteria-like life were not common throughout the universe. However, the acquisition of one cell by another, an event that led to all higher organisms on planet Earth, is an altogether rarer occurrence. So are we alone in the universe?

[div class=attrib]From the New Scientist:[end-div]

UNDER the intense stare of the Kepler space telescope, more and more planets similar to our own are revealing themselves to us. We haven’t found one exactly like Earth yet, but so many are being discovered that it appears the galaxy must be teeming with habitable planets.

These discoveries are bringing an old paradox back into focus. As physicist Enrico Fermi asked in 1950, if there are many suitable homes for life out there and alien life forms are common, where are they all? More than half a century of searching for extraterrestrial intelligence has so far come up empty-handed.

Of course, the universe is a very big place. Even Frank Drake’s famously optimistic “equation” for life’s probability suggests that we will be lucky to stumble across intelligent aliens: they may be out there, but we’ll never know it. That answer satisfies no one, however.

There are deeper explanations. Perhaps alien civilisations appear and disappear in a galactic blink of an eye, destroying themselves long before they become capable of colonising new planets. Or maybe life very rarely gets started even when conditions are perfect.

If we cannot answer these kinds of questions by looking out, might it be possible to get some clues by looking in? Life arose only once on Earth, and if a sample of one were all we had to go on, no grand conclusions could be drawn. But there is more than that. Looking at a vital ingredient for life – energy – suggests that simple life is common throughout the universe, but it does not inevitably evolve into more complex forms such as animals. I might be wrong, but if I’m right, the immense delay between life first appearing on Earth and the emergence of complex life points to another, very different explanation for why we have yet to discover aliens.

Living things consume an extraordinary amount of energy, just to go on living. The food we eat gets turned into the fuel that powers all living cells, called ATP. This fuel is continually recycled: over the course of a day, humans each churn through 70 to 100 kilograms of the stuff. This huge quantity of fuel is made by enzymes, biological catalysts fine-tuned over aeons to extract every last joule of usable energy from reactions.

The enzymes that powered the first life cannot have been as efficient, and the first cells must have needed a lot more energy to grow and divide – probably thousands or millions of times as much energy as modern cells. The same must be true throughout the universe.

This phenomenal energy requirement is often left out of considerations of life’s origin. What could the primordial energy source have been here on Earth? Old ideas of lightning or ultraviolet radiation just don’t pass muster. Aside from the fact that no living cells obtain their energy this way, there is nothing to focus the energy in one place. The first life could not go looking for energy, so it must have arisen where energy was plentiful.

Today, most life ultimately gets its energy from the sun, but photosynthesis is complex and probably didn’t power the first life. So what did? Reconstructing the history of life by comparing the genomes of simple cells is fraught with problems. Nevertheless, such studies all point in the same direction. The earliest cells seem to have gained their energy and carbon from the gases hydrogen and carbon dioxide. The reaction of H2 with CO2 produces organic molecules directly, and releases energy. That is important, because it is not enough to form simple molecules: it takes buckets of energy to join them up into the long chains that are the building blocks of life.

A second clue to how the first life got its energy comes from the energy-harvesting mechanism found in all known life forms. This mechanism was so unexpected that there were two decades of heated altercations after it was proposed by British biochemist Peter Mitchell in 1961.

Universal force field

Mitchell suggested that cells are powered not by chemical reactions, but by a kind of electricity, specifically by a difference in the concentration of protons (the charged nuclei of hydrogen atoms) across a membrane. Because protons have a positive charge, the concentration difference produces an electrical potential difference between the two sides of the membrane of about 150 millivolts. It might not sound like much, but because it operates over only 5 millionths of a millimetre, the field strength over that tiny distance is enormous, around 30 million volts per metre. That’s equivalent to a bolt of lightning.
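
The arithmetic behind that comparison is easy to verify; a quick check (ours), using the figures quoted above:

```python
# Field strength across the membrane, using the figures quoted in the article.
voltage = 0.150        # 150 millivolts, in volts
thickness = 5e-9       # 5 millionths of a millimetre = 5 nanometres, in metres

print(voltage / thickness, "V/m")   # 30000000.0, i.e. about 30 million volts per metre
```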

Mitchell called this electrical driving force the proton-motive force. It sounds like a term from Star Wars, and that’s not inappropriate. Essentially, all cells are powered by a force field as universal to life on Earth as the genetic code. This tremendous electrical potential can be tapped directly, to drive the motion of flagella, for instance, or harnessed to make the energy-rich fuel ATP.

However, the way in which this force field is generated and tapped is extremely complex. The enzyme that makes ATP is a rotating motor powered by the inward flow of protons. Another protein that helps to generate the membrane potential, NADH dehydrogenase, is like a steam engine, with a moving piston for pumping out protons. These amazing nanoscopic machines must be the product of prolonged natural selection. They could not have powered life from the beginning, which leaves us with a paradox.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Transmission electron microscope image of a thin section cut through an area of mammalian lung tissue. The high magnification image shows a mitochondria. Courtesy of Wikipedia.[end-div]

CDM: Cosmic Discovery Machine

We think CDM sounds much more fun than LHC, a rather dry acronym for Large Hadron Collider.

Researchers at the LHC are set to announce the latest findings in early July from the record-breaking particle smasher buried below the French and Swiss borders. Rumors point towards the discovery of the so-called Higgs boson, the particle theorized to give mass to all the other fundamental building blocks of matter. So, while this would be another exciting discovery from CERN and yet another confirmation of the fundamental and elegant Standard Model of particle physics, perhaps there is yet more to uncover, such as the exotically named “inflaton”.

[div class=attrib]From Scientific American:[end-div]

Within a sliver of a second after it was born, our universe expanded staggeringly in size, by a factor of at least 10^26. That’s what most cosmologists maintain, although it remains a mystery as to what might have begun and ended this wild expansion. Now scientists are increasingly wondering if the most powerful particle collider in history, the Large Hadron Collider (LHC) in Europe, could shed light on this mysterious growth, called inflation, by catching a glimpse of the particle behind it. It could be that the main target of the collider’s current experiments, the Higgs boson, which is thought to endow all matter with mass, could also be this inflationary agent.

During inflation, spacetime is thought to have swelled in volume at an accelerating rate, from about a quadrillionth the size of an atom to the size of a dime. This rapid expansion would help explain why the cosmos today is as extraordinarily uniform as it is, with only very tiny variations in the distribution of matter and energy. The expansion would also help explain why the universe on a large scale appears geometrically flat, meaning that the fabric of space is not curved in a way that bends the paths of light beams and objects traveling within it.

The particle or field behind inflation, referred to as the “inflaton,” is thought to possess a very unusual property: it generates a repulsive gravitational field. To cause space to inflate as profoundly and temporarily as it did, the field’s energy throughout space must have varied in strength over time, from very high to very low, with inflation ending once the energy sunk low enough, according to theoretical physicists.

Much remains unknown about inflation, and some prominent critics of the idea wonder if it happened at all. Scientists have looked at the cosmic microwave background radiation—the afterglow of the big bang—to rule out some inflationary scenarios. “But it cannot tell us much about the nature of the inflaton itself,” says particle cosmologist Anupam Mazumdar at Lancaster University in England, such as its mass or the specific ways it might interact with other particles.

A number of research teams have suggested competing ideas about how the LHC might discover the inflaton. Skeptics think it highly unlikely that any earthly particle collider could shed light on inflation, because the uppermost energy densities one could imagine with inflation would be about 10^50 times above the LHC’s capabilities. However, because inflation varied with strength over time, scientists have argued the LHC may have at least enough energy to re-create inflation’s final stages.

It could be that the principal particle that ongoing collider runs aim to detect, the Higgs boson, also underlies inflation.

“The idea of the Higgs driving inflation can only take place if the Higgs’s mass lies within a particular interval, the kind which the LHC can see,” says theoretical physicist Mikhail Shaposhnikov at the École Polytechnique Fédérale de Lausanne in Switzerland. Indeed, evidence of the Higgs boson was reported at the LHC in December at a mass of about 125 billion electron volts, roughly the mass of 125 hydrogen atoms.

Also intriguing: the Higgs as well as the inflaton are thought to have varied with strength over time. In fact, the inventor of inflation theory, cosmologist Alan Guth at the Massachusetts Institute of Technology, originally assumed inflation was driven by the Higgs field of a conjectured grand unified theory.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Physics World.[end-div]

Faux Fashion is More Than Skin-Deep

Some innovative research shows that we are generally more inclined to cheat others if we are clad in counterfeit designer clothing or carrying faux accessories.

[div class=attrib]From Scientific American:[end-div]

Let me tell you the story of my debut into the world of fashion. When Jennifer Wideman Green (a friend of mine from graduate school) ended up living in New York City, she met a number of people in the fashion industry. Through her I met Freeda Fawal-Farah, who worked for Harper’s Bazaar. A few months later Freeda invited me to give a talk at the magazine, and because it was such an atypical crowd for me, I agreed.

I found myself on a stage before an auditorium full of fashion mavens. Each woman was like an exhibit in a museum: her jewelry, her makeup, and, of course, her stunning shoes. I talked about how people make decisions, how we compare prices when we are trying to figure out how much something is worth, how we compare ourselves to others, and so on. They laughed when I hoped they would, asked thoughtful questions, and offered plenty of their own interesting ideas. When I finished the talk, Valerie Salembier, the publisher of Harper’s Bazaar, came onstage, hugged and thanked me—and gave me a stylish black Prada overnight bag.

I headed downtown to my next meeting. I had some time to kill, so I decided to take a walk. As I wandered, I couldn’t help thinking about my big black leather bag with its large Prada logo. I debated with myself: should I carry my new bag with the logo facing outward? That way, other people could see and admire it (or maybe just wonder how someone wearing jeans and red sneakers could possibly have procured it). Or should I carry it with the logo facing toward me, so that no one could recognize that it was a Prada? I decided on the latter and turned the bag around.

Even though I was pretty sure that with the logo hidden no one realized it was a Prada bag, and despite the fact that I don’t think of myself as someone who cares about fashion, something felt different to me. I was continuously aware of the brand on the bag. I was wearing Prada! And it made me feel different; I stood a little straighter and walked with a bit more swagger. I wondered what would happen if I wore Ferrari underwear. Would I feel more invigorated? More confident? More agile? Faster?

I continued walking and passed through Chinatown, which was bustling with activity. Not far away, I spotted an attractive young couple in their twenties taking in the scene. A Chinese man approached them. “Handbags, handbags!” he called, tilting his head to indicate the direction of his small shop. After a moment or two, the woman asked the Chinese man, “You have Prada?”

The vendor nodded. I watched as she conferred with her partner. He smiled at her, and they followed the man to his stand.

The Prada they were referring to, of course, was not actually Prada. Nor were the $5 “designer” sunglasses on display in his stand really Dolce&Gabbana. And the Armani perfumes displayed over by the street food stands? Fakes too.

From Ermine to Armani

Going back a way, ancient Roman law included a set of regulations called sumptuary laws, which filtered down through the centuries into the laws of nearly all European nations. Among other things, the laws dictated who could wear what, according to their station and class. For example, in Renaissance England, only the nobility could wear certain kinds of fur, fabrics, laces, decorative beading per square foot, and so on, while those in the gentry could wear decisively less appealing clothing. (The poorest were generally excluded from the law, as there was little point in regulating musty burlap, wool, and hair shirts.) People who “dressed above their station” were silently, but directly, lying to those around them. And those who broke the law were often hit with fines and other punishments.

What may seem to be an absurd degree of obsessive compulsion on the part of the upper crust was in reality an effort to ensure that people were what they signaled themselves to be; the system was designed to eliminate disorder and confusion. Although our current sartorial class system is not as rigid as it was in the past, the desire to signal success and individuality is as strong today as ever.

When thinking about my experience with the Prada bag, I wondered whether there were other psychological forces related to fakes that go beyond external signaling. There I was in Chinatown holding my real Prada bag, watching the woman emerge from the shop holding her fake one. Despite the fact that I had neither picked out nor paid for mine, it felt to me that there was a substantial difference between the way I related to my bag and the way she related to hers.

More generally, I started wondering about the relationship between what we wear and how we behave, and it made me think about a concept that social scientists call self-signaling. The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people— inferring who we are and what we like from our actions.

For example, imagine that you see a beggar on the street. Rather than ignoring him or giving him money, you decide to buy him a sandwich. The action in itself does not define who you are, your morality, or your character, but you interpret the deed as evidence of your compassionate and charitable character. Now, armed with this “new” information, you start believing more intensely in your own benevolence. That’s self-signaling at work.

The same principle could also apply to fashion accessories. Carrying a real Prada bag—even if no one else knows it is real—could make us think and act a little differently than if we were carrying a counterfeit one. Which brings us to the questions: Does wearing counterfeit products somehow make us feel less legitimate? Is it possible that accessorizing with fakes might affect us in unexpected and negative ways?

Calling All Chloés

I decided to call Freeda and tell her about my recent interest in high fashion. During our conversation, Freeda promised to convince a fashion designer to lend me some items to use in some experiments. A few weeks later, I received a package from the Chloé label containing twenty handbags and twenty pairs of sunglasses. The statement accompanying the package told me that the handbags were estimated to be worth around $40,000 and the sunglasses around $7,000. (The rumor about this shipment quickly traveled around Duke, and I became popular among the fashion-minded crowd.)

With those hot commodities in hand, Francesca Gino, Mike Norton (both professors at Harvard University), and I set about testing whether participants who wore fake products would feel and behave differently from those wearing authentic ones. If our participants felt that wearing fakes would broadcast (even to themselves) a less honorable self-image, we wondered whether they might start thinking of themselves as somewhat less honest. And with this tainted self-concept in mind, would they be more likely to continue down the road of dishonesty?

Using the lure of Chloé accessories, we enlisted many female MBA students for our experiment. We assigned each woman to one of three conditions: authentic, fake or no information. In the authentic condition, we told participants that they would be donning real Chloé designer sunglasses. In the fake condition, we told them that they would be wearing counterfeit sunglasses that looked identical to those made by Chloé (in actuality all the products we used were the real McCoy). Finally, in the no-information condition, we didn’t say anything about the authenticity of the sunglasses.

Once the women donned their sunglasses, we directed them to the hallway, where we asked them to look at different posters and out the windows so that they could later evaluate the quality and experience of looking through their sunglasses. Soon after, we called them into another room for another task.

In this task, the participants were given 20 sets of 12 numbers (3.42, 7.32 and so on), and they were asked to find in each set the two numbers that add up to 10. They had five minutes to solve as many as possible and were paid for each correct answer. We set up the test so that the women could cheat—report that they solved more sets than they did (after shredding their worksheet and all the evidence)—while allowing us to figure out who cheated and by how much (by rigging the shredders so that they only cut the sides of the paper).
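
As an aside, the puzzle itself is simple to reproduce. The sketch below is our own illustration of the kind of number set the participants saw, not the researchers’ actual materials:

```python
# Rough simulation of the matrix task: find the two numbers in a set that sum to 10.
# The numbers here are randomly generated for illustration; the real worksheets differed.
import random
from itertools import combinations

def make_set():
    """Build a set of 12 numbers containing exactly one pair that sums to 10."""
    a = round(random.uniform(0.5, 9.5), 2)
    numbers = [a, round(10 - a, 2)]
    while len(numbers) < 12:
        x = round(random.uniform(0.5, 9.5), 2)
        # Only add x if it does not form a second pair summing to 10.
        if all(round(x + y, 2) != 10.0 for y in numbers):
            numbers.append(x)
    random.shuffle(numbers)
    return numbers

def solve(numbers):
    for x, y in combinations(numbers, 2):
        if round(x + y, 2) == 10.0:
            return x, y

puzzle = make_set()
print(puzzle)
print("Pair summing to 10:", solve(puzzle))
```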

Over the years we carried out many versions of this experiment, and we have repeatedly found that a lot of people cheat by a few questions. This experiment was no different in this regard, but what was particularly interesting was the effect of wearing counterfeits. While “only” 30 percent of the participants in the authentic condition reported solving more matrices than they actually had, 74 percent of those in the fake condition reported solving more matrices than they actually had. These results gave rise to another interesting question. Did the presumed fakeness of the product make the women cheat more than they naturally would? Or did the genuine Chloé label make them behave more honestly than they would otherwise?

This is why we also had a no-information condition, in which we didn’t mention anything about whether the sunglasses were real or fake. In that condition 42 percent of the women cheated. That result was between the other two, but it was much closer to the authentic condition (in fact, the two conditions were not statistically different from each other). These results suggest that wearing a genuine product does not increase our honesty (or at least not by much). But once we knowingly put on a counterfeit product, moral constraints loosen to some degree, making it easier for us to take further steps down the path of dishonesty.

The moral of the story? If you, your friend, or someone you are dating wears counterfeit products, be careful! Another act of dishonesty may be closer than you expect.

Up to No Good

These results led us to another question: if wearing counterfeits changes the way we view our own behavior, does it also cause us to be more suspicious of others? To find out, we asked another group of participants to put on what we told them were either real or counterfeit Chloé sunglasses. This time, we asked them to fill out a rather long survey with their sunglasses on. In this survey, we included three sets of questions. The questions in set A asked participants to estimate the likelihood that people they know might engage in various ethically questionable behaviors such as standing in the express line with too many groceries. The questions in set B asked them to estimate the likelihood that when people say particular phrases, including “Sorry, I’m late. Traffic was terrible,” they are lying. Set C presented participants with two scenarios depicting someone who has the opportunity to behave dishonestly, and asked them to estimate the likelihood that the person in the scenario would take the opportunity to cheat.

What were the results? You guessed it. When reflecting on the behavior of people they know, participants in the counterfeit condition judged their acquaintances to be more likely to behave dishonestly than did participants in the authentic condition. They also interpreted the list of common excuses as more likely to be lies, and judged the actor in the two scenarios as being more likely to choose the shadier option. We concluded that counterfeit products not only tend to make us more dishonest; they also cause us to view others as less than honest as well.

[div class=attrib]Read the entire article after the jump.[end-div]

La Macchina: The Machine as Art, for Caffeine Addicts

You may not know their names, but Desiderio Pavoni and Luigi Bezzera are to coffee as Steve Jobs and Steve Wozniak are to computers. Modern day espresso machines owe it all to the innovative design and business savvy of this early 20th century Italian duo.

[div class=attrib]From Smithsonian:[end-div]

For many coffee drinkers, espresso is coffee. It is the purest distillation of the coffee bean, the literal essence of a bean. In another sense, it is also the first instant coffee. Before espresso, it could take up to five minutes –five minutes!– for a cup of coffee to brew. But what exactly is espresso and how did it come to dominate our morning routines? Although many people are familiar with espresso these days thanks to the Starbucksification of the world, there is often still some confusion over what it actually is – largely due to “espresso roasts” available on supermarket shelves everywhere. First, and most importantly, espresso is not a roasting method. It is neither a bean nor a blend. It is a method of preparation. More specifically, it is a preparation method in which highly-pressurized hot water is forced over coffee grounds to produce a very concentrated coffee drink with a deep, robust flavor. While there is no standardized process for pulling a shot of espresso, Italian coffeemaker Illy’s definition of the authentic espresso seems as good a measure as any:

A jet of hot water at 88°-93°C (190°-200°F) passes under a pressure of nine or more atmospheres through a seven-gram (.25 oz) cake-like layer of ground and tamped coffee. Done right, the result is a concentrate of not more than 30 ml (one oz) of pure sensorial pleasure.

For those of you who, like me, are more than a few years out of science class, nine atmospheres of pressure is equivalent to nine times the amount of pressure normally exerted by the earth’s atmosphere. As you might be able to tell from the precision of Illy’s description, good espresso is good chemistry. It’s all about precision and consistency and finding the perfect balance between grind, temperature, and pressure. Espresso happens at the molecular level. This is why technology has been such an important part of the historical development of espresso and a key to the ongoing search for the perfect shot. While espresso was never designed per se, the machines – or Macchina – that make our cappuccinos and lattes have a history that stretches back more than a century.
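
For the unit-curious, a quick conversion of those nine atmospheres into more familiar units (our own aside, not from the article):

```python
# Converting the nine atmospheres in Illy's definition into other units.
ATM_IN_KPA = 101.325   # one standard atmosphere, in kilopascals
ATM_IN_PSI = 14.696    # one standard atmosphere, in pounds per square inch

brew_pressure_atm = 9
print(round(brew_pressure_atm * ATM_IN_KPA), "kPa")   # ~912 kPa
print(round(brew_pressure_atm * ATM_IN_PSI), "psi")   # ~132 psi
```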

In the 19th century, coffee was a huge business in Europe with cafes flourishing across the continent. But coffee brewing was a slow process and, as is still the case today, customers often had to wait for their brew. Seeing an opportunity, inventors across Europe began to explore ways of using steam machines to reduce brewing time – this was, after all, the age of steam. Though there were surely innumerable patents and prototypes, the invention of the machine and the method that would lead to espresso is usually attributed to Angelo Moriondo of Turin, Italy, who was granted a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” The machine consisted of a large boiler, heated to 1.5 bars of pressure, that pushed water through a large bed of coffee grounds on demand, with a second boiler producing steam that would flash the bed of coffee and complete the brew. Though Moriondo’s invention was the first coffee machine to use both water and steam, it was purely a bulk brewer created for the Turin General Exposition. Not much more is known about Moriondo, due in large part to what we might think of today as a branding failure. There were never any “Moriondo” machines, there are no verifiable machines still in existence, and there aren’t even photographs of his work. With the exception of his patent, Moriondo has been largely lost to history. The two men who would improve on Moriondo’s design to produce a single-serving espresso would not make that same mistake.

Luigi Bezzera and Desiderio Pavoni were the Steve Wozniak and Steve Jobs of espresso. Milanese manufacturer and “maker of liquors” Luigi Bezzera had the know-how. He invented single-shot espresso in the early years of the 20th century while looking for a method of quickly brewing coffee directly into the cup. He made several improvements to Moriondo’s machine, introduced the portafilter, multiple brewheads, and many other innovations still associated with espresso machines today. In Bezzera’s original patent, a large boiler with built-in burner chambers filled with water was heated until it pushed water and steam through a tamped puck of ground coffee. The mechanism through which the heated water passed also functioned as heat radiators, lowering the temperature of the water from 250°F in the boiler to the ideal brewing temperature of approximately 195°F (90°C). Et voila, espresso. For the first time, a cup of coffee was brewed to order in a matter of seconds. But Bezzera’s machine was heated over an open flame, which made it difficult to control pressure and temperature, and nearly impossible to produce a consistent shot. And consistency is key in the world of espresso. Bezzera designed and built a few prototypes of his machine but his beverage remained largely unappreciated because he didn’t have any money to expand his business or any idea how to market the machine. But he knew someone who did. Enter Desiderio Pavoni.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A 1910 Ideale espresso machine. Courtesy of Smithsonian.[end-div]

Keeping Secrets in the Age of Technology

[div class=attrib]From the Guardian:[end-div]

With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”

I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”

So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)

I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.

These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.

Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.

Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.

Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.

Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.

Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.

It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?

[div class=attrib]Read the entire article after the jump.[end-div]

Eternal Damnation as Deterrent?

So, you think an all-seeing, all-knowing supreme deity encourages moral behavior and discourages crime? Think again.

[div class=attrib]From New Scientist:[end-div]

There’s nothing like the fear of eternal damnation to encourage low crime rates. But does belief in heaven and a forgiving god encourage lawbreaking? A new study suggests it might – although establishing a clear link between the two remains a challenge.

Azim Shariff at the University of Oregon in Eugene and his colleagues compared global data on people’s beliefs in the afterlife with worldwide crime data collated by the United Nations Office on Drugs and Crime. In total, Shariff’s team looked at data covering the beliefs of 143,000 individuals across 67 countries and from a variety of religious backgrounds.

In most of the countries assessed, people were more likely to report a belief in heaven than in hell. Using that information, the team could calculate the degree to which a country’s rate of belief in heaven outstrips its rate of belief in hell.

Even after the researchers had controlled for a host of crime-related cultural factors – including GDP, income inequality, population density and life expectancy – national crime rates were typically higher in countries with particularly strong beliefs in heaven but weak beliefs in hell.
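
As an aside, the study’s core measure is straightforward; the sketch below illustrates the idea with invented numbers rather than the actual survey or UN data:

```python
# Sketch of the study's core measure: how far belief in heaven outstrips belief in hell,
# set against national crime rates. Every number below is invented for illustration;
# the real analysis covered 143,000 people in 67 countries and controlled for GDP,
# inequality, population density and life expectancy.
countries = {
    #            belief in heaven, belief in hell, crime rate (arbitrary units)
    "Country A": (0.80,            0.72,           30),
    "Country B": (0.90,            0.55,           65),
    "Country C": (0.61,            0.58,           20),
}

for name, (heaven, hell, crime) in countries.items():
    gap = heaven - hell
    print(f"{name}: heaven-hell gap = {gap:+.2f}, crime rate = {crime}")
```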

Licence to steal

“Belief in a benevolent, forgiving god could license people to think they can get away with things,” says Shariff – although he stresses that this conclusion is speculative, and that the results do not necessarily imply causality between religious beliefs and crime rates.

“There are a number of possible causal pathways,” says Richard Sosis, an anthropologist at the University of Connecticut in Storrs, who was not involved in the study. The most likely interpretation is that there are intervening variables at the societal level – societies may have values that are similarly reflected in their legal and religious systems.

In a follow-up study, yet to be published, Shariff and Amber DeBono of Winston-Salem State University in North Carolina primed volunteers who had Christian beliefs by asking them to write variously about God’s forgiving nature, God’s punitive nature, a forgiving human, a punitive human, or a neutral subject. The volunteers were then asked to complete anagram puzzles for a monetary reward of a few cents per anagram.

God helps those who…

Participants were given the opportunity to commit petty theft, with no chance of being caught, by lying about the number of anagrams they had successfully completed. Shariff’s team found that those participants who had written about a forgiving god claimed nearly $2 more than they were entitled to under the rules of the game, whereas those in the other groups awarded themselves less than 50 cents more than they were entitled to.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A detail from the Chapmans’ Hell. Photograph: Andy Butterton/PA. Courtesy of Guardian.[end-div]

Communicating with the Comatose

[div class=attrib]From Scientific American:[end-div]

Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions.

Incredibly, he provided answers. A change in blood flow to certain parts of the man’s injured brain convinced Owen that patient 23 was conscious and able to communicate. It was the first time that anyone had exchanged information with someone in a vegetative state.

Patients in these states have emerged from a coma and seem awake. Some parts of their brains function, and they may be able to grind their teeth, grimace or make random eye movements. They also have sleep–wake cycles. But they show no awareness of their surroundings, and doctors have assumed that the parts of the brain needed for cognition, perception, memory and intention are fundamentally damaged. They are usually written off as lost.

Owen’s discovery, reported in 2010, caused a media furore. Medical ethicist Joseph Fins and neurologist Nicholas Schiff, both at Weill Cornell Medical College in New York, called it a “potential game changer for clinical practice”. The University of Western Ontario in London, Canada, soon lured Owen away from Cambridge with Can$20 million (US$19.5 million) in funding to make the techniques more reliable, cheaper, more accurate and more portable — all of which Owen considers essential if he is to help some of the hundreds of thousands of people worldwide in vegetative states. “It’s hard to open up a channel of communication with a patient and then not be able to follow up immediately with a tool for them and their families to be able to do this routinely,” he says.

Many researchers disagree with Owen’s contention that these individuals are conscious. But Owen takes a practical approach to applying the technology, hoping that it will identify patients who might respond to rehabilitation, direct the dosing of analgesics and even explore some patients’ feelings and desires. “Eventually we will be able to provide something that will be beneficial to patients and their families,” he says.

Still, he shies away from asking patients the toughest question of all — whether they wish life support to be ended — saying that it is too early to think about such applications. “The consequences of asking are very complicated, and we need to be absolutely sure that we know what to do with the answers before we go down this road,” he warns.

Lost and found
With short, reddish hair and beard, Owen is a polished speaker who is not afraid of publicity. His home page is a billboard of links to his television and radio appearances. He lectures to scientific and lay audiences with confidence and a touch of defensiveness.

Owen traces the roots of his experiments to the late 1990s, when he was asked to write a review of clinical applications for technologies such as fMRI. He says that he had a “weird crisis of confidence”. Neuroimaging had confirmed a lot of what was known from brain mapping studies, he says, but it was not doing anything new. “We would just tweak a psych test and see what happens,” says Owen. As for real clinical applications: “I realized there weren’t any. We all realized that.”

Owen wanted to find one. He and his colleagues got their chance in 1997, with a 26-year-old patient named Kate Bainbridge. A viral infection had put her in a coma — a condition that generally persists for two to four weeks, after which patients die, recover fully or, in rare cases, slip into a vegetative or a minimally conscious state — a more recently defined category characterized by intermittent hints of conscious activity.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]fMRI axial brain image. Image courtesy of Wikipedia.[end-div]

Happy Birthday, George Orwell

Eric Blair was born on this day, June 25, in 1903. Thirty years later Blair adopted a pen name with the publication of his first book, Down and Out in Paris and London (1933). His preferred pen name, George Orwell, was chosen for being “a good round English name” (in his words).

Your friendly editor at theDiagonal classes George Orwell as one of the most important literary figures of the 20th century. His numerous political writings, literary reviews, poems, newspaper columns and six novels should be compulsory reading for minds young and old. His furious intellectual honesty, keen eye for exposing hypocrisy and skepticism of power add considerable weight to his literary legacy.

In 1946, three years before the publication of one of the most important works of the 20th century, 1984, Orwell wrote a passage that summarizes his world view and rings ever true today:

Political language — and with variations this is true of all political parties, from Conservatives to Anarchists — is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.  (Politics and the English Language, 1946).

[div class=attrib]Image: Photograph of George Orwell which appears in an old accreditation for the Branch of the National Union of Journalists (BNUJ), 1933. Courtesy of Wikipedia.[end-div]

Letting Go of Regrets

[div class=attrib]From Mind Matters over at Scientific American:[end-div]

The poem “Maud Muller” by John Greenleaf Whittier aptly ends with the line, “For of all sad words of tongue or pen, The saddest are these: ‘It might have been!’” What if you had gone for the risky investment that you later found out made someone else rich, or if you had had the guts to ask that certain someone to marry you? Certainly, we’ve all had instances in our lives where hindsight makes us regret not sticking our neck out a bit more.

But new research suggests that when we are older these kinds of ‘if only!’ thoughts about the choices we made may not be so good for our mental health. One of the most important determinants of our emotional well-being in our golden years might be whether we learn to stop worrying about what might have been.

In a new paper published in Science, researchers from the University Medical Center Hamburg-Eppendorf in Hamburg, Germany, report evidence from two experiments which suggest that one key to aging well might involve learning to let go of regrets about missed opportunities. Stefanie Brassen and her colleagues looked at how healthy young participants (mean age: 25.4 years), healthy older participants (65.8 years), and older participants who had developed depression for the first time later in life (65.6 years) dealt with regret, and found that the young and older depressed patients seemed to hold on to regrets about missed opportunities while the healthy older participants seemed to let them go.

To measure regret over missed opportunities, the researchers adapted an established risk-taking task into a clever game in which the participants looked at eight wooden boxes lined up in a row on a computer screen and could choose to reveal the contents of the boxes one at a time, from left to right. Seven of the boxes had gold in them, which the participants would earn if they chose to open them. One box, however, had a devil in it. What happens if they open the box with the devil in it? They lose that round and any gold they earned so far with it.

Importantly, the participants could choose to cash out early and keep any gold they had earned up to that point. Doing so would reveal the location of the devil and, with it, all of the gold they had missed out on. Sometimes this wouldn’t be a big deal, because the devil would be in the next box. No harm, no foul. But sometimes the devil might be several boxes away. In this case, you might have missed out on a lot of potential earnings, and this had the potential to induce feelings of regret.
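
For readers who want to see the incentive structure of the task laid out, here is a minimal simulation of the game as described — eight boxes, seven with gold and one hiding the devil, opened strictly left to right, with the option to cash out at any point. The payoff value and the fixed stopping rule are placeholders of mine, not the study’s actual parameters, and “missed gold” simply stands in for the regret-inducing feedback participants saw when they cashed out.

```python
# Minimal sketch of the "devil game" described above.  Payoffs and the fixed
# stopping rule are illustrative placeholders, not the study's parameters.
import random

def play_round(stop_after, n_boxes=8, gold_value=1):
    """Open boxes left to right; cash out after `stop_after` boxes unless the devil appears first."""
    devil = random.randrange(n_boxes)           # the devil hides in one box, uniformly at random
    for i in range(stop_after):
        if i == devil:
            return 0, 0                         # hit the devil: this round's gold is lost
    winnings = stop_after * gold_value          # cashed out safely
    missed = (devil - stop_after) * gold_value  # gold boxes that could still have been opened safely
    return winnings, missed

random.seed(0)
rounds = [play_round(stop_after=3) for _ in range(10_000)]
avg_win = sum(w for w, _ in rounds) / len(rounds)
avg_missed = sum(m for _, m in rounds) / len(rounds)
print(f"average winnings: {avg_win:.2f}, average missed gold (regret proxy): {avg_missed:.2f}")
```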

In their first experiment, Brassen and colleagues had all of the participants play this ‘devil game’ during a functional magnetic resonance imaging (fMRI) brain scan. They wanted to test whether young participants, depressed older participants, and healthy older participants responded differently to missed opportunities during the game, and whether these differences might also be reflected in activity in one area of the brain called the ventral striatum (an area known to be very active when we experience regret) and another area of the brain called the anterior cingulate (an area known to be active when controlling our emotions).

Brassen and her colleagues found that for healthy older participants, the area of the brain which is usually active during the experience of regret, the ventral striatum, was much less active during rounds of the game where they missed out on a lot of money, suggesting that the healthily aging brains were not processing regret in the same way the young and depressed older brains were. Also, when they looked at the emotion controlling center of the brain, the anterior cingulate, the researchers found that this area was much more active in the healthy older participants than the other two groups. Interestingly, Brassen and her colleagues found that the bigger the missed opportunity, the greater the activity in this area for healthy older participants, which suggests that their brains were actively mitigating their experience of regret.

[div class=attrib]Read the entire article after the jump.[end-div]

Growing Eyes in the Lab

[div class=attrib]From Nature:[end-div]

A stem-cell biologist has had an eye-opening success in his latest effort to mimic mammalian organ development in vitro. Yoshiki Sasai of the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, has grown the precursor of a human eye in the lab.

The structure, called an optic cup, is 550 micrometres in diameter and contains multiple layers of retinal cells including photoreceptors. The achievement has raised hopes that doctors may one day be able to repair damaged eyes in the clinic. But for researchers at the annual meeting of the International Society for Stem Cell Research in Yokohama, Japan, where Sasai presented the findings this week, the most exciting thing is that the optic cup developed its structure without guidance from Sasai and his team.

“The morphology is the truly extraordinary thing,” says Austin Smith, director of the Centre for Stem Cell Research at the University of Cambridge, UK.

Until recently, stem-cell biologists had been able to grow embryonic stem cells only into two-dimensional sheets. But over the past four years, Sasai has used mouse embryonic stem cells to grow well-organized, three-dimensional cerebral-cortex, pituitary-gland and optic-cup tissue. His latest result marks the first time that anyone has managed a similar feat using human cells.

Familiar patterns
The various parts of the human optic cup grew in mostly the same order as those in the mouse optic cup. This reconfirms a biological lesson: the cues for this complex formation come from inside the cell, rather than relying on external triggers.

In Sasai’s experiment, retinal precursor cells spontaneously formed a ball of epithelial tissue cells and then bulged outwards to form a bubble called an eye vesicle. That pliable structure then folded back on itself to form a pouch, creating the optic cup with an outer wall (the retinal epithelium) and an inner wall comprising layers of retinal cells including photoreceptors, bipolar cells and ganglion cells. “This resolves a long debate,” says Sasai, over whether the development of the optic cup is driven by internal or external cues.

There were some subtle differences in the timing of the developmental processes of the human and mouse optic cups. But the biggest difference was the size: the human optic cup had more than twice the diameter and ten times the volume of that of the mouse. “It’s large and thick,” says Sasai. The ratios, similar to those seen in development of the structure in vivo, are significant. “The fact that size is cell-intrinsic is tremendously interesting,” says Martin Pera, a stem-cell biologist at the University of Southern California, Los Angeles.
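
A quick aside of mine, not a claim from the paper: the two size figures quoted above are consistent with simple geometric scaling. For similar shapes, volume grows with the cube of linear size, so a tenfold volume difference implies a diameter ratio of about 2.15 — “more than twice”.

```python
# Consistency check on the quoted size figures (my aside, not from the paper):
# volume scales with the cube of linear size for geometrically similar shapes.
diameter_ratio = 10 ** (1 / 3)
print(f"implied diameter ratio for a 10x volume ratio: {diameter_ratio:.2f}")  # ~2.15
```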

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Discover Magazine.[end-div]

College Laundry

If you have attended college you will relate to the following strip that describes your laundry cycle. If you have not yet attended, please write to us if you deviate from your predestined path — a path that all your predecessors have taken.

 

[div class=attrib]Image courtesy of xkcd.com.[end-div]

Our Perception of Time

[div class=attrib]From Evolutionary Philosophy:[end-div]

We have learned to see time as if it appears in chunks – minutes, hours, days, and years. But if time comes in chunks how do we experience past memories in the present? How does the previous moment’s chunk of time connect to the chunk of the present moment?

Wait a minute. It will take an hour. He is five years old. These are all sentences that contain expressions of units of time. We are all tremendously comfortable with the idea that time comes in discrete units – but does it? William James and Charles Sanders Peirce thought not.

If moments of time were truly discrete, separate units lined up like dominoes in a row, how would it be possible to have a memory of a past event? What connects the present moment to all the past moments that have already gone by?

One answer to the question is to suppose the existence of a transcendental self. That means some self that exists over and above our experience and can connect all the moments together for us. Imagine moments in time that stick together like boxcars of a train. If you are in one boxcar – i.e. inside the present moment – how could you possibly know anything about the boxcar behind you – i.e. the moment past? The only way would be to see from outside of your boxcar – you would at least have to stick your head out of the window to see the boxcar behind you.

If the boxcar represents your experience of the present moment then we are saying that you would have to leave the present moment at least a little bit to be able to see what happened in the moment behind you. How can you leave the present moment? Where do you go if you leave your experience of the present moment? Where is the space that you exist in when you are outside of your experience? It would have to be a space that transcended your experience – a transcendental space outside of reality as we experience it. It would be a supernatural space and the part of you that existed in that space would be a supernatural extra-experiential you.

For those who had been raised in a Christian context this would not be so hard to accept, because this extra-experiential you would sound a great deal like the soul. In fact Immanuel Kant, who first articulated the idea of a transcendental self, was through his philosophy actively trying to reserve space for the human soul in an intellectual atmosphere that he saw as excessively materialistic.

William James and Charles Sanders Peirce believed in unity and therefore they could not accept the idea of a transcendental ego that would exist in some transcendent realm. In some of their thinking they were anticipating the later developments of quantum theory and non-locality.

William James described how we appear to travel through a river of time – and like all rivers the river ahead of us already exists before we arrive there. In the same way, the future already exists now – not in a predetermined sense, but at least as some potentiality. As we arrive at the future moment our arrival marks the passage from the fluid form that we call future to the definitive solid form that we experience as the past. We do not create time by passing through it; we simply freeze it in its tracks.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Addiction: Choice or Disease or Victim of Hijacking?

 

The debate concerning human addictions of all colors and forms rages on. Some would have us believe that addiction is a simple choice shaped by our free will; others would argue that addiction is a chronic disease. Yet perhaps there is another, more nuanced explanation.

[div class=attrib]From the New York Times:[end-div]

Of all the philosophical discussions that surface in contemporary life, the question of free will — mainly, the debate over whether or not we have it — is certainly one of the most persistent.

That might seem odd, as the average person rarely seems to pause to reflect on whether their choices on, say, where they live, whom they marry, or what they eat for dinner, are their own or the inevitable outcome of a deterministic universe. Still, as James Atlas pointed out last month, the spate of “can’t help yourself” books would indicate that people are in fact deeply concerned with how much of their lives they can control. Perhaps that’s because, upon further reflection, we find that our understanding of free will lurks beneath many essential aspects of our existence.

One particularly interesting variation on this question appears in scientific, academic and therapeutic discussions about addiction. Many times, the question is framed as follows: “Is addiction a disease or a choice?”

The argument runs along these lines: If addiction is a disease, then in some ways it is out of our control and forecloses choices. A disease is a medical condition that develops outside of our control; it is, then, not a matter of choice. In the absence of choice, the addicted person is essentially relieved of responsibility. The addict has been overpowered by her addiction.

The counterargument describes addictive behavior as a choice. People whose use of drugs and alcohol leads to obvious problems but who continue to use them anyway are making choices to do so. Since those choices lead to addiction, blame and responsibility clearly rest on the addict’s shoulders. It then becomes more a matter of free will.

Recent scientific studies on the biochemical responses of the brain are currently tipping the scales toward the more deterministic view — of addiction as a disease. The structure of the brain’s reward system combined with certain biochemical responses and certain environments, they appear to show, cause people to become addicted.

In such studies, and in reports of them to news media, the term “the hijacked brain” often appears, along with other language that emphasizes the addict’s lack of choice in the matter. Sometimes the pleasure-reward system has been “commandeered.” Other times it “goes rogue.” These expressions are often accompanied by the conclusion that there are “addicted brains.”

The word “hijacked” is especially evocative; people often have a visceral reaction to it. I imagine that this is precisely why this term is becoming more commonly used in connection with addiction. But it is important to be aware of the effects of such language on our understanding.

When most people think of a hijacking, they picture a person, sometimes wearing a mask and always wielding some sort of weapon, who takes control of a car, plane or train. The hijacker may not himself drive or pilot the vehicle, but the violence involved leaves no doubt who is in charge. Someone can hijack a vehicle for a variety of reasons, but mostly it boils down to needing to escape or wanting to use the vehicle itself as a weapon in a greater plan. Hijacking is a means to an end; it is always and only oriented to the goals of the hijacker. Innocent victims are ripped from their normal lives by the violent intrusion of the hijacker.

In the “hijacked” view of addiction, the brain is the innocent victim of certain substances — alcohol, cocaine, nicotine or heroin, for example — as well as certain behaviors like eating, gambling or sexual activity. The drugs or the neurochemicals produced by the behaviors overpower and redirect the brain’s normal responses, and thus take control of (hijack) it. For addicted people, that martini or cigarette is the weapon-wielding hijacker who is going to compel certain behaviors.

To do this, drugs like alcohol and cocaine and behaviors like gambling light up the brain’s pleasure circuitry, often bringing a burst of euphoria. Other studies indicate that people who are addicted have lower dopamine and serotonin levels in their brains, which means that it takes more of a particular substance or behavior for them to experience pleasure or to reach a certain threshold of pleasure. People tend to want to maximize pleasure; we tend to do things that bring more of it. We also tend to chase it when it subsides, trying hard to recreate the same level of pleasure we have experienced in the past. It is not uncommon to hear addicts talking about wanting to experience the euphoria of a first high. Often they never reach it, but keep trying. All of this lends credence to the description of the brain as hijacked.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of CNN.[end-div]

The 100 Million Year Collision

Four billion or so years from now, our very own Milky Way galaxy is expected to begin a slow but enormous collision with its galactic sibling, the Andromeda galaxy. Cosmologists predict the ensuing galactic smash will take around 100 million years to complete. It’s a shame we’ll not be around to witness the spectacle.

[div class=attrib]From Scientific American:[end-div]

The galactic theme in the context of planets and life is an interesting one. Take our own particular circumstances. As unappealingly non-Copernican as it is, there is no doubt that the Milky Way galaxy today is ‘special’. This should not be confused with any notion that special galaxy=special humans, since it’s really not clear yet that the astrophysical specialness of the galaxy has significant bearing on the likelihood of us sitting here picking our teeth. Nonetheless, the scientific method being what it is, we need to pay attention to any and all observations with as little bias as possible – so asking the question of what a ‘special’ galaxy might mean for life is OK, just don’t get too carried away.

First of all the Milky Way galaxy is big. As spiral galaxies go it’s in the upper echelons of diameter and mass. In the relatively nearby universe, it and our nearest large galaxy, Andromeda, are the sumos in the room. This immediately makes it somewhat unusual: the great majority of galaxies in the observable universe are smaller. The relationship to Andromeda is also very particular. In effect the Milky Way and Andromeda are a binary pair; our mutual distortion of spacetime is resulting in us barreling together at about 80 miles a second. In about 4 billion years these two galaxies will begin a ponderous collision lasting for perhaps 100 million years or so. It will be a soft type of collision – individual stars are so tiny compared to the distances between them that they themselves are unlikely to collide, but the great masses of gas and dust in the two galaxies will smack together – triggering the formation of new stars and planetary systems.
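
A back-of-envelope check shows the quoted figures hang together: closing at a constant 80 miles per second for 4 billion years covers roughly 1.7 million light years, the same order of magnitude as the present Milky Way–Andromeda separation of about 2.5 million light years (gravity accelerates the infall, which closes the remaining gap). The short sketch below just runs that arithmetic.

```python
# Back-of-envelope check on the numbers quoted above (constant-speed approximation;
# in reality gravity accelerates the approach, so the gap closes sooner than this suggests).
MILES_PER_LIGHT_YEAR = 5.88e12
SECONDS_PER_YEAR = 3.156e7

speed_mi_per_s = 80   # closing speed quoted in the article
years = 4e9           # time until the collision begins

distance_ly = speed_mi_per_s * SECONDS_PER_YEAR * years / MILES_PER_LIGHT_YEAR
print(f"distance covered at constant speed: ~{distance_ly / 1e6:.1f} million light years")
# ~1.7 million light years, versus a current separation of roughly 2.5 million light years.
```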

Some dynamical models (including those in the most recent work based on Hubble telescope measurements) suggest that our solar system could be flung further away from the center of the merging galaxies; others indicate it could end up thrown towards the newly forming stellar core of a future Goliath galaxy (Milkomeda?). Does any of this matter for life? For us the answer may be moot. In only about 1 billion years the Sun will have grown luminous enough that the temperate climate we enjoy on the Earth may be long gone. In 3-4 billion years it may be luminous enough that Mars, if not utterly dried out and devoid of atmosphere by then, could sustain ‘habitable’ temperatures. Depending on where the vagaries of gravitational dynamics take the solar system as Andromeda comes lumbering through, we might end up surrounded by the pop and crackle of supernovae as the collision-induced formation of new massive stars gets underway. All in all it doesn’t look too good. But for other places, other solar systems that we see forming today, it could be a very different story.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Composition of Milky Way and Andromeda. Courtesy of NASA, ESA, Z. Levay and R. van der Marel (STScI), T. Hallas, and A. Mellinger).[end-div]

You as a Data Strip Mine: What Facebook Knows

China, India, Facebook. With its 900 million member-citizens Facebook is the third largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and of course there’s freedom to assemble and a fair degree of free speech.

However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or cold war East Germany.

A fascinating article by Tom Simonite, excerpted below, sheds light on the data collection and data mining initiatives underway or planned at Facebook.

[div class=attrib]From Technology Review:[end-div]

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science rather than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).[end-div]

Zen and the Art of Meditation Messaging

Quite often you will be skimming a book or leafing through pages of your favorite magazine and you will recall having “seen” a specific word. However, you will not remember having read that page or section or having looked at that particular word. But, without fail, when you retrace your steps and look back you will find that specific word, that word that you did not consciously “see”. So, what’s going on?

[div class=attrib]From the New Scientist:[end-div]

MEDITATION increases our ability to tap into the hidden recesses of our brain that are usually outside the reach of our conscious awareness.

That’s according to Madelijn Strick of Utrecht University in the Netherlands and colleagues, who tested whether meditation has an effect on our ability to pick up subliminal messages.

The brain registers subliminal messages, but we are often unable to recall them consciously. To investigate, the team recruited 34 experienced practitioners of Zen meditation and randomly assigned them to either a meditation group or a control group. The meditation group was asked to meditate for 20 minutes in a session led by a professional Zen master. The control group was asked to merely relax for 20 minutes.

The volunteers were then asked 20 questions, each with three or four correct answers – for instance: “Name one of the four seasons”. Just before the subjects saw the question on a computer screen, one potential answer – such as “spring” – flashed up for a subliminal 16 milliseconds.

The meditation group gave 6.8 answers, on average, that matched the subliminal words, whereas the control group gave just 4.9 (Consciousness and Cognition, DOI: 10.1016/j.concog.2012.02.010).
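
Two bits of quick arithmetic put those numbers in context (my asides, not claims from the paper): a 16-millisecond flash is roughly a single refresh of a standard 60 Hz display, a common choice for presentations meant to stay near or below the threshold of awareness, and 6.8 versus 4.9 matches out of 20 questions amounts to a relative difference of roughly 39 percent.

```python
# Quick arithmetic on the figures quoted above; the 60 Hz comparison is an aside, not from the study.
subliminal_ms = 16
frame_ms = 1000 / 60  # ~16.7 ms per frame on a standard 60 Hz display
print(f"the {subliminal_ms} ms flash fits within a single ~{frame_ms:.1f} ms frame at 60 Hz")

meditators, controls, questions = 6.8, 4.9, 20
print(f"meditators matched {meditators / questions:.0%} of subliminal answers, "
      f"controls {controls / questions:.0%} "
      f"(a relative difference of ~{(meditators - controls) / controls:.0%})")
```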

Strick thinks that the explanation lies in the difference between what the brain is paying attention to and what we are conscious of. Meditators are potentially accessing more of what the brain has paid attention to than non-meditators, she says.

“It is a truly exciting development that the second wave of rigorous, scientific meditation research is now yielding concrete results,” says Thomas Metzinger, at Johannes Gutenberg University in Mainz, Germany. “Meditation may be best seen as a process that literally expands the space of conscious experience.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Yoga.am.[end-div]

Good Grades and Good Drugs?

A sad story chronicling the rise in amphetamine use in the quest for good school grades. More frightening now is the increase in addiction among ever-younger kids, and not for the dubious goal of excelling at school. Many kids are just taking the drug to get high.

[div class=attrib]From the Telegraph:[end-div]

The New York Times has finally woken up to America’s biggest unacknowledged drug problem: the massive overprescription of the amphetamine drug Adderall for Attention Deficit Hyperactivity Disorder. Kids have been selling each other this powerful – and extremely moreish – mood enhancer for years, as ADHD diagnoses and prescriptions for the drug have shot up.

Now, children are snorting the stuff, breaking open the capsules and ingesting it using the time-honoured tool of a rolled-up bank note.

The NYT seems to think these teenage drug users are interested in boosting their grades. It claims that, for children without ADHD, “just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward”.

Really? There are two problems with this.

First, the idea that ADHD kids are “normal” on Adderall and its methylphenidate alternative Ritalin – gentler in its effect but still a psychostimulant – is open to question. Read this scorching article by the child psychologist Prof L Alan Sroufe, who says there’s no evidence that attention-deficit children are born with an organic disease, or that ADHD and non-ADHD kids react differently to their doctor-prescribed amphetamines. Yes, there’s an initial boost to concentration, but the effect wears off – and addiction often takes its place.

Second, the school pupils illicitly borrowing or buying Adderall aren’t necessarily doing it to concentrate on their work. They’re doing it to get high.

Adderall, with its mixture of amphetamine salts, has the ability to make you as euphoric as a line of cocaine – and keep you that way, particularly if it’s the slow-release version and you’re taking it for the first time. At least, that was my experience. Here’s what happened.

I was staying with a hospital consultant and his attorney wife in the East Bay just outside San Francisco. I’d driven overnight from Los Angeles after a flight from London; I was jetlagged, sleep-deprived and facing a deadline to write an article for the Spectator about, of all things, Bach cantatas.

Sitting in the courtyard garden with my laptop, I tapped and deleted one clumsy sentence after another. The sun was going down; my hostess saw me shivering and popped out with a blanket, a cup of herbal tea and ‘something to help you concentrate’.

I took the pill, didn’t notice any effect, and was glad when I was called in for dinner.

The dining room was a Californian take on the Second Empire. The lady next to me was a Southern Belle turned realtor, her eyelids already drooping from the effects of her third giant glass of Napa Valley chardonnay. She began to tell me about her divorce. Every time she refilled her glass, her new husband raised his eyes to heaven.

It felt as if I was stuck in an episode of Dallas, or a very bad Tennessee Williams play. But it didn’t matter in the least because, at some stage between the mozzarella salad and the grilled chicken, I’d become as high as a kite.

Adderall helps you concentrate, no doubt about it. I was riveted by the details of this woman’s alimony settlement. Even she, utterly self- obsessed as she was, was surprised by my gushing empathy. After dinner, I sat down at the kitchen table to finish the article. The head rush was beginning to wear off, but then, just as I started typing, a second wave of amphetamine pushed its way into my bloodstream. This was timed-release Adderall. Gratefully I plunged into 18th-century Leipzig, meticulously noting the catalogue numbers of cantatas. It was as if the great Johann Sebastian himself was looking over my shoulder. By the time I glanced at the clock, it was five in the morning. My pleasure at finishing the article was boosted by the dopamine high. What a lovely drug.

The blues didn’t hit me until the next day – and took the best part of a week to banish.

And this is what they give to nine-year-olds.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]From the New York Times:[end-div]

He steered into the high school parking lot, clicked off the ignition and scanned the scraps of his recent weeks. Crinkled chip bags on the dashboard. Soda cups at his feet. And on the passenger seat, a rumpled SAT practice book whose owner had been told since fourth grade he was headed to the Ivy League. Pencils up in 20 minutes.

The boy exhaled. Before opening the car door, he recalled recently, he twisted open a capsule of orange powder and arranged it in a neat line on the armrest. He leaned over, closed one nostril and snorted it.

Throughout the parking lot, he said, eight of his friends did the same thing.

The drug was not cocaine or heroin, but Adderall, an amphetamine prescribed for attention deficit hyperactivity disorder that the boy said he and his friends routinely shared to study late into the night, focus during tests and ultimately get the grades worthy of their prestigious high school in an affluent suburb of New York City. The drug did more than just jolt them awake for the 8 a.m. SAT; it gave them a tunnel focus tailor-made for the marathon of tests long known to make or break college applications.

“Everyone in school either has a prescription or has a friend who does,” the boy said.

At high schools across the United States, pressure over grades and competition for college admissions are encouraging students to abuse prescription stimulants, according to interviews with students, parents and doctors. Pills that have been a staple in some college and graduate school circles are going from rare to routine in many academically competitive high schools, where teenagers say they get them from friends, buy them from student dealers or fake symptoms to their parents and doctors to get prescriptions.

Of the more than 200 students, school officials, parents and others contacted for this article, about 40 agreed to share their experiences. Most students spoke on the condition that they be identified by only a first or middle name, or not at all, out of concern for their college prospects or their school systems’ reputations — and their own.

“It’s throughout all the private schools here,” said DeAnsin Parker, a New York psychologist who treats many adolescents from affluent neighborhoods like the Upper East Side. “It’s not as if there is one school where this is the culture. This is the culture.”

Observed Gary Boggs, a special agent for the Drug Enforcement Administration, “We’re seeing it all across the United States.”

The D.E.A. lists prescription stimulants like Adderall and Vyvanse (amphetamines) and Ritalin and Focalin (methylphenidates) as Class 2 controlled substances — the same as cocaine and morphine — because they rank among the most addictive substances that have a medical use. (By comparison, the long-abused anti-anxiety drug Valium is in the lower Class 4.) So they carry high legal risks, too, as few teenagers appreciate that merely giving a friend an Adderall or Vyvanse pill is the same as selling it and can be prosecuted as a felony.

While these medicines tend to calm people with A.D.H.D., those without the disorder find that just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward. “It’s like it does your work for you,” said William, a recent graduate of the Birch Wathen Lenox School on the Upper East Side of Manhattan.

But abuse of prescription stimulants can lead to depression and mood swings (from sleep deprivation), heart irregularities and acute exhaustion or psychosis during withdrawal, doctors say. Little is known about the long-term effects of abuse of stimulants among the young. Drug counselors say that for some teenagers, the pills eventually become an entry to the abuse of painkillers and sleep aids.

“Once you break the seal on using pills, or any of that stuff, it’s not scary anymore — especially when you’re getting A’s,” said the boy who snorted Adderall in the parking lot. He spoke from the couch of his drug counselor, detailing how he later became addicted to the painkiller Percocet and eventually heroin.

Paul L. Hokemeyer, a family therapist at Caron Treatment Centers in Manhattan, said: “Children have prefrontal cortexes that are not fully developed, and we’re changing the chemistry of the brain. That’s what these drugs do. It’s one thing if you have a real deficiency — the medicine is really important to those people — but not if your deficiency is not getting into Brown.”

The number of prescriptions for A.D.H.D. medications dispensed for young people ages 10 to 19 has risen 26 percent since 2007, to almost 21 million yearly, according to IMS Health, a health care information company — a number that experts estimate corresponds to more than two million individuals. But there is no reliable research on how many high school students take stimulants as a study aid. Doctors and teenagers from more than 15 schools across the nation with high academic standards estimated that the portion of students who do so ranges from 15 percent to 40 percent.

“They’re the A students, sometimes the B students, who are trying to get good grades,” said one senior at Lower Merion High School in Ardmore, a Philadelphia suburb, who said he makes hundreds of dollars a week selling prescription drugs, usually priced at $5 to $20 per pill, to classmates as young as freshmen. “They’re the quote-unquote good kids, basically.”

The trend was driven home last month to Nan Radulovic, a psychotherapist in Santa Monica, Calif. Within a few days, she said, an 11th grader, a ninth grader and an eighth grader asked for prescriptions for Adderall solely for better grades. From one girl, she recalled, it was not quite a request.

“If you don’t give me the prescription,” Dr. Radulovic said the girl told her, “I’ll just get it from kids at school.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Illegal use of Adderall is prevalent enough that many students seem to take it for granted. Courtesy of Minnesota Post / Flickr/ CC/ Hipsxxhearts.[end-div]

Thirty Books for the Under 30

The official start of summer in the northern hemisphere is just over a week away. So, it’s time to gather together some juicy reads for lazy days by the beach or under a sturdy shade tree. Flavorwire offers a classic list of 30 reads with a couple of surprises thrown in. And, we’ll qualify Flavorwire’s selection by adding that anyone over 30 should read these works as well.

[div class=attrib]From Flavorwire:[end-div]

Earlier this week, we stumbled across a list over at Divine Caroline of thirty books everyone should read before they’re thirty. While we totally agreed with some of the picks, we thought there were some essential reads missing, so we decided to put together a list of our own. We stuck to fiction for simplicity’s sake, and chose the books below on a variety of criteria, selecting enduring classics that have been informing new literature since their first printing, stories that speak specifically or most powerfully to younger readers, and books we simply couldn’t imagine reaching thirty without having read. Of course, we hope that you read more than thirty books by the time you hit your fourth decade, so this list is incomplete — but we had to stop somewhere. Click through to read the books we think everyone should read before their thirtieth birthday, and let us know which ones you would add in the comments.

Middlesex, Jeffrey Eugenides

Eugenides’s family epic of love, belonging and otherness is a must read for anyone who has ever had a family or felt like an outcast. So that’s pretty much everyone, we’d wager.

Ghost World, Daniel Clowes

Clowes writes some of the most essentially realistic teenagers we’ve ever come across, which is important when you are (or have ever been) a realistic teenager yourself.

On the Road, Jack Kerouac

Kerouac’s famous scroll must be read when it’s still likely to inspire exploration. Plus, then you’ll have ample time to develop your scorn towards it.

Their Eyes Were Watching God, Zora Neale Hurston

A seminal work in both African American and women’s literature — not to mention a riveting, electrifying and deeply moving read.

Cat’s Cradle, Kurt Vonnegut

Vonnegut’s hilarious, satirical fourth novel that earned him a Master’s in anthropology from the University of Chicago.

The Sun Also Rises, Ernest Hemingway

Think of him what you will, but everyone should read at least one Hemingway novel. In our experience, this one gets better the more you think about it, so we recommend reading it as early as possible.

The Road, Cormac McCarthy

The modern classic of post-apocalyptic novels, it’s also one of the best in a genre that’s only going to keep on exploding.

Maus, Art Spiegelman

A more perfect and affecting Holocaust book has never been written. And this one has pictures.

Ender’s Game, Orson Scott Card

One of the best science fiction novels of all time, recommended even for staunch realists. Serious, complicated and impossible to put down. Plus, Card’s masterpiece trusts in the power of children, something we all need to be reminded of once in a while.

Pride and Prejudice, Jane Austen

Yes, even for guys.

[div class=attrib]Check out the entire list after the jump.[end-div]

D-School is the Place

Forget art school, engineering school, law school and B-school (business). For wannabe innovators the current place to be is D-school. Design school, that is.

Design school teaches a problem-solving method known as “design thinking”. Before it was re-branded in corporatespeak, this used to be known as “trial and error”.

Many corporations are finding this approach to be both a challenge and a boon; after all, even in 2012, not many businesses encourage their employees to fail.

[div class=attrib]From the Wall Street Journal:[end-div]

In 2007, Scott Cook, founder of Intuit Inc., the software company behind TurboTax, felt the company wasn’t innovating fast enough. So he decided to adopt an approach to product development that has grown increasingly popular in the corporate world: design thinking.

Loosely defined, design thinking is a problem-solving method that involves close observation of users or customers and a development process of extensive—often rapid—trial and error.

Mr. Cook said the initiative, termed “Design for Delight,” involves field research with customers to understand their “pain points”—an examination of what frustrates them in their offices and homes.

Intuit staffers then “painstorm” to come up with a variety of solutions to address the problems, and experiment with customers to find the best ones.

In one instance, a team of Intuit employees was studying how customers could take pictures of tax forms to reduce typing errors. Some younger customers, taking photos with their smartphones, were frustrated that they couldn’t just complete their taxes on their mobiles. Thus was born the mobile tax app SnapTax in 2010, which has been downloaded more than a million times in the past two years, the company said.

At SAP AG, hundreds of employees across departments work on challenges, such as building a raincoat out of a trash bag or designing a better coffee cup. The hope is that the sessions will train them in the tenets of design thinking, which they can then apply to their own business pursuits, said Carly Cooper, an SAP director who runs many of the sessions.

Last year, when SAP employees talked to sales representatives after closing deals, they found that one of the sales representatives’ biggest concerns was simply, when were they going to get paid. The insight led SAP to develop a new mobile product allowing salespeople to check on the status of their commissions.

[div class=attrib]Read the entire article after the jump.[end-div]

The 10,000 Year Clock

Aside from the ubiquitous plastic grocery bag, will any human-made artifact last 10,000 years? Before you answer, let’s qualify the question by mandating that the artifact have some long-term value. That would seem to eliminate plastic bags, plastic toys embedded in fast food meals, and DVDs of reality “stars” ripped from YouTube. What does that leave? Most human-made products consisting of metals or biodegradable components, such as paper and wood, will rust, rot or break down in 20-300 years. Even some plastics left exposed to sun and air will break down within a thousand years. Of course, buried deep in a landfill, plastic containers, styrofoam cups and throwaway diapers may remain with us for tens or hundreds of thousands of years.

Archaeological excavations show us that artifacts made of glass and ceramic would fit the bill — lasting well into the year 12012 and beyond. But in the majority of cases we unearth only fragments of such things.

But what if some ingenious humans could build something that would still be around 10,000 years from now? Better still, something that will still function as designed 10,000 years from now. This would represent an extraordinary feat of contemporary design and engineering. And, more importantly, it would provide a powerful story for countless generations, beginning with ours.

So, enter Danny Hillis and the Clock of the Long Now (also known as the Millennium Clock or the 10,000 Year Clock). Danny Hillis is an inventor, scientist, and computer designer. He pioneered the concept of massively parallel computers.

In Hillis’ own words:

Ten thousand years – the life span I hope for the clock – is about as long as the history of human technology. We have fragments of pots that old. Geologically, it’s a blink of an eye. When you start thinking about building something that lasts that long, the real problem is not decay and corrosion, or even the power source. The real problem is people. If something becomes unimportant to people, it gets scrapped for parts; if it becomes important, it turns into a symbol and must eventually be destroyed. The only way to survive over the long run is to be made of materials large and worthless, like Stonehenge and the Pyramids, or to become lost. The Dead Sea Scrolls managed to survive by remaining lost for a couple millennia. Now that they’ve been located and preserved in a museum, they’re probably doomed. I give them two centuries – tops. The fate of really old things leads me to think that the clock should be copied and hidden.

Plans call for the 200-foot-tall 10,000 Year Clock to be installed inside a mountain in remote west Texas, with a second location in remote eastern Nevada. Design and engineering work on the clock, and preparation of the Clock’s Texas home, are underway.

For more on the 10,000 Year Clock jump to the Long Now Foundation, here.

[div class=attrib]More from Rationally Speaking:[end-div]

I recently read Brian Hayes’ wonderful collection of mathematically oriented essays called Group Theory In The Bedroom, and Other Mathematical Diversions. Not surprisingly, the book contained plenty of philosophical musings too. In one of the essays, called “Clock of Ages,” Hayes describes the intricacies of clock building and he provides some interesting historical fodder.

For instance, we learn that in the sixteenth century Conrad Dasypodius, a Swiss mathematician, could have chosen to restore the old Clock of the Three Kings in Strasbourg Cathedral. Dasypodius, however, preferred to build a new clock of his own rather than maintain an old one. Over two centuries later, Jean-Baptiste Schwilgue was asked to repair the clock built by Dasypodius, but he decided to build a new and better clock which would last for 10,000 years.

Did you know that a large-scale project is underway to build another clock that will be able to run with minimal maintenance and interruption for ten millennia? It’s called The 10,000 Year Clock and its construction is sponsored by The Long Now Foundation. The 10,000 Year Clock is, however, being built for more than just its precision and durability. If the creators’ intentions are realized, then the clock will serve as a symbol to encourage long-term thinking about the needs and claims of future generations. Of course, if all goes to plan, our future descendants will be left to maintain it too. The interesting question is: will they want to?

If history is any indicator, then I think you know the answer. As Hayes puts it: “The fact is, winding and dusting and fixing somebody else’s old clock is boring. Building a brand-new clock of your own is much more fun, especially if you can pretend that it’s going to inspire awe and wonder for the ages to come. So why not have the fun now and let the future generations do the boring bit.” I think Hayes is right; it seems humans are, by nature, builders and not maintainers.

Projects like The 10,000 Year Clock are often undertaken with the noblest of environmental intentions, but the old proverb is relevant here: the road to hell is paved with good intentions. What I find troubling, then, is that much of the environmental do-goodery in the world may actually be making things worse. It’s often nothing more than a form of conspicuous consumption, which is a term coined by the economist and sociologist Thorstein Veblen. When it pertains specifically to “green” purchases, I like to call it being conspicuously environmental. Let’s use cars as an example. Obviously it depends on how the calculations are processed, but in many instances keeping and maintaining an old clunker is more environmentally friendly than is buying a new hybrid. I can’t help but think that the same must be true of building new clocks.

In his book, The Conundrum, David Owen writes: “How appealing would ‘green’ seem if it meant less innovation and fewer cool gadgets — not more?” Not very, although I suppose that was meant to be a rhetorical question. I enjoy cool gadgets as much as the next person, but it’s delusional to believe that conspicuous consumption is somehow a gift to the environment.

Using insights from evolutionary psychology and signaling theory, I think there is also another issue at play here. Buying conspicuously environmental goods, like a Prius, sends a signal to others that one cares about the environment. But if it’s truly the environment (and not signaling) that one is worried about, then surely less consumption must be better than more. The homeless person ironically has a lesser environmental impact than your average yuppie, yet he is rarely recognized as an environmental hero. Using this logic I can’t help but conclude that killing yourself might just be the most environmentally friendly act of all time (if it wasn’t blatantly obvious, this is a joke). The lesson here is that we shouldn’t confuse smug signaling with actually helping.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Prototype of the 10,000 Year Clock. Courtesy of the Long Now Foundation / Science Museum of London.[end-div]

The SpeechJammer and Other Innovations to Come

The mind boggles at the possible situations when a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

[div class=attrib]From the New York Times:[end-div]

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”
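
The underlying trick, delayed auditory feedback, is simple enough to sketch in code. The following is a minimal toy illustration, not Kurihara’s actual device: it assumes the third-party Python packages numpy and sounddevice, and the 16 kHz sample rate and 200-millisecond delay are arbitrary choices of mine.

# Minimal sketch of delayed auditory feedback: capture the microphone
# and play it back a few hundred milliseconds later.
import numpy as np
import sounddevice as sd

SAMPLE_RATE = 16000          # samples per second (assumed value)
DELAY_MS = 200               # playback delay in milliseconds (assumed value)
delay_samples = SAMPLE_RATE * DELAY_MS // 1000

# Ring buffer holding exactly DELAY_MS worth of audio.
buffer = np.zeros(delay_samples, dtype=np.float32)
write_pos = 0

def callback(indata, outdata, frames, time, status):
    """Play the sample recorded delay_samples ago, then store the new input."""
    global write_pos
    mono = indata[:, 0]
    for i in range(frames):
        pos = (write_pos + i) % delay_samples
        outdata[i, 0] = buffer[pos]   # output the delayed sample
        buffer[pos] = mono[i]         # overwrite it with the fresh input
    write_pos = (write_pos + frames) % delay_samples

with sd.Stream(samplerate=SAMPLE_RATE, channels=1, dtype="float32",
               callback=callback):
    sd.sleep(10_000)  # run the feedback loop for ten seconds

The ring buffer is what produces the constant lag: every incoming sample is played back exactly delay_samples later, which is the disruption the SpeechJammer exploits.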

[div class=attrib]Read the entire list of 32 weird and wonderful innovations after the jump.[end-div]

[div class=attrib]Graphic courtesy of Chris Nosenzo / New York Times.[end-div]

Ray Bradbury’s Real World Dystopia

Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern-day reality.

Bradbury’s body of work, beginning in the early 1940s, introduced us to ATMs, wall-mounted flat-screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently, he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.

An especially thoughtful opinion from author Tim Kreider on Bradbury’s life as a “misanthropic humanist”.

[div class=attrib]From the New York Times:[end-div]

IF you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.

Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.

Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.

Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”

Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first’” has gone from science-fiction satire to dreary realism.

“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”

Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?

The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)

But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

MondayPoem: McDonalds Is Impossible

According to Chelsea Martin’s website, “chelsea martin ‘studied’ art and writing at california college of the arts (though she holds no degree because she owes $300 in tuition)”.

[div class=attrib]From Poetry Foundation:[end-div]

Chelsea Martin was 23 when she published her first collection, Everything Was Fine until Whatever (2009), a genre-blurring book of short fiction, nonfiction, prose, poetry, sketches, and memoir. She is also the author, most recently, of The Real Funny Thing about Apathy (2010).

By Chelsea Martin

– McDonalds is Impossible

Eating food from McDonald’s is mathematically impossible.
Because before you can eat it, you have to order it.
And before you can order it, you have to decide what you want.
And before you can decide what you want, you have to read the menu.
And before you can read the menu, you have to be in front of the menu.
And before you can be in front of the menu, you have to wait in line.
And before you can wait in line, you have to drive to the restaurant.
And before you can drive to the restaurant, you have to get in your car.
And before you can get in your car, you have to put clothes on.
And before you can put clothes on, you have to get out of bed.
And before you can get out of bed, you have to stop being so depressed.
And before you can stop being so depressed, you have to understand what depression is.
And before you can understand what depression is, you have to think clearly.
And before you can think clearly, you have to turn off the TV.
And before you can turn off the TV, you have to free your hands.
And before you can free your hands, you have to stop masturbating.
And before you can stop masturbating, you have to get off.
And before you can get off, you have to imagine someone you really like with his pants off, encouraging you to explore his enlarged genitalia.
And before you can imagine someone you really like with his pants off encouraging you to explore his enlarged genitalia, you have to imagine that person stroking your neck.
And before you can imagine that person stroking your neck, you have to imagine that person walking up to you looking determined.
And before you can imagine that person walking up to you looking determined, you have to choose who that person is.
And before you can choose who that person is, you have to like someone.
And before you can like someone, you have to interact with someone.
And before you can interact with someone, you have to introduce yourself.
And before you can introduce yourself, you have to be in a social situation.
And before you can be in a social situation, you have to be invited to something somehow.
And before you can be invited to something somehow, you have to receive a telephone call from a friend.
And before you can receive a telephone call from a friend, you have to make a reputation for yourself as being sort of fun.
And before you can make a reputation for yourself as being sort of fun, you have to be noticeably fun on several different occasions.
And before you can be noticeably fun on several different occasions, you have to be fun once in the presence of two or more people.
And before you can be fun once in the presence of two or more people, you have to be drunk.
And before you can be drunk, you have to buy alcohol.
And before you can buy alcohol, you have to want your psychological state to be altered.
And before you can want your psychological state to be altered, you have to recognize that your current psychological state is unsatisfactory.
And before you can recognize that your current psychological state is unsatisfactory, you have to grow tired of your lifestyle.
And before you can grow tired of your lifestyle, you have to repeat the same patterns over and over endlessly.
And before you can repeat the same patterns over and over endlessly, you have to lose a lot of your creativity.
And before you can lose a lot of your creativity, you have to stop reading books.
And before you can stop reading books, you have to think that you would benefit from reading less frequently.
And before you can think that you would benefit from reading less frequently, you have to be discouraged by the written word.
And before you can be discouraged by the written word, you have to read something that reinforces your insecurities.
And before you can read something that reinforces your insecurities, you have to have insecurities.
And before you can have insecurities, you have to be awake for part of the day.
And before you can be awake for part of the day, you have to feel motivation to wake up.
And before you can feel motivation to wake up, you have to dream of perfectly synchronized conversations with people you desire to talk to.
And before you can dream of perfectly synchronized conversations with people you desire to talk to, you have to have a general idea of what a perfectly synchronized conversation is.
And before you can have a general idea of what a perfectly synchronized conversation is, you have to watch a lot of movies in which people successfully talk to each other.
And before you can watch a lot of movies in which people successfully talk to each other, you have to have an interest in other people.
And before you can have an interest in other people, you have to have some way of benefiting from other people.
And before you can have some way of benefiting from other people, you have to have goals.
And before you can have goals, you have to want power.
And before you can want power, you have to feel greed.
And before you can feel greed, you have to feel more deserving than others.
And before you can feel more deserving than others, you have to feel a general disgust with the human population.
And before you can feel a general disgust with the human population, you have to be emotionally wounded.
And before you can be emotionally wounded, you have to be treated badly by someone you think you care about while in a naive, vulnerable state.
And before you can be treated badly by someone you think you care about while in a naive, vulnerable state, you have to feel inferior to that person.
And before you can feel inferior to that person, you have to watch him laughing and walking towards his drum kit with his shirt off and the sun all over him.
And before you can watch him laughing and walking towards his drum kit with his shirt off and the sun all over him, you have to go to one of his outdoor shows.
And before you can go to one of his outdoor shows, you have to pretend to know something about music.
And before you can pretend to know something about music, you have to feel embarrassed about your real interests.
And before you can feel embarrassed about your real interests, you have to realize that your interests are different from other people’s interests.
And before you can realize that your interests are different from other people’s interests, you have to be regularly misunderstood.
And before you can be regularly misunderstood, you have to be almost completely socially debilitated.
And before you can be almost completely socially debilitated, you have to be an outcast.
And before you can be an outcast, you have to be rejected by your entire group of friends.
And before you can be rejected by your entire group of friends, you have to be suffocatingly loyal to your friends.
And before you can be suffocatingly loyal to your friends, you have to be afraid of loss.
And before you can be afraid of loss, you have to lose something of value.
And before you can lose something of value, you have to realize that that thing will never change.
And before you can realize that that thing will never change, you have to have the same conversation with your grandmother forty or fifty times.
And before you can have the same conversation with your grandmother forty or fifty times, you have to have a desire to talk to her and form a meaningful relationship.
And before you can have a desire to talk to her and form a meaningful relationship, you have to love her.
And before you can love her, you have to notice the great tolerance she has for you.
And before you can notice the great tolerance she has for you, you have to break one of her favorite china teacups that her mother gave her and forget to apologize.
And before you can break one of her favorite china teacups that her mother gave her and forget to apologize, you have to insist on using the teacups for your imaginary tea party.
And before you can insist on using the teacups for your imaginary tea party, you have to cultivate your imagination.
And before you can cultivate your imagination, you have to spend a lot of time alone.
And before you can spend a lot of time alone, you have to find ways to sneak away from your siblings.
And before you can find ways to sneak away from your siblings, you have to have siblings.
And before you can have siblings, you have to underwhelm your parents.
And before you can underwhelm your parents, you have to be quiet, polite and unnoticeable.
And before you can be quiet, polite and unnoticeable, you have to understand that it is possible to disappoint your parents.
And before you can understand that it is possible to disappoint your parents, you have to be harshly reprimanded.
And before you can be harshly reprimanded, you have to sing loudly at an inappropriate moment.
And before you can sing loudly at an inappropriate moment, you have to be happy.
And before you can be happy, you have to be able to recognize happiness.
And before you can be able to recognize happiness, you have to know distress.
And before you can know distress, you have to be watched by an insufficient babysitter for one week.
And before you can be watched by an insufficient babysitter for one week, you have to vomit on the other, more pleasant babysitter.
And before you can vomit on the other, more pleasant babysitter, you have to be sick.
And before you can be sick, you have to eat something you’re allergic to.
And before you can eat something you’re allergic to, you have to have allergies.
And before you can have allergies, you have to be born.
And before you can be born, you have to be conceived.
And before you can be conceived, your parents have to copulate.
And before your parents can copulate, they have to be attracted to one another.
And before they can be attracted to one another, they have to have common interests.
And before they can have common interests, they have to talk to each other.
And before they can talk to each other, they have to meet.
And before they can meet, they have to have in-school suspension on the same day.
And before they can have in-school suspension on the same day, they have to get caught sneaking off campus separately.
And before they can get caught sneaking off campus separately, they have to think of somewhere to go.
And before they can think of somewhere to go, they have to be familiar with McDonald’s.
And before they can be familiar with McDonald’s, they have to eat food from McDonald’s.
And eating food from McDonald’s is mathematically impossible.

Mutant Gravity and Dark Magnetism

Scientific consensus holds that our universe is not only expanding, but expanding at an ever-increasing rate. So, sometime in the very distant future (tens of billions of years from now) our Milky Way galaxy will be mostly alone, accompanied only by its close galactic neighbors, such as Andromeda. All else in the universe will have receded beyond the horizon of visible light. And yet, for all the experimental evidence, no one knows the precise cause(s) of this acceleration, or even of the expansion itself. But there is no shortage of bold new theories.

[div class=attrib]From New Scientist:[end-div]

WE WILL be lonely in the late days of the cosmos. Its glittering vastness will slowly fade as countless galaxies retreat beyond the horizon of our vision. Tens of billions of years from now, only a dense huddle of nearby galaxies will be left, gazing out into otherwise blank space.

That gloomy future comes about because space is expanding ever faster, allowing far-off regions to slip across the boundary from which light has time to reach us. We call the author of these woes dark energy, but we are no nearer to discovering its identity. Might the culprit be a repulsive force that emerges from the energy of empty space, or perhaps a modification of gravity at the largest scales? Each option has its charms, but also profound problems.

But what if that mysterious force making off with the light of the cosmos is an alien echo of light itself? Light is just an expression of the force of electromagnetism, and vast electromagnetic waves of a kind forbidden by conventional physics, with wavelengths trillions of times larger than the observable universe, might explain dark energy’s baleful presence. That is the bold notion of two cosmologists who think that such waves could also account for the mysterious magnetic fields that we see threading through even the emptiest parts of our universe. Smaller versions could be emanating from black holes within our galaxy.

It is almost two decades since we realised that the universe is running away with itself. The discovery came from observations of supernovae that were dimmer, and so further away, than was expected, and earned its discoverers the Nobel prize in physics in 2011.

Prime suspect in the dark-energy mystery is the cosmological constant, an unchanging energy which might emerge from the froth of short-lived, virtual particles that according to quantum theory are fizzing about constantly in otherwise empty space.

Mutant gravity

To cause the cosmic acceleration we see, dark energy would need to have an energy density of about half a joule per cubic kilometre of space. When physicists try to tot up the energy of all those virtual particles, however, the answer comes to either exactly zero (which is bad), or something so enormous that empty space would rip all matter to shreds (which is very bad). In this latter case the answer is a staggering 120 orders of magnitude out, making it a shoo-in for the least accurate prediction in all of physics.
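
To put rough numbers on that mismatch, here is a back-of-the-envelope comparison of my own (not the article’s), assuming the usual Planck-scale cutoff for the vacuum-energy estimate:

\[
\rho_{\Lambda}^{\mathrm{obs}} \;\approx\; \frac{0.5\ \mathrm{J}}{(10^{3}\ \mathrm{m})^{3}} \;=\; 5\times10^{-10}\ \mathrm{J\,m^{-3}},
\qquad
\rho_{\mathrm{vac}} \;\sim\; \frac{c^{7}}{\hbar G^{2}} \;\approx\; 5\times10^{113}\ \mathrm{J\,m^{-3}},
\]
\[
\frac{\rho_{\mathrm{vac}}}{\rho_{\Lambda}^{\mathrm{obs}}} \;\sim\; 10^{123}.
\]

The exact exponent depends on where the high-energy cutoff is placed, which is why the discrepancy is usually rounded off and quoted as “roughly 120 orders of magnitude”.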

This stumbling block has sent some researchers down another path. They argue that in dark energy we are seeing an entirely new side to gravity. At distances of many billions of light years, it might turn from an attractive to a repulsive force.

But it is dangerous to be so cavalier with gravity. Einstein’s general theory of relativity describes gravity as the bending of space and time, and predicts the motions of planets and spacecraft in our own solar system with cast-iron accuracy. Try bending the theory to make it fit acceleration on a cosmic scale, and it usually comes unstuck closer to home.

That hasn’t stopped many physicists persevering along this route. Until recently, Jose Beltrán and Antonio Maroto were among them. In 2008 at the Complutense University of Madrid, Spain, they were playing with a particular version of a mutant gravity model called a vector-tensor theory, which they had found could mimic dark energy. Then came a sudden realisation. The new theory was supposed to be describing a strange version of gravity, but its equations bore an uncanny resemblance to some of the mathematics underlying another force. “They looked like electromagnetism,” says Beltrán, now based at the University of Geneva in Switzerland. “We started to think there could be a connection.”

So they decided to see what would happen if their mathematics described not masses and space-time, but magnets and voltages. That meant taking a fresh look at electromagnetism. Like most of nature’s fundamental forces, electromagnetism is best understood as a phenomenon in which things come chopped into little pieces, or quanta. In this case the quanta are photons: massless, chargeless particles carrying fluctuating electric and magnetic fields that point at right angles to their direction of motion.

Alien photons

This description, called quantum electrodynamics or QED, can explain a vast range of phenomena, from the behaviour of light to the forces that bind molecules together. QED has arguably been tested more precisely than any other physical theory, but it has a dark secret. It wants to spit out not only photons, but also two other, alien entities.

The first kind is a wave in which the electric field points along the direction of motion, rather than at right angles as it does with ordinary photons. This longitudinal mode moves rather like a sound wave in air. The second kind, called a temporal mode, has no magnetic field. Instead, it is a wave of pure electric potential, or voltage. Like all quantum entities, these waves come in particle packets, forming two new kinds of photon.

As we have never actually seen either of these alien photons in reality, physicists found a way to hide them. They are spirited away using a mathematical fix called the Lorenz condition, which means that all their attributes are always equal and opposite, cancelling each other out exactly. “They are there, but you cannot see them,” says Beltrán.
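
For reference, in standard notation (mine, not the researchers’), the Lorenz condition is a constraint on the electromagnetic four-potential:

\[
\partial_{\mu} A^{\mu} \;=\; \frac{1}{c^{2}}\frac{\partial \phi}{\partial t} + \nabla\cdot\mathbf{A} \;=\; 0 .
\]

In the textbook Gupta-Bleuler treatment of QED this condition is imposed on physical states, and it is exactly what forces the temporal and longitudinal photon contributions to cancel in anything observable.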

Beltrán and Maroto’s theory looked like electromagnetism, but without the Lorenz condition. So they worked through their equations to see what cosmological implications that might have.

The strange waves normally banished by the Lorenz condition may come into being as brief quantum fluctuations – virtual waves in the vacuum – and then disappear again. In the early moments of the universe, however, there is thought to have been an episode of violent expansion called inflation, which was driven by very powerful repulsive gravity. The force of this expansion grabbed all kinds of quantum fluctuations and amplified them hugely. It created ripples in the density of matter, for example, which eventually seeded galaxies and other structures in the universe.

Crucially, inflation could also have boosted the new electromagnetic waves. Beltrán and Maroto found that this process would leave behind vast temporal modes: waves of electric potential with wavelengths many orders of magnitude larger than the observable universe. These waves contain some energy but because they are so vast we do not perceive them as waves at all. So their energy would be invisible, dark… perhaps, dark energy?

Beltrán and Maroto called their idea dark magnetism (arxiv.org/abs/1112.1106). Unlike the cosmological constant, it may be able to explain the actual quantity of dark energy in the universe. The energy in those temporal modes depends on the exact time inflation started. One plausible moment is about 10 trillionths of a second after the big bang, when the universe cooled below a critical temperature and electromagnetism split from the weak nuclear force to become a force in its own right. Physics would have suffered a sudden wrench, enough perhaps to provide the impetus for inflation.

If inflation did happen at this “electroweak transition”, Beltrán and Maroto calculate that it would have produced temporal modes with an energy density close to that of dark energy. The correspondence is only within an order of magnitude, which may not seem all that precise. In comparison with the cosmological constant, however, it is mildly miraculous.

The theory might also explain the mysterious existence of large-scale cosmic magnetic fields. Within galaxies we see the unmistakable mark of magnetic fields as they twist the polarisation of light. Although the turbulent formation and growth of galaxies could boost a pre-existing field, it is not clear where that seed field would have come from.

Even more strangely, magnetic fields seem to have infiltrated the emptiest deserts of the cosmos. Their influence was noticed in 2010 by Andrii Neronov and Ievgen Vovk at the Geneva Observatory. Some distant galaxies emit blistering gamma rays with energies in the teraelectronvolt range. These hugely energetic photons should smack into background starlight on their way to us, creating electrons and positrons that in turn will boost other photons up to gamma energies of around 100 gigaelectronvolts. The trouble is that astronomers see relatively little of this secondary radiation. Neronov and Vovk suggest that is because a diffuse magnetic field is randomly bending the path of electrons and positrons, making their emission more diffuse (Science, vol 328, p 73).

“It is difficult to explain cosmic magnetic fields on the largest scales by conventional mechanisms,” says astrophysicist Larry Widrow of Queen’s University in Kingston, Ontario, Canada. “Their existence in the voids might signal an exotic mechanism.” One suggestion is that giant flaws in space-time called cosmic strings are whipping them up.

With dark magnetism, such a stringy solution would be superfluous. As well as the gigantic temporal modes, dark magnetism should also lead to smaller longitudinal waves bouncing around the cosmos. These waves could generate magnetism on the largest scales and in the emptiest voids.

To begin with, Beltrán and Maroto had some qualms. “It is always dangerous to modify a well-established theory,” says Beltrán. Cosmologist Sean Carroll at the California Institute of Technology in Pasadena, echoes this concern. “They are doing extreme violence to electromagnetism. There are all sorts of dangers that things might go wrong,” he says. Such meddling could easily throw up absurdities, predicting that electromagnetic forces are different from what we actually see.

The duo soon reassured themselves, however. Although the theory means that temporal and longitudinal modes can make themselves felt, the only thing that can generate them is an ultra-strong gravitational field such as the repulsive field that sprang up in the era of inflation. So within the atom, in all our lab experiments, and out there among the planets, electromagnetism carries on in just the same way as QED predicts.

Carroll is not convinced. “It seems like a long shot,” he says. But others are being won over. Gonzalo Olmo, a cosmologist at the University of Valencia, Spain, was initially sceptical but is now keen. “The idea is fantastic. If we quantise electromagnetic fields in an expanding universe, the effect follows naturally.”

So how might we tell whether the idea is correct? Dark magnetism is not that easy to test. It is almost unchanging, and would stretch space in almost exactly the same way as a cosmological constant, so we can’t tell the two ideas apart simply by watching how cosmic acceleration has changed over time.

Ancient mark

Instead, the theory might be challenged by peering deep into the cosmic microwave background, a sea of radiation emitted when the universe was less than 400,000 years old. Imprinted on this radiation are the original ripples of matter density caused by inflation, and it may bear another ancient mark. The turmoil of inflation should have energised gravitational waves, travelling warps in space-time that stretch and squeeze everything they pass through. These waves should affect the polarisation of cosmic microwaves in a distinctive way, which could tell us about the timing and the violence of inflation. The European Space Agency’s Planck spacecraft might just spot this signature. If Planck or a future mission finds that inflation happened before the electroweak transition, at a higher energy scale, then that would rule out dark magnetism in its current form.

Olmo thinks that the theory might anyhow need some numerical tweaking, so that might not be fatal, although it would be a blow to lose the link between the electroweak transition and the correct amount of dark energy.

One day, we might even be able to see the twisted light of dark magnetism. In its present incarnation with inflation at the electroweak scale, the longitudinal waves would all have wavelengths greater than a few hundred million kilometres, longer than the distance from Earth to the sun. Detecting a light wave efficiently requires an instrument not much smaller than the wavelength, but in the distant future it might just be possible to pick up such waves using space-based radio telescopes linked up across the solar system. If inflation kicked in earlier at an even higher energy, as suggested by Olmo, some of the longitudinal waves could be much shorter. That would bring them within reach of Earth-based technology. Beltrán suggests that they might be detected with the Square Kilometre Array – a massive radio instrument due to come on stream within the next decade.

If these dark electromagnetic waves can be created by strong gravitational fields, then they could also be produced by the strongest fields in the cosmos today, those generated around black holes. Beltrán suggests that waves may be emitted by the black hole at the centre of the Milky Way. They might be short enough for us to see – but they could easily be invisibly faint. Beltrán and Maroto are planning to do the calculations to find out.

One thing they have calculated from their theory is the voltage of the universe. The voltage of the vast temporal waves of electric potential started at zero when they were first created at the time of inflation, and ramped up steadily. Today, it has reached a pretty lively 10^27 volts, or a billion billion gigavolts.

Just as well for us that it has nowhere to discharge. Unless, that is, some other strange quirk of cosmology brings a parallel universe nearby. The encounter would probably destroy the universe as we know it, but at least then our otherwise dark and lonely future would end with the mother of all lightning bolts.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Graphic courtesy of NASA / WMAP.[end-div]