Faux Fashion is More Than Skin-Deep

Some innovative research shows that we are generally more inclined to cheat others if we are clad in counterfeit designer clothing or carrying faux accessories.

From Scientific American:

Let me tell you the story of my debut into the world of fashion. When Jennifer Wideman Green (a friend of mine from graduate school) ended up living in New York City, she met a number of people in the fashion industry. Through her I met Freeda Fawal-Farah, who worked for Harper’s Bazaar. A few months later Freeda invited me to give a talk at the magazine, and because it was such an atypical crowd for me, I agreed.

I found myself on a stage before an auditorium full of fashion mavens. Each woman was like an exhibit in a museum: her jewelry, her makeup, and, of course, her stunning shoes. I talked about how people make decisions, how we compare prices when we are trying to figure out how much something is worth, how we compare ourselves to others, and so on. They laughed when I hoped they would, asked thoughtful questions, and offered plenty of their own interesting ideas. When I finished the talk, Valerie Salembier, the publisher of Harper’s Bazaar, came onstage, hugged and thanked me—and gave me a stylish black Prada overnight bag.

I headed downtown to my next meeting. I had some time to kill, so I decided to take a walk. As I wandered, I couldn’t help thinking about my big black leather bag with its large Prada logo. I debated with myself: should I carry my new bag with the logo facing outward? That way, other people could see and admire it (or maybe just wonder how someone wearing jeans and red sneakers could possibly have procured it). Or should I carry it with the logo facing toward me, so that no one could recognize that it was a Prada? I decided on the latter and turned the bag around.

Even though I was pretty sure that with the logo hidden no one realized it was a Prada bag, and despite the fact that I don’t think of myself as someone who cares about fashion, something felt different to me. I was continuously aware of the brand on the bag. I was wearing Prada! And it made me feel different; I stood a little straighter and walked with a bit more swagger. I wondered what would happen if I wore Ferrari underwear. Would I feel more invigorated? More confident? More agile? Faster?

I continued walking and passed through Chinatown, which was bustling with activity. Not far away, I spotted an attractive young couple in their twenties taking in the scene. A Chinese man approached them. “Handbags, handbags!” he called, tilting his head to indicate the direction of his small shop. After a moment or two, the woman asked the Chinese man, “You have Prada?”

The vendor nodded. I watched as she conferred with her partner. He smiled at her, and they followed the man to his stand.

The Prada they were referring to, of course, was not actually Prada. Nor were the $5 “designer” sunglasses on display in his stand really Dolce&Gabbana. And the Armani perfumes displayed over by the street food stands? Fakes too.

From Ermine to Armani

Reaching far back, ancient Roman law included a set of regulations called sumptuary laws, which filtered down through the centuries into the laws of nearly all European nations. Among other things, the laws dictated who could wear what, according to their station and class. For example, in Renaissance England, only the nobility could wear certain kinds of fur, fabrics, laces, decorative beading per square foot, and so on, while those in the gentry could wear decidedly less appealing clothing. (The poorest were generally excluded from the law, as there was little point in regulating musty burlap, wool, and hair shirts.) People who “dressed above their station” were silently but directly lying to those around them. And those who broke the law were often hit with fines and other punishments.

What may seem to be an absurd degree of obsessive compulsion on the part of the upper crust was in reality an effort to ensure that people were what they signaled themselves to be; the system was designed to eliminate disorder and confusion. Although our current sartorial class system is not as rigid as it was in the past, the desire to signal success and individuality is as strong today as ever.

When thinking about my experience with the Prada bag, I wondered whether there were other psychological forces related to fakes that go beyond external signaling. There I was in Chinatown holding my real Prada bag, watching the woman emerge from the shop holding her fake one. Despite the fact that I had neither picked out nor paid for mine, it felt to me that there was a substantial difference between the way I related to my bag and the way she related to hers.

More generally, I started wondering about the relationship between what we wear and how we behave, and it made me think about a concept that social scientists call self-signaling. The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people—inferring who we are and what we like from our actions.

For example, imagine that you see a beggar on the street. Rather than ignoring him or giving him money, you decide to buy him a sandwich. The action in itself does not define who you are, your morality, or your character, but you interpret the deed as evidence of your compassionate and charitable character. Now, armed with this “new” information, you start believing more intensely in your own benevolence. That’s self-signaling at work.

The same principle could also apply to fashion accessories. Carrying a real Prada bag—even if no one else knows it is real—could make us think and act a little differently than if we were carrying a counterfeit one. Which brings us to the questions: Does wearing counterfeit products somehow make us feel less legitimate? Is it possible that accessorizing with fakes might affect us in unexpected and negative ways?

Calling All Chloés

I decided to call Freeda and tell her about my recent interest in high fashion. During our conversation, Freeda promised to convince a fashion designer to lend me some items to use in some experiments. A few weeks later, I received a package from the Chloé label containing twenty handbags and twenty pairs of sunglasses. The statement accompanying the package told me that the handbags were estimated to be worth around $40,000 and the sunglasses around $7,000. (The rumor about this shipment quickly traveled around Duke, and I became popular among the fashion-minded crowd.)

With those hot commodities in hand, Francesca Gino, Mike Norton (both professors at Harvard University), and I set about testing whether participants who wore fake products would feel and behave differently from those wearing authentic ones. If our participants felt that wearing fakes would broadcast (even to themselves) a less honorable self-image, we wondered whether they might start thinking of themselves as somewhat less honest. And with this tainted self-concept in mind, would they be more likely to continue down the road of dishonesty?

Using the lure of Chloé accessories, we enlisted many female MBA students for our experiment. We assigned each woman to one of three conditions: authentic, fake, or no information. In the authentic condition, we told participants that they would be donning real Chloé designer sunglasses. In the fake condition, we told them that they would be wearing counterfeit sunglasses that looked identical to those made by Chloé (in actuality, all the products we used were the real McCoy). Finally, in the no-information condition, we didn’t say anything about the authenticity of the sunglasses.

Once the women donned their sunglasses, we directed them to the hallway, where we asked them to look at different posters and out the windows so that they could later evaluate the quality and experience of looking through their sunglasses. Soon after, we called them into another room for another task.

In this task, the participants were given 20 sets of 12 numbers (3.42, 7.32 and so on), and they were asked to find in each set the two numbers that add up to 10. They had five minutes to solve as many as possible and were paid for each correct answer. We set up the test so that the women could cheat—report that they solved more sets than they did (after shredding their worksheet and all the evidence)—while allowing us to figure out who cheated and by how much (by rigging the shredders so that they only cut the sides of the paper).
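As a sketch, the search the participants raced through is a simple pairing problem. The twelve numbers below are invented for illustration; only the two-decimal format and the target of 10 come from the task description:

```python
from itertools import combinations

def find_pair(numbers, target=10.0):
    """Return the first pair of numbers summing exactly to `target`.

    Works in integer cents to sidestep floating-point rounding.
    """
    cents = [round(n * 100) for n in numbers]
    goal = round(target * 100)
    for (i, a), (j, b) in combinations(enumerate(cents), 2):
        if a + b == goal:
            return numbers[i], numbers[j]
    return None

# A hypothetical set of 12 numbers like those on a worksheet
matrix = [3.42, 7.32, 1.69, 4.67, 5.82, 6.36,
          2.91, 8.13, 5.27, 9.04, 6.19, 4.18]
print(find_pair(matrix))  # → (5.82, 4.18)
```

Twenty of these in five minutes leaves roughly fifteen seconds per set, which is why most people solve only a handful honestly.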

Over the years we have carried out many versions of this experiment, and we repeatedly find that a lot of people cheat by a few questions. This experiment was no different in that regard, but what was particularly interesting was the effect of wearing counterfeits. While “only” 30 percent of the participants in the authentic condition reported solving more matrices than they actually had, 74 percent of those in the fake condition did so. These results gave rise to another interesting question. Did the presumed fakeness of the product make the women cheat more than they naturally would? Or did the genuine Chloé label make them behave more honestly than they would otherwise?

This is why we also had a no-information condition, in which we didn’t mention anything about whether the sunglasses were real or fake. In that condition 42 percent of the women cheated. That result was between the other two, but it was much closer to the authentic condition (in fact, the two conditions were not statistically different from each other). These results suggest that wearing a genuine product does not increase our honesty (or at least not by much). But once we knowingly put on a counterfeit product, moral constraints loosen to some degree, making it easier for us to take further steps down the path of dishonesty.

The moral of the story? If you, your friend, or someone you are dating wears counterfeit products, be careful! Another act of dishonesty may be closer than you expect.

Up to No Good

These results led us to another question: if wearing counterfeits changes the way we view our own behavior, does it also cause us to be more suspicious of others? To find out, we asked another group of participants to put on what we told them were either real or counterfeit Chloé sunglasses. This time, we asked them to fill out a rather long survey with their sunglasses on. In this survey, we included three sets of questions. The questions in set A asked participants to estimate the likelihood that people they know might engage in various ethically questionable behaviors such as standing in the express line with too many groceries. The questions in set B asked them to estimate the likelihood that when people say particular phrases, including “Sorry, I’m late. Traffic was terrible,” they are lying. Set C presented participants with two scenarios depicting someone who has the opportunity to behave dishonestly, and asked them to estimate the likelihood that the person in the scenario would take the opportunity to cheat.

What were the results? You guessed it. When reflecting on the behavior of people they know, participants in the counterfeit condition judged their acquaintances to be more likely to behave dishonestly than did participants in the authentic condition. They also interpreted the list of common excuses as more likely to be lies, and judged the actor in the two scenarios as being more likely to choose the shadier option. We concluded that counterfeit products not only tend to make us more dishonest; they also cause us to view others as less than honest as well.

Read the entire article after the jump.

La Macchina: The Machine as Art, for Caffeine Addicts

You may not know their names, but Desiderio Pavoni and Luigi Bezzera are to coffee what Steve Jobs and Steve Wozniak are to computers. Modern-day espresso machines owe it all to the innovative design and business savvy of this early-20th-century Italian duo.

From Smithsonian:

For many coffee drinkers, espresso is coffee. It is the purest distillation of the coffee bean, the literal essence of the bean. In another sense, it is also the first instant coffee. Before espresso, it could take up to five minutes – five minutes! – for a cup of coffee to brew. But what exactly is espresso, and how did it come to dominate our morning routines? Although many people are familiar with espresso these days thanks to the Starbucksification of the world, there is often still some confusion over what it actually is – largely due to the “espresso roasts” available on supermarket shelves everywhere. First, and most importantly, espresso is not a roasting method. It is neither a bean nor a blend. It is a method of preparation. More specifically, it is a preparation method in which highly pressurized hot water is forced over coffee grounds to produce a very concentrated coffee drink with a deep, robust flavor. While there is no standardized process for pulling a shot of espresso, Italian coffeemaker Illy’s definition of authentic espresso seems as good a measure as any:

A jet of hot water at 88°-93°C (190°-200°F) passes under a pressure of nine or more atmospheres through a seven-gram (.25 oz) cake-like layer of ground and tamped coffee. Done right, the result is a concentrate of not more than 30 ml (one oz) of pure sensorial pleasure.

For those of you who, like me, are more than a few years out of science class, nine atmospheres of pressure is equivalent to nine times the pressure normally exerted by the earth’s atmosphere. As you might be able to tell from the precision of Illy’s description, good espresso is good chemistry. It’s all about precision and consistency and finding the perfect balance between grind, temperature, and pressure. Espresso happens at the molecular level. This is why technology has been such an important part of the historical development of espresso and a key to the ongoing search for the perfect shot. While espresso was never designed per se, the machines – or macchine – that make our cappuccinos and lattes have a history that stretches back more than a century.
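Illy’s figures convert straightforwardly; a quick check of the arithmetic (the bar and psi equivalents are my own, not Illy’s):

```python
def c_to_f(celsius):
    """Convert degrees Celsius to degrees Fahrenheit."""
    return celsius * 9 / 5 + 32

ATM_PA = 101_325  # one standard atmosphere, in pascals

print(c_to_f(88), c_to_f(93))        # 190.4 199.4 — matches Illy's °F range
print(9 * ATM_PA / 100_000)          # nine atmospheres ≈ 9.12 bar
print(round(9 * ATM_PA / 6894.757))  # ≈ 132 psi
```

In other words, a shot is pulled at well over a hundred pounds per square inch, which is why the boiler, not the bean, drove a century of machine design.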

In the 19th century, coffee was a huge business in Europe with cafes flourishing across the continent. But coffee brewing was a slow process and, as is still the case today, customers often had to wait for their brew. Seeing an opportunity, inventors across Europe began to explore ways of using steam machines to reduce brewing time – this was, after all, the age of steam. Though there were surely innumerable patents and prototypes, the invention of the machine and the method that would lead to espresso is usually attributed to Angelo Moriondo of Turin, Italy, who was granted a patent in 1884 for “new steam machinery for the economic and instantaneous confection of coffee beverage.” The machine consisted of a large boiler, heated to 1.5 bars of pressure, that pushed water through a large bed of coffee grounds on demand, with a second boiler producing steam that would flash the bed of coffee and complete the brew. Though Moriondo’s invention was the first coffee machine to use both water and steam, it was purely a bulk brewer created for the Turin General Exposition. Not much more is known about Moriondo, due in large part to what we might think of today as a branding failure. There were never any “Moriondo” machines, there are no verifiable machines still in existence, and there aren’t even photographs of his work. With the exception of his patent, Moriondo has been largely lost to history. The two men who would improve on Moriondo’s design to produce a single-serving espresso would not make that same mistake.

Luigi Bezzera and Desiderio Pavoni were the Steve Wozniak and Steve Jobs of espresso. Milanese manufacturer and “maker of liquors” Luigi Bezzera had the know-how. He invented single-shot espresso in the early years of the 20th century while looking for a method of quickly brewing coffee directly into the cup. He made several improvements to Moriondo’s machine, introducing the portafilter, multiple brewheads, and many other innovations still associated with espresso machines today. In Bezzera’s original patent, a large boiler with built-in burner chambers filled with water was heated until it pushed water and steam through a tamped puck of ground coffee. The mechanism through which the heated water passed also functioned as a heat radiator, lowering the temperature of the water from 250°F in the boiler to the ideal brewing temperature of approximately 195°F (90°C). Et voilà, espresso. For the first time, a cup of coffee was brewed to order in a matter of seconds. But Bezzera’s machine was heated over an open flame, which made it difficult to control pressure and temperature, and nearly impossible to produce a consistent shot. And consistency is key in the world of espresso. Bezzera designed and built a few prototypes of his machine, but his beverage remained largely unappreciated because he didn’t have any money to expand his business or any idea how to market the machine. But he knew someone who did. Enter Desiderio Pavoni.

Read the entire article after the jump.

Image: A 1910 Ideale espresso machine. Courtesy of Smithsonian.

Keeping Secrets in the Age of Technology

From the Guardian:

With the benefit of hindsight, life as I knew it came to an end in late 1994, round Seal’s house. We used to live round the corner from each other and if he was in between supermodels I’d pop over to watch a bit of Formula 1 on his pop star-sized flat-screen telly. I was probably on the sofa reading Vogue (we had that in common, albeit for different reasons) while he was “mucking about” on his computer (then the actual technical term for anything non-work-related, vis-à-vis computers), when he said something like: “Kate, have a look at this thing called the World Wide Web. It’s going to be massive!”

I can’t remember what we looked at then, at the tail-end of what I now nostalgically refer to as “The Tipp-Ex Years” – maybe The Well, accessed by Web Crawler – but whatever it was, it didn’t do it for me: “Information dual carriageway!” I said (trust me, this passed for witty in the 1990s). “Fancy a pizza?”

So there we are: Seal introduced me to the interweb. And although I remain a bit of a petrol-head and (nothing if not brand-loyal) own an iPad, an iPhone and two Macs, I am still basically rubbish at “modern”. Pre-Leveson, when I was writing a novel involving a phone-hacking scandal, my only concern was whether or not I’d come up with a plot that was: a) vaguely plausible and/or interesting, and b) technically possible. (A very nice man from Apple assured me that it was.)

I would gladly have used semaphore, telegrams or parchment scrolls delivered by magic owls to get the point across. Which is that ever since people started chiselling cuneiform on to big stones they’ve been writing things that will at some point almost certainly be misread and/or misinterpreted by someone else. But the speed of modern technology has made the problem rather more immediate. Confusing your public tweets with your Direct Messages and begging your young lover to take-me-now-cos-im-gagging-4-u? They didn’t have to worry about that when they were issuing decrees at Memphis on a nice bit of granodiorite.

These days the mis-sent (or indeed misread) text is still a relatively intimate intimation of an affair, while the notorious “reply all” email is the stuff of tired stand-up comedy. The boundary-less tweet is relatively new – and therefore still entertaining – territory, as evidenced most recently by American model Melissa Stetten, who, sitting on a plane next to a (married) soap actor called Brian Presley, tweeted as he appeared to hit on her.

Whenever and wherever words are written, somebody, somewhere will want to read them. And if those words are not meant to be read they very often will be – usually by the “wrong” people. A 2010 poll announced that six in 10 women would admit to regularly snooping on their partner’s phone, Twitter, or Facebook, although history doesn’t record whether the other four in 10 were then subjected to lie-detector tests.

Our compelling, self-sabotaging desire to snoop is usually informed by… well, if not paranoia, exactly, then insecurity, which in turn is more revealing about us than the words we find. If we seek out bad stuff – in a partner’s text, an ex’s Facebook status or best friend’s Twitter timeline – we will surely find it. And of course we don’t even have to make much effort to find the stuff we probably oughtn’t. Employers now routinely snoop on staff, and while this says more about the paranoid dynamic between boss classes and foot soldiers than we’d like, I have little sympathy for the employee who tweets their hangover status with one hand while phoning in “sick” with the other.

Take Google Maps: the more information we are given, the more we feel we’ve been gifted a licence to snoop. It’s the kind of thing we might be protesting about on the streets of Westminster were we not too busy invading our own privacy, as per the recent tweet-spat between Mr and Mrs Ben Goldsmith.

Technology feeds an increasing yet non-specific social unease – and that uneasiness inevitably trickles down to our more intimate relationships. For example, not long ago, I was blown out via text for a lunch date with a friend (“arrrgh, urgent deadline! SO SOZ!”), whose “urgent deadline” (their Twitter timeline helpfully revealed) turned out to involve lunch with someone else.

Did I like my friend any less when I found this out? Well yes, a tiny bit – until I acknowledged that I’ve done something similar 100 times but was “cleverer” at covering my tracks. Would it have been easier for my friend to tell me the truth? Arguably. Should I ever have looked at their Twitter timeline? Well, I had sought to confirm my suspicion that they weren’t telling the truth, so given that my paranoia gremlin was in charge it was no wonder I didn’t like what it found.

It is, of course, the paranoia gremlin that is in charge when we snoop – or are snooped upon – by partners, while “trust” is far more easily undermined than it has ever been. The randomly stumbled-across text (except they never are, are they?) is our generation’s lipstick-on-the-collar. And while Foursquare may say that your partner is in the pub, is that enough to stop you checking their Twitter/Facebook/emails/texts?

Read the entire article after the jump.

Eternal Damnation as Deterrent?

So, you think an all-seeing, all-knowing supreme deity encourages moral behavior and discourages crime? Think again.

From New Scientist:

There’s nothing like the fear of eternal damnation to encourage low crime rates. But does belief in heaven and a forgiving god encourage lawbreaking? A new study suggests it might – although establishing a clear link between the two remains a challenge.

Azim Shariff at the University of Oregon in Eugene and his colleagues compared global data on people’s beliefs in the afterlife with worldwide crime data collated by the United Nations Office on Drugs and Crime. In total, Shariff’s team looked at data covering the beliefs of 143,000 individuals across 67 countries and from a variety of religious backgrounds.

In most of the countries assessed, people were more likely to report a belief in heaven than in hell. Using that information, the team could calculate the degree to which a country’s rate of belief in heaven outstrips its rate of belief in hell.
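The measure is just a per-country difference. A toy illustration (the country names and rates below are invented, not the study’s data):

```python
# Hypothetical belief rates: fraction of respondents professing each belief
beliefs = {
    "Country A": {"heaven": 0.85, "hell": 0.60},
    "Country B": {"heaven": 0.70, "hell": 0.68},
}

def belief_gap(rates):
    """How far belief in heaven outstrips belief in hell."""
    return rates["heaven"] - rates["hell"]

for country, rates in sorted(beliefs.items()):
    print(f"{country}: gap = {belief_gap(rates):+.2f}")
```

It is this gap, rather than either belief rate on its own, that the team then compared against national crime rates.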

Even after the researchers had controlled for a host of crime-related cultural factors – including GDP, income inequality, population density and life expectancy – national crime rates were typically higher in countries with particularly strong beliefs in heaven but weak beliefs in hell.

Licence to steal

“Belief in a benevolent, forgiving god could license people to think they can get away with things,” says Shariff – although he stresses that this conclusion is speculative, and that the results do not necessarily imply causality between religious beliefs and crime rates.

“There are a number of possible causal pathways,” says Richard Sosis, an anthropologist at the University of Connecticut in Storrs, who was not involved in the study. The most likely interpretation is that there are intervening variables at the societal level – societies may have values that are similarly reflected in their legal and religious systems.

In a follow-up study, yet to be published, Shariff and Amber DeBono of Winston-Salem State University in North Carolina primed volunteers who had Christian beliefs by asking them to write variously about God’s forgiving nature, God’s punitive nature, a forgiving human, a punitive human, or a neutral subject. The volunteers were then asked to complete anagram puzzles for a monetary reward of a few cents per anagram.

God helps those who…

Participants were given the opportunity to commit petty theft, with no chance of being caught, by lying about the number of anagrams they had successfully completed. Shariff’s team found that those participants who had written about a forgiving god claimed nearly $2 more than they were entitled to under the rules of the game, whereas those in the other groups awarded themselves less than 50 cents more than they were entitled to.

Read the entire article after the jump.

Image: A detail from the Chapmans’ Hell. Photograph: Andy Butterton/PA. Courtesy of Guardian.

Communicating with the Comatose

From Scientific American:

Adrian Owen still gets animated when he talks about patient 23. The patient was only 24 years old when his life was devastated by a car accident. Alive but unresponsive, he had been languishing in what neurologists refer to as a vegetative state for five years, when Owen, a neuroscientist then at the University of Cambridge, UK, and his colleagues at the University of Liège in Belgium, put him into a functional magnetic resonance imaging (fMRI) machine and started asking him questions.

Incredibly, he provided answers. A change in blood flow to certain parts of the man’s injured brain convinced Owen that patient 23 was conscious and able to communicate. It was the first time that anyone had exchanged information with someone in a vegetative state.

Patients in these states have emerged from a coma and seem awake. Some parts of their brains function, and they may be able to grind their teeth, grimace or make random eye movements. They also have sleep–wake cycles. But they show no awareness of their surroundings, and doctors have assumed that the parts of the brain needed for cognition, perception, memory and intention are fundamentally damaged. They are usually written off as lost.

Owen’s discovery, reported in 2010, caused a media furore. Medical ethicist Joseph Fins and neurologist Nicholas Schiff, both at Weill Cornell Medical College in New York, called it a “potential game changer for clinical practice”. The University of Western Ontario in London, Canada, soon lured Owen away from Cambridge with Can$20 million (US$19.5 million) in funding to make the techniques more reliable, cheaper, more accurate and more portable — all of which Owen considers essential if he is to help some of the hundreds of thousands of people worldwide in vegetative states. “It’s hard to open up a channel of communication with a patient and then not be able to follow up immediately with a tool for them and their families to be able to do this routinely,” he says.

Many researchers disagree with Owen’s contention that these individuals are conscious. But Owen takes a practical approach to applying the technology, hoping that it will identify patients who might respond to rehabilitation, direct the dosing of analgesics and even explore some patients’ feelings and desires. “Eventually we will be able to provide something that will be beneficial to patients and their families,” he says.

Still, he shies away from asking patients the toughest question of all — whether they wish life support to be ended — saying that it is too early to think about such applications. “The consequences of asking are very complicated, and we need to be absolutely sure that we know what to do with the answers before we go down this road,” he warns.

Lost and found
With his short reddish hair and beard, Owen is a polished speaker who is not afraid of publicity. His home page is a billboard of links to his television and radio appearances. He lectures to scientific and lay audiences with confidence and a touch of defensiveness.

Owen traces the roots of his experiments to the late 1990s, when he was asked to write a review of clinical applications for technologies such as fMRI. He says that he had a “weird crisis of confidence”. Neuroimaging had confirmed a lot of what was known from brain mapping studies, he says, but it was not doing anything new. “We would just tweak a psych test and see what happens,” says Owen. As for real clinical applications: “I realized there weren’t any. We all realized that.”

Owen wanted to find one. He and his colleagues got their chance in 1997, with a 26-year-old patient named Kate Bainbridge. A viral infection had put her in a coma — a condition that generally persists for two to four weeks, after which patients die, recover fully or, in rare cases, slip into a vegetative or a minimally conscious state — a more recently defined category characterized by intermittent hints of conscious activity.

Read the entire article after the jump.

fMRI axial brain image. Image courtesy of Wikipedia.

Happy Birthday, George Orwell

Eric Blair was born on this day, June 25, in 1903. Thirty years later Blair changed his name with the publication of his first book, Down and Out in Paris and London (1933). He chose the pen name George Orwell because it was, in his words, “a good round English name.”

Your friendly editor at theDiagonal classes George Orwell as one of the most important literary figures of the 20th century. His numerous political writings, literary reviews, poems, newspaper columns, and six novels should be compulsory reading for minds young and old. His fierce intellectual honesty, keen eye for exposing hypocrisy, and skepticism of power add considerable further weight to his literary legacy.

In 1946, three years before the publication of 1984, one of the most important works of the 20th century, Orwell wrote a passage that summarizes his worldview and rings as true today as ever:

Political language — and with variations this is true of all political parties, from Conservatives to Anarchists — is designed to make lies sound truthful and murder respectable, and to give an appearance of solidity to pure wind.  (Politics and the English Language, 1946).

Image: Photograph of George Orwell which appears in an old accreditation for the Branch of the National Union of Journalists (BNUJ), 1933. Courtesy of Wikipedia.

Letting Go of Regrets

From Mind Matters over at Scientific American:

The poem “Maud Muller” by John Greenleaf Whittier aptly ends with the line, “For of all sad words of tongue or pen, The saddest are these: ‘It might have been!’” What if you had gone for the risky investment that you later found out made someone else rich, or if you had had the guts to ask that certain someone to marry you? Certainly, we’ve all had instances in our lives where hindsight makes us regret not sticking our neck out a bit more.

But new research suggests that when we are older, these kinds of ‘if only!’ thoughts about the choices we made may not be so good for our mental health. One of the most important determinants of our emotional well-being in our golden years might be whether we learn to stop worrying about what might have been.

In a new paper published in Science, researchers from the University Medical Center Hamburg-Eppendorf in Hamburg, Germany, report evidence from two experiments which suggest that one key to aging well might involve learning to let go of regrets about missed opportunities. Stefanie Brassen and her colleagues looked at how healthy young participants (mean age: 25.4 years), healthy older participants (65.8 years), and older participants who had developed depression for the first time later in life (65.6 years) dealt with regret, and found that the young and older depressed patients seemed to hold on to regrets about missed opportunities while the healthy older participants seemed to let them go.

To measure regret over missed opportunities, the researchers adapted an established risk-taking task into a clever game in which the participants looked at eight wooden boxes lined up in a row on a computer screen and could choose to reveal the contents of the boxes one at a time, from left to right. Seven of the boxes had gold in them, which the participants would earn if they chose to open them. One box, however, had a devil in it. If participants opened the box with the devil in it, they lost that round, along with any gold they had earned so far.

Importantly, the participants could choose to cash out early and keep any gold they had earned up to that point. Doing this would reveal the location of the devil and, with it, all of the gold they had missed out on. Sometimes this wouldn’t be a big deal, because the devil would be in the next box. No harm, no foul. But sometimes the devil might be several boxes away. In that case, you might have missed out on a lot of potential earnings, which could induce feelings of regret.
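The rules of the game are concrete enough to sketch in code. Here is a minimal, illustrative simulation of one round; the function name and parameters (`play_devil_game`, `stop_after`, and so on) are our own, not from the paper:

```python
import random

def play_devil_game(stop_after, devil=None, n_boxes=8, gold_per_box=1):
    """One round of the 'devil game' described above.

    Boxes are opened left to right. One box, at a uniformly random
    position (index 0..n_boxes-1), hides the devil; every other box
    holds gold. The player commits to stopping after `stop_after`
    boxes. Returns (winnings, missed), where `missed` is the gold
    left unclaimed before the devil's box -- the quantity the
    researchers used to induce regret.
    """
    if devil is None:
        devil = random.randrange(n_boxes)
    if stop_after > devil:
        # The devil's box was opened: the round's gold is lost.
        return 0, 0
    winnings = stop_after * gold_per_box
    missed = (devil - stop_after) * gold_per_box  # safe gold not taken
    return winnings, missed
```

For example, cashing out after three boxes when the devil turns out to be in the sixth box (index 5) yields winnings of 3 and a missed opportunity of 2 — a large enough gap to sting.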

In their first experiment, Brassen and colleagues had all of the participants play this ‘devil game’ during a functional magnetic resonance imaging (fMRI) brain scan. They wanted to test whether young, older depressed, and healthy older participants responded differently to missed opportunities during the game, and whether these differences might also be reflected in activity in one area of the brain called the ventral striatum (an area known to be very active when we experience regret) and another area called the anterior cingulate (an area known to be active when we are controlling our emotions).

Brassen and her colleagues found that for healthy older participants, the area of the brain which is usually active during the experience of regret, the ventral striatum, was much less active during rounds of the game where they missed out on a lot of money, suggesting that the healthily aging brains were not processing regret in the same way the young and depressed older brains were. Also, when they looked at the emotion controlling center of the brain, the anterior cingulate, the researchers found that this area was much more active in the healthy older participants than the other two groups. Interestingly, Brassen and her colleagues found that the bigger the missed opportunity, the greater the activity in this area for healthy older participants, which suggests that their brains were actively mitigating their experience of regret.

Read the entire article after the jump.

Send to Kindle

Growing Eyes in the Lab

From Nature:

A stem-cell biologist has had an eye-opening success in his latest effort to mimic mammalian organ development in vitro. Yoshiki Sasai of the RIKEN Center for Developmental Biology (CDB) in Kobe, Japan, has grown the precursor of a human eye in the lab.

The structure, called an optic cup, is 550 micrometres in diameter and contains multiple layers of retinal cells including photoreceptors. The achievement has raised hopes that doctors may one day be able to repair damaged eyes in the clinic. But for researchers at the annual meeting of the International Society for Stem Cell Research in Yokohama, Japan, where Sasai presented the findings this week, the most exciting thing is that the optic cup developed its structure without guidance from Sasai and his team.

“The morphology is the truly extraordinary thing,” says Austin Smith, director of the Centre for Stem Cell Research at the University of Cambridge, UK.

Until recently, stem-cell biologists had been able to grow embryonic stem cells only into two-dimensional sheets. But over the past four years, Sasai has used mouse embryonic stem cells to grow well-organized, three-dimensional cerebral-cortex, pituitary-gland and optic-cup tissue. His latest result marks the first time that anyone has managed a similar feat using human cells.

Familiar patterns
The various parts of the human optic cup grew in mostly the same order as those in the mouse optic cup. This reconfirms a biological lesson: the cues for this complex formation come from inside the cell, rather than relying on external triggers.

In Sasai’s experiment, retinal precursor cells spontaneously formed a ball of epithelial tissue cells and then bulged outwards to form a bubble called an eye vesicle. That pliable structure then folded back on itself to form a pouch, creating the optic cup with an outer wall (the retinal epithelium) and an inner wall comprising layers of retinal cells including photoreceptors, bipolar cells and ganglion cells. “This resolves a long debate,” says Sasai, over whether the development of the optic cup is driven by internal or external cues.

There were some subtle differences in the timing of the developmental processes of the human and mouse optic cups. But the biggest difference was the size: the human optic cup had more than twice the diameter and ten times the volume of that of the mouse. “It’s large and thick,” says Sasai. The ratios, similar to those seen in development of the structure in vivo, are significant. “The fact that size is cell-intrinsic is tremendously interesting,” says Martin Pera, a stem-cell biologist at the University of Southern California, Los Angeles.

Read the entire article after the jump.

Image courtesy of Discover Magazine.

Send to Kindle

Our Perception of Time

From Evolutionary Philosophy:

We have learned to see time as if it appears in chunks – minutes, hours, days, and years. But if time comes in chunks how do we experience past memories in the present? How does the previous moment’s chunk of time connect to the chunk of the present moment?

Wait a minute. It will take an hour. He is five years old. These are all sentences that contain expressions of units of time. We are all tremendously comfortable with the idea that time comes in discrete units – but does it? William James and Charles Sanders Peirce thought not.

If moments of time were truly discrete, separate units lined up like dominoes in a row, how would it be possible to have a memory of a past event? What connects the present moment to all the past moments that have already gone by?

One answer to the question is to suppose the existence of a transcendental self – some self that exists over and above our experience and can connect all the moments together for us. Imagine moments in time that stick together like the boxcars of a train. If you are in one boxcar – i.e. inside the present moment – how could you possibly know anything about the boxcar behind you – i.e. the moment past? The only way would be to see from outside of your boxcar – you would at least have to stick your head out of the window to see the boxcar behind you.

If the boxcar represents your experience of the present moment then we are saying that you would have to leave the present moment at least a little bit to be able to see what happened in the moment behind you. How can you leave the present moment? Where do you go if you leave your experience of the present moment? Where is the space that you exist in when you are outside of your experience? It would have to be a space that transcended your experience – a transcendental space outside of reality as we experience it. It would be a supernatural space and the part of you that existed in that space would be a supernatural extra-experiential you.

For those who had been raised in a Christian context this would not be so hard to accept, because this extra-experiential you sounds a great deal like the soul. In fact, Immanuel Kant, who first articulated the idea of a transcendental self, was through his philosophy actively trying to reserve space for the human soul in an intellectual atmosphere that he saw as excessively materialistic.

William James and Charles Sanders Peirce believed in unity and therefore they could not accept the idea of a transcendental ego that would exist in some transcendent realm. In some of their thinking they were anticipating the later developments of quantum theory and non-locality.

William James described how we appear to travel through a river of time – and like all rivers, the river ahead of us already exists before we arrive there. In the same way the future already exists now – not in a pre-determined sense, but at least as some potentiality. As we arrive at the future moment, our arrival marks the passage from the fluid form that we call the future to the definitive solid form that we experience as the past. We do not create time by passing through it; we simply freeze it in its tracks.

Read the entire article after the jump.

Image courtesy of Google search.

Send to Kindle

Addiction: Choice or Disease or Victim of Hijacking?

The debate concerning human addictions of all colors and forms rages on. Some would have us believe that addiction is a simple choice shaped by our free will; others argue that addiction is a chronic disease. Yet perhaps there is another, more nuanced explanation.

From the New York Times:

Of all the philosophical discussions that surface in contemporary life, the question of free will — mainly, the debate over whether or not we have it — is certainly one of the most persistent.

That might seem odd, as the average person rarely seems to pause to reflect on whether their choices on, say, where they live, whom they marry, or what they eat for dinner, are their own or the inevitable outcome of a deterministic universe. Still, as James Atlas pointed out last month, the spate of “can’t help yourself” books would indicate that people are in fact deeply concerned with how much of their lives they can control. Perhaps that’s because, upon further reflection, we find that our understanding of free will lurks beneath many essential aspects of our existence.

One particularly interesting variation on this question appears in scientific, academic and therapeutic discussions about addiction. Many times, the question is framed as follows: “Is addiction a disease or a choice?”

The argument runs along these lines: If addiction is a disease, then in some ways it is out of our control and forecloses choices. A disease is a medical condition that develops outside of our control; it is, then, not a matter of choice. In the absence of choice, the addicted person is essentially relieved of responsibility. The addict has been overpowered by her addiction.

The counterargument describes addictive behavior as a choice. People whose use of drugs and alcohol leads to obvious problems but who continue to use them anyway are making choices to do so. Since those choices lead to addiction, blame and responsibility clearly rest on the addict’s shoulders. It then becomes more a matter of free will.

Recent scientific studies on the biochemical responses of the brain are currently tipping the scales toward the more deterministic view — of addiction as a disease. The structure of the brain’s reward system combined with certain biochemical responses and certain environments, they appear to show, cause people to become addicted.

In such studies, and in reports of them to news media, the term “the hijacked brain” often appears, along with other language that emphasizes the addict’s lack of choice in the matter. Sometimes the pleasure-reward system has been “commandeered.” Other times it “goes rogue.” These expressions are often accompanied by the conclusion that there are “addicted brains.”

The word “hijacked” is especially evocative; people often have a visceral reaction to it. I imagine that this is precisely why this term is becoming more commonly used in connection with addiction. But it is important to be aware of the effects of such language on our understanding.

When most people think of a hijacking, they picture a person, sometimes wearing a mask and always wielding some sort of weapon, who takes control of a car, plane or train. The hijacker may not himself drive or pilot the vehicle, but the violence involved leaves no doubt who is in charge. Someone can hijack a vehicle for a variety of reasons, but mostly it boils down to needing to escape or wanting to use the vehicle itself as a weapon in a greater plan. Hijacking is a means to an end; it is always and only oriented to the goals of the hijacker. Innocent victims are ripped from their normal lives by the violent intrusion of the hijacker.

In the “hijacked” view of addiction, the brain is the innocent victim of certain substances — alcohol, cocaine, nicotine or heroin, for example — as well as certain behaviors like eating, gambling or sexual activity. The drugs or the neurochemicals produced by the behaviors overpower and redirect the brain’s normal responses, and thus take control of (hijack) it. For addicted people, that martini or cigarette is the weapon-wielding hijacker who is going to compel certain behaviors.

To do this, drugs like alcohol and cocaine and behaviors like gambling light up the brain’s pleasure circuitry, often bringing a burst of euphoria. Other studies indicate that people who are addicted have lower dopamine and serotonin levels in their brains, which means that it takes more of a particular substance or behavior for them to experience pleasure or to reach a certain threshold of pleasure. People tend to want to maximize pleasure; we tend to do things that bring more of it. We also tend to chase it when it subsides, trying hard to recreate the same level of pleasure we have experienced in the past. It is not uncommon to hear addicts talking about wanting to experience the euphoria of a first high. Often they never reach it, but keep trying. All of this lends credence to the description of the brain as hijacked.

Read the entire article after the jump.

Image courtesy of CNN.

Send to Kindle

The 100 Million Year Collision

Four billion years or so from now, our very own Milky Way galaxy is expected to begin a slow but enormous collision with its galactic sibling, the Andromeda galaxy. Cosmologists predict the ensuing galactic smash will take around 100 million years to complete. It’s a shame we won’t be around to witness the spectacle.

From Scientific American:

The galactic theme in the context of planets and life is an interesting one. Take our own particular circumstances. As unappealingly non-Copernican as it is, there is no doubt that the Milky Way galaxy today is ‘special’. This should not be confused with any notion that special galaxy=special humans, since it’s really not clear yet that the astrophysical specialness of the galaxy has significant bearing on the likelihood of us sitting here picking our teeth. Nonetheless, the scientific method being what it is, we need to pay attention to any and all observations with as little bias as possible – so asking the question of what a ‘special’ galaxy might mean for life is OK, just don’t get too carried away.

First of all, the Milky Way galaxy is big. As spiral galaxies go, it’s in the upper echelons of diameter and mass. In the relatively nearby universe, it and our nearest large galaxy, Andromeda, are the sumos in the room. This immediately makes it somewhat unusual: the great majority of galaxies in the observable universe are smaller. The relationship to Andromeda is also very particular. In effect the Milky Way and Andromeda are a binary pair; our mutual distortion of spacetime is resulting in us barreling together at about 80 miles a second. In about 4 billion years these two galaxies will begin a ponderous collision lasting for perhaps 100 million years or so. It will be a soft type of collision – individual stars are so tiny compared to the distances between them that they themselves are unlikely to collide, but the great masses of gas and dust in the two galaxies will smack together – triggering the formation of new stars and planetary systems.
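Those two figures – the closing speed and the roughly 4-billion-year countdown – can be sanity-checked with a quick constant-speed calculation. The round numbers below (Andromeda at about 2.5 million light-years) are our own assumptions, not from the article, and the result overshoots slightly because gravity actually accelerates the approach:

```python
# Back-of-envelope check: time to contact at constant closing speed.
MILE_KM = 1.609344
LY_KM = 9.4607e12           # kilometres per light-year
SECONDS_PER_YEAR = 3.156e7

distance_km = 2.5e6 * LY_KM  # assumed distance to Andromeda
speed_km_s = 80 * MILE_KM    # about 129 km/s

years_to_contact = distance_km / speed_km_s / SECONDS_PER_YEAR
print(f"~{years_to_contact / 1e9:.1f} billion years")  # about 5.8
```

A naive constant-speed answer of roughly 5.8 billion years is the right order of magnitude; with gravitational acceleration folded in, dynamical models bring the figure down to the ~4 billion years quoted above.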

Some dynamical models (including those in the most recent work based on Hubble telescope measurements) suggest that our solar system could be flung further away from the center of the merging galaxies, while others indicate it could end up thrown towards the newly forming stellar core of a future Goliath galaxy (Milkomeda?). Does any of this matter for life? For us the answer may be moot. In about only 1 billion years the Sun will have grown luminous enough that the temperate climate we enjoy on the Earth may be long gone. In 3-4 billion years it may be luminous enough that Mars, if not utterly dried out and devoid of atmosphere by then, could sustain ‘habitable‘ temperatures. Depending on where the vagaries of gravitational dynamics take the solar system as Andromeda comes lumbering through, we might end up surrounded by the pop and crackle of supernovae as the collision-induced formation of new massive stars gets underway. All in all it doesn’t look too good. But for other places, other solar systems that we see forming today, it could be a very different story.

Read the entire article after the jump.

Image: Composition of Milky Way and Andromeda. Courtesy of NASA, ESA, Z. Levay and R. van der Marel (STScI), T. Hallas, and A. Mellinger.

Send to Kindle

You as a Data Strip Mine: What Facebook Knows

China, India, Facebook. With its 900 million member-citizens Facebook is the third largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and of course there’s freedom to assemble and a fair degree of free speech.

However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or Cold War East Germany.

A fascinating article by Tom Simonite excerpted below sheds light on the data collection and data mining initiatives underway or planned at Facebook.

From Technology Review:

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science rather than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

Read the entire article after the jump.

Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).

Send to Kindle

Zen and the Art of Meditation Messaging

Quite often you will be skimming a book or leafing through the pages of your favorite magazine and recall having “seen” a specific word, without remembering having read that page or section or having looked at that particular word. But, without fail, when you retrace your steps you will find that specific word – the word you did not consciously “see”. So, what’s going on?

From the New Scientist:

MEDITATION increases our ability to tap into the hidden recesses of our brain that are usually outside the reach of our conscious awareness.

That’s according to Madelijn Strick of Utrecht University in the Netherlands and colleagues, who tested whether meditation has an effect on our ability to pick up subliminal messages.

The brain registers subliminal messages, but we are often unable to recall them consciously. To investigate, the team recruited 34 experienced practitioners of Zen meditation and randomly assigned them to either a meditation group or a control group. The meditation group was asked to meditate for 20 minutes in a session led by a professional Zen master. The control group was asked to merely relax for 20 minutes.

The volunteers were then asked 20 questions, each with three or four correct answers – for instance: “Name one of the four seasons”. Just before the subjects saw the question on a computer screen one potential answer – such as “spring” – flashed up for a subliminal 16 milliseconds.

The meditation group gave 6.8 answers, on average, that matched the subliminal words, whereas the control group gave just 4.9 (Consciousness and Cognition, DOI: 10.1016/j.concog.2012.02.010).

Strick thinks that the explanation lies in the difference between what the brain is paying attention to and what we are conscious of. Meditators are potentially accessing more of what the brain has paid attention to than non-meditators, she says.

“It is a truly exciting development that the second wave of rigorous, scientific meditation research is now yielding concrete results,” says Thomas Metzinger, at Johannes Gutenberg University in Mainz, Germany. “Meditation may be best seen as a process that literally expands the space of conscious experience.”

Read the entire article after the jump.

Image courtesy of Yoga.am.

Send to Kindle

Good Grades and Good Drugs?

A sad story chronicling the rise of amphetamine use in the quest for good school grades. More frightening still is the increase in addiction among ever younger kids, and not for the dubious goal of excelling at school. Many kids are taking the drug simply to get high.

From the Telegraph:

The New York Times has finally woken up to America’s biggest unacknowledged drug problem: the massive overprescription of the amphetamine drug Adderall for Attention Deficit Hyperactivity Disorder. Kids have been selling each other this powerful – and extremely moreish – mood enhancer for years, as ADHD diagnoses and prescriptions for the drug have shot up.

Now, children are snorting the stuff, breaking open the capsules and ingesting it using the time-honoured tool of a rolled-up bank note.

The NYT seems to think these teenage drug users are interested in boosting their grades. It claims that, for children without ADHD, “just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward”.

Really? There are two problems with this.

First, the idea that ADHD kids are “normal” on Adderall and its methylphenidate alternative Ritalin – gentler in its effect but still a psychostimulant – is open to question. Read this scorching article by the child psychologist Prof L Alan Sroufe, who says there’s no evidence that attention-deficit children are born with an organic disease, or that ADHD and non-ADHD kids react differently to their doctor-prescribed amphetamines. Yes, there’s an initial boost to concentration, but the effect wears off – and addiction often takes its place.

Second, the school pupils illicitly borrowing or buying Adderall aren’t necessarily doing it to concentrate on their work. They’re doing it to get high.

Adderall, with its mixture of amphetamine salts, has the ability to make you as euphoric as a line of cocaine – and keep you that way, particularly if it’s the slow-release version and you’re taking it for the first time. At least, that was my experience. Here’s what happened.

I was staying with a hospital consultant and his attorney wife in the East Bay just outside San Francisco. I’d driven overnight from Los Angeles after a flight from London; I was jetlagged, sleep-deprived and facing a deadline to write an article for the Spectator about, of all things, Bach cantatas.

Sitting in the courtyard garden with my laptop, I tapped and deleted one clumsy sentence after another. The sun was going down; my hostess saw me shivering and popped out with a blanket, a cup of herbal tea and ‘something to help you concentrate’.

I took the pill, didn’t notice any effect, and was glad when I was called in for dinner.

The dining room was a Californian take on the Second Empire. The lady next to me was a Southern Belle turned realtor, her eyelids already drooping from the effects of her third giant glass of Napa Valley chardonnay. She began to tell me about her divorce. Every time she refilled her glass, her new husband raised his eyes to heaven.

It felt as if I was stuck in an episode of Dallas, or a very bad Tennessee Williams play. But it didn’t matter in the least because, at some stage between the mozzarella salad and the grilled chicken, I’d become as high as a kite.

Adderall helps you concentrate, no doubt about it. I was riveted by the details of this woman’s alimony settlement. Even she, utterly self-obsessed as she was, was surprised by my gushing empathy. After dinner, I sat down at the kitchen table to finish the article. The head rush was beginning to wear off, but then, just as I started typing, a second wave of amphetamine pushed its way into my bloodstream. This was timed-release Adderall. Gratefully I plunged into 18th-century Leipzig, meticulously noting the catalogue numbers of cantatas. It was as if the great Johann Sebastian himself was looking over my shoulder. By the time I glanced at the clock, it was five in the morning. My pleasure at finishing the article was boosted by the dopamine high. What a lovely drug.

The blues didn’t hit me until the next day – and took the best part of a week to banish.

And this is what they give to nine-year-olds.

Read the entire article after the jump.

From the New York Times:

He steered into the high school parking lot, clicked off the ignition and scanned the scraps of his recent weeks. Crinkled chip bags on the dashboard. Soda cups at his feet. And on the passenger seat, a rumpled SAT practice book whose owner had been told since fourth grade he was headed to the Ivy League. Pencils up in 20 minutes.

The boy exhaled. Before opening the car door, he recalled recently, he twisted open a capsule of orange powder and arranged it in a neat line on the armrest. He leaned over, closed one nostril and snorted it.

Throughout the parking lot, he said, eight of his friends did the same thing.

The drug was not cocaine or heroin, but Adderall, an amphetamine prescribed for attention deficit hyperactivity disorder that the boy said he and his friends routinely shared to study late into the night, focus during tests and ultimately get the grades worthy of their prestigious high school in an affluent suburb of New York City. The drug did more than just jolt them awake for the 8 a.m. SAT; it gave them a tunnel focus tailor-made for the marathon of tests long known to make or break college applications.

“Everyone in school either has a prescription or has a friend who does,” the boy said.

At high schools across the United States, pressure over grades and competition for college admissions are encouraging students to abuse prescription stimulants, according to interviews with students, parents and doctors. Pills that have been a staple in some college and graduate school circles are going from rare to routine in many academically competitive high schools, where teenagers say they get them from friends, buy them from student dealers or fake symptoms to their parents and doctors to get prescriptions.

Of the more than 200 students, school officials, parents and others contacted for this article, about 40 agreed to share their experiences. Most students spoke on the condition that they be identified by only a first or middle name, or not at all, out of concern for their college prospects or their school systems’ reputations — and their own.

“It’s throughout all the private schools here,” said DeAnsin Parker, a New York psychologist who treats many adolescents from affluent neighborhoods like the Upper East Side. “It’s not as if there is one school where this is the culture. This is the culture.”

Observed Gary Boggs, a special agent for the Drug Enforcement Administration, “We’re seeing it all across the United States.”

The D.E.A. lists prescription stimulants like Adderall and Vyvanse (amphetamines) and Ritalin and Focalin (methylphenidates) as Class 2 controlled substances — the same as cocaine and morphine — because they rank among the most addictive substances that have a medical use. (By comparison, the long-abused anti-anxiety drug Valium is in the lower Class 4.) So they carry high legal risks, too, as few teenagers appreciate that merely giving a friend an Adderall or Vyvanse pill is the same as selling it and can be prosecuted as a felony.

While these medicines tend to calm people with A.D.H.D., those without the disorder find that just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward. “It’s like it does your work for you,” said William, a recent graduate of the Birch Wathen Lenox School on the Upper East Side of Manhattan.

But abuse of prescription stimulants can lead to depression and mood swings (from sleep deprivation), heart irregularities and acute exhaustion or psychosis during withdrawal, doctors say. Little is known about the long-term effects of abuse of stimulants among the young. Drug counselors say that for some teenagers, the pills eventually become an entry to the abuse of painkillers and sleep aids.

“Once you break the seal on using pills, or any of that stuff, it’s not scary anymore — especially when you’re getting A’s,” said the boy who snorted Adderall in the parking lot. He spoke from the couch of his drug counselor, detailing how he later became addicted to the painkiller Percocet and eventually heroin.

Paul L. Hokemeyer, a family therapist at Caron Treatment Centers in Manhattan, said: “Children have prefrontal cortexes that are not fully developed, and we’re changing the chemistry of the brain. That’s what these drugs do. It’s one thing if you have a real deficiency — the medicine is really important to those people — but not if your deficiency is not getting into Brown.”

The number of prescriptions for A.D.H.D. medications dispensed for young people ages 10 to 19 has risen 26 percent since 2007, to almost 21 million yearly, according to IMS Health, a health care information company — a number that experts estimate corresponds to more than two million individuals. But there is no reliable research on how many high school students take stimulants as a study aid. Doctors and teenagers from more than 15 schools across the nation with high academic standards estimated that the portion of students who do so ranges from 15 percent to 40 percent.

“They’re the A students, sometimes the B students, who are trying to get good grades,” said one senior at Lower Merion High School in Ardmore, a Philadelphia suburb, who said he makes hundreds of dollars a week selling prescription drugs, usually priced at $5 to $20 per pill, to classmates as young as freshmen. “They’re the quote-unquote good kids, basically.”

The trend was driven home last month to Nan Radulovic, a psychotherapist in Santa Monica, Calif. Within a few days, she said, an 11th grader, a ninth grader and an eighth grader asked for prescriptions for Adderall solely for better grades. From one girl, she recalled, it was not quite a request.

“If you don’t give me the prescription,” Dr. Radulovic said the girl told her, “I’ll just get it from kids at school.”

Read the entire article here.

Image: Illegal use of Adderall is prevalent enough that many students seem to take it for granted. Courtesy of Minnesota Post / Flickr / CC / Hipsxxhearts.

Thirty Books for the Under 30

The official start of summer in the northern hemisphere is just over a week away. So, it’s time to gather together some juicy reads for lazy days by the beach or under a sturdy shade tree. Flavorwire offers a classic list of 30 reads with a couple of surprises thrown in. And, we’ll qualify Flavorwire’s selection by adding that anyone over 30 should read these works as well.

From Flavorwire:

Earlier this week, we stumbled across a list over at Divine Caroline of thirty books everyone should read before they’re thirty. While we totally agreed with some of the picks, we thought there were some essential reads missing, so we decided to put together a list of our own. We stuck to fiction for simplicity’s sake, and chose the books below on a variety of criteria, selecting enduring classics that have been informing new literature since their first printing, stories that speak specifically or most powerfully to younger readers, and books we simply couldn’t imagine reaching thirty without having read. Of course, we hope that you read more than thirty books by the time you hit your fourth decade, so this list is incomplete — but we had to stop somewhere. Click through to read the books we think everyone should read before their thirtieth birthday, and let us know which ones you would add in the comments.

Middlesex, Jeffrey Eugenides

Eugenides’s family epic of love, belonging and otherness is a must read for anyone who has ever had a family or felt like an outcast. So that’s pretty much everyone, we’d wager.

Ghost World, Daniel Clowes

Clowes writes some of the most essentially realistic teenagers we’ve ever come across, which is important when you are (or have ever been) a realistic teenager yourself.

On the Road, Jack Kerouac

Kerouac’s famous scroll must be read when it’s still likely to inspire exploration. Plus, then you’ll have ample time to develop your scorn towards it.

Their Eyes Were Watching God, Zora Neale Hurston

A seminal work in both African American and women’s literature — not to mention a riveting, electrifying and deeply moving read.

Cat’s Cradle, Kurt Vonnegut

Vonnegut’s hilarious, satirical fourth novel earned him a master’s in anthropology from the University of Chicago.

The Sun Also Rises, Ernest Hemingway

Think of him what you will, but everyone should read at least one Hemingway novel. In our experience, this one gets better the more you think about it, so we recommend reading it as early as possible.

The Road, Cormac McCarthy

The modern classic of post-apocalyptic novels, it’s also one of the best in a genre that’s only going to keep on exploding.

Maus, Art Spiegelman

A more perfect and affecting Holocaust book has never been written. And this one has pictures.

Ender’s Game, Orson Scott Card

One of the best science fiction novels of all time, recommended even for staunch realists. Serious, complicated and impossible to put down. Plus, Card’s masterpiece trusts in the power of children, something we all need to be reminded of once in a while.

Pride and Prejudice, Jane Austen

Yes, even for guys.

Check out the entire list after the jump.

D-School is the Place

Forget art school, engineering school, law school and B-school (business). For wannabe innovators the current place to be is D-school. Design school, that is.

Design school teaches a problem-solving method known as “design thinking”. Before it was re-branded in corporatespeak, this used to be known as “trial and error”.

Many corporations are finding this approach to be both a challenge and a boon; after all, even in 2012, not many businesses encourage their employees to fail.

From the Wall Street Journal:

In 2007, Scott Cook, founder of Intuit Inc., the software company behind TurboTax, felt the company wasn’t innovating fast enough. So he decided to adopt an approach to product development that has grown increasingly popular in the corporate world: design thinking.

Loosely defined, design thinking is a problem-solving method that involves close observation of users or customers and a development process of extensive—often rapid—trial and error.

Mr. Cook said the initiative, termed “Design for Delight,” involves field research with customers to understand their “pain points”—an examination of what frustrates them in their offices and homes.

Intuit staffers then “painstorm” to come up with a variety of solutions to address the problems, and experiment with customers to find the best ones.

In one instance, a team of Intuit employees was studying how customers could take pictures of tax forms to reduce typing errors. Some younger customers, taking photos with their smartphones, were frustrated that they couldn’t just complete their taxes on their mobiles. Thus was born the mobile tax app SnapTax in 2010, which has been downloaded more than a million times in the past two years, the company said.

At SAP AG, hundreds of employees across departments work on challenges, such as building a raincoat out of a trash bag or designing a better coffee cup. The hope is that the sessions will train them in the tenets of design thinking, which they can then apply to their own business pursuits, said Carly Cooper, an SAP director who runs many of the sessions.

Last year, when SAP employees talked to sales representatives after closing deals, they found that one of the representatives’ biggest concerns was simply when they were going to get paid. The insight led SAP to develop a new mobile product allowing salespeople to check on the status of their commissions.

Read the entire article after the jump.

The 10,000 Year Clock

Aside from the ubiquitous plastic grocery bag, will any human-made artifact last 10,000 years? Before you answer, let’s qualify the question by mandating that the artifact have some long-term value. That would seem to eliminate plastic bags, plastic toys embedded in fast food meals, and DVDs of reality “stars” ripped from YouTube. What does that leave? Most human-made products consisting of metals or biodegradable components, such as paper and wood, will rust, rot or break down in 20-300 years. Even some plastics left exposed to sun and air will break down within a thousand years. Of course, buried deep in a landfill, plastic containers, styrofoam cups and throwaway diapers may remain with us for tens or hundreds of thousands of years.

Archaeological excavations show us that artifacts made of glass and ceramic would fit the bill — lasting well into the year 12012 and beyond. But in the majority of cases, we unearth only fragments of things.

But what if some ingenious humans could build something that would still be around 10,000 years from now? Better still, build something that will still function as designed 10,000 years from now. This would represent an extraordinary feat of contemporary design and engineering. And, more importantly, it would provide a powerful story for countless generations, beginning with ours.

So, enter Danny Hillis and the Clock of the Long Now (also known as the Millennium Clock or the 10,000 Year Clock). Danny Hillis is an inventor, scientist, and computer designer. He pioneered the concept of massively parallel computers.

In Hillis’ own words:

Ten thousand years – the life span I hope for the clock – is about as long as the history of human technology. We have fragments of pots that old. Geologically, it’s a blink of an eye. When you start thinking about building something that lasts that long, the real problem is not decay and corrosion, or even the power source. The real problem is people. If something becomes unimportant to people, it gets scrapped for parts; if it becomes important, it turns into a symbol and must eventually be destroyed. The only way to survive over the long run is to be made of materials large and worthless, like Stonehenge and the Pyramids, or to become lost. The Dead Sea Scrolls managed to survive by remaining lost for a couple millennia. Now that they’ve been located and preserved in a museum, they’re probably doomed. I give them two centuries – tops. The fate of really old things leads me to think that the clock should be copied and hidden.

Plans call for the 200-foot-tall 10,000 Year Clock to be installed inside a mountain in remote west Texas, with a second location in remote eastern Nevada. Design and engineering work on the Clock, and preparation of its Texas home, are underway.

For more on the 10,000 Year Clock jump to the Long Now Foundation, here.

More from Rationally Speaking:

I recently read Brian Hayes’ wonderful collection of mathematically oriented essays called Group Theory In The Bedroom, and Other Mathematical Diversions. Not surprisingly, the book contained plenty of philosophical musings too. In one of the essays, called “Clock of Ages,” Hayes describes the intricacies of clock building and he provides some interesting historical fodder.

For instance, we learn that in the sixteenth century Conrad Dasypodius, a Swiss mathematician, could have chosen to restore the old Clock of the Three Kings in Strasbourg Cathedral. Dasypodius, however, preferred to build a new clock of his own rather than maintain an old one. Over two centuries later, Jean-Baptiste Schwilgue was asked to repair the clock built by Dasypodius, but he decided to build a new and better clock which would last for 10,000 years.

Did you know that a large-scale project is underway to build another clock that will be able to run with minimal maintenance and interruption for ten millennia? It’s called The 10,000 Year Clock and its construction is sponsored by The Long Now Foundation. The 10,000 Year Clock is, however, being built for more than just its precision and durability. If the creators’ intentions are realized, then the clock will serve as a symbol to encourage long-term thinking about the needs and claims of future generations. Of course, if all goes to plan, our future descendants will be left to maintain it too. The interesting question is: will they want to?

If history is any indicator, then I think you know the answer. As Hayes puts it: “The fact is, winding and dusting and fixing somebody else’s old clock is boring. Building a brand-new clock of your own is much more fun, especially if you can pretend that it’s going to inspire awe and wonder for the ages to come. So why not have the fun now and let the future generations do the boring bit.” I think Hayes is right; it seems humans are, by nature, builders and not maintainers.

Projects like The 10,000 Year Clock are often undertaken with the noblest of environmental intentions, but the old proverb is relevant here: the road to hell is paved with good intentions. What I find troubling, then, is that much of the environmental do-goodery in the world may actually be making things worse. It’s often nothing more than a form of conspicuous consumption, which is a term coined by the economist and sociologist Thorstein Veblen. When it pertains specifically to “green” purchases, I like to call it being conspicuously environmental. Let’s use cars as an example. Obviously it depends on how the calculations are processed, but in many instances keeping and maintaining an old clunker is more environmentally friendly than is buying a new hybrid. I can’t help but think that the same must be true of building new clocks.

In his book, The Conundrum, David Owen writes: “How appealing would ‘green’ seem if it meant less innovation and fewer cool gadgets — not more?” Not very, although I suppose that was meant to be a rhetorical question. I enjoy cool gadgets as much as the next person, but it’s delusional to believe that conspicuous consumption is somehow a gift to the environment.

Using insights from evolutionary psychology and signaling theory, I think there is also another issue at play here. Buying conspicuously environmental goods, like a Prius, sends a signal to others that one cares about the environment. But if it’s truly the environment (and not signaling) that one is worried about, then surely less consumption must be better than more. The homeless person ironically has a lesser environmental impact than your average yuppie, yet he is rarely recognized as an environmental hero. Using this logic I can’t help but conclude that killing yourself might just be the most environmentally friendly act of all time (if it wasn’t blatantly obvious, this is a joke). The lesson here is that we shouldn’t confuse smug signaling with actually helping.

Read the entire article after the jump.

Image: Prototype of the 10,000 Year Clock. Courtesy of the Long Now Foundation / Science Museum of London.

The SpeechJammer and Other Innovations to Come

The mind boggles at the possible situations when a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

From the New York Times:

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”

Read the entire list of 32 weird and wonderful innovations after the jump.

Graphic courtesy of Chris Nosenzo / New York Times.

Ray Bradbury’s Real World Dystopia

Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern day reality.

Bradbury’s body of work, beginning in the early 1940s, introduced us to ATMs, wall-mounted flat-screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently, he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.

An especially thoughtful opinion from author Tim Kreider on Bradbury’s life as a “misanthropic humanist”.

From the New York Times:

If you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.

Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.

Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.

Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”

Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first’” has gone from science-fiction satire to dreary realism.

“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”

Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?

The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)

But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars.

Read the entire article after the jump.

Image courtesy of Technorati.

MondayPoem: McDonalds Is Impossible

According to Chelsea Martin’s website, “chelsea martin ‘studied’ art and writing at california college of the arts (though she holds no degree because she owes $300 in tuition)”.

From Poetry Foundation:

Chelsea Martin was 23 when she published her first collection, Everything Was Fine until Whatever (2009), a genre-blurring book of short fiction, nonfiction, prose, poetry, sketches, and memoir. She is also the author, most recently, of The Real Funny Thing about Apathy (2010).

By Chelsea Martin

McDonalds is Impossible

Eating food from McDonald’s is mathematically impossible.
Because before you can eat it, you have to order it.
And before you can order it, you have to decide what you want.
And before you can decide what you want, you have to read the menu.
And before you can read the menu, you have to be in front of the menu.
And before you can be in front of the menu, you have to wait in line.
And before you can wait in line, you have to drive to the restaurant.
And before you can drive to the restaurant, you have to get in your car.
And before you can get in your car, you have to put clothes on.
And before you can put clothes on, you have to get out of bed.
And before you can get out of bed, you have to stop being so depressed.
And before you can stop being so depressed, you have to understand what depression is.
And before you can understand what depression is, you have to think clearly.
And before you can think clearly, you have to turn off the TV.
And before you can turn off the TV, you have to free your hands.
And before you can free your hands, you have to stop masturbating.
And before you can stop masturbating, you have to get off.
And before you can get off, you have to imagine someone you really like with his pants off, encouraging you to explore his enlarged genitalia.
And before you can imagine someone you really like with his pants off encouraging you to explore his enlarged genitalia, you have to imagine that person stroking your neck.
And before you can imagine that person stroking your neck, you have to imagine that person walking up to you looking determined.
And before you can imagine that person walking up to you looking determined, you have to choose who that person is.
And before you can choose who that person is, you have to like someone.
And before you can like someone, you have to interact with someone.
And before you can interact with someone, you have to introduce yourself.
And before you can introduce yourself, you have to be in a social situation.
And before you can be in a social situation, you have to be invited to something somehow.
And before you can be invited to something somehow, you have to receive a telephone call from a friend.
And before you can receive a telephone call from a friend, you have to make a reputation for yourself as being sort of fun.
And before you can make a reputation for yourself as being sort of fun, you have to be noticeably fun on several different occasions.
And before you can be noticeably fun on several different occasions, you have to be fun once in the presence of two or more people.
And before you can be fun once in the presence of two or more people, you have to be drunk.
And before you can be drunk, you have to buy alcohol.
And before you can buy alcohol, you have to want your psychological state to be altered.
And before you can want your psychological state to be altered, you have to recognize that your current psychological state is unsatisfactory.
And before you can recognize that your current psychological state is unsatisfactory, you have to grow tired of your lifestyle.
And before you can grow tired of your lifestyle, you have to repeat the same patterns over and over endlessly.
And before you can repeat the same patterns over and over endlessly, you have to lose a lot of your creativity.
And before you can lose a lot of your creativity, you have to stop reading books.
And before you can stop reading books, you have to think that you would benefit from reading less frequently.
And before you can think that you would benefit from reading less frequently, you have to be discouraged by the written word.
And before you can be discouraged by the written word, you have to read something that reinforces your insecurities.
And before you can read something that reinforces your insecurities, you have to have insecurities.
And before you can have insecurities, you have to be awake for part of the day.
And before you can be awake for part of the day, you have to feel motivation to wake up.
And before you can feel motivation to wake up, you have to dream of perfectly synchronized conversations with people you desire to talk to.
And before you can dream of perfectly synchronized conversations with people you desire to talk to, you have to have a general idea of what a perfectly synchronized conversation is.
And before you can have a general idea of what a perfectly synchronized conversation is, you have to watch a lot of movies in which people successfully talk to each other.
And before you can watch a lot of movies in which people successfully talk to each other, you have to have an interest in other people.
And before you can have an interest in other people, you have to have some way of benefiting from other people.
And before you can have some way of benefiting from other people, you have to have goals.
And before you can have goals, you have to want power.
And before you can want power, you have to feel greed.
And before you can feel greed, you have to feel more deserving than others.
And before you can feel more deserving than others, you have to feel a general disgust with the human population.
And before you can feel a general disgust with the human population, you have to be emotionally wounded.
And before you can be emotionally wounded, you have to be treated badly by someone you think you care about while in a naive, vulnerable state.
And before you can be treated badly by someone you think you care about while in a naive, vulnerable state, you have to feel inferior to that person.
And before you can feel inferior to that person, you have to watch him laughing and walking towards his drum kit with his shirt off and the sun all over him.
And before you can watch him laughing and walking towards his drum kit with his shirt off and the sun all over him, you have to go to one of his outdoor shows.
And before you can go to one of his outdoor shows, you have to pretend to know something about music.
And before you can pretend to know something about music, you have to feel embarrassed about your real interests.
And before you can feel embarrassed about your real interests, you have to realize that your interests are different from other people’s interests.
And before you can realize that your interests are different from other people’s interests, you have to be regularly misunderstood.
And before you can be regularly misunderstood, you have to be almost completely socially debilitated.
And before you can be almost completely socially debilitated, you have to be an outcast.
And before you can be an outcast, you have to be rejected by your entire group of friends.
And before you can be rejected by your entire group of friends, you have to be suffocatingly loyal to your friends.
And before you can be suffocatingly loyal to your friends, you have to be afraid of loss.
And before you can be afraid of loss, you have to lose something of value.
And before you can lose something of value, you have to realize that that thing will never change.
And before you can realize that that thing will never change, you have to have the same conversation with your grandmother forty or fifty times.
And before you can have the same conversation with your grandmother forty or fifty times, you have to have a desire to talk to her and form a meaningful relationship.
And before you can have a desire to talk to her and form a meaningful relationship, you have to love her.
And before you can love her, you have to notice the great tolerance she has for you.
And before you can notice the great tolerance she has for you, you have to break one of her favorite china teacups that her mother gave her and forget to apologize.
And before you can break one of her favorite china teacups that her mother gave her and forget to apologize, you have to insist on using the teacups for your imaginary tea party.
And before you can insist on using the teacups for your imaginary tea party, you have to cultivate your imagination.
And before you can cultivate your imagination, you have to spend a lot of time alone.
And before you can spend a lot of time alone, you have to find ways to sneak away from your siblings.
And before you can find ways to sneak away from your siblings, you have to have siblings.
And before you can have siblings, you have to underwhelm your parents.
And before you can underwhelm your parents, you have to be quiet, polite and unnoticeable.
And before you can be quiet, polite and unnoticeable, you have to understand that it is possible to disappoint your parents.
And before you can understand that it is possible to disappoint your parents, you have to be harshly reprimanded.
And before you can be harshly reprimanded, you have to sing loudly at an inappropriate moment.
And before you can sing loudly at an inappropriate moment, you have to be happy.
And before you can be happy, you have to be able to recognize happiness.
And before you can be able to recognize happiness, you have to know distress.
And before you can know distress, you have to be watched by an insufficient babysitter for one week.
And before you can be watched by an insufficient babysitter for one week, you have to vomit on the other, more pleasant babysitter.
And before you can vomit on the other, more pleasant babysitter, you have to be sick.
And before you can be sick, you have to eat something you’re allergic to.
And before you can eat something you’re allergic to, you have to have allergies.
And before you can have allergies, you have to be born.
And before you can be born, you have to be conceived.
And before you can be conceived, your parents have to copulate.
And before your parents can copulate, they have to be attracted to one another.
And before they can be attracted to one another, they have to have common interests.
And before they can have common interests, they have to talk to each other.
And before they can talk to each other, they have to meet.
And before they can meet, they have to have in-school suspension on the same day.
And before they can have in-school suspension on the same day, they have to get caught sneaking off campus separately.
And before they can get caught sneaking off campus separately, they have to think of somewhere to go.
And before they can think of somewhere to go, they have to be familiar with McDonald’s.
And before they can be familiar with McDonald’s, they have to eat food from McDonald’s.
And eating food from McDonald’s is mathematically impossible.


Mutant Gravity and Dark Magnetism

Scientific consensus states that our universe is not only expanding, but expanding at an ever-increasing rate. So, sometime in the very distant future (tens of billions of years from now) our Milky Way galaxy will be mostly alone, accompanied only by its close galactic neighbors, such as Andromeda. All else in the universe will have receded beyond the horizon of visible light. And yet, for all the experimental evidence, no one knows the precise cause(s) of this acceleration, or even of the expansion itself. But there is no shortage of bold new theories.

From New Scientist:

WE WILL be lonely in the late days of the cosmos. Its glittering vastness will slowly fade as countless galaxies retreat beyond the horizon of our vision. Tens of billions of years from now, only a dense huddle of nearby galaxies will be left, gazing out into otherwise blank space.

That gloomy future comes about because space is expanding ever faster, allowing far-off regions to slip across the boundary from which light has time to reach us. We call the author of these woes dark energy, but we are no nearer to discovering its identity. Might the culprit be a repulsive force that emerges from the energy of empty space, or perhaps a modification of gravity at the largest scales? Each option has its charms, but also profound problems.

But what if that mysterious force making off with the light of the cosmos is an alien echo of light itself? Light is just an expression of the force of electromagnetism, and vast electromagnetic waves of a kind forbidden by conventional physics, with wavelengths trillions of times larger than the observable universe, might explain dark energy’s baleful presence. That is the bold notion of two cosmologists who think that such waves could also account for the mysterious magnetic fields that we see threading through even the emptiest parts of our universe. Smaller versions could be emanating from black holes within our galaxy.

It is almost two decades since we realised that the universe is running away with itself. The discovery came from observations of supernovae that were dimmer, and so further away, than was expected, and earned its discoverers the Nobel prize in physics in 2011.

Prime suspect in the dark-energy mystery is the cosmological constant, an unchanging energy which might emerge from the froth of short-lived, virtual particles that according to quantum theory are fizzing about constantly in otherwise empty space.

Mutant gravity

To cause the cosmic acceleration we see, dark energy would need to have an energy density of about half a joule per cubic kilometre of space. When physicists try to tot up the energy of all those virtual particles, however, the answer comes to either exactly zero (which is bad), or something so enormous that empty space would rip all matter to shreds (which is very bad). In this latter case the answer is a staggering 120 orders of magnitude out, making it a shoo-in for the least accurate prediction in all of physics.
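The article's figure of about half a joule per cubic kilometre can be sanity-checked from textbook cosmology: the dark-energy density is roughly the fraction Ω_Λ of the critical density 3H₀²/(8πG). A minimal sketch, assuming standard round values not quoted in the article (H₀ ≈ 70 km/s/Mpc, Ω_Λ ≈ 0.7):

```python
import math

# Assumed round values (standard, but not from the article itself)
G = 6.674e-11    # gravitational constant, m^3 kg^-1 s^-2
c = 2.998e8      # speed of light, m/s
Mpc = 3.0857e22  # metres per megaparsec

H0 = 70e3 / Mpc  # Hubble constant, converted to s^-1

# Critical density of the universe, then the ~70% of it attributed to dark energy
rho_crit = 3 * H0**2 / (8 * math.pi * G)  # kg/m^3
u_lambda = 0.7 * rho_crit * c**2          # energy density, J/m^3

print(u_lambda * 1e9)  # J/km^3 -- lands near the article's "half a joule"
```

The result comes out around 0.6 J/km³, consistent with the article's quoted density to within the precision of these round inputs.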

This stumbling block has sent some researchers down another path. They argue that in dark energy we are seeing an entirely new side to gravity. At distances of many billions of light years, it might turn from an attractive to a repulsive force.

But it is dangerous to be so cavalier with gravity. Einstein’s general theory of relativity describes gravity as the bending of space and time, and predicts the motions of planets and spacecraft in our own solar system with cast-iron accuracy. Try bending the theory to make it fit acceleration on a cosmic scale, and it usually comes unstuck closer to home.

That hasn’t stopped many physicists persevering along this route. Until recently, Jose Beltrán and Antonio Maroto were among them. In 2008 at the Complutense University of Madrid, Spain, they were playing with a particular version of a mutant gravity model called a vector-tensor theory, which they had found could mimic dark energy. Then came a sudden realisation. The new theory was supposed to be describing a strange version of gravity, but its equations bore an uncanny resemblance to some of the mathematics underlying another force. “They looked like electromagnetism,” says Beltrán, now based at the University of Geneva in Switzerland. “We started to think there could be a connection.”

So they decided to see what would happen if their mathematics described not masses and space-time, but magnets and voltages. That meant taking a fresh look at electromagnetism. Like most of nature’s fundamental forces, electromagnetism is best understood as a phenomenon in which things come chopped into little pieces, or quanta. In this case the quanta are photons: massless, chargeless particles carrying fluctuating electric and magnetic fields that point at right angles to their direction of motion.

Alien photons

This description, called quantum electrodynamics or QED, can explain a vast range of phenomena, from the behaviour of light to the forces that bind molecules together. QED has arguably been tested more precisely than any other physical theory, but it has a dark secret. It wants to spit out not only photons, but also two other, alien entities.

The first kind is a wave in which the electric field points along the direction of motion, rather than at right angles as it does with ordinary photons. This longitudinal mode moves rather like a sound wave in air. The second kind, called a temporal mode, has no magnetic field. Instead, it is a wave of pure electric potential, or voltage. Like all quantum entities, these waves come in particle packets, forming two new kinds of photon.

As we have never actually seen either of these alien photons in reality, physicists found a way to hide them. They are spirited away using a mathematical fix called the Lorenz condition, which means that all their attributes are always equal and opposite, cancelling each other out exactly. “They are there, but you cannot see them,” says Beltrán.

Beltrán and Maroto’s theory looked like electromagnetism, but without the Lorenz condition. So they worked through their equations to see what cosmological implications that might have.

The strange waves normally banished by the Lorenz condition may come into being as brief quantum fluctuations – virtual waves in the vacuum – and then disappear again. In the early moments of the universe, however, there is thought to have been an episode of violent expansion called inflation, which was driven by very powerful repulsive gravity. The force of this expansion grabbed all kinds of quantum fluctuations and amplified them hugely. It created ripples in the density of matter, for example, which eventually seeded galaxies and other structures in the universe.

Crucially, inflation could also have boosted the new electromagnetic waves. Beltrán and Maroto found that this process would leave behind vast temporal modes: waves of electric potential with wavelengths many orders of magnitude larger than the observable universe. These waves contain some energy but because they are so vast we do not perceive them as waves at all. So their energy would be invisible, dark… perhaps, dark energy?

Beltrán and Maroto called their idea dark magnetism (arxiv.org/abs/1112.1106). Unlike the cosmological constant, it may be able to explain the actual quantity of dark energy in the universe. The energy in those temporal modes depends on the exact time inflation started. One plausible moment is about 10 trillionths of a second after the big bang, when the universe cooled below a critical temperature and electromagnetism split from the weak nuclear force to become a force in its own right. Physics would have suffered a sudden wrench, enough perhaps to provide the impetus for inflation.

If inflation did happen at this “electroweak transition”, Beltrán and Maroto calculate that it would have produced temporal modes with an energy density close to that of dark energy. The correspondence is only within an order of magnitude, which may not seem all that precise. In comparison with the cosmological constant, however, it is mildly miraculous.

The theory might also explain the mysterious existence of large-scale cosmic magnetic fields. Within galaxies we see the unmistakable mark of magnetic fields as they twist the polarisation of light. Although the turbulent formation and growth of galaxies could boost a pre-existing field, it is not clear where that seed field would have come from.

Even more strangely, magnetic fields seem to have infiltrated the emptiest deserts of the cosmos. Their influence was noticed in 2010 by Andrii Neronov and Ievgen Vovk at the Geneva Observatory. Some distant galaxies emit blistering gamma rays with energies in the teraelectronvolt range. These hugely energetic photons should smack into background starlight on their way to us, creating electrons and positrons that in turn will boost other photons up to gamma energies of around 100 gigaelectronvolts. The trouble is that astronomers see relatively little of this secondary radiation. Neronov and Vovk suggest that is because a diffuse magnetic field is randomly bending the path of electrons and positrons, making their emission more diffuse (Science, vol 32, p 73).

“It is difficult to explain cosmic magnetic fields on the largest scales by conventional mechanisms,” says astrophysicist Larry Widrow of Queen’s University in Kingston, Ontario, Canada. “Their existence in the voids might signal an exotic mechanism.” One suggestion is that giant flaws in space-time called cosmic strings are whipping them up.

With dark magnetism, such a stringy solution would be superfluous. As well as the gigantic temporal modes, dark magnetism should also lead to smaller longitudinal waves bouncing around the cosmos. These waves could generate magnetism on the largest scales and in the emptiest voids.

To begin with, Beltrán and Maroto had some qualms. “It is always dangerous to modify a well-established theory,” says Beltrán. Cosmologist Sean Carroll at the California Institute of Technology in Pasadena, echoes this concern. “They are doing extreme violence to electromagnetism. There are all sorts of dangers that things might go wrong,” he says. Such meddling could easily throw up absurdities, predicting that electromagnetic forces are different from what we actually see.

The duo soon reassured themselves, however. Although the theory means that temporal and longitudinal modes can make themselves felt, the only thing that can generate them is an ultra-strong gravitational field such as the repulsive field that sprang up in the era of inflation. So within the atom, in all our lab experiments, and out there among the planets, electromagnetism carries on in just the same way as QED predicts.

Carroll is not convinced. “It seems like a long shot,” he says. But others are being won over. Gonzalo Olmo, a cosmologist at the University of Valencia, Spain, was initially sceptical but is now keen. “The idea is fantastic. If we quantise electromagnetic fields in an expanding universe, the effect follows naturally.”

So how might we tell whether the idea is correct? Dark magnetism is not that easy to test. It is almost unchanging, and would stretch space in almost exactly the same way as a cosmological constant, so we can’t tell the two ideas apart simply by watching how cosmic acceleration has changed over time.

Ancient mark

Instead, the theory might be challenged by peering deep into the cosmic microwave background, a sea of radiation emitted when the universe was less than 400,000 years old. Imprinted on this radiation are the original ripples of matter density caused by inflation, and it may bear another ancient mark. The turmoil of inflation should have energised gravitational waves, travelling warps in space-time that stretch and squeeze everything they pass through. These waves should affect the polarisation of cosmic microwaves in a distinctive way, which could tell us about the timing and the violence of inflation. The European Space Agency’s Planck spacecraft might just spot this signature. If Planck or a future mission finds that inflation happened before the electroweak transition, at a higher energy scale, then that would rule out dark magnetism in its current form.

Olmo thinks that the theory might anyhow need some numerical tweaking, so that might not be fatal, although it would be a blow to lose the link between the electroweak transition and the correct amount of dark energy.

One day, we might even be able to see the twisted light of dark magnetism. In its present incarnation with inflation at the electroweak scale, the longitudinal waves would all have wavelengths greater than a few hundred million kilometres, longer than the distance from Earth to the sun. Detecting a light wave efficiently requires an instrument not much smaller than the wavelength, but in the distant future it might just be possible to pick up such waves using space-based radio telescopes linked up across the solar system. If inflation kicked in earlier at an even higher energy, as suggested by Olmo, some of the longitudinal waves could be much shorter. That would bring them within reach of Earth-based technology. Beltrán suggests that they might be detected with the Square Kilometre Array – a massive radio instrument due to come on stream within the next decade.

If these dark electromagnetic waves can be created by strong gravitational fields, then they could also be produced by the strongest fields in the cosmos today, those generated around black holes. Beltrán suggests that waves may be emitted by the black hole at the centre of the Milky Way. They might be short enough for us to see – but they could easily be invisibly faint. Beltrán and Maroto are planning to do the calculations to find out.

One thing they have calculated from their theory is the voltage of the universe. The voltage of the vast temporal waves of electric potential started at zero when they were first created at the time of inflation, and ramped up steadily. Today, it has reached a pretty lively 10^27 volts, or a billion billion gigavolts.

Just as well for us that it has nowhere to discharge. Unless, that is, some other strange quirk of cosmology brings a parallel universe nearby. The encounter would probably destroy the universe as we know it, but at least then our otherwise dark and lonely future would end with the mother of all lightning bolts.

Read the entire article after the jump.

Graphic courtesy of NASA / WMAP.


High Fructose Corn Syrup = Corn Sugar?

Hats off to the global agro-industrial complex that feeds most of the Earth’s inhabitants. With high fructose corn syrup (HFCS) getting an increasingly bad rap for helping to expand our waistlines and catalyze our diabetes, the industry is becoming more creative.

However, it’s only the type of “creativity” that a cynic would come to expect from a faceless, trillion-dollar industry; it’s not a fresh, natural innovation. The industry wants to rename HFCS “corn sugar”, making it sound healthier and more natural in the process.

From the New York Times:

The United States Food and Drug Administration has rejected a request from the Corn Refiners Association to change the name of high-fructose corn syrup.

The association, which represents the companies that make the syrup, had petitioned the F.D.A. in September 2010 to begin calling the much-maligned sweetener “corn sugar.” The request came on the heels of a national advertising campaign promoting the syrup as a natural ingredient made from corn.

But in a letter, Michael M. Landa, director of the Center for Food Safety and Applied Nutrition at the F.D.A., denied the petition, saying that the term “sugar” is used only for food “that is solid, dried and crystallized.”

“HFCS is an aqueous solution sweetener derived from corn after enzymatic hydrolysis of cornstarch, followed by enzymatic conversion of glucose (dextrose) to fructose,” the letter stated. “Thus, the use of the term ‘sugar’ to describe HFCS, a product that is a syrup, would not accurately identify or describe the basic nature of the food or its characterizing properties.”

In addition, the F.D.A. concluded that the term “corn sugar” has been used to describe the sweetener dextrose and therefore should not be used to describe high-fructose corn syrup. The agency also said the term “corn sugar” could pose a risk to consumers who have been advised to avoid fructose because of a hereditary fructose intolerance or fructose malabsorption.

Read the entire article after the jump.

Image: Fructose vs. D-Glucose Structural Formulae. Courtesy of Wikipedia.


Ray Bradbury – His Books Will Not Burn

“Monday burn Millay, Wednesday Whitman, Friday Faulkner, burn ’em to ashes, then burn the ashes. That’s our official slogan.” [From Fahrenheit 451].

Ray Bradbury left our planet on June 5. He was 91 years old.

Yet, a part of him lives on Mars. A digital copy of Bradbury’s “The Martian Chronicles”, along with works by other science fiction authors, reached the Martian northern plains in 2008, courtesy of NASA’s Phoenix Mars Lander spacecraft.

Ray Bradbury is likely to be best remembered for his seminal science fiction work, Fahrenheit 451. The literary community will remember him as one of the world’s preeminent authors of short stories and novellas. In fact, he also wrote plays, screenplays, children’s books and works of literary criticism. Many of his more than 400 works, dating from the 1950s to the present day, have greatly influenced contemporary writers and artists. He had a supreme gift for melding poetry with prose, dark vision with humor, and social commentary with imagined worlds. Bradbury received the U.S. National Medal of Arts in 2004.

He will be missed; his books will not burn.

From the New York Times:

By many estimations Mr. Bradbury was the writer most responsible for bringing modern science fiction into the literary mainstream. His name would appear near the top of any list of major science-fiction writers of the 20th century, beside those of Isaac Asimov, Arthur C. Clarke, Robert A. Heinlein and the Polish author Stanislaw Lem. His books have been taught in schools and colleges, where many a reader has been introduced to them decades after they first appeared. Many have said his stories fired their own imaginations.

More than eight million copies of his books have been sold in 36 languages. They include the short-story collections “The Martian Chronicles,” “The Illustrated Man” and “The Golden Apples of the Sun,” and the novels “Fahrenheit 451” and “Something Wicked This Way Comes.”

Though none won a Pulitzer Prize, Mr. Bradbury received a Pulitzer citation in 2007 “for his distinguished, prolific and deeply influential career as an unmatched author of science fiction and fantasy.”

His writing career stretched across 70 years, to the last weeks of his life. The New Yorker published an autobiographical essay by him in its June 4th double issue devoted to science fiction. There he recalled his “hungry imagination” as a boy in Illinois.

“It was one frenzy after one elation after one enthusiasm after one hysteria after another,” he wrote, noting, “You rarely have such fevers later in life that fill your entire day with emotion.”

Mr. Bradbury sold his first story to a magazine called Super Science Stories in his early 20s. By 30 he had made his reputation with “The Martian Chronicles,” a collection of thematically linked stories published in 1950.

The book celebrated the romance of space travel while condemning the social abuses that modern technology had made possible, and its impact was immediate and lasting. Critics who had dismissed science fiction as adolescent prattle praised “Chronicles” as stylishly written morality tales set in a future that seemed just around the corner.

Mr. Bradbury was hardly the first writer to represent science and technology as a mixed bag of blessings and abominations. The advent of the atomic bomb in 1945 left many Americans deeply ambivalent toward science. The same “super science” that had ended World War II now appeared to threaten the very existence of civilization. Science-fiction writers, who were accustomed to thinking about the role of science in society, had trenchant things to say about the nuclear threat.

But the audience for science fiction, published mostly in pulp magazines, was small and insignificant. Mr. Bradbury looked to a larger audience: the readers of mass-circulation magazines like Mademoiselle and The Saturday Evening Post. These readers had no patience for the technical jargon of the science fiction pulps. So he eliminated the jargon; he packaged his troubling speculations about the future in an appealing blend of cozy colloquialisms and poetic metaphors.

Though his books, particularly “The Martian Chronicles,” became a staple of high school and college English courses, Mr. Bradbury himself disdained formal education. He went so far as to attribute his success as a writer to his never having gone to college.

Instead, he read everything he could get his hands on: Edgar Allan Poe, Jules Verne, H. G. Wells, Edgar Rice Burroughs, Thomas Wolfe, Ernest Hemingway. He paid homage to them in 1971 in the essay “How Instead of Being Educated in College, I Was Graduated From Libraries.” (Late in life he took an active role in fund-raising efforts for public libraries in Southern California.)

Mr. Bradbury referred to himself as an “idea writer,” by which he meant something quite different from erudite or scholarly. “I have fun with ideas; I play with them,” he said. “ I’m not a serious person, and I don’t like serious people. I don’t see myself as a philosopher. That’s awfully boring.”

He added, “My goal is to entertain myself and others.”

He described his method of composition as “word association,” often triggered by a favorite line of poetry.

Mr. Bradbury’s passion for books found expression in his dystopian novel “Fahrenheit 451,” published in 1953. But he drew his primary inspiration from his childhood. He boasted that he had total recall of his earliest years, including the moment of his birth. Readers had no reason to doubt him. As for the protagonists of his stories, no matter how far they journeyed from home, they learned that they could never escape the past.

Read the entire article after the jump.

Image: Ray Bradbury, 1975. Courtesy of Wikipedia.


The Most Beautiful Railway Stations

From Flavorwire:

In 1972, Pulitzer Prize-winning author, and The New York Times’ very first architecture critic, Ada Louise Huxtable observed that “nothing was more up-to-date when it was built, or is more obsolete today, than the railroad station.” A comment on the emerging age of the jetliner and a swanky commercial air travel industry that made the behemoth train stations of the time appear as cumbersome relics of an outdated industrial era, we don’t think the judgment holds up today — at all. Like so many things that we wrote off in favor of what was seemingly more modern and efficient (ahem, vinyl records and Polaroid film), the train station is back and better than ever. So, we’re taking the time to look back at some of the greatest stations still standing.

See other beautiful stations and read the entire article after the jump.

Image: Grand Central Terminal — New York City, New York. Courtesy of Flavorwire.


FOMO: An Important New Acronym

FOMO is an increasing “problem” for college students and other young adults. Interestingly, and somewhat ironically, FOMO seems to be a more chronic issue in a culture mediated by online social networks. So, what is FOMO? And do you have it?

From the Washington Post:

Over the past academic year, there has been an explosion of new or renewed campus activities, pop culture phenomena, tech trends, generational shifts, and social movements started by or significantly impacting students. Most can be summed up in a single word.

As someone who monitors student life and student media daily, I’ve noticed a small number of words appearing more frequently, prominently or controversially during the past two semesters on campuses nationwide. Some were brand-new. Others were redefined or reached a tipping point of interest or popularity. And still others showed a remarkable staying power, carrying over from semesters and years past.

I’ve selected 15 as finalists for what I am calling the “2011-2012 College Word of the Year Contest.” Okay, a few are actually acronyms or short phrases. But altogether the terms — whether short-lived or seemingly permanent — offer a unique glimpse at what students participated in, talked about, fretted over, and fought for this past fall and spring.

As Time Magazine’s Touré confirms, “The words we coalesce around as a society say so much about who we are. The language is a mirror that reflects our collective soul.”

Let’s take a quick look in the collegiate rearview mirror. In alphabetical order, here are my College Word of the Year finalists.

1) Boomerangers: Right after commencement, a growing number of college graduates are heading home, diploma in hand and futures on hold. They are the boomerangers, young 20-somethings who are spending their immediate college afterlife in hometown purgatory. A majority move back into their childhood bedroom due to poor employment or graduate school prospects or to save money so they can soon travel internationally, engage in volunteer work or launch their own business.

A brief homestay has long been an option favored by some fresh graduates, but it’s recently reemerged in the media as a defining activity of the current student generation.

“Graduation means something completely different than it used to 30 years ago,” student columnist Madeline Hennings wrote in January for the Collegiate Times at Virginia Tech. “At my age, my parents were already engaged, planning their wedding, had jobs, and thinking about starting a family. Today, the economy is still recovering, and more students are moving back in with mom and dad.”

2) Drunkorexia: This five-syllable word has become the most publicized new disorder impacting college students. Many students, researchers and health professionals consider it a dangerous phenomenon. Critics, meanwhile, dismiss it as a media-driven faux-trend. And others contend it is nothing more than a fresh label stamped onto an activity that students have been carrying out for years.

The affliction, which leaves students hungry and at times hung over, involves “starving all day to drink at night.” As a March report in The Daily Pennsylvanian at the University of Pennsylvania further explained, it centers on students “bingeing or skipping meals in order to either compensate for alcohol calories consumed later at night, or to get drunk faster… At its most severe, it is a combination of an eating disorder and alcohol dependency.”

4) FOMO: Students are increasingly obsessed with being connected — to their high-tech devices, social media chatter and their friends during a night, weekend or roadtrip in which something worthy of a Facebook status update or viral YouTube video might occur. (For an example of the latter, check out this young woman “tree dancing” during a recent music festival.)

This ever-present emotional-digital anxiety now has a defining acronym: FOMO or Fear of Missing Out.  Recent Georgetown University graduate Kinne Chapin confirmed FOMO “is a widespread problem on college campuses. Each weekend, I have a conversation with a friend of mine in which one of us expresses the following: ‘I’m not really in the mood to go out, but I feel like I should.’ Even when we’d rather catch up on sleep or melt our brain with some reality television, we feel compelled to seek bigger and better things from our weekend. We fear that if we don’t partake in every Saturday night’s fever, something truly amazing will happen, leaving us hopelessly behind.”

Read the entire article after the jump.

Image courtesy of Urban Dictionary.


Why Daydreaming is Good

Most of us, editor of theDiagonal included, have known this for a while. We’ve known that letting the mind wander aimlessly is crucial to creativity and problem-solving.

From Wired:

It’s easy to underestimate boredom. The mental condition, after all, is defined by its lack of stimulation; it’s the mind at its most apathetic. This is why the poet Joseph Brodsky described boredom as a “psychological Sahara,” a cognitive desert “that starts right in your bedroom and spurns the horizon.” The hands of the clock seem to stop; the stream of consciousness slows to a drip. We want to be anywhere but here.

However, as Brodsky also noted, boredom and its synonyms can also become a crucial tool of creativity. “Boredom is your window,” the poet declared. “Once this window opens, don’t try to shut it; on the contrary, throw it wide open.”

Brodsky was right. The secret isn’t boredom per se: It’s how boredom makes us think. When people are immersed in monotony, they automatically lapse into a very special form of brain activity: mind-wandering. In a culture obsessed with efficiency, mind-wandering is often derided as a lazy habit, the kind of thinking we rely on when we don’t really want to think. (Freud regarded mind-wandering as an example of “infantile” thinking.) It’s a sign of procrastination, not productivity.

In recent years, however, neuroscience has dramatically revised our views of mind-wandering. For one thing, it turns out that the mind wanders a ridiculous amount. Last year, the Harvard psychologists Daniel Gilbert and Matthew A. Killingsworth published a fascinating paper in Science documenting our penchant for disappearing down the rabbit hole of our own mind. The scientists developed an iPhone app that contacted 2,250 volunteers at random intervals, asking them about their current activity and levels of happiness. It turns out that people were engaged in mind-wandering 46.9 percent of the time. In fact, the only activity in which their minds were not constantly wandering was lovemaking. They were able to focus for that.

What’s happening inside the brain when the mind wanders? A lot. In 2009, a team led by Kalina Christoff of UBC and Jonathan Schooler of UCSB used “experience sampling” inside an fMRI machine to capture the brain in the midst of a daydream. (This condition is easy to induce: After subjects were given an extremely tedious task, they started to mind-wander within seconds.) Although it’s been known for nearly a decade that mind wandering is a metabolically intense process — your cortex consumes lots of energy when thinking to itself — this study further helped to clarify the sequence of mental events:

Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions—two brain systems that so far have been assumed to work in opposition—suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Two things worth noting here. The first is the reference to the default network. The name is literal: We daydream so easily and effortlessly that it appears to be our default mode of thought. The second is the simultaneous activation in executive and default regions, suggesting that mind wandering isn’t quite as mindless as we’d long imagined. (That’s why it seems to require so much executive activity.) Instead, a daydream seems to exist in the liminal space between sleep dreaming and focused attentiveness, in which we are still awake but not really present.

Last week, a team of Austrian scientists expanded on this result in PLoS ONE. By examining 17 patients with unresponsive wakefulness syndrome (UWS), 8 patients in a minimally conscious state (MCS), and 25 healthy controls, the researchers were able to detect the brain differences along this gradient of consciousness. The key difference was an inability among the most unresponsive patients to “deactivate” their default network. This suggests that these poor subjects were trapped within a daydreaming loop, unable to exercise their executive regions to pay attention to the world outside. (Problems with the deactivation of the default network have also been observed in patients with Alzheimer’s and schizophrenia.) The end result is that their mind’s eye is always focused inwards.

Read the entire article after the jump.

Image: A daydreaming gentleman; from an original 1912 postcard published in Germany. Courtesy of Wikipedia.


Killer Ideas

It’s possible that most households on the planet have one. It’s equally possible that most humans have used one — excepting members of PETA (People for the Ethical Treatment of Animals) and other tolerant souls.

United States Patent 640,790 covers a simple and effective technology, invented by Robert Montgomery. The patent for a “Fly Killer”, or fly swatter as it is now more commonly known, was issued in 1900.

Sometimes the simplest design is the most pervasive and effective.

From the New York Times:

The first modern fly-destruction device was invented in 1900 by Robert R. Montgomery, an entrepreneur based in Decatur, Ill. Montgomery was issued Patent No. 640,790 for the Fly-Killer, a “cheap device of unusual elasticity and durability” made of wire netting, “preferably oblong,” attached to a handle. The material of the handle remained unspecified, but the netting was crucial: it reduced wind drag, giving the swatter a “whiplike swing.” By 1901, Montgomery’s invention was advertised in Ladies’ Home Journal as a tool that “kills without crushing” and “soils nothing,” unlike, say, a rolled-up newspaper.

Montgomery sold the patent rights in 1903 to an industrialist named John L. Bennett, who later invented the beer can. Bennett improved the design — stitching around the edge of the netting to keep it from fraying — but left the name.

The various fly-killing implements on the market at the time got the name “swatter” from Samuel Crumbine, secretary of the Kansas Board of Health. In 1905, he titled one of his fly bulletins, which warned of flyborne diseases, “Swat the Fly,” after a chant he heard at a ballgame. Crumbine took an invention known as the Fly Bat — a screen attached to a yardstick — and renamed it the Fly Swatter, which became the generic term we use today.

Fly-killing technology has advanced to include fly zappers (electrified tennis rackets that roast flies on contact) and fly guns (spinning discs that mulch insects). But there will always be less techy solutions: flypaper (sticky tape that traps the bugs), Fly Bottles (glass containers lined with an attractive liquid substance) and the Venus’ flytrap (a plant that eats insects).

During a 2009 CNBC interview, President Obama killed a fly with his bare hands, triumphantly exclaiming, “I got the sucker!” PETA was less gleeful, calling it a public “execution” and sending the White House a device that traps flies so that they may be set free.

But for the rest of us, as the product blogger Sean Byrne notes, “it’s hard to beat the good old-fashioned fly swatter.”

Read the entire article after the jump.

Image courtesy of Goodgrips.
