Vampire Wedding and the Moral Molecule

Attend a wedding. Gather the hundred or so guests, and take their blood. Take samples, that is. Then measure the levels of a hormone called oxytocin. This is where neuroeconomist Paul Zak’s story begins — around a molecular messenger thought to be responsible for facilitating trust and empathy in all our intimate relationships.

[div class=attrib]From “The Moral Molecule” by Paul J. Zak, to be published May 10, courtesy of the Wall Street Journal:[end-div]

Could a single molecule—one chemical substance—lie at the very center of our moral lives?

Research that I have done over the past decade suggests that a chemical messenger called oxytocin accounts for why some people give freely of themselves and others are coldhearted louts, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and why women tend to be nicer and more generous than men. In our blood and in the brain, oxytocin appears to be the chemical elixir that creates bonds of trust not just in our intimate relationships but also in our business dealings, in politics and in society at large.

Known primarily as a female reproductive hormone, oxytocin controls contractions during labor, which is where many women encounter it as Pitocin, the synthetic version that doctors inject in expectant mothers to induce delivery. Oxytocin is also responsible for the calm, focused attention that mothers lavish on their babies while breast-feeding. And it is abundant, too, on wedding nights (we hope) because it helps to create the warm glow that both women and men feel during sex, a massage or even a hug.

Since 2001, my colleagues and I have conducted a number of experiments showing that when someone’s level of oxytocin goes up, he or she responds more generously and caringly, even with complete strangers. As a benchmark for measuring behavior, we relied on the willingness of our subjects to share real money with others in real time. To measure the increase in oxytocin, we took their blood and analyzed it. Money comes in conveniently measurable units, which meant that we were able to quantify the increase in generosity by the amount someone was willing to share. We were then able to correlate these numbers with the increase in oxytocin found in the blood.
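For readers who like to see the arithmetic, here is a minimal sketch of the correlation step described above. It is not Zak’s actual analysis pipeline; the figures and units below are invented purely for illustration.

```python
# A minimal sketch (not Zak's actual analysis): correlating the amount of money
# a subject chose to share with the measured rise in blood oxytocin.
# All numbers below are made up purely for illustration.

from statistics import correlation  # Pearson correlation, Python 3.10+

money_shared = [0, 2, 4, 5, 7, 8, 10]                 # dollars given to a stranger
oxytocin_rise = [1.0, 3.5, 4.0, 6.2, 7.1, 7.8, 9.5]   # assumed pg/mL increase over baseline

r = correlation(money_shared, oxytocin_rise)
print(f"Pearson r between generosity and oxytocin increase: {r:.2f}")
```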

Later, to be certain that what we were seeing was true cause and effect, we sprayed synthetic oxytocin into our subjects’ nasal passages—a way to get it directly into their brains. Our conclusion: We could turn the behavioral response on and off like a garden hose. (Don’t try this at home: Oxytocin inhalers aren’t available to consumers in the U.S.)

More strikingly, we found that you don’t need to shoot a chemical up someone’s nose, or have sex with them, or even give them a hug in order to create the surge in oxytocin that leads to more generous behavior. To trigger this “moral molecule,” all you have to do is give someone a sign of trust. When one person extends himself to another in a trusting way—by, say, giving money—the person being trusted experiences a surge in oxytocin that makes her less likely to hold back and less likely to cheat. Which is another way of saying that the feeling of being trusted makes a person more…trustworthy. Which, over time, makes other people more inclined to trust, which in turn…

If you detect the makings of an endless loop that can feed back onto itself, creating what might be called a virtuous circle—and ultimately a more virtuous society—you are getting the idea.

Obviously, there is more to it, because no one chemical in the body functions in isolation, and other factors from a person’s life experience play a role as well. Things can go awry. In our studies, we found that a small percentage of subjects never shared any money; analysis of their blood indicated that their oxytocin receptors were malfunctioning. But for everyone else, oxytocin orchestrates the kind of generous and caring behavior that every culture endorses as the right way to live—the cooperative, benign, pro-social way of living that every culture on the planet describes as “moral.” The Golden Rule is a lesson that the body already knows, and when we get it right, we feel the rewards immediately.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]CPK model of the oxytocin molecule C43H66N12O12S2. Courtesy of Wikipedia.[end-div]

Corporatespeak: Lingua Franca of the Internet

Author Lewis Lapham reminds us of the phrase made (in)famous by Emperor Charles V:

“I speak Spanish to God, Italian to women, French to men, and German to my horse.”

So, what of the language of the internet? Again, Lapham offers a fitting and damning summary, this time courtesy of a lesser mortal, critic George Steiner:

“The true catastrophe of Babel is not the scattering of tongues. It is the reduction of human speech to a handful of planetary, ‘multinational’ tongues…Anglo-American standardized vocabularies” and grammar shaped by “military technocratic megalomania” and “the imperatives of commercial greed.”

More from the keyboard of Lewis Lapham on how the communicative promise of the internet is being usurped by commerce and the “lowest common denominator”.

[div class=attrib]From TomDispatch:[end-div]

But in which language does one speak to a machine, and what can be expected by way of response? The questions arise from the accelerating datastreams out of which we’ve learned to draw the breath of life, posed in consultation with the equipment that scans the flesh and tracks the spirit, cues the ATM, the GPS, and the EKG, arranges the assignations on Match.com and the high-frequency trades at Goldman Sachs, catalogs the pornography and drives the car, tells us how and when and where to connect the dots and thus recognize ourselves as human beings.

Why then does it come to pass that the more data we collect—from Google, YouTube, and Facebook—the less likely we are to know what it means?

The conundrum is in line with the late Marshall McLuhan’s noticing 50 years ago the presence of “an acoustic world,” one with “no continuity, no homogeneity, no connections, no stasis,” a new “information environment of which humanity has no experience whatever.” He published Understanding Media in 1964, proceeding from the premise that “we become what we behold,” that “we shape our tools, and thereafter our tools shape us.”

Media were to be understood as “make-happen agents” rather than as “make-aware agents,” not as art or philosophy but as systems comparable to roads and waterfalls and sewers. Content follows form; new means of communication give rise to new structures of feeling and thought.

To account for the transference of the idioms of print to those of the electronic media, McLuhan examined two technological revolutions that overturned the epistemological status quo. First, in the mid-15th century, Johannes Gutenberg’s invention of moveable type, which deconstructed the illuminated wisdom preserved on manuscript in monasteries, encouraged people to organize their perceptions of the world along the straight lines of the printed page. Second, in the 19th and 20th centuries, the applications of electricity (telegraph, telephone, radio, movie camera, television screen, eventually the computer), favored a sensibility that runs in circles, compressing or eliminating the dimensions of space and time, narrative dissolving into montage, the word replaced with the icon and the rebus.

Within a year of its publication, Understanding Media acquired the standing of Holy Scripture and made of its author the foremost oracle of the age. The New York Herald Tribune proclaimed him “the most important thinker since Newton, Darwin, Freud, Einstein, and Pavlov.” Although never at a loss for Delphic aphorism—”The electric light is pure information”; “In the electric age, we wear all mankind as our skin”—McLuhan assumed that he had done nothing more than look into the window of the future at what was both obvious and certain.

[div class=attrib]Read the entire article following the jump.[end-div]

Language as a Fluid Construct

Peter Ludlow, professor of philosophy at Northwestern University, has authored a number of fascinating articles on the philosophy of language and linguistics. Here he discusses his view of language as a dynamic, living organism. Literalists take note.

[div class=attrib]From the New York Times:[end-div]

There is a standard view about language that one finds among philosophers, language departments, pundits and politicians.  It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks.  I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard fought product of conflict.  Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.

Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use.  According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages.  Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.

Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place.  That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.

Say I tell my friend that I don’t care where I teach so long as the school is in a city.  My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.”  In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as cities.

Word meanings are dynamic, but they are also underdetermined.  What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.”  We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.

This isn’t just the case for words like “city” but, for all words, ranging from words for things, like “person” and “tree,” words for abstract ideas, like “art” and “freedom,” and words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.”  The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Leif Parsons / New York Times.[end-div]

Your Brain Today

Progress in neuroscience continues to accelerate, and one of the principal catalysts of this progress is neuroscientist David Eagleman. We excerpt a recent article about Eagleman’s research into, amongst other things, synaesthesia, sensory substitution, time perception, the neurochemical basis for attraction, and consciousness.

[div class=attrib]From the Telegraph:[end-div]

It ought to be quite intimidating, talking to David Eagleman. He is one of the world’s leading neuroscientists, after all, known for his work on time perception, synaesthesia and the use of neurology in criminal justice. But as anyone who has read his best-selling books or listened to his TED talks online will know, he has a gift for communicating complicated ideas in an accessible and friendly way — Brian Cox with an American accent.

He lives in Houston, Texas, with his wife and their two-month-old baby. When we Skype each other, he is sitting in a book-lined study and he doesn’t look as if his nights are being too disturbed by mewling. No bags under his eyes. In fact, with his sideburns and black polo shirt he looks much younger than his 41 years, positively boyish. His enthusiasm for his subject is boyish, too, as he warns me, he “speaks fast”.

He sure does. And he waves his arms around. We are talking about the minute calibrations and almost instantaneous assessments the brain makes when members of the opposite sex meet, one of many brain-related subjects covered in his book Incognito: The Secret Lives of the Brain, which is about to be published in paperback.

“Men are consistently more attracted to women with dilated eyes,” he says. “Because that corresponds with sexual excitement.”

Still, I say, not exactly a romantic discovery, is it? How does this theory go down with his wife? “Well she’s a neuroscientist like me so we joke about it all the time, like when I grow a beard. Women will always say they don’t like beards, but when you do the study it turns out they do, and the reason is it’s a secondary sex characteristic that indicates sexual development, the thing that separates the men from the boys.”

Indeed, according to Eagleman, we mostly run on unconscious autopilot. Our neural systems have been carved by natural selection to solve problems that were faced by our ancestors. Which brings me to another of his books, Why The Net Matters. As the father of children who spend a great deal of their time on the internet, I want to know if he thinks it is changing their brains.

“It certainly is,” he says, “especially in the way we seek information. When we were growing up it was all about ‘just in case’ information, the Battle of Hastings and so on. Now it is ‘just in time’ learning, where a kid looks something up online if he needs to know about it. This means kids today are becoming less good at memorising, but in other ways their method of learning is superior to ours because it targets neurotransmitters in the brain, ones that are related to curiosity, emotional salience and interactivity. So I think there might be some real advantages to where this is going. Kids are becoming faster at searching for information. When you or I read, our eyes scan down the page, but for a Generation-Y kid, their eyes will have a different set of movements, top, then side, then bottom and that is the layout of webpages.”

In many ways Eagleman’s current status as “the poster boy of science’s most fashionable field” (as the neuroscientist was described in a recent New Yorker profile) seems entirely apt given his own upbringing. His mother was a biology teacher, his father a psychiatrist who was often called upon to evaluate insanity pleas. Yet Eagleman says he wasn’t drawn to any of this. “Growing up, I didn’t see my career path coming at all, because in tenth grade I always found biology gross, dissecting rats and frogs. But in college I started reading about the brain and then I found myself consuming anything I could on the subject. I became hooked.”

Eagleman’s mother has described him as an “unusual child”. He wrote his first words at two, and at 12 he was explaining Einstein’s theory of relativity to her. He also liked to ask for a list of 400 random objects then repeat them back from memory, in reverse order. At Rice University, Houston, he majored in electrical engineering, but then took a sabbatical, joined the Israeli army as a volunteer, spent a semester at Oxford studying political science and literature and finally moved to LA to try and become a stand-up comedian. It didn’t work out and so he returned to Rice, this time to study neurolinguistics. After this came his doctorate and his day job as a professor running a laboratory at Baylor College of Medicine, Houston (he does his book writing at night, doesn’t have hobbies and has never owned a television).

I ask if he has encountered any snobbery within the scientific community for being an academic who has “dumbed down” by writing popular science books that spend months on the New York Times bestseller list? “I have to tell you, that was one of my concerns, and I can definitely find evidence of that. Online, people will sometimes say terrible things about me, but they are the exceptions that illustrate a more benevolent rule. I give talks on university campuses and the students there tell me they read my books because they synthesise large swathes of data in a readable way.”

He actually thinks there is an advantage for scientists in making their work accessible to non-scientists. “I have many tens of thousands of neuroscience details in my head and the process of writing about them and trying to explain them to an eighth grader makes them become clearer in my own mind. It crystallises them.”

I tell him that my copy of Incognito is heavily annotated and there is one passage where I have simply written a large exclamation mark. It concerns Eric Weihenmayer who, in 2001, became the first blind person to climb Mount Everest. Today he climbs with a grid of more than six hundred tiny electrodes in his mouth. This device allows him to see with his tongue. Although the tongue is normally a taste organ, its moisture and chemical environment make it a good brain-machine interface when a tingly electrode grid is laid on its surface. The grid translates a video input into patterns of electrical pulses, allowing the tongue to discern qualities usually ascribed to vision such as distance, shape, direction of movement and size.
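As a rough illustration of the sensory-substitution idea, here is a sketch that downsamples a camera frame to a small electrode grid and maps pixel brightness to stimulation intensity. The grid size, intensity scale and random stand-in frame are assumptions; this is not the actual device’s processing.

```python
# A rough sketch of the sensory-substitution idea described above (not the real
# device firmware): downsample a video frame to a small electrode grid and map
# pixel brightness to a stimulation intensity per electrode.

import numpy as np

GRID_ROWS, GRID_COLS = 20, 30   # ~600 "electrodes", roughly matching the article
MAX_PULSE = 255                 # arbitrary stimulation-intensity scale (assumption)

def frame_to_pulses(frame: np.ndarray) -> np.ndarray:
    """Reduce a grayscale frame (H x W, values 0-255) to a grid of pulse intensities."""
    h, w = frame.shape
    rows = np.array_split(np.arange(h), GRID_ROWS)
    cols = np.array_split(np.arange(w), GRID_COLS)
    grid = np.empty((GRID_ROWS, GRID_COLS))
    for i, r in enumerate(rows):
        for j, c in enumerate(cols):
            grid[i, j] = frame[np.ix_(r, c)].mean()   # average brightness per cell
    return (grid / 255.0 * MAX_PULSE).astype(np.uint8)

# Fake 240x320 frame standing in for camera input
frame = np.random.randint(0, 256, size=(240, 320), dtype=np.uint8)
pulses = frame_to_pulses(frame)
print(pulses.shape)   # (20, 30): one intensity value per electrode
```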

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of ALAMY / Telegraph.[end-div]

Cocktail Party Science and Multitasking

The hit drama Mad Men shows us that cocktail parties can be fun — colorful drinks and colorful conversations with a host of very colorful characters. Yet cocktail parties also highlight one of our limitations: the inability to multitask. We are single-threaded animals, despite the constant and simultaneous bombardment of our attention from all directions and across all our senses.

Melinda Beck over at the WSJ Health Journal summarizes recent research that shows the deleterious effects of our attempts to multitask — why it’s so hard and why it’s probably not a good idea anyway, especially while driving.

[div class=attrib]From the Wall Street Journal:[end-div]

You’re at a party. Music is playing. Glasses are clinking. Dozens of conversations are driving up the decibel level. Yet amid all those distractions, you can zero in on the one conversation you want to hear.

This ability to hyper-focus on one stream of sound amid a cacophony of others is what researchers call the “cocktail-party effect.” Now, scientists at the University of California, San Francisco have pinpointed where that sound-editing process occurs in the brain—in the auditory cortex just behind the ear, not in areas of higher thought. The auditory cortex boosts some sounds and turns down others so that when the signal reaches the higher brain, “it’s as if only one person was speaking alone,” says principal investigator Edward Chang.

These findings, published in the journal Nature last week, underscore why people aren’t very good at multitasking—our brains are wired for “selective attention” and can focus on only one thing at a time. That innate ability has helped humans survive in a world buzzing with visual and auditory stimulation. But we keep trying to push the limits with multitasking, sometimes with tragic consequences. Drivers talking on cellphones, for example, are four times as likely to get into traffic accidents as those who aren’t.

Many of those accidents are due to “inattentional blindness,” in which people can, in effect, turn a blind eye to things they aren’t focusing on. Images land on our retinas and are either boosted or played down in the visual cortex before being passed to the brain, just as the auditory cortex filters sounds, as shown in the Nature study last week. “It’s a push-pull relationship—the more we focus on one thing, the less we can focus on others,” says Diane M. Beck, an associate professor of psychology at the University of Illinois.

That people can be completely oblivious to things in their field of vision was demonstrated famously in the “Invisible Gorilla experiment” devised at Harvard in the 1990s. Observers are shown a short video of youths tossing a basketball and asked to count how often the ball is passed by those wearing white. Afterward, the observers are asked several questions, including, “Did you see the gorilla?” Typically, about half the observers failed to notice that someone in a gorilla suit walked through the scene. They’re usually flabbergasted because they’re certain they would have noticed something like that.

“We largely see what we expect to see,” says Daniel Simons, one of the study’s creators and now a professor of psychology at the University of Illinois. As he notes in his subsequent book, “The Invisible Gorilla,” the more attention a task demands, the less attention we can pay to other things in our field of vision. That’s why pilots sometimes fail to notice obstacles on runways and radiologists may overlook anomalies on X-rays, especially in areas they aren’t scrutinizing.

And it isn’t just that sights and sounds compete for the brain’s attention. All the sensory inputs vie to become the mind’s top priority.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Getty Images / Wall Street Journal.[end-div]

Tilt: The World in Miniature

Tilt-shift photography has been around for quite a while, primarily as a tool in high-end architectural photography. More recently with the advent of more affordable lens attachments for consumer cameras and through software post-processing, including Photoshop and Instagram, tilt-shift is becoming more mainstream.

Tilt-shift is a combination of two movements. Photographers tilt, or rotate, the lens plane relative to the image to control which part of an image retains focus. Then, they shift the perspective to re-position the subject in the image (this usually has the effect of reducing the convergence of parallel lines). When used appropriately, tilt-shift delivers a highly selective focus, and the resulting images give the illusion of a miniaturized landscape.
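For the software route, a minimal sketch of a fake tilt-shift effect might look like the following. It is not how Photoshop or Instagram implement their filters; the file name, blur radius and focus-band parameters are illustrative assumptions.

```python
# A minimal sketch of a software "fake tilt-shift" effect: keep a horizontal band
# sharp, blur progressively toward the top and bottom, and boost colour saturation
# to exaggerate the toy-model look. Parameters and file names are assumptions.

from PIL import Image, ImageFilter, ImageEnhance

def fake_tilt_shift(path: str, focus_center: float = 0.55, focus_width: float = 0.15) -> Image.Image:
    img = Image.open(path).convert("RGB")
    img = ImageEnhance.Color(img).enhance(1.4)          # punchier colours
    blurred = img.filter(ImageFilter.GaussianBlur(radius=6))

    # Build a vertical mask: 0 inside the in-focus band, rising to 255 far from it.
    w, h = img.size
    mask = Image.new("L", (w, h))
    for y in range(h):
        dist = abs(y / h - focus_center)                # distance from the focus line
        amount = min(max((dist - focus_width) / focus_width, 0.0), 1.0)
        mask.paste(int(amount * 255), (0, y, w, y + 1))

    # Where the mask is bright, use the blurred copy; where dark, keep the sharp original.
    return Image.composite(blurred, img, mask)

# Hypothetical usage:
# miniature = fake_tilt_shift("brighton_beach.jpg")
# miniature.save("brighton_beach_tilt_shift.jpg")
```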

[div class=attrib]More tilt-shift photographs from the Telegraph after the jump.[end-div]

[div class=attrib]Image: Brighton beach, on the south coast of Sussex, England. Courtesy of the Telegraph.[end-div]

Religious Art: From Faith or For Money?

Over the centuries many notable artists have painted religious scenes initiated or influenced by a very deep religious conviction; some painted to give voice to their own spirituality, others to mirror the faith of their time and community. However, others simply painted for fame or fortune, or both, or to remain in good stead with their wealthy patrons and landlords.

This brings us to another thoughtful article from Jonathan Jones over at the Guardian.

[div class=attrib]From the Guardian:[end-div]

“To paint the things of Christ you must live with Christ,” said the 15th-century artist Fra Angelico. He knew what he was talking about – he was a Dominican monk of such exemplary virtue that in 1982 he was officially beatified by Pope John Paul II. He was also a truly great religious artist whose frescoes at San Marco in Florence have influenced modern artists such as Mark Rothko. But is all holy art that holy?

From the dark ages to the end of the 17th century, the vast majority of artistic commissions in Europe were religious. Around 1700 this somehow stopped, at least when it came to art anyone cares to look at now. The great artists of the 18th century, and since, worked for secular patrons and markets. But in all those centuries when Christianity defined art, its genres, its settings, its content, was every painter and sculptor totally sincerely faithful in every work of art? Or were some of them just doing what they had to do and finding pleasure in the craft?

This question relates to another. What is it like to live in a world where everyone is religious? It is often said it was impossible to even imagine atheism in the middle ages and the Renaissance. This is so different from modern times that people do not even try to imagine it. Modern Christians blithely imagine a connection when actually a universal church meant a mentality so different from modern “faith” that today’s believers are as remote from it as today’s non-believers. Among other things it meant that while some artists “lived with Christ” and made art that searched their souls, others enjoyed the colours, the drama, the rich effects of religious paintings without thinking too deeply about the meaning.

Here are two contrasting examples from the National Gallery. Zurbarán’s painting of St Francis in Meditation (1635-9) is a harrowing and profoundly spiritual work. The face of a kneeling friar is barely glimpsed in a darkness that speaks of inner searching, of the long night of the soul. This is a true Christian masterpiece. But compare it to Carlo Crivelli’s painting The Annunciation (1486) in the same museum. Crivelli’s picture is a feast for the eye. Potted plants, a peacock, elaborately decorated classical buildings – and is that a gherkin just added in at the front of the scene? – add up to a materialistic cornucopia of visual interest. What is the religious function of such detail? Art historians, who sometimes seem to be high on piety, will point to the allegorical meaning of everyday objects in Renaissance art. But that’s all nonsense. I am not saying the allegories do not exist – I am saying they do not matter much to the artist, his original audience or us. In reality, Crivelli is enjoying himself, enjoying the world, and he paints religious scenes because that’s what he got paid to paint.

By smothering the art of the past in a piety that in some cases may be woefully misplaced, its guardians do it a disservice. Is Crivelli a Christian artist? Not in any sense that is meaningful today. He loves the things of this life, not the next.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Annunciation with St Emidius, Carlo Crivelli, 1486. National Gallery, London. Courtesy of Wikipedia / National Gallery.[end-div]

Loneliness in the Age of Connectedness

Online social networks are a boon to researchers. As never before, social scientists are probing our connections, our innermost thoughts now made public, our networks of friends, and our loneliness. Some academics point to the likes of Facebook for making our increasingly shallow “friendships” a disposable and tradable commodity, and ironically facilitating isolation from more intimate and deeper connections. Others see Facebook merely as a mirror — we have, quite simply, made ourselves lonely, and our social networks instantly and starkly expose our isolation for all to see and “like”.

An insightful article by novelist Stephen Marche over at The Atlantic examines our self-imposed loneliness.

[div class=attrib]From the Atlantic:[end-div]

Yvette Vickers, a former Playboy playmate and B-movie star, best known for her role in Attack of the 50 Foot Woman, would have been 83 last August, but nobody knows exactly how old she was when she died. According to the Los Angeles coroner’s report, she lay dead for the better part of a year before a neighbor and fellow actress, a woman named Susan Savage, noticed cobwebs and yellowing letters in her mailbox, reached through a broken window to unlock the door, and pushed her way through the piles of junk mail and mounds of clothing that barricaded the house. Upstairs, she found Vickers’s body, mummified, near a heater that was still running. Her computer was on too, its glow permeating the empty space.

The Los Angeles Times posted a story headlined “Mummified Body of Former Playboy Playmate Yvette Vickers Found in Her Benedict Canyon Home,” which quickly went viral. Within two weeks, by Technorati’s count, Vickers’s lonesome death was already the subject of 16,057 Facebook posts and 881 tweets. She had long been a horror-movie icon, a symbol of Hollywood’s capacity to exploit our most basic fears in the silliest ways; now she was an icon of a new and different kind of horror: our growing fear of loneliness. Certainly she received much more attention in death than she did in the final years of her life. With no children, no religious group, and no immediate social circle of any kind, she had begun, as an elderly woman, to look elsewhere for companionship. Savage later told Los Angeles magazine that she had searched Vickers’s phone bills for clues about the life that led to such an end. In the months before her grotesque death, Vickers had made calls not to friends or family but to distant fans who had found her through fan conventions and Internet sites.

Vickers’s web of connections had grown broader but shallower, as has happened for many of us. We are living in an isolation that would have been unimaginable to our ancestors, and yet we have never been more accessible. Over the past three decades, technology has delivered to us a world in which we need not be out of contact for a fraction of a moment. In 2010, at a cost of $300 million, 800 miles of fiber-optic cable was laid between the Chicago Mercantile Exchange and the New York Stock Exchange to shave three milliseconds off trading times. Yet within this world of instant and absolute communication, unbounded by limits of time or space, we suffer from unprecedented alienation. We have never been more detached from one another, or lonelier. In a world consumed by ever more novel modes of socializing, we have less and less actual society. We live in an accelerating contradiction: the more connected we become, the lonelier we are. We were promised a global village; instead we inhabit the drab cul-de-sacs and endless freeways of a vast suburb of information.

At the forefront of all this unexpectedly lonely interactivity is Facebook, with 845 million users and $3.7 billion in revenue last year. The company hopes to raise $5 billion in an initial public offering later this spring, which will make it by far the largest Internet IPO in history. Some recent estimates put the company’s potential value at $100 billion, which would make it larger than the global coffee industry—one addiction preparing to surpass the other. Facebook’s scale and reach are hard to comprehend: last summer, Facebook became, by some counts, the first Web site to receive 1 trillion page views in a month. In the last three months of 2011, users generated an average of 2.7 billion “likes” and comments every day. On whatever scale you care to judge Facebook—as a company, as a culture, as a country—it is vast beyond imagination.

Despite its immense popularity, or more likely because of it, Facebook has, from the beginning, been under something of a cloud of suspicion. The depiction of Mark Zuckerberg, in The Social Network, as a bastard with symptoms of Asperger’s syndrome, was nonsense. But it felt true. It felt true to Facebook, if not to Zuckerberg. The film’s most indelible scene, the one that may well have earned it an Oscar, was the final, silent shot of an anomic Zuckerberg sending out a friend request to his ex-girlfriend, then waiting and clicking and waiting and clicking—a moment of superconnected loneliness preserved in amber. We have all been in that scene: transfixed by the glare of a screen, hungering for response.

When you sign up for Google+ and set up your Friends circle, the program specifies that you should include only “your real friends, the ones you feel comfortable sharing private details with.” That one little phrase, Your real friends—so quaint, so charmingly mothering—perfectly encapsulates the anxieties that social media have produced: the fears that Facebook is interfering with our real friendships, distancing us from each other, making us lonelier; and that social networking might be spreading the very isolation it seemed designed to conquer.

Facebook arrived in the middle of a dramatic increase in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of greater connection seem deeply attractive. Americans are more solitary than ever before. In 1950, less than 10 percent of American households contained only one person. By 2010, nearly 27 percent of households had just one person. Solitary living does not guarantee a life of unhappiness, of course. In his recent book about the trend toward living alone, Eric Klinenberg, a sociologist at NYU, writes: “Reams of published research show that it’s the quality, not the quantity of social interaction, that best predicts loneliness.” True. But before we begin the fantasies of happily eccentric singledom, of divorcées dropping by their knitting circles after work for glasses of Drew Barrymore pinot grigio, or recent college graduates with perfectly articulated, Steampunk-themed, 300-square-foot apartments organizing croquet matches with their book clubs, we should recognize that it is not just isolation that is rising sharply. It’s loneliness, too. And loneliness makes us miserable.

We know intuitively that loneliness and being alone are not the same thing. Solitude can be lovely. Crowded parties can be agony. We also know, thanks to a growing body of research on the topic, that loneliness is not a matter of external conditions; it is a psychological state. A 2005 analysis of data from a longitudinal study of Dutch twins showed that the tendency toward loneliness has roughly the same genetic component as other psychological problems such as neuroticism or anxiety.

[div class=attrib]Kindly read the entire article after the momentary jump.[end-div]

[div class=attrib]Photograph courtesy of Phillip Toledano / The Atlantic.[end-div]

Hitchcock

Alfred Hitchcock was a pioneer of modern cinema. His finely crafted movies introduced audiences to new levels of suspense, sexuality and violence. His work raised cinema to the level of great art.

This summer in London, the British Film Institute (BFI) is celebrating all things Hitchcockian by showing all 58 of his works, including newly restored prints of his early silent films, such as Blackmail.

[div class=attrib]From the Guardian:[end-div]

Alfred Hitchcock is to be celebrated like never before this summer, with a retrospective of all his surviving films and the premieres of his newly restored silent films – including Blackmail, which will be shown outside the British Museum.

The BFI on Tuesday announced details of its biggest ever project: celebrating the genius of a man who, it said, was as important to modern cinema as Picasso to modern art or Le Corbusier to modern architecture. Heather Stewart, the BFI’s creative director, said: “The idea of popular cinema somehow being capable of being great art at the same time as being entertaining is still a problem for some people. Shakespeare is on the national curriculum, Hitchcock is not.”

One of the highlights of the season will be the culmination of a three-year project to fully restore nine of the director’s silent films. It will involve The Pleasure Garden, Hitchcock’s first, being shown at Wilton’s Music Hall; The Ring at Hackney Empire, and Blackmail outside the British Museum, where the film’s climactic chase scene was filmed in 1929, both inside the building and on the roof.

Stewart said the restorations were spectacular and overdue. “We would find it very strange if we could not see Shakespeare’s early plays performed, or read Dickens’s early novels. But we’ve been quite satisfied as a nation that Hitchcock’s early films have not been seen in good quality prints on the big screen, even though – like Shakespearean and Dickensian – Hitchcockian has entered our language.”

The films, with new scores by composers including Nitin Sawhney, Daniel Patrick Cohen and Soweto Kinch, will be shown during the London 2012 Festival, the finale of the Cultural Olympiad.

Between August and October the BFI will show all 58 surviving Hitchcock films including his many films made in the UK – The 39 Steps, for example, and The Lady Vanishes – and those from his Hollywood years, from Rebecca in 1940 to Vertigo in 1958, The Birds in 1963 and his penultimate film, Frenzy, in 1972.

[div class=attrib]See more stills here, and read the entire article after the jump.[end-div]

[div class=attrib]Image: Robert Donat in The 39 Steps (1935), often hailed as the best of four film versions of John Buchan’s novel. Courtesy of BFI / Guardian.[end-div]

Wedding Photography

If you’ve been through a wedding or other formal ceremony, you probably have an album of images that beautifully captured the day. You, your significant other, family and select friends will browse through the visual memories every so often. Doubtless you will have hired, for a quite handsome sum, a professional photographer and/or videographer to record all the important instants. However, somewhere you, or your photographer, will have a selection of “outtakes” that should never see the light of day, such as those described below.

[div class=attrib]From the Daily Telegraph:[end-div]

Thomas and Anneka Geary paid professional photographers Ian McCloskey and Nikki Carter £750 to cover what should have been the best day of their lives.

But they were stunned when the pictures arrived and included out of focus shots of the couple, the back of guests’ heads and a snap of the bride’s mother whose face was completely obscured by her hat.

Astonishingly, the photographers even failed to take a single frame of the groom’s parents.

One snap of the couple signing the marriage register also appears to feature a ghostly hand clutching a toy motorbike where the snappers tried to edit out Anneka’s three-year-old nephew Harry who was standing in the background.

The pictures of the evening do, which hosted 120 guests, were also taken without flash because one of the photographers complained about being epileptic.

[div class=attrib]Read the entire article and browse through more images after the jump.[end-div]

[div class=attrib]Image: Tom, 32, a firefighter for Warwickshire Fire Service, said: “We received a CD from the wedding photographers but at first we thought it was a joke. Just about all of the pictures were out of focus or badly lit or just plain weird.” Courtesy of Daily Telegraph, Westgate Photography / SWNS.[end-div]

The Evolutionary Benefits of Middle Age

David Bainbridge, author of “Middle Age: A Natural History”, examines the benefits of middle age. Yes, really. For those of us in “middle age” it’s not surprising to see that this period is not limited to decline, disease and senility. Rather, it’s a pre-programmed redistribution of physical and mental resources designed to cope with our ever-increasing life spans.

[div class=attrib]From David Bainbridge over at New Scientist:[end-div]

As a 42-year-old man born in England, I can expect to live for about another 38 years. In other words, I can no longer claim to be young. I am, without doubt, middle-aged.

To some people that is a depressing realization. We are used to dismissing our fifth and sixth decades as a negative chapter in our lives, perhaps even a cause for crisis. But recent scientific findings have shown just how important middle age is for every one of us, and how crucial it has been to the success of our species. Middle age is not just about wrinkles and worry. It is not about getting old. It is an ancient, pivotal episode in the human life span, preprogrammed into us by natural selection, an exceptional characteristic of an exceptional species.

Compared with other animals, humans have a very unusual pattern to our lives. We take a very long time to grow up, we are long-lived, and most of us stop reproducing halfway through our life span. A few other species have some elements of this pattern, but only humans have distorted the course of their lives in such a dramatic way. Most of that distortion is caused by the evolution of middle age, which adds two decades that most other animals simply do not get.

An important clue that middle age isn’t just the start of a downward spiral is that it does not bear the hallmarks of general, passive decline. Most body systems deteriorate very little during this stage of life. Those that do, deteriorate in ways that are very distinctive, are rarely seen in other species and are often abrupt.

For example, our ability to focus on nearby objects declines in a predictable way: Farsightedness is rare at 35 but universal at 50. Skin elasticity also decreases reliably and often surprisingly abruptly in early middle age. Patterns of fat deposition change in predictable, stereotyped ways. Other systems, notably cognition, barely change.

Each of these changes can be explained in evolutionary terms. In general, it makes sense to invest in the repair and maintenance only of body systems that deliver an immediate fitness benefit — that is, those that help to propagate your genes. As people get older, they no longer need spectacular visual acuity or mate-attracting, unblemished skin. Yet they do need their brains, and that is why we still invest heavily in them during middle age.

As for fat — that wonderfully efficient energy store that saved the lives of many of our hard-pressed ancestors — its role changes when we are no longer gearing up to produce offspring, especially in women. As the years pass, less fat is stored in depots ready to meet the demands of reproduction — the breasts, hips and thighs — or under the skin, where it gives a smooth, youthful appearance. Once our babymaking days are over, fat is stored in larger quantities and also stored more centrally, where it is easiest to carry about. That way, if times get tough we can use it for our own survival, thus freeing up food for our younger relatives.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Middle Age Couple Laughing. Courtesy of Cindi Matthews / Flickr.[end-div]

Heavy Metal Density

Heavy metal in the musical sense, not the elemental kind such as iron or manganese, is hugely popular in Finland and Iceland. It even pops up in Iran and Saudi Arabia.

[div class=attrib]Frank Jacobs over at Strange Maps tells us more.[end-div]

This map reflects the number of heavy metal bands per 100,000 inhabitants for each country in the world. It codes the result on a colour temperature scale, with blue indicating low occurrence, and red high occurrence. The data for this map is taken from the extensive Encyclopaedia Metallum, an online archive of metal music that lists bands per country, and provides some background by listing their subgenre (Progressive Death Metal, Symphonic Gothic Metal, Groove Metal, etc).

Even if you barely know your Def Leppard from your Deep Purple, you won’t be surprised by the obvious point of this map: Scandinavia is the world capital of heavy metal music. Leaders of the pack are Finland and Sweden, coloured with the hottest shade of red. With 2,825 metal bands listed in the Encyclopaedia Metallum, the figure for Finland works out to 54.3 bands per 100,000 Finns (for a total of 5.2 million inhabitants). Second is Sweden, with a whopping 3,398 band entries. For 9.1 million Swedes, that amounts to 37.3 metal bands per 100,000 inhabitants.

The next-hottest shade of red is coloured in by Norway and Iceland. The Icelandic situation is interesting: with only 71 bands listed, the country seems not particularly metal-oriented. But the total population of the North Atlantic island is a mere 313,000, which produces a result of 22.6 metal bands per 100,000 inhabitants. That’s almost double, relatively speaking, Denmark’s score of 12.9 (708 metal bands for 5.5 million Danes).
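The per-capita figures are easy to verify. Here is a quick sketch using the band counts and approximate populations quoted above:

```python
# A quick check of the per-capita arithmetic quoted above, using the band counts
# and (approximate) populations given in the text.

bands = {"Finland": 2825, "Sweden": 3398, "Iceland": 71, "Denmark": 708}
population = {"Finland": 5.2e6, "Sweden": 9.1e6, "Iceland": 313_000, "Denmark": 5.5e6}

for country in bands:
    per_100k = bands[country] / population[country] * 100_000
    print(f"{country}: {per_100k:.1f} metal bands per 100,000 inhabitants")

# Prints roughly: Finland 54.3, Sweden 37.3, Iceland 22.7, Denmark 12.9
# (the article rounds Iceland down to 22.6; the rest match).
```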

The following shades of colour, from dark orange to light yellow, are almost all found in North America, Europe and Australasia. A notable addition to this list of usual suspects are Israel, and the three countries of Latin America’s Southern Cone: Chile, Argentina and Uruguay.

Some interesting variations in Europe: Portugal is much darker – i.e. much more metal-oriented – than its Iberian neighbour Spain, and Greece is a solid southern outpost of metal on an otherwise wishy-washy Balkan Peninsula.

On the other side of the scale, light blue indicates the worst – or at least loneliest – places to be a metal fan: Papua New Guinea, North Korea, Cambodia, Afghanistan, Yemen, and most of Africa outside its northern and southern fringe. According to the Encyclopaedia Metallum, there isn’t a single metal band in any of those countries.

[div class=attrib]Read the entire article after the jump.[end-div]

Why Do Some Videos Go Viral, and Others Not?

Some online videos and stories are seen by tens or hundreds of millions, yet others never see the light of day. Advertisers and reality-star wannabes search daily for the secret sauce that determines the huge success of one internet meme over many others. However, much to the frustration of the many agents of the “next big thing”, several fascinating new studies point to nothing more than simple randomness.

[div class=attrib]From the New Scientist:[end-div]

WHAT causes some photos, videos, and Twitter posts to spread across the internet like wildfire while others fall by the wayside? The answer may have little to do with the quality of the information. What goes viral may be completely arbitrary, according to a controversial new study of online social networks.

By analysing 120 million retweets – repostings of users’ messages on Twitter – by 12.5 million users of the social network, researchers at Indiana University, Bloomington, learned the mechanisms by which memes compete for user interest, and how information spreads.

Using this insight, the team built a computer simulation designed to mimic Twitter. In the simulation, each tweet or message was assigned the same value and retweets were performed at random. Despite this, some tweets became incredibly popular and were persistently reposted, while others were quickly forgotten.

The reason for this, says team member Filippo Menczer, is that the simulated users had a limited attention span and could only view a portion of the total number of tweets – as is the case in the real world. Tweets selected for retweeting would be more likely to be seen by a user and re-posted. After a few iterations, a tweet becomes significantly more prevalent than those not retweeted. Many users see the message and retweet it further.
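A toy version of that mechanism is easy to reproduce. The sketch below is not the Indiana team’s simulation code; it simply gives every meme identical quality, limits each step to a small attention window of recent posts, and retweets at random from that window. Popularity still ends up wildly unequal.

```python
# A toy version of the limited-attention model described above (not the authors'
# actual simulation): all memes have equal intrinsic value, users see only a small
# window of recent posts, and retweets are chosen at random from that window.

import random
from collections import Counter

random.seed(42)

P_NEW = 0.1            # chance a post introduces a brand-new meme instead of retweeting
ATTENTION_SPAN = 10    # how many recent posts a user actually sees
STEPS = 50_000

timeline = [0]         # stream of posts; each entry is a meme id
retweets = Counter()
next_meme = 1

for _ in range(STEPS):
    if random.random() < P_NEW:
        meme = next_meme                      # a new meme enters the stream
        next_meme += 1
    else:
        window = timeline[-ATTENTION_SPAN:]   # limited attention: newest posts only
        meme = random.choice(window)          # retweet at random, no quality involved
        retweets[meme] += 1
    timeline.append(meme)                     # reposting pushes the meme back into view

print("Total distinct memes:", next_meme)
print("Top 5 memes by retweets:", retweets.most_common(5))
print("Memes retweeted at least once:", len(retweets))
```

Despite every meme being interchangeable, a handful dominate the retweet counts while most vanish without ever being reposted, which is the paper’s central point.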

“When a meme starts to get popular it displaces other memes; you start to pay attention to the popular meme and don’t pay attention to other things because you have only so much attention,” Menczer says. “It’s similar to when a big news story breaks, you don’t hear about other things that happened on that day.”

Katherine Milkman of the University of Pennsylvania in Philadelphia disagrees. “[Menczer’s study] says that all of the things that catch on could be truly random but it doesn’t say they have to be,” says Milkman, who co-authored a paper last year examining how emotions affect meme sharing.

Milkman’s study analysed 7000 articles that appeared in the New York Times over a three-month period. It found that articles that aroused readers’ emotions were more likely to end up on the website’s “most emailed” list. “Anything that gets you fired up, whether positive or negative, will lead you to share it more,” Milkman says.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets is a book by Nassim Nicholas Taleb. Courtesy of Wikipedia.[end-div]

Childhood Memory

[div class=attrib]From Slate:[end-div]

Last August, I moved across the country with a child who was a few months shy of his third birthday. I assumed he’d forget his old life—his old friends, his old routine—within a couple of months. Instead, over a half-year later, he remembers it in unnerving detail: the Laundromat below our apartment, the friends he ran around naked with, my wife’s co-workers. I just got done with a stint pretending to be his long-abandoned friend Iris—at his direction.

We assume children don’t remember much, because we don’t remember much about being children. As far as I can tell, I didn’t exist before the age of 5 or so—which is how old I am in my earliest memory, wandering around the Madison, Wis. farmers market in search of cream puffs. But developmental research now tells us that Isaiah’s memory isn’t extraordinary. It’s ordinary. Children remember.

Up until the 1980s, almost no one would have believed that Isaiah still remembers Iris. It was thought that babies and young toddlers lived in a perpetual present: All that existed was the world in front of them at that moment. When Jean Piaget conducted his famous experiments on object permanence—in which once an object was covered up, the baby seemed to forget about it—Piaget concluded that the baby had been unable to store the memory of the object: out of sight, out of mind.

The paradigm of the perpetual present has now itself been forgotten. Even infants are aware of the past, as many remarkable experiments have shown. Babies can’t speak but they can imitate, and if shown a series of actions with props, even 6-month-old infants will repeat a three-step sequence a day later. Nine-month-old infants will repeat it a month later.

The conventional wisdom for older children has been overturned, too. Once, children Isaiah’s age were believed to have memories of the past but nearly no way to organize those memories. According to Patricia Bauer, a professor of psychology at Emory who studies early memory, the general consensus was that a 3-year-old child’s memory was a jumble of disorganized information, like your email inbox without any sorting function: “You can’t sort them by name, you can’t sort them by date, it’s just all your email messages.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Summer school memories. Retouched New York World-Telegram photograph by Walter Albertin. Courtesy of Wikimedia.[end-div]

Creativity and Failure at School

[div class=attrib]From the Wall Street Journal:[end-div]

Most of our high schools and colleges are not preparing students to become innovators. To succeed in the 21st-century economy, students must learn to analyze and solve problems, collaborate, persevere, take calculated risks and learn from failure. To find out how to encourage these skills, I interviewed scores of innovators and their parents, teachers and employers. What I learned is that young Americans learn how to innovate most often despite their schooling—not because of it.

Though few young people will become brilliant innovators like Steve Jobs, most can be taught the skills needed to become more innovative in whatever they do. A handful of high schools, colleges and graduate schools are teaching young people these skills—places like High Tech High in San Diego, the New Tech high schools (a network of 86 schools in 16 states), Olin College in Massachusetts, the Institute of Design (d.school) at Stanford and the MIT Media Lab. The culture of learning in these programs is radically at odds with the culture of schooling in most classrooms.

In most high-school and college classes, failure is penalized. But without trial and error, there is no innovation. Amanda Alonzo, a 32-year-old teacher at Lynbrook High School in San Jose, Calif., who has mentored two Intel Science Prize finalists and 10 semifinalists in the last two years—more than any other public school science teacher in the U.S.—told me, “One of the most important things I have to teach my students is that when you fail, you are learning.” Students gain lasting self-confidence not by being protected from failure but by learning that they can survive it.

The university system today demands and rewards specialization. Professors earn tenure based on research in narrow academic fields, and students are required to declare a major in a subject area. Though expertise is important, Google’s director of talent, Judy Gilbert, told me that the most important thing educators can do to prepare students for work in companies like hers is to teach them that problems can never be understood or solved in the context of a single academic discipline. At Stanford’s d.school and MIT’s Media Lab, all courses are interdisciplinary and based on the exploration of a problem or new opportunity. At Olin College, half the students create interdisciplinary majors like “Design for Sustainable Development” or “Mathematical Biology.”

Learning in most conventional education settings is a passive experience: The students listen. But at the most innovative schools, classes are “hands-on,” and students are creators, not mere consumers. They acquire skills and knowledge while solving a problem, creating a product or generating a new understanding. At High Tech High, ninth graders must develop a new business concept—imagining a new product or service, writing a business and marketing plan, and developing a budget. The teams present their plans to a panel of business leaders who assess their work. At Olin College, seniors take part in a yearlong project in which students work in teams on a real engineering problem supplied by one of the college’s corporate partners.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of NY Daily News.[end-div]

Science and Politics

The tension between science, religion and politics that began several millennia ago continues unabated.

[div class=attrib]From ars technica:[end-div]

In the US, science has become a bit of a political punching bag, with a number of presidential candidates accusing climatologists of fraud, even as state legislators seek to inject phony controversies into science classrooms. It’s enough to make one long for the good old days when science was universally respected. But did those days ever actually exist?

A new look at decades of survey data suggests that there was never a time when science was universally respected, but one political group in particular—conservative voters—has seen its confidence in science decline dramatically over the last 30 years.

The researcher behind the new work, North Carolina’s Gordon Gauchat, figures there are three potential trajectories for the public’s view of science. One possibility is that the public, appreciating the benefits of the technological advances that science has helped to provide, would show a general increase in its affinity for science. An alternative prospect is that this process will inevitably peak, either because there are limits to how admired a field can be, or because a more general discomfort with modernity spills over to a field that helped bring it about.

The last prospect Gauchat considers is that there has been a change in views about science among a subset of the population. He cites previous research that suggests some view the role of science as having changed from one where it enhances productivity and living standards to one where it’s the primary justification for regulatory policies. “Science has always been politicized,” Gauchat writes. “What remains unclear is how political orientations shape public trust in science.”

To figure out which of these trends might apply, he turned to the General Social Survey, which has been gathering information on the US public’s views since 1972. During that time, the survey consistently contained a series of questions about confidence in US institutions, including the scientific community. The answers are divided pretty crudely—”a great deal,” “only some,” and “hardly any”—but they do provide a window into the public’s views on science. (In fact, “hardly any” was the choice of less than 7 percent of the respondents, so Gauchat simply lumped it in with “only some” for his analysis.)
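To make the recoding step concrete, here is a minimal sketch of that kind of category collapsing and trend summary. It is not Gauchat’s analysis, which used the real GSS microdata and more sophisticated models; the tiny data frame here is invented for illustration.

```python
# A minimal sketch of the recoding and trend summary described above (illustrative
# data only, not the actual General Social Survey microdata).

import pandas as pd

df = pd.DataFrame({
    "year":       [1974, 1974, 1974, 2010, 2010, 2010],
    "ideology":   ["liberal", "moderate", "conservative"] * 2,
    "confidence": ["a great deal", "only some", "a great deal",
                   "a great deal", "only some", "hardly any"],
})

# Collapse the rarely chosen "hardly any" category into "only some", as in the study
df["confidence"] = df["confidence"].replace({"hardly any": "only some"})

# Share of each ideological group expressing "a great deal" of confidence, by year
trend = (df.assign(great_deal=df["confidence"].eq("a great deal"))
           .groupby(["year", "ideology"])["great_deal"]
           .mean())
print(trend)
```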

The data showed a few general trends. For much of the study period, moderates actually had the lowest levels of confidence in science, with liberals typically having the highest; the levels of trust for both these groups were fairly steady across the 34 years of data. Conservatives were the odd one out. At the very start of the survey in 1974, they actually had the highest confidence in scientific institutions. By the 1980s, however, they had dropped so that they had significantly less trust than liberals did; in recent years, they’ve become the least trusting of science of any political affiliation.

Examining other demographic trends, Gauchat noted that the only other group to see a significant decline over time is regular churchgoers. Crunching the data, he states, indicates that “The growing force of the religious right in the conservative movement is a chief factor contributing to conservatives’ distrust in science.” This decline in trust occurred even among those who had college or graduate degrees, despite the fact that advanced education typically correlated with enhanced trust in science.

[div class=attrib]Read the entire article after the jump:[end-div]

You Are What You Share

The old maxim goes something like, “you are what you eat”. Well, in the early 21st century it has been usurped by “you are what you share online (knowingly or not)”.

[div class=attrib]From the Wall Street Journal:[end-div]

Not so long ago, there was a familiar product called software. It was sold in stores, in shrink-wrapped boxes. When you bought it, all that you gave away was your credit card number or a stack of bills.

Now there are “apps”—stylish, discrete chunks of software that live online or in your smartphone. To “buy” an app, all you have to do is click a button. Sometimes they cost a few dollars, but many apps are free, at least in monetary terms. You often pay in another way. Apps are gateways, and when you buy an app, there is a strong chance that you are supplying its developers with one of the most coveted commodities in today’s economy: personal data.

Some of the most widely used apps on Facebook—the games, quizzes and sharing services that define the social-networking site and give it such appeal—are gathering volumes of personal information.

A Wall Street Journal examination of 100 of the most popular Facebook apps found that some seek the email addresses, current location and sexual preference, among other details, not only of app users but also of their Facebook friends. One Yahoo service powered by Facebook requests access to a person’s religious and political leanings as a condition for using it. The popular Skype service for making online phone calls seeks the Facebook photos and birthdays of its users and their friends.

Yahoo and Skype say that they seek the information to customize their services for users and that they are committed to protecting privacy. “Data that is shared with Yahoo is managed carefully,” a Yahoo spokeswoman said.

The Journal also tested its own app, “WSJ Social,” which seeks data about users’ basic profile information and email and requests the ability to post an update when a user reads an article. A Journal spokeswoman says that the company asks only for information required to make the app work.

This appetite for personal data reflects a fundamental truth about Facebook and, by extension, the Internet economy as a whole: Facebook provides a free service that users pay for, in effect, by providing details about their lives, friendships, interests and activities. Facebook, in turn, uses that trove of information to attract advertisers, app makers and other business opportunities.

Up until a few years ago, such vast and easily accessible repositories of personal information were all but nonexistent. Their advent is driving a profound debate over the definition of privacy in an era when most people now carry information-transmitting devices with them all the time.

Capitalizing on personal data is a lucrative enterprise. Facebook is in the midst of planning for an initial public offering of its stock in May that could value the young company at more than $100 billion on the Nasdaq Stock Market.

Facebook requires apps to ask permission before accessing a user’s personal details. However, a user’s friends aren’t notified if information about them is used by a friend’s app. An examination of the apps’ activities also suggests that Facebook occasionally isn’t enforcing its own rules on data privacy.
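
For readers curious what “asking permission” looks like mechanically, below is a rough, hypothetical sketch of how a third-party app typically assembles an OAuth-style authorization request. The specific scope strings are illustrative assumptions chosen to mirror the data types named in the article; they are not offered as Facebook's verified permission names.

```python
# Illustrative sketch only: how an OAuth-style permission request is usually
# assembled. The scope names below are assumptions for illustration and are
# not presented as Facebook's actual permission identifiers.
from urllib.parse import urlencode

APP_ID = "YOUR_APP_ID"                              # hypothetical placeholder
REDIRECT_URI = "https://example-app.invalid/callback"

# Each scope the app lists here is a piece of personal data it is asking for.
requested_scopes = [
    "email",             # user's email address
    "user_location",     # current location
    "user_birthday",     # birthday
    "friends_birthday",  # friends' birthdays; the friends themselves are not notified
]

params = {
    "client_id": APP_ID,
    "redirect_uri": REDIRECT_URI,
    "scope": ",".join(requested_scopes),
    "response_type": "code",
}

auth_url = "https://www.facebook.com/dialog/oauth?" + urlencode(params)
print(auth_url)
```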

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Facebook is watching and selling you. Courtesy of Daily Mail.[end-div]

Coke or Pepsi?

Most people come down on one side or the other; there’s really no middle ground when it comes to the soda (or pop) wars. But, while the choice of drink itself may seem trivial, the combined annual revenues of these food and beverage behemoths are far from it — close to $100 billion. The infographic below dissects this seriously big business.

On Being a Billionaire for a Day

New York Times writer Kevin Roose recently lived the life of a billionaire for a day. His report, written while masquerading as a member of the 0.01 percent of the 0.1 percent of the 1 percent, makes for fascinating and disturbing reading.

[div class=attrib]From the New York Times:[end-div]

I HAVE a major problem: I just glanced at my $45,000 Chopard watch, and it’s telling me that my Rolls-Royce may not make it to the airport in time for my private jet flight.

Yes, I know my predicament doesn’t register high on the urgency scale. It’s not exactly up there with malaria outbreaks in the Congo or street riots in Athens. But it’s a serious issue, because my assignment today revolves around that plane ride.

“Step on it, Mike,” I instruct my chauffeur, who nods and guides the $350,000 car into the left lane of the West Side Highway.

Let me back up a bit. As a reporter who writes about Wall Street, I spend a fair amount of time around extreme wealth. But my face is often pressed up against the gilded window. I’ve never eaten at Per Se, or gone boating on the French Riviera. I live in a pint-size Brooklyn apartment, rarely take cabs and feel like sending Time Warner to The Hague every time my cable bill arrives.

But for the next 24 hours, my goal is to live like a billionaire. I want to experience a brief taste of luxury — the chauffeured cars, the private planes, the V.I.P. access and endless privilege — and then go back to my normal life.

The experiment illuminates a paradox. In the era of the Occupy Wall Street movement, when the global financial elite has been accused of immoral and injurious conduct, we are still obsessed with the lives of the ultrarich. We watch them on television shows, follow their exploits in magazines and parse their books and public addresses for advice. In addition to the long-running list by Forbes, Bloomberg now maintains a list of billionaires with rankings that update every day.

Really, I wondered, what’s so great about billionaires? What privileges and perks do a billion dollars confer? And could I tap into the psyches of the ultrawealthy by walking a mile in their Ferragamo loafers?

At 6 a.m., Mike, a chauffeur with Flyte Tyme Worldwide, picked me up at my apartment. He opened the Rolls-Royce’s doors to reveal a spotless white interior, with lamb’s wool floor mats, seatback TVs and a football field’s worth of legroom. The car, like the watch, was lent to me by the manufacturer for the day while The New York Times made payments toward the other services.

Mike took me to my first appointment, a power breakfast at the Core club in Midtown. “Core,” as the cognoscenti call it, is a members-only enclave with hefty dues — $15,000 annually, plus a $50,000 initiation fee — and a membership roll that includes brand-name financiers like Stephen A. Schwarzman of the Blackstone Group and Daniel S. Loeb of Third Point.

Over a spinach omelet, Jennie Enterprise, the club’s founder, told me about the virtues of having a cloistered place for “ultrahigh net worth individuals” to congregate away from the bustle of the boardroom.

“They want someplace that respects their privacy,” she said. “They want a place that they can seamlessly transition from work to play, that optimizes their time.”

After breakfast, I rush back to the car for a high-speed trip to Teterboro Airport in New Jersey, where I’m meeting a real-life billionaire for a trip on his private jet. The billionaire, a hedge fund manager, was scheduled to go down to Georgia and offered to let me interview him during the two-hour jaunt on the condition that I not reveal his identity.

[div class=attrib]Read the entire article after the Learjet.[end-div]

[div class=attrib]Image: Waited On: Mr. Roose exits the Rolls-Royce looking not unlike many movers and shakers in Manhattan. Courtesy of New York Times.[end-div]

Runner’s High: How and Why

There is a small but mounting body of evidence that supports the notion of the so-called Runner’s High, a state of euphoria attained by athletes during and immediately following prolonged and vigorous exercise. But while the neurochemical basis for this may soon be understood, little is known as to why it happens. More on the how and the why from the Scicurious Brain.

[div class=attrib]From the Scicurious Brain over at Scientific American:[end-div]

I just came back from an 11 mile run. The wind wasn’t awful like it usually is, the sun was out, and I was at peace with the world, and right now, I still am. Later, I know my knees will be yelling at me and my body will want nothing more than to lie down. But right now? Right now I feel FANTASTIC.

What I am in the happy, zen-like, yet curiously energetic throes of is what is popularly known as the “runner’s high”. The runner’s high is a state of bliss achieved by athletes (not just runners) during and immediately following prolonged and intense exercise. It can be an extremely powerful, emotional experience. Many athletes will say they get it (and indeed, some would say we MUST get it, because otherwise why would we keep running 26.2 miles at a stretch?), but what IS it exactly? For some people it’s highly emotional, for some it’s peaceful, and for some it’s a burst of energy. And there are plenty of other people who don’t appear to get it at all. What causes it? Why do some people get it and others don’t?

Well, the short answer is that we don’t know. As I was coming back from my run, blissful and emotive enough that the sight of a small puppy could make me weepy with joy, I began to wonder myself…what is up with me? As I re-hydrated and began to sift through the literature, I found…well, not much. But what I did find suggests two competing hypotheses: the endogenous opioid hypothesis and the cannabinoid hypothesis.

The endogenous opioid hypothesis

This hypothesis of the runner’s high is based on a study showing that endorphins, endogenous opioids, are released during intense physical activity. When you think of the word “opioids”, you probably think of addictive drugs like opium or morphine. But your body also produces its own versions of these chemicals (called ‘endogenous’ or produced within an organism), usually in response to times of physical stress. Endogenous opioids can bind to the opioid receptors in your brain, which affect all sorts of systems. Opioid receptor activation can help to blunt pain, something that is surely present at the end of a long workout. Opioid receptors can also act in reward-related areas such as the striatum and nucleus accumbens. There, they can inhibit the release of inhibitory transmitters and increase the release of dopamine, making strenuous physical exercise more pleasurable. Endogenous opioid production has been shown to occur during the runner’s high in humans as well as after intense exercise in rats.

The cannabinoid hypothesis

Not only does the brain release its own forms of opioid chemicals, it also releases its own form of cannabinoids. When we talk about cannabinoids, we usually think about things like marijuana or the newer synthetic cannabinoids, which act upon cannabinoid receptors in the brain to produce their effects. But we also produce endogenous cannabinoids (called endocannabinoids), such as anandamide, which also act upon those same receptors. Studies have shown that deletion of cannabinoid receptor 1 decreases wheel running in mice, and that intense exercise causes increases in anandamide in humans.

Not only how, but why?

There isn’t a lot out there on HOW the runner’s high might occur, but there is even less on WHY. There are several hypotheses out there, but none of them, as far as I can tell, are yet supported by evidence. First, there is the hypothesis of a placebo effect due to achieving goals. The idea is that you expect yourself to achieve a difficult goal, and then feel great when you do. While the runner’s high does have some things in common with goal achievement, it doesn’t really explain why people get it on training runs or regular runs, when they are not necessarily pushing themselves extremely hard.

[div class=attrib]Read the entire article after the jump (no pun intended).[end-div]

[div class=attrib]Image courtesy of Cincinnati.com.[end-div]

Arial or Calibri?

Nowadays the choice of a particular font for the written word seems just as important as the word itself. Most organizations, from small businesses to major advertisers, from individual authors to global publishers, debate and analyze the typefaces for their communications to ensure brand integrity and optimum readability. Some even select a particular font to save on printing costs.

The infographic below, courtesy of Mashable, shows some of the key milestones in the development of some of our favorite fonts.

[div class=attrib]See the original, super-sized infographic after the jump.[end-div]

Inward Attention and Outward Attention

New studies show that our brains use two fundamentally different neurological pathways when we focus on our external environment and when we pay attention to our internal world. Researchers believe this could have important consequences, from finding new methods to manage stress to treating some types of mental illness.

[div class=attrib]From Scientific American:[end-div]

What’s the difference between noticing the rapid beat of a popular song on the radio and noticing the rapid rate of your heart when you see your crush? Between noticing the smell of fresh baked bread and noticing that you’re out of breath? Both require attention. However, the direction of that attention differs: it is either turned outward, as in the case of noticing a stop sign or a tap on your shoulder, or turned inward, as in the case of feeling full or feeling love.

Scientists have long held that attention – regardless to what – involves mostly the prefrontal cortex, that frontal region of the brain responsible for complex thought and unique to humans and advanced mammals. A recent study by Norman Farb from the University of Toronto published in Cerebral Cortex, however, suggests a radically new view: there are different ways of paying attention. While the prefrontal cortex may indeed be specialized for attending to external information, older and more buried parts of the brain including the “insula” and “posterior cingulate cortex” appear to be specialized in observing our internal landscape.

Most of us prioritize externally oriented attention. When we think of attention, we often think of focusing on something outside of ourselves. We “pay attention” to work, the TV, our partner, traffic, or anything that engages our senses. However, a whole other world exists that most of us are far less aware of: an internal world, with its varied landscape of emotions, feelings, and sensations. Yet it is often the internal world that determines whether we are having a good day or not, whether we are happy or unhappy. That’s why we can feel angry despite beautiful surroundings or feel perfectly happy despite being stuck in traffic. For this reason, perhaps, this newly discovered pathway of attention may hold the key to greater well-being.

Although this internal world of feelings and sensations dominates perception in babies, it becomes increasingly foreign and distant as we learn to prioritize the outside world. Because we don’t pay as much attention to our internal world, it often takes us by surprise. We often only tune into our body when it rings an alarm bell: that we’re extremely thirsty, hungry, exhausted or in pain. A flush of anger, a choked-up feeling of sadness, or the warmth of love in our chest often appears to come out of the blue.

In a collaboration with professors Zindel Segal and Adam Anderson at the University of Toronto, the study compared exteroceptive (externally focused) attention to interoceptive (internally focused) attention in the brain. Participants were instructed to either focus on the sensation of their breath (interoceptive attention) or to focus their attention on words on a screen (exteroceptive attention).  Contrary to the conventional assumption that all attention relies upon the frontal lobe of the brain, the researchers found that this was true of only exteroceptive attention; interoceptive attention used evolutionarily older parts of the brain more associated with sensation and integration of physical experience.

[div class=attrib]Read the entire article after the jump.[end-div]

Dissecting Artists

Jonathan Jones dissects artists’ fascination over the ages with anatomy and pickled organs in glass jars.

[div class=attrib]From the Guardian:[end-div]

From Hirst to Da Vinci, a shared obsession with dissection and the human body seems to connect exhibitions opening this spring.

Is it something to do with the Olympics? Athletics is physical, the logic might go, so let’s think about bodies… Anyway, a shared anatomical obsession connects exhibitions that open this week, and later in the spring. Damien Hirst’s debt to anatomy does not need labouring. But just as his specimens are unveiled at Tate Modern, everyone else seems to be opening their own cabinets of curiosities. At London’s Natural History Museum, dissected animals are going on view in an exhibition that brings the morbid spectacle – which in my childhood was simultaneously the horror and fascination of this museum – back into its largely flesh-free modern galleries.

If that were not enough, the Wellcome Collection invites you to take a good look at some brains in jars.

It is no surprise that art and science keep coming together on the anatomist’s table this spring, for anatomy has a venerable and intimate connection with the efforts of artists to depict life and death. In his series of popular prints The Four Stages of Cruelty, William Hogarth sees the public dissection of a murderer by cold-blooded anatomists as the ultimate cruelty. But many artists have been happy to watch or even wield a knife at such events.

In the 16th century, the first published modern study of the human body, by Vesalius, was illustrated by a pupil of Titian. In the 18th century, the British animal artist George Stubbs undertook his own dissections of a horse, and published the results. He is one of the greatest ever portrayers of equine majesty, and his study of the skeleton and muscles of the horse helped him to achieve this.

Clinical knowledge, to help them portray humans and animals correctly, is one reason artists have been drawn to anatomy. Another attraction is more metaphysical: to look inside a human body is to get an eerie sense of who we are, to probe the mystery of being. Rembrandt’s painting The Anatomy Lesson of Dr Nicolaes Tulp is not a scientific study but a poetic reverie on the fragility and wonder of life – glimpsed in a study of death. Does Hirst make it as an artist in this tradition?

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

The Benefits of Bilingualism

[div class=attrib]From the New York Times:[end-div]

SPEAKING two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.

This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development.

They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.

Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one marked with a blue square and the other marked with a red circle.

In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.

The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while driving.

Why does the tussle between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage stemmed primarily from an ability for inhibition that was honed by the exercise of suppressing one language system: this suppression, it was thought, would help train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.

The key difference between bilinguals and monolinguals may be more basic: a heightened ability to monitor the environment. “Bilinguals have to switch languages quite often — you may talk to your father in one language and to your mother in another language,” says Albert Costa, a researcher at the University of Pompeu Fabra in Spain. “It requires keeping track of changes around you in the same way that we monitor our surroundings when driving.” In a study comparing German-Italian bilinguals with Italian monolinguals on monitoring tasks, Mr. Costa and his colleagues found that the bilingual subjects not only performed better, but they also did so with less activity in parts of the brain involved in monitoring, indicating that they were more efficient at it.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image courtesy of Futurity.org.[end-div]

So Where Is Everybody?

Astrobiologist Caleb Scharf brings us up to date on Fermi’s Paradox — which asks why, given that our galaxy is so old, other sentient interstellar travelers haven’t yet found us. The answer may come from a video game.

[div class=attrib]From Scientific American:[end-div]

Right now, all across the planet, millions of people are engaged in a struggle with enormous implications for the very nature of life itself. Making sophisticated tactical decisions and wrestling with chilling and complex moral puzzles, they are quite literally deciding the fate of our existence.

Or at least they are pretending to.

The video game Mass Effect has now reached its third and final installment: a huge planet-destroying, species-wrecking, epic finale to a story that takes humanity from its tentative steps into interstellar space to a critical role in a galactic, and even intergalactic, saga. It’s awfully good even without all the fantastic visual design and gameplay: at its heart is a rip-roaring plot and countless backstories that tie the experience into one of the most carefully and completely imagined sci-fi universes out there.

As a scientist, and someone who will sheepishly admit to a love of videogames (from countless hours spent as a teenager coding my own rather inferior efforts, to an occasional consumer’s dip into the lushness of what a multi-billion dollar industry can produce), the Mass Effect series is fascinating for a number of reasons. The first of which is the relentless attention to plausible background detail. Take for example the task of finding mineral resources in Mass Effect 2. Flying your ship to different star systems presents you with a bird’s eye view of the planets, each of which has a fleshed out description – be it inhabited, or more often, uninhabitable. These have been torn from the annals of the real exoplanets, gussied up a little, but still recognizable. There are hot Jupiters, and icy Neptune-like worlds. There are gassy planets, rocky planets, and watery planets of great diversity in age, history and elemental composition. It’s a surprisingly good representation of what we now think is really out there.

But the biggest idea, the biggest piece of fiction-meets-genuine-scientific-hypothesis, is the overarching story of Mass Effect. It directly addresses one of the great questions of astrobiology – is there intelligent life elsewhere in our galaxy, and if so, why haven’t we intersected with it yet? The first serious thinking about this problem seems to have arisen during a lunchtime chat in the 1940s, where the famous physicist Enrico Fermi (for whom the fundamental particle type ‘fermion’ is named) is supposed to have asked “Where is Everybody?” The essence of the Fermi Paradox is that since our galaxy is very old, perhaps 10 billion years old, unless intelligent life is almost impossibly rare it will have arisen ages before we came along. Such life will have had time to essentially span the Milky Way: even if spreading out at relatively slow sub-light speeds, it – or its artificial surrogates, machines – will have reached every nook and cranny. Thus we should have noticed it, or been noticed by it, unless we are truly the only example of intelligent life.
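
A rough back-of-the-envelope calculation shows why the paradox has teeth: even at a sluggish fraction of light speed, sweeping across the galaxy takes only a tiny slice of its lifetime. The inputs below (a roughly 100,000 light-year diameter, 1 percent of light speed, a factor of 100 for settlement pauses) are illustrative assumptions, not figures from Scharf's article.

```python
# Back-of-the-envelope sketch of the Fermi argument; all inputs are
# illustrative assumptions, not values taken from the article.
GALAXY_DIAMETER_LY = 100_000       # rough Milky Way diameter, in light-years
GALAXY_AGE_YR = 10e9               # ~10 billion years, as quoted above
CRUISE_SPEED_FRACTION_C = 0.01     # a "slow" 1% of light speed

# Time just to traverse the galaxy at that speed (light-years / (ly per year)):
crossing_time_yr = GALAXY_DIAMETER_LY / CRUISE_SPEED_FRACTION_C
print(f"Crossing time: {crossing_time_yr:,.0f} years")   # 10,000,000 years

# Even a generous factor of ~100 for pausing to settle each system along the
# way leaves the expansion finished in a small fraction of the galaxy's age.
with_pauses_yr = crossing_time_yr * 100
print(f"With settlement pauses: {with_pauses_yr:,.0f} years, "
      f"or {with_pauses_yr / GALAXY_AGE_YR:.0%} of the galaxy's age")
```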

The Fermi Paradox comes with a ton of caveats and variants. It’s not hard to think of all manner of reasons why intelligent life might be teeming out there, but still not have met us – from self-destructive behavior to the realistic hurdles of interstellar travel. But to my mind Mass Effect has what is perhaps one of the most interesting, if not entertaining, solutions. This will spoil the story; you have been warned.

Without going into all the colorful details, the central premise is that a hugely advanced and ancient race of artificially intelligent machines ‘harvests’ all sentient, space-faring life in the Milky Way every 50,000 years. These machines otherwise lie dormant out in the depths of intergalactic space. They have constructed and positioned an ingenious web of technological devices (including the Mass Effect relays, providing rapid interstellar travel) and habitats within the Galaxy that effectively sieve through the rising civilizations, helping the successful flourish and multiply, ripening them up for eventual culling. The reason for this? Well, the plot is complex and somewhat ambiguous, but one thing that these machines do is use the genetic slurry of millions, billions of individuals from a species to create new versions of themselves.

It’s a grand ol’ piece of sci-fi opera, but it also provides a neat solution to the Fermi Paradox via a number of ideas: a) The most truly advanced interstellar species spends most of its time out of the Galaxy in hibernation. b) Purging all other sentient (space-faring) life every 50,000 years puts a stop to any great spreading across the Galaxy. c) Sentient, space-faring species are inevitably drawn into the technological lures and habitats left for them, and so are less inclined to explore.

Together, these make it very likely that, until a species is capable of at least proper interplanetary space travel (in the game humans have to reach Mars to become aware of what’s going on at all), it will conclude that the Galaxy is a lonely place.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image: Intragalactic life. Courtesy of J. Schombert, U. Oregon.[end-div]

Your Molecular Ancestors

[div class=attrib]From Scientific American:[end-div]

Well, perhaps your great-to-the-hundred-millionth-grandmother was.

Understanding the origins of life and the mechanics of the earliest beginnings of life is as important for the quest to unravel the Earth’s biological history as it is for the quest to seek out other life in the universe. We’re pretty confident that single-celled organisms – bacteria and archaea – were the first ‘creatures’ to slither around on this planet, but what happened before that is a matter of intense and often controversial debate.

One possibility for a precursor to these organisms was a world without DNA, but with the bare bone molecular pieces that would eventually result in the evolutionary move to DNA and its associated machinery. This idea was put forward by an influential paper in the journal Nature in 1986 by Walter Gilbert (winner of a Nobel in Chemistry), who fleshed out an idea by Carl Woese – who had earlier identified the Archaea as a distinct branch of life. This ancient biomolecular system was called the RNA-world, since it consists of ribonucleic acid sequences (RNA) but lacks the permanent storage mechanisms of deoxyribonucleic acids (DNA).

A key part of the RNA-world hypothesis is that in addition to carrying reproducible information in their sequences, RNA molecules can also perform the duties of enzymes in catalyzing reactions – sustaining a busy, self-replicating, evolving ecosystem. In this picture RNA evolves away until eventually items like proteins come onto the scene, at which point things can really gear up towards more complex and familiar life. It’s an appealing picture for the stepping-stones to life as we know it.

In modern organisms a very complex molecular structure called the ribosome is the critical machine that reads the information in a piece of messenger-RNA (spawned off the original DNA) and then assembles proteins according to this blueprint by snatching amino acids out of a cell’s environment and putting them together. Ribosomes are amazing; they’re also composed of a mix of large numbers of RNA molecules and protein molecules.

But there’s a possible catch to all this, and it relates to the idea of a protein-free RNA-world some 4 billion years ago.

[div class=attrib]Read more after the jump:[end-div]

[div class=attrib]Image: RNA molecule. Courtesy of Wired / Universitat Pampeu Fabra.[end-div]

Male Brain + Female = Jello

[div class=attrib]From Scientific American:[end-div]

In one experiment, just telling a man he would be observed by a female was enough to hurt his psychological performance.

Movies and television shows are full of scenes where a man tries unsuccessfully to interact with a pretty woman. In many cases, the potential suitor ends up acting foolishly despite his best attempts to impress. It seems like his brain isn’t working quite properly and according to new findings, it may not be.

Researchers have begun to explore the cognitive impairment that men experience before and after interacting with women. A 2009 study demonstrated that after a short interaction with an attractive woman, men experienced a decline in mental performance. A more recent study suggests that this cognitive impairment takes hold even when men simply anticipate interacting with a woman who they know very little about.

Sanne Nauts and her colleagues at Radboud University Nijmegen in the Netherlands ran two experiments using men and women university students as participants. They first collected a baseline measure of cognitive performance by having the students complete a Stroop test. Developed in 1935 by the psychologist John Ridley Stroop, the test is a common way of assessing our ability to process competing information. The test involves showing people a series of words describing different colors that are printed in different colored inks. For example, the word “blue” might be printed in green ink and the word “red” printed in blue ink. Participants are asked to name, as quickly as they can, the color of the ink that the words are written in. The test is cognitively demanding because our brains can’t help but process the meaning of the word along with the color of the ink. When people are mentally tired, they tend to complete the task at a slower rate.
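
To make the congruent/incongruent structure of the task concrete, here is a toy Python sketch that generates Stroop-style trials. It is an illustration of the idea only, not the stimulus code used in Nauts' study.

```python
# Toy sketch of Stroop-style trials (illustration only, not the study's code).
# Each trial pairs a color word with an ink color; on incongruent trials the
# word's meaning conflicts with the ink, which is what slows people down.
import random

COLORS = ["red", "blue", "green", "yellow"]

def make_trial(congruent: bool) -> dict:
    word = random.choice(COLORS)
    if congruent:
        ink = word                                             # word and ink match
    else:
        ink = random.choice([c for c in COLORS if c != word])  # word and ink conflict
    return {"word": word, "ink": ink, "correct_response": ink}

# A mixed block: participants must name the INK color while ignoring the word.
block = [make_trial(congruent=random.random() < 0.5) for _ in range(10)]
for t in block:
    print(f'Word "{t["word"]}" printed in {t["ink"]} ink -> say "{t["correct_response"]}"')
```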

After completing the Stroop test, participants in Nauts’ study were asked to take part in another supposedly unrelated task. They were asked to read out loud a number of Dutch words while sitting in front of a webcam. The experimenters told them that during this “lip reading task” an observer would watch them over the webcam. The observer was given either a common male or female name. Participants were led to believe that this person would see them over the webcam, but they would not be able to interact with the person. No pictures or other identifying information were provided about the observer—all the participants knew was his or her name. After the lip reading task, the participants took another Stroop test. Women’s performance on the second test did not differ, regardless of the gender of their observer. However, men who thought a woman was observing them ended up performing worse on the second Stroop test. This cognitive impairment occurred even though the men had not interacted with the female observer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock/Iconogenic.[end-div]