Category Archives: Idea Soup

Ray Bradbury’s Real World Dystopia

Ray Bradbury’s death on June 5 reminds us of his uncanny gift for inventing a future that is much like our modern day reality.

Bradbury’s body of work, beginning in the early 1940s, introduced us to ATMs, wall-mounted flat-screen TVs, ear-piece radios, online social networks, self-driving cars, and electronic surveillance. Bravely and presciently, he also warned us of technologically induced cultural amnesia, social isolation, indifference to violence, and dumbed-down 24/7 mass media.

Author Tim Kreider offers an especially thoughtful opinion on Bradbury’s life as a “misanthropic humanist”.

[div class=attrib]From the New York Times:[end-div]

IF you’d wanted to know which way the world was headed in the mid-20th century, you wouldn’t have found much indication in any of the day’s literary prizewinners. You’d have been better advised to consult a book from a marginal genre with a cover illustration of a stricken figure made of newsprint catching fire.

Prescience is not the measure of a science-fiction author’s success — we don’t value the work of H. G. Wells because he foresaw the atomic bomb or Arthur C. Clarke for inventing the communications satellite — but it is worth pausing, on the occasion of Ray Bradbury’s death, to notice how uncannily accurate was his vision of the numb, cruel future we now inhabit.

Mr. Bradbury’s most famous novel, “Fahrenheit 451,” features wall-size television screens that are the centerpieces of “parlors” where people spend their evenings watching interactive soaps and vicious slapstick, live police chases and true-crime dramatizations that invite viewers to help catch the criminals. People wear “seashell” transistor radios that fit into their ears. Note the perversion of quaint terms like “parlor” and “seashell,” harking back to bygone days and vanished places, where people might visit with their neighbors or listen for the sound of the sea in a chambered nautilus.

Mr. Bradbury didn’t just extrapolate the evolution of gadgetry; he foresaw how it would stunt and deform our psyches. “It’s easy to say the wrong thing on telephones; the telephone changes your meaning on you,” says the protagonist of the prophetic short story “The Murderer.” “First thing you know, you’ve made an enemy.”

Anyone who’s had his intended tone flattened out or irony deleted by e-mail and had to explain himself knows what he means. The character complains that he’s relentlessly pestered with calls from friends and employers, salesmen and pollsters, people calling simply because they can. Mr. Bradbury’s vision of “tired commuters with their wrist radios, talking to their wives, saying, ‘Now I’m at Forty-third, now I’m at Forty-fourth, here I am at Forty-ninth, now turning at Sixty-first” has gone from science-fiction satire to dreary realism.

“It was all so enchanting at first,” muses our protagonist. “They were almost toys, to be played with, but the people got too involved, went too far, and got wrapped up in a pattern of social behavior and couldn’t get out, couldn’t admit they were in, even.”

Most of all, Mr. Bradbury knew how the future would feel: louder, faster, stupider, meaner, increasingly inane and violent. Collective cultural amnesia, anhedonia, isolation. The hysterical censoriousness of political correctness. Teenagers killing one another for kicks. Grown-ups reading comic books. A postliterate populace. “I remember the newspapers dying like huge moths,” says the fire captain in “Fahrenheit,” written in 1953. “No one wanted them back. No one missed them.” Civilization drowned out and obliterated by electronic chatter. The book’s protagonist, Guy Montag, secretly trying to memorize the Book of Ecclesiastes on a train, finally leaps up screaming, maddened by an incessant jingle for “Denham’s Dentifrice.” A man is arrested for walking on a residential street. Everyone locked indoors at night, immersed in the social lives of imaginary friends and families on TV, while the government bombs someone on the other side of the planet. Does any of this sound familiar?

The hero of “The Murderer” finally goes on a rampage and smashes all the yammering, blatting devices around him, expressing remorse only over the Insinkerator — “a practical device indeed,” he mourns, “which never said a word.” It’s often been remarked that for a science-fiction writer, Mr. Bradbury was something of a Luddite — anti-technology, anti-modern, even anti-intellectual. (“Put me in a room with a pad and a pencil and set me up against a hundred people with a hundred computers,” he challenged a Wired magazine interviewer, and swore he would “outcreate” every one.)

But it was more complicated than that; his objections were not so much reactionary or political as they were aesthetic. He hated ugliness, noise and vulgarity. He opposed the kind of technology that deadened imagination, the modernity that would trash the past, the kind of intellectualism that tried to centrifuge out awe and beauty. He famously did not care to drive or fly, but he was a passionate proponent of space travel, not because of its practical benefits but because he saw it as the great spiritual endeavor of the age, our generation’s cathedral building, a bid for immortality among the stars.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

FOMO: An Important New Acronym

FOMO is an increasing “problem” for college students and other young adults. Interestingly, and somewhat ironically, FOMO seems to be a more chronic issue in a culture mediated by online social networks. So, what is FOMO? And do you have it?

[div class=attrib]From the Washington Post:[end-div]

Over the past academic year, there has been an explosion of new or renewed campus activities, pop culture phenomena, tech trends, generational shifts, and social movements started by or significantly impacting students. Most can be summed up in a single word.

As someone who monitors student life and student media daily, I’ve noticed a small number of words appearing more frequently, prominently or controversially during the past two semesters on campuses nationwide. Some were brand-new. Others were redefined or reached a tipping point of interest or popularity. And still others showed a remarkable staying power, carrying over from semesters and years past.

I’ve selected 15 as finalists for what I am calling the “2011-2012 College Word of the Year Contest.” Okay, a few are actually acronyms or short phrases. But altogether the terms — whether short-lived or seemingly permanent — offer a unique glimpse at what students participated in, talked about, fretted over, and fought for this past fall and spring.

As Time Magazine’s Touré confirms, “The words we coalesce around as a society say so much about who we are. The language is a mirror that reflects our collective soul.”

Let’s take a quick look in the collegiate rearview mirror. In alphabetical order, here are my College Word of the Year finalists.

1) Boomerangers: Right after commencement, a growing number of college graduates are heading home, diploma in hand and futures on hold. They are the boomerangers, young 20-somethings who are spending their immediate college afterlife in hometown purgatory. A majority move back into their childhood bedroom due to poor employment or graduate school prospects or to save money so they can soon travel internationally, engage in volunteer work or launch their own business.

A brief homestay has long been an option favored by some fresh graduates, but it’s recently reemerged in the media as a defining activity of the current student generation.

“Graduation means something completely different than it used to 30 years ago,” student columnist Madeline Hennings wrote in January for the Collegiate Times at Virginia Tech. “At my age, my parents were already engaged, planning their wedding, had jobs, and thinking about starting a family. Today, the economy is still recovering, and more students are moving back in with mom and dad.”

2) Drunkorexia: This five-syllable word has become the most publicized new disorder impacting college students. Many students, researchers and health professionals consider it a dangerous phenomenon. Critics, meanwhile, dismiss it as a media-driven faux-trend. And others contend it is nothing more than a fresh label stamped onto an activity that students have been carrying out for years.

The affliction, which leaves students hungry and at times hung over, involves “starving all day to drink at night.” As a March report in The Daily Pennsylvanian at the University of Pennsylvania further explained, it centers on students “bingeing or skipping meals in order to either compensate for alcohol calories consumed later at night, or to get drunk faster… At its most severe, it is a combination of an eating disorder and alcohol dependency.”

4) FOMO: Students are increasingly obsessed with being connected — to their high-tech devices, social media chatter and their friends during a night, weekend or roadtrip in which something worthy of a Facebook status update or viral YouTube video might occur.  (For an example of the latter, check out this young woman “tree dancing” during a recent music festival.)

This ever-present emotional-digital anxiety now has a defining acronym: FOMO or Fear of Missing Out.  Recent Georgetown University graduate Kinne Chapin confirmed FOMO “is a widespread problem on college campuses. Each weekend, I have a conversation with a friend of mine in which one of us expresses the following: ‘I’m not really in the mood to go out, but I feel like I should.’ Even when we’d rather catch up on sleep or melt our brain with some reality television, we feel compelled to seek bigger and better things from our weekend. We fear that if we don’t partake in every Saturday night’s fever, something truly amazing will happen, leaving us hopelessly behind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Urban Dictionary.[end-div]

Philip K. Dick – Future Gnostic

Simon Critchley, professor of philosophy, continues his serialized analysis of Philip K. Dick. Part I first appeared here. Part II examines the events around 2-3-74 that led to Dick’s 8,000-page Gnostic treatise “Exegesis”.

[div class=attrib]From the New York Times:[end-div]

In the previous post, we looked at the consequences and possible philosophic import of the events of February and March of 1974 (also known as 2-3-74) in the life and work of Philip K. Dick, a period in which a dose of sodium pentothal, a light-emitting fish pendant and decades of fiction writing and quasi-philosophic activity came together in a revelation that led to Dick’s 8,000-page “Exegesis.”

So, what is the nature of the true reality that Dick claims to have intuited during psychedelic visions of 2-3-74? Does it unwind into mere structureless ranting and raving or does it suggest some tradition of thought or belief? I would argue the latter. This is where things admittedly get a little weirder in an already weird universe, so hold on tight.

In the very first lines of “Exegesis” Dick writes, “We see the Logos addressing the many living entities.” Logos is an important concept that litters the pages of “Exegesis.” It is a word with a wide variety of meaning in ancient Greek, one of which is indeed “word.” It can also mean speech, reason (in Latin, ratio) or giving an account of something. For Heraclitus, to whom Dick frequently refers, logos is the universal law that governs the cosmos of which most human beings are somnolently ignorant. Dick certainly has this latter meaning in mind, but — most important — logos refers to the opening of John’s Gospel, “In the beginning was the word” (logos), where the word becomes flesh in the person of Christ.

But the core of Dick’s vision is not quite Christian in the traditional sense; it is Gnostical: it is the mystical intellection, at its highest moment a fusion with a transmundane or alien God who is identified with logos and who can communicate with human beings in the form of a ray of light or, in Dick’s case, hallucinatory visions.

There is a tension throughout “Exegesis” between a monistic view of the cosmos (where there is just one substance in the universe, which can be seen in Dick’s references to Spinoza’s idea of God as nature, Whitehead’s idea of reality as process and Hegel’s dialectic where “the true is the whole”) and a dualistic or Gnostical view of the cosmos, with two cosmic forces in conflict, one malevolent and the other benevolent. The way I read Dick, the latter view wins out. This means that the visible, phenomenal world is fallen and indeed a kind of prison cell, cage or cave.

Christianity, lest it be forgotten, is a metaphysical monism where it is the obligation of every Christian to love every aspect of creation – even the foulest and smelliest – because it is the work of God. Evil is nothing substantial because if it were it would have to be caused by God, who is by definition good. Against this, Gnosticism declares a radical dualism between the false God who created this world – who is usually called the “demiurge” – and the true God who is unknown and alien to this world. But for the Gnostic, evil is substantial and its evidence is the world. There is a story of a radical Gnostic who used to wash himself in his own saliva in order to have as little contact as possible with creation. Gnosticism is the worship of an alien God by those alienated from the world.

The novelty of Dick’s Gnosticism is that the divine is alleged to communicate with us through information. This is a persistent theme in Dick, and he refers to the universe as information and even Christ as information. Such information has a kind of electrostatic life connected to the theory of what he calls orthogonal time. The latter is a rich and strange idea of time that is completely at odds with the standard, linear conception, which goes back to Aristotle, as a sequence of now-points extending from the future through the present and into the past. Dick explains orthogonal time as a circle that contains everything rather than a line both of whose ends disappear in infinity. In an arresting image, Dick claims that orthogonal time contains, “Everything which was, just as grooves on an LP contain that part of the music which has already been played; they don’t disappear after the stylus tracks them.”

It is like that seemingly endless final chord in the Beatles’ “A Day in the Life” that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.

[div class=attrib]Read the entire article after the jump.[end-div]

Heinz and the Clear Glass Bottle

[div class=attrib]From Anthropology in Practice:[end-div]

Do me a favor: Go open your refrigerator and look at the labels on your condiments. Alternatively, if you’re at work, open your drawer and flip through your stash of condiment packets. (Don’t look at me like that. I know you have a stash. Or you know where to find one. It’s practically Office Survival 101.) Go on. I’ll wait.

So tell me, what brands are hanging out in your fridge? (Or drawer?) Hellmann’s? French’s? Heinz? Even if you aren’t a slave to brand names and you typically buy whatever is on sale or the local supermarket brand, if you’ve ever eaten out or purchased a meal to-go that required condiments, you’ve likely been exposed to one of these brands for mayonnaise, mustard, or ketchup. And given the broad reach of Heinz, I’d be surprised if the company didn’t get a mention. So what are the origins of Heinz—the man and the brand? Why do we adorn our hamburgers and hotdogs with his products over others? It boils down to trust—carefully crafted trust, which obscures the image of Heinz as a food corporation and highlights a sense of quality, home-made goods.

Henry Heinz was born in 1844 to German immigrant parents near Pittsburgh, Pennsylvania. His father John owned a brickyard in Sharpsburg, and his mother Anna was a homemaker with a talent for gardening. Henry assisted both of them—in the brickyard before and after school, and in the garden when time permitted. He also sold surplus produce to local grocers. Henry proved to have quite a green thumb himself and at the age of twelve, he had his own plot, a horse, a cart, and a list of customers.

Henry’s gardening proficiency was in keeping with the times—most households were growing or otherwise making their own foods at home in the early nineteenth century, space permitting. The market for processed food was hampered by distrust in the quality offered:

Food quality and safety were growing concerns in the mid nineteenth-century cities. These issues were not new. Various local laws had mandated inspection of meat and flour exports since the colonial period. Other ordinances had regulated bread prices and ingredients, banning adulterants, such as chalk and ground beans. But as urban areas and the sources of food supplying these areas expanded, older controls weakened. Public anxiety about contaminated food, including milk, meat, eggs, and butter mounted. So, too, did worries about adulterated chocolate, sugar, vinegar, molasses, and other foods.

Contaminants included lead (in peppers and mustard) and ground stone (in flour and sugar). So it’s not surprising that people were hesitant about purchasing pre-packaged products. However, American society was on the brink of a social change that would make people more receptive to processed foods: industrialization was accelerating. As a result, an increase in urbanization reduced the amount of space available for gardens and livestock, incomes rose so that more people could afford prepared foods, and women’s roles shifted to allow for wage labor. In fact, between 1859 and 1899, the output of the food processing industry expanded 1500%, and by 1900, manufactured food comprised about a third of commodities produced in the US.

So what led the way for this adoption of packaged foods? Believe it or not, horseradish.

Horseradish was particularly popular among English and German immigrant communities. It was used to flavor potatoes, cabbage, bread, meats, and fish—and some people even attributed medicinal properties to the condiment. It was also extremely time consuming to make: the root had to be grated, packed in vinegar and spices, and sealed in jars or pots. The potential market for prepared horseradish existed, but customers were suspicious of the contents of the green and brown glass bottles that served as packaging. Turnip and wood-fibers were popular fillers, and the opaque coloring of the bottles made it hard to judge the caliber of the contents.

Heinz understood this, and saw the potential for selling consumers, especially women, something that they desperately wanted: time. In his teens, he began to bottle horseradish using his mother’s recipe—without fillers—in clear glass, and sold his products to local grocers and hotel owners. He emphasized the purity of his product and noted that he had nothing to hide: the clear glass let customers see exactly what they were buying. His strategy worked: By 1861, he was growing three and a half acres of horseradish to meet demand, and had made $2,400.00 by year’s end (roughly $93,000.00 in 2012).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Henry J. Heinz (1844-1919). Courtesy of Wikipedia.[end-div]

What Happened to TED?

No, not Ted Nugent or Ted Koppel or Ted Turner; we are talking about TED.

Alex Pareene over at Salon offers a well rounded critique of TED. TED is a global forum of “ideas worth spreading” centered around annual conferences, loosely woven around themes of technology, entertainment and design (TED).

Richard Wurman started TED in 1984 as a self-congratulatory networking event for Silicon Valley insiders. Since changing hands in 2002, TED has grown into a worldwide brand, yet it remains self-congratulatory, only more exclusive. Currently, it costs $6,000 annually to be admitted to the elite idea-sharing club.

By way of background, TED’s mission statement follows:

We believe passionately in the power of ideas to change attitudes, lives and ultimately, the world. So we’re building here a clearinghouse that offers free knowledge and inspiration from the world’s most inspired thinkers, and also a community of curious souls to engage with ideas and each other.

[div class=attrib]From Salon:[end-div]

There was a bit of a scandal last week when it was reported that a TED Talk on income inequality had been censored. That turned out to be not quite the entire story. Nick Hanauer, a venture capitalist with a book out on income inequality, was invited to speak at a TED function. He spoke for a few minutes, making the argument that rich people like himself are not in fact job creators and that they should be taxed at a higher rate.

The talk seemed reasonably well-received by the audience, but TED “curator” Chris Anderson told Hanauer that it would not be featured on TED’s site, in part because the audience response was mixed but also because it was too political and this was an “election year.”

Hanauer had his PR people go to the press immediately and accused TED of censorship, which is obnoxious — TED didn’t have to host his talk, obviously, and his talk was not hugely revelatory for anyone familiar with recent writings on income inequity from a variety of experts — but Anderson’s responses were still a good distillation of TED’s ideology.

In case you’re unfamiliar with TED, it is a series of short lectures on a variety of subjects that stream on the Internet, for free. That’s it, really, or at least that is all that TED is to most of the people who have even heard of it. For an elite few, though, TED is something more: a lifestyle, an ethos, a bunch of overpriced networking events featuring live entertainment from smart and occasionally famous people.

Before streaming video, TED was a conference — it is not named for a person, but stands for “technology, entertainment and design” — organized by celebrated “information architect” (fancy graphic designer) Richard Saul Wurman. Wurman sold the conference, in 2002, to a nonprofit foundation started and run by former publisher and longtime do-gooder Chris Anderson (not the Chris Anderson of Wired). Anderson grew TED from a woolly conference for rich Silicon Valley millionaire nerds to a giant global brand. It has since become a much more exclusive, expensive elite networking experience with a much more prominent public face — the little streaming videos of lectures.

It’s even franchising — “TEDx” events are licensed third-party TED-style conferences largely unaffiliated with TED proper — and while TED is run by a nonprofit, it brings in a tremendous amount of money from its members and corporate sponsorships. At this point TED is a massive, money-soaked orgy of self-congratulatory futurism, with multiple events worldwide, awards and grants to TED-certified high achievers, and a list of speakers that would cost a fortune if they didn’t agree to do it for free out of public-spiritedness.

According to a 2010 piece in Fast Company, the trade journal of the breathless bullshit industry, the people behind TED are “creating a new Harvard — the first new top-prestige education brand in more than 100 years.” Well! That’s certainly saying… something. (What it’s mostly saying is “This is a Fast Company story about some overhyped Internet thing.”)

To even attend a TED conference requires not just a donation of between $7,500 and $125,000, but also a complicated admissions process in which the TED people determine whether you’re TED material; so, as Maura Johnston says, maybe it’s got more in common with Harvard than is initially apparent.

Strip away the hype and you’re left with a reasonably good video podcast with delusions of grandeur. For most of the millions of people who watch TED videos at the office, it’s a middlebrow diversion and a source of factoids to use on your friends. Except TED thinks it’s changing the world, like if “This American Life” suddenly mistook itself for Doctors Without Borders.

The model for your standard TED talk is a late-period Malcolm Gladwell book chapter. Common tropes include:

  • Drastically oversimplified explanations of complex problems.
  • Technologically utopian solutions to said complex problems.
  • Unconventional (and unconvincing) explanations of the origins of said complex problems.
  • Staggeringly obvious observations presented as mind-blowing new insights.

What’s most important is a sort of genial feel-good sense that everything will be OK, thanks in large part to the brilliance and beneficence of TED conference attendees. (Well, that and a bit of Vegas magician-with-PowerPoint stagecraft.)

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Multi-millionaire Nick Hanauer delivers a speech at TED Talks. Courtesy of Time.[end-div]

Philip K. Dick – Mystic, Epileptic, Madman, Fictionalizing Philosopher

Professor of philosophy Simon Critchley has an insightful examination (serialized) of Philip K. Dick’s writings. Philip K. Dick had a tragically short but richly creative writing career. Since his death thirty years ago, many of his novels have profoundly influenced contemporary culture.

[div class=attrib]From the New York Times:[end-div]

Philip K. Dick is arguably the most influential writer of science fiction in the past half century. In his short and meteoric career, he wrote 121 short stories and 45 novels. His work was successful during his lifetime but has grown exponentially in influence since his death in 1982. Dick’s work will probably be best known through the dizzyingly successful Hollywood adaptations of his work, in movies like “Blade Runner” (based on “Do Androids Dream of Electric Sheep?”), “Total Recall,” “Minority Report,” “A Scanner Darkly” and, most recently, “The Adjustment Bureau.” Yet few people might consider Dick a thinker. This would be a mistake.

Dick’s life has long passed into legend, peppered with florid tales of madness and intoxication. There are some who consider such legend something of a diversion from the character of Dick’s literary brilliance. Jonathan Lethem writes — rightly in my view — “Dick wasn’t a legend and he wasn’t mad. He lived among us and was a genius.” Yet Dick’s life continues to obtrude massively into any assessment of his work.

Everything turns here on an event that “Dickheads” refer to with the shorthand “the golden fish.” On Feb. 20, 1974, Dick was hit with the force of an extraordinary revelation after a visit to the dentist for an impacted wisdom tooth for which he had received a dose of sodium pentothal. A young woman delivered a bottle of Darvon tablets to his apartment in Fullerton, Calif. She was wearing a necklace with the pendant of a golden fish, an ancient Christian symbol that had been adopted by the Jesus counterculture movement of the late 1960s.

The fish pendant, on Dick’s account, began to emit a golden ray of light, and Dick suddenly experienced what he called, with a nod to Plato, anamnesis: the recollection or total recall of the entire sum of knowledge. Dick claimed to have access to what philosophers call the faculty of “intellectual intuition”: the direct perception by the mind of a metaphysical reality behind screens of appearance. Many philosophers since Kant have insisted that such intellectual intuition is available to human beings only in the guise of a fraudulent obscurantism, usually as religious or mystical experience, like Emanuel Swedenborg’s visions of the angelic multitude. This is what Kant called, in a lovely German word, “die Schwärmerei,” a kind of swarming enthusiasm, where the self is literally en-thused with the God, o theos. Brusquely sweeping aside the careful limitations and strictures that Kant placed on the different domains of pure and practical reason, the phenomenal and the noumenal, Dick claimed direct intuition of the ultimate nature of what he called “true reality.”

Yet the golden fish episode was just the beginning. In the following days and weeks, Dick experienced and indeed enjoyed a couple of nightlong psychedelic visions with phantasmagoric visual light shows. These hypnagogic episodes continued off and on, together with hearing voices and prophetic dreams, until his death eight years later at age 53. Many very weird things happened — too many to list here — including a clay pot that Dick called “Ho On” or “Oh Ho,” which spoke to him about various deep spiritual issues in a brash and irritable voice.

Now, was this just bad acid or good sodium pentothal? Was Dick seriously bonkers? Was he psychotic? Was he schizophrenic? (He writes, “The schizophrenic is a leap ahead that failed.”) Were the visions simply the effect of a series of brain seizures that some call T.L.E. — temporal lobe epilepsy? Could we now explain and explain away Dick’s revelatory experience by some better neuroscientific story about the brain? Perhaps. But the problem is that each of these causal explanations misses the richness of the phenomena that Dick was trying to describe and also overlooks his unique means for describing them.

The fact is that after Dick experienced the events of what he came to call “2-3-74” (the events of February and March of that year), he devoted the rest of his life to trying to understand what had happened to him. For Dick, understanding meant writing. Suffering from what we might call “chronic hypergraphia,” between 2-3-74 and his death, Dick wrote more than 8,000 pages about his experience. He often wrote all night, producing 20 single-spaced, narrow-margined pages at a go, largely handwritten and littered with extraordinary diagrams and cryptic sketches.

The unfinished mountain of paper, assembled posthumously into some 91 folders, was called “Exegesis.” The fragments were assembled by Dick’s friend Paul Williams and then sat in his garage in Glen Ellen, Calif., for the next several years. A beautifully edited selection of these texts, with a golden fish on the cover, was finally published at the end of 2011, weighing in at a mighty 950 pages. But this is still just a fraction of the whole.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Philip K. Dick by R.Crumb. Courtesy of Wired.[end-div]

Death May Not Be as Bad For You as You Think

Professor of philosophy Shelly Kagan has an interesting take on death. After all, how bad can something be for you if you’re not alive to experience it?

[div class=attrib]From the Chronicle:[end-div]

We all believe that death is bad. But why is death bad?

In thinking about this question, I am simply going to assume that the death of my body is the end of my existence as a person. (If you don’t believe me, read the first nine chapters of my book.) But if death is my end, how can it be bad for me to die? After all, once I’m dead, I don’t exist. If I don’t exist, how can being dead be bad for me?

People sometimes respond that death isn’t bad for the person who is dead. Death is bad for the survivors. But I don’t think that can be central to what’s bad about death. Compare two stories.

Story 1. Your friend is about to go on the spaceship that is leaving for 100 Earth years to explore a distant solar system. By the time the spaceship comes back, you will be long dead. Worse still, 20 minutes after the ship takes off, all radio contact between the Earth and the ship will be lost until its return. You’re losing all contact with your closest friend.

Story 2. The spaceship takes off, and then 25 minutes into the flight, it explodes and everybody on board is killed instantly.

Story 2 is worse. But why? It can’t be the separation, because we had that in Story 1. What’s worse is that your friend has died. Admittedly, that is worse for you, too, since you care about your friend. But that upsets you because it is bad for her to have died. But how can it be true that death is bad for the person who dies?

In thinking about this question, it is important to be clear about what we’re asking. In particular, we are not asking whether or how the process of dying can be bad. For I take it to be quite uncontroversial—and not at all puzzling—that the process of dying can be a painful one. But it needn’t be. I might, after all, die peacefully in my sleep. Similarly, of course, the prospect of dying can be unpleasant. But that makes sense only if we consider death itself to be bad. Yet how can sheer nonexistence be bad?

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

Despite the overall plausibility of the deprivation account, though, it’s not all smooth sailing. For one thing, if something is true, it seems as though there’s got to be a time when it’s true. Yet if death is bad for me, when is it bad for me? Not now. I’m not dead now. What about when I’m dead? But then, I won’t exist. As the ancient Greek philosopher Epicurus wrote: “So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more.”

If death has no time at which it’s bad for me, then maybe it’s not bad for me. Or perhaps we should challenge the assumption that all facts are datable. Could there be some facts that aren’t?

Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun, but he bleeds slowly, and doesn’t die until Wednesday. Meanwhile, on Tuesday, I have a heart attack and die. I killed John, but when? No answer seems satisfactory! So maybe there are undatable facts, and death’s being bad for me is one of them.

Alternatively, if all facts can be dated, we need to say when death is bad for me. So perhaps we should just insist that death is bad for me when I’m dead. But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don’t exist? Isn’t it true that something can be bad for you only if you exist? Call this idea the existence requirement.

Should we just reject the existence requirement? Admittedly, in typical cases—involving pain, blindness, losing your job, and so on—things are bad for you while you exist. But maybe sometimes you don’t even need to exist for something to be bad for you. Arguably, the comparative bads of deprivation are like that.

Unfortunately, rejecting the existence requirement has some implications that are hard to swallow. For if nonexistence can be bad for somebody even though that person doesn’t exist, then nonexistence could be bad for somebody who never exists. It can be bad for somebody who is a merely possible person, someone who could have existed but never actually gets born.

It’s hard to think about somebody like that. But let’s try, and let’s call him Larry. Now, how many of us feel sorry for Larry? Probably nobody. But if we give up on the existence requirement, we no longer have any grounds for withholding our sympathy from Larry. I’ve got it bad. I’m going to die. But Larry’s got it worse: He never gets any life at all.

Moreover, there are a lot of merely possible people. How many? Well, very roughly, given the current generation of seven billion people, there are approximately three million billion billion billion different possible offspring—almost all of whom will never exist! If you go to three generations, you end up with more possible people than there are particles in the known universe, and almost none of those people get to be born.
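
Kagan’s figure is easier to appreciate with a rough back-of-the-envelope check. The sketch below is only one illustrative way a number that large can arise, not his actual derivation: the pairing of the population and the per-couple count (which considers only the independent assortment of chromosomes, ignoring crossover) are assumptions.

```python
# Back-of-the-envelope check of "three million billion billion billion"
# (about 3 x 10^33) possible offspring from the current generation.
# All figures below are illustrative assumptions, not Kagan's own derivation.

population = 7e9                        # current generation, per the article
men = women = population / 2

possible_couples = men * women          # pair every man with every woman

# Genetically distinct children per couple, counting only the independent
# assortment of 23 chromosome pairs in each parent's gametes (2^23 per
# gamete) and ignoring crossover, which would push the number far higher.
gametes_per_parent = 2 ** 23
children_per_couple = gametes_per_parent ** 2

possible_people = possible_couples * children_per_couple

print(f"possible couples:    {possible_couples:.1e}")     # ~1.2e+19
print(f"children per couple: {children_per_couple:.1e}")  # ~7.0e+13
print(f"possible people:     {possible_people:.1e}")      # ~8.6e+32, i.e. roughly 10^33
```

That lands in the same order of magnitude as three million billion billion billion (about 3 × 10^33), which is all the argument about Larry and his merely possible peers requires.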

If we are not prepared to say that that’s a moral tragedy of unspeakable proportions, we could avoid this conclusion by going back to the existence requirement. But of course, if we do, then we’re back with Epicurus’ argument. We’ve really gotten ourselves into a philosophical pickle now, haven’t we? If I accept the existence requirement, death isn’t bad for me, which is really rather hard to believe. Alternatively, I can keep the claim that death is bad for me by giving up the existence requirement. But then I’ve got to say that it is a tragedy that Larry and the other untold billion billion billions are never born. And that seems just as unacceptable.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Still photograph from Ingmar Bergman’s “The Seventh Seal”. Courtesy of the Guardian.[end-div]

Humanity Becoming “Nicer”

Peter Singer, Professor of Bioethics at Princeton, lends support to Steven Pinker’s recent arguments that our current era is less violent and more peaceful than any previous period of human existence.

[div class=attrib]From Project Syndicate:[end-div]

With daily headlines focusing on war, terrorism, and the abuses of repressive governments, and religious leaders frequently bemoaning declining standards of public and private behavior, it is easy to get the impression that we are witnessing a moral collapse. But I think that we have grounds to be optimistic about the future.

Thirty years ago, I wrote a book called The Expanding Circle, in which I asserted that, historically, the circle of beings to whom we extend moral consideration has widened, first from the tribe to the nation, then to the race or ethnic group, then to all human beings, and, finally, to non-human animals. That, surely, is moral progress.

We might think that evolution leads to the selection of individuals who think only of their own interests, and those of their kin, because genes for such traits would be more likely to spread. But, as I argued then, the development of reason could take us in a different direction.

On the one hand, having a capacity to reason confers an obvious evolutionary advantage, because it makes it possible to solve problems and to plan to avoid dangers, thereby increasing the prospects of survival. Yet, on the other hand, reason is more than a neutral problem-solving tool. It is more like an escalator: once we get on it, we are liable to be taken to places that we never expected to reach. In particular, reason enables us to see that others, previously outside the bounds of our moral view, are like us in relevant respects. Excluding them from the sphere of beings to whom we owe moral consideration can then seem arbitrary, or just plain wrong.

Steven Pinker’s recent book The Better Angels of Our Nature lends weighty support to this view.  Pinker, a professor of psychology at Harvard University, draws on recent research in history, psychology, cognitive science, economics, and sociology to argue that our era is less violent, less cruel, and more peaceful than any previous period of human existence.

The decline in violence holds for families, neighborhoods, tribes, and states. In essence, humans living today are less likely to meet a violent death, or to suffer from violence or cruelty at the hands of others, than their predecessors in any previous century.

Many people will doubt this claim. Some hold a rosy view of the simpler, supposedly more placid lives of tribal hunter-gatherers relative to our own. But examination of skeletons found at archaeological sites suggests that as many as 15% of prehistoric humans met a violent death at the hands of another person. (For comparison, in the first half of the twentieth century, the two world wars caused a death rate in Europe of not much more than 3%.)

Even those tribal peoples extolled by anthropologists as especially “gentle” – for example, the Semai of Malaysia, the Kung of the Kalahari, and the Central Arctic Inuit – turn out to have murder rates that are, relative to population, comparable to Detroit, which has one of the highest murder rates in the United States. In Europe, your chance of being murdered is now less than one-tenth, and in some countries only one-fiftieth, of what it would have been had you lived 500 years ago.

Pinker accepts that reason is an important factor underlying the trends that he describes. In support of this claim, he refers to the “Flynn Effect” – the remarkable finding by the philosopher James Flynn that since IQ tests were first administered, scores have risen considerably. The average IQ is, by definition, 100; but, to achieve that result, raw test results have to be standardized. If the average teenager today took an IQ test in 1910, he or she would score 130, which would be better than 98% of those taking the test then.
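
The standardization Singer mentions is just a rescaling of raw scores to a fixed mean of 100 and, conventionally, a standard deviation of 15 against the norming sample of the day. Below is a minimal sketch of that arithmetic; the raw scores and norming statistics are invented purely so the renorming effect is visible.

```python
# Minimal illustration of IQ standardization (mean 100, SD 15).
# The raw scores and norming statistics are invented for illustration;
# only the mechanics of renorming reflect how the Flynn Effect shows up.

def iq(raw_score, norm_mean, norm_sd):
    """Map a raw test score to an IQ relative to a given norming sample."""
    return 100 + 15 * (raw_score - norm_mean) / norm_sd

todays_average_raw = 30   # hypothetical: today's average test-taker

# Scored against today's norms (mean 30, SD 5), the average is 100 by definition.
print(iq(todays_average_raw, norm_mean=30, norm_sd=5))   # 100.0

# Scored against hypothetical 1910 norms (mean 20, SD 5), the same raw
# performance comes out two standard deviations above the old mean.
print(iq(todays_average_raw, norm_mean=20, norm_sd=5))   # 130.0
```

An IQ of 130 sits two standard deviations above the mean, roughly the 98th percentile of a normal distribution, which is where the “better than 98%” comparison comes from.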

It is not easy to attribute this rise to improved education, because the aspects of the tests on which scores have risen the most do not require a good vocabulary, or even mathematical ability, but instead assess powers of abstract reasoning.

[div class=attrib]Read the entire article after the jump.[end-div]

The Wantologist

This may sound like another job from the future, but “wantologists” wander among us in 2012.

[div class=attrib]From the New York Times:[end-div]

IN the sprawling outskirts of San Jose, Calif., I find myself at the apartment door of Katherine Ziegler, a psychologist and wantologist. Could it be, I wonder, that there is such a thing as a wantologist, someone we can hire to figure out what we want? Have I arrived at some final telling moment in my research on outsourcing intimate parts of our lives, or at the absurdist edge of the market frontier?

A willowy woman of 55, Ms. Ziegler beckons me in. A framed Ph.D. degree in psychology from the University of Illinois hangs on the wall, along with an intricate handmade quilt and a collage of images clipped from magazines — the back of a child’s head, a gnarled tree, a wandering cat — an odd assemblage that invites one to search for a connecting thread.

After a 20-year career as a psychologist, Ms. Ziegler expanded her practice to include executive coaching, life coaching and wantology. Originally intended to help business managers make purchasing decisions, wantology is the brainchild of Kevin Kreitman, an industrial engineer who set up a two-day class to train life coaches to apply this method to individuals in private life. Ms. Ziegler took the course and was promptly certified in the new field.

Ms. Ziegler explains that the first step in thinking about a “want” is to ask your client, “ ‘Are you floating or navigating toward your goal?’ A lot of people float. Then you ask, ‘What do you want to feel like once you have what you want?’ ”

She described her experience with a recent client, a woman who lived in a medium-size house with a small garden but yearned for a bigger house with a bigger garden. She dreaded telling her husband, who had long toiled at renovations on their present home, and she feared telling her son, who she felt would criticize her for being too materialistic.

Ms. Ziegler took me through the conversation she had with this woman: “What do you want?”

“A bigger house.”

“How would you feel if you lived in a bigger house?”

“Peaceful.”

“What other things make you feel peaceful?”

“Walks by the ocean.” (The ocean was an hour’s drive away.)

“Do you ever take walks nearer where you live that remind you of the ocean?”

“Certain ones, yes.”

“What do you like about those walks?”

“I hear the sound of water and feel surrounded by green.”

This gentle line of questions nudged the client toward a more nuanced understanding of her own desire. In the end, the woman dedicated a small room in her home to feeling peaceful. She filled it with lush ferns. The greenery encircled a bubbling slate-and-rock tabletop fountain. Sitting in her redesigned room in her medium-size house, the woman found the peace for which she’d yearned.

I was touched by the story. Maybe Ms. Ziegler’s client just needed a good friend who could listen sympathetically and help her work out her feelings. Ms. Ziegler provided a service — albeit one with a wacky name — for a fee. Still, the mere existence of a paid wantologist indicates just how far the market has penetrated our intimate lives. Can it be that we are no longer confident enough to identify even our most ordinary desires without a professional to guide us?

Is the wantologist the tail end of a larger story? Over the last century, the world of services has changed greatly.

A hundred — or even 40 — years ago, human eggs and sperm were not for sale, nor were wombs for rent. Online dating companies, nameologists, life coaches, party animators and paid graveside visitors did not exist.

Nor had a language developed that so seamlessly melded village and market — as in “Rent-a-Mom,” “Rent-a-Dad,” “Rent-a-Grandma,” “Rent-a-Friend” — insinuating itself, half joking, half serious, into our culture. The explosion in the number of available personal services says a great deal about changing ideas of what we can reasonably expect from whom. In the late 1940s, there were 2,500 clinical psychologists licensed in the United States. By 2010, there were 77,000 — and an additional 50,000 marriage and family therapists.

[div class=attrib]Read the entire article after the jump.[end-div]

How Religions Are Born: Church of Jedi

May the Fourth was Star Wars Day. Why? Say, “May the Fourth” slowly while pretending to lisp slightly, and you’ll understand. Appropriately, Matt Cresswen over at the Guardian took this day to review the growing Jedi religion in the UK.

Would that make George Lucas God?

[div class=attrib]From the Guardian:[end-div]

Today [May 4] is Star Wars Day, being May the Fourth. (Say the date slowly, several times.) Around the world, film buffs, storm troopers and Jedi are gathering to celebrate one of the greatest science fiction romps of all time. It would be easy to let the fan boys enjoy their day and be done with it. However, Jediism is a growing religion in the UK. Although the results of the 2001 census, in which 390,000 recipients stated their religion as Jedi, have been widely interpreted as a pop at the government, the UK does actually have serious Jedi.

For those of you who, like BBC producer Bill Dare, have never seen Star Wars, the Jedi are “good” characters from the films. They draw from a mystical entity binding the universe, called “the Force”. Sporting hoodies, the Jedi are generally altruistic, swift-footed and handy with a light sabre. Their enemies, Emperor Palpatine, Darth Vader and other cohorts use the dark side of the Force. By tapping into its powers, the dark side command armies of demented droids, kill Jedi and are capable of wiping out entire planets.

This week, Chi-Pa Amshe from the Church of Jediism in Anglesey, Wales, emailed me with some responses to questions. He said Jediism was growing and that they were gaining hundreds of members each month. The church made the news three years ago, after its founder, Daniel Jones, had a widely reported run-in with Tesco.

Chi-Pa Amshe, speaking as a spokesperson for the Jedi council (Falkna Kar, Anzai Kooji Cutpa and Daqian Xiong), believes that Jediism can merge with other belief systems, rather like a bolt-on accessory.

“Many of our members are in fact both Christian and Jedi,” he says. “We can no more understand the Force and our place within it than a gear in a clock could comprehend its function in moving the hands across the face. I’d like to point out that each of our members interprets their beliefs through the prism of their own lives and although we offer guidance and support, ultimately like with the Qur’an, it is up to them to find what they need and choose their own path.”

Meeting up as a church is hard, the council explained, and members rely heavily on Skype and Facebook. They have an annual physical meeting, “where the church council is available for face-to-face questions and guidance”. They also support charity events and attend computer gaming conventions.

Meanwhile, in New Zealand, a web-based group called the Jedi Church believes that Jediism has always been around.

It states: “The Jedi religion is just like the sun, it existed before a popular movie gave it a name, and now that it has a name, people all over the world can share their experiences of the Jedi religion, here in the Jedi church.”

There are many other Jedi groups on the web, although Chi-Pa Amshe said some were “very unpleasant”. The dark side, perhaps.

[div class=attrib]Read the entire article after the jump.[end-div]

Creativity and Immorality

[div class=attrib]From Scientific American:[end-div]

In the mid-1990s, Apple Computer was a dying company.  Microsoft’s Windows operating system was overwhelmingly favored by consumers, and Apple’s attempts to win back market share by improving the Macintosh operating system were unsuccessful.  After several years of debilitating financial losses, the company chose to purchase a fledgling software company called NeXT.  Along with purchasing the rights to NeXT’s software, this move allowed Apple to regain the services of one of the company’s founders, the late Steve Jobs.  Under the guidance of Jobs, Apple returned to profitability and is now the largest technology company in the world, with the creativity of Steve Jobs receiving much of the credit.

However, despite the widespread positive image of Jobs as a creative genius, he also has a dark reputation for encouraging censorship, “losing sight of honesty and integrity”, belittling employees, and engaging in other morally questionable actions. These harshly contrasting images of Jobs raise the question of why a CEO held in such near-universal positive regard could also be the same one accused of engaging in such contemptible behavior.  The answer, it turns out, may have something to do with the aspect of Jobs which is so admired by so many.

In a recent paper published in the Journal of Personality and Social Psychology, researchers at Harvard and Duke Universities demonstrate that creativity can lead people to behave unethically.  In five studies, the authors show that creative individuals are more likely to be dishonest, and that individuals induced to think creatively were more likely to be dishonest. Importantly, they showed that this effect is not explained by any tendency for creative people to be more intelligent, but rather that creativity leads people to more easily come up with justifications for their unscrupulous actions.

In one study, the authors administered a survey to employees at an advertising agency.  The survey asked the employees how likely they were to engage in various kinds of unethical behaviors, such as taking office supplies home or inflating business expense reports.  The employees were also asked to report how much creativity was required for their job.  Further, the authors asked the executives of the company to provide creativity ratings for each department within the company.

Those who said that their jobs required more creativity also tended to self-report a greater likelihood of unethical behavior.  And if the executives said that a particular department required more creativity, the individuals in that department tended to report greater likelihoods of unethical behavior.
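
Statistically, this survey result is a pair of positive correlations, one across employees and one across departments. The sketch below runs that computation on invented numbers; nothing about the data reflects the actual study, only the shape of the analysis.

```python
# Sketch of the two correlations described above, computed on invented data.
import numpy as np

# Per-employee self-reports (1-7 scales): creativity required by the job
# and likelihood of unethical behavior. Values are made up for illustration.
creativity = np.array([2, 5, 3, 6, 4, 7, 1, 5, 6, 3])
unethical  = np.array([1, 4, 2, 5, 3, 6, 1, 3, 5, 2])
r_employees = np.corrcoef(creativity, unethical)[0, 1]
print(f"employee-level correlation:   {r_employees:.2f}")

# Per-department version: executives' creativity rating for each department
# against that department's mean self-reported unethical behavior.
dept_creativity = np.array([2.0, 4.5, 6.0, 3.0])
dept_unethical  = np.array([1.8, 3.2, 4.9, 2.4])
r_departments = np.corrcoef(dept_creativity, dept_unethical)[0, 1]
print(f"department-level correlation: {r_departments:.2f}")

# Positive coefficients at both levels match the reported pattern, but a
# correlation alone cannot say which way causation runs, which is why the
# authors follow up with the experiment described next.
```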

The authors hypothesized that it is creativity which causes unethical behavior by allowing people the means to justify their misdeeds, but it is hard to say for certain whether this is correct given the correlational nature of the study.  It could just as easily be true, after all, that unethical behavior leads people to be more creative, or that there is something else which causes both creativity and dishonesty, such as intelligence.  To explore this, the authors set up an experiment in which participants were induced into a creative mindset and then given the opportunity to cheat.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy restricting them from participating on adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.
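
The “age gate” these policies rely on is nothing more than a date-of-birth comparison at sign-up. Here is a minimal sketch of that check, assuming the 13-year threshold the article cites; the function and example dates are illustrative, not any site’s actual code.

```python
# Minimal sketch of a date-of-birth age gate like those described above.
# The 13-year threshold comes from the policies cited in the article;
# everything else here is illustrative, not any site's actual code.
from datetime import date

MIN_AGE = 13

def old_enough(claimed_birth_date, today=None):
    """Return True if the claimed birth date implies an age of at least MIN_AGE."""
    if today is None:
        today = date.today()
    age = today.year - claimed_birth_date.year - (
        (today.month, today.day) < (claimed_birth_date.month, claimed_birth_date.day)
    )
    return age >= MIN_AGE

print(old_enough(date(2000, 3, 15), today=date(2012, 6, 1)))  # False: a 12-year-old is blocked
print(old_enough(date(1998, 3, 15), today=date(2012, 6, 1)))  # True: a 14-year-old passes

# The gate trusts whatever date is typed in, which is why a false birth
# date, or an adult willing to create the account, gets around it.
```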

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

Job of the Future: Personal Data Broker

Pause for a second, and think of all the personal data that companies have amassed about you. Then think about the billions that these companies make in trading this data to advertisers, information researchers and data miners. There are credit bureaus with details of your financial history since birth; social networks with details of everything you and your friends say and (dis)like; GPS-enabled services that track your every move; search engines that trawl your searches, medical companies with your intimate health data, security devices that monitor your movements, and online retailers with all your purchase transactions and wish-lists.

Now think of a business model that puts you in charge of your own personal data. This may not be as far-fetched as it seems, especially as the backlash grows against the increasing consolidation of personal data in the hands of an ever smaller cadre of increasingly powerful players.

[div class=attrib]From Technology Review:[end-div]

Here’s a job title made for the information age: personal data broker.

Today, people have no choice but to give away their personal information—sometimes in exchange for free networking on Twitter or searching on Google, but other times to third-party data-aggregation firms without realizing it at all.

“There’s an immense amount of value in data about people,” says Bernardo Huberman, senior fellow at HP Labs. “That data is being collected all the time. Anytime you turn on your computer, anytime you buy something.”

Huberman, who directs HP Labs’ Social Computing Research Group, has come up with an alternative—a marketplace for personal information—that would give individuals control of and compensation for the private tidbits they share, rather than putting it all in the hands of companies.

In a paper posted online last week, Huberman and coauthor Christina Aperjis propose something akin to a New York Stock Exchange for personal data. A trusted market operator could take a small cut of each transaction and help arrive at a realistic price for a sale.

“There are two kinds of people. Some people who say, ‘I’m not going to give you my data at all, unless you give me a million bucks.’ And there are a lot of people who say, ‘I don’t care, I’ll give it to you for little,’ ” says Huberman. He’s tested this the academic way, through experiments that involved asking men and women to share how much they weigh for a payment.

On his proposed market, a person who highly values her privacy might choose an option to sell her shopping patterns for $10, but at a big risk of not finding a buyer. Alternatively, she might sell the same data for a guaranteed payment of 50 cents. Or she might opt out and keep her privacy entirely.
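
The choice Huberman sketches is easy to make concrete. Below is a toy Python sketch of a seller weighing the two listing options; the dollar figures, the assumed 4 percent chance of finding a buyer at the high price, and the risk_averse flag are all illustrative assumptions, not parameters from Huberman and Aperjis's paper.

```python
# Toy sketch of the two pricing options described above -- not the actual
# market mechanism from the Huberman and Aperjis paper. All numbers are
# illustrative assumptions.

def expected_value(price, prob_of_sale):
    """Expected payout of a risky listing: the asking price if a buyer appears, else nothing."""
    return price * prob_of_sale

def choose_option(risky_price=10.00, prob_of_sale=0.04,
                  guaranteed_price=0.50, risk_averse=True):
    """Pick between a risky high price and a guaranteed low one.

    A risk-averse seller takes the sure 50 cents; a risk-neutral seller
    simply compares expected values.
    """
    ev_risky = expected_value(risky_price, prob_of_sale)
    if risk_averse or ev_risky <= guaranteed_price:
        return "guaranteed", guaranteed_price
    return "risky", ev_risky

if __name__ == "__main__":
    print(choose_option())                   # ('guaranteed', 0.5)
    print(choose_option(prob_of_sale=0.2,    # higher odds of a sale tip the
                        risk_averse=False))  # risk-neutral seller to the $10 listing
```

Under these made-up numbers the risky listing is worth only 40 cents in expectation, so even a risk-neutral seller would take the guaranteed 50 cents; the point of the market Huberman envisions is that each person reveals where they sit on that trade-off by the option they pick.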

You won’t find any kind of opportunity like this today. But with Internet companies making billions of dollars selling our information, fresh ideas and business models that promise users control over their privacy are gaining momentum. Startups like Personal and Singly are working on these challenges already. The World Economic Forum recently called an individual’s data an emerging “asset class.”

Huberman is not the first to investigate a personal data marketplace, and there would seem to be significant barriers—like how to get companies that already collect data for free to participate. But, he says, since the pricing options he outlines gauge how a person values privacy and risk, they address at least two big obstacles to making such a market function.

[div class=attrib]Read the entire article after the jump.[end-div]

Bike+GPS=Map Art

Frank Jacobs over at Strange Maps has found another “out in leftfield” map. This cartographic invention is courtesy of an artist who “paints” using his GPS-enabled bicycle.

[div class=attrib]From Strange Maps:[end-div]

GPS technology is opening up exciting new hybrid forms of mapping and art. Or in this case: cycling, mapping and art. The maps on this page are the product of Michael Wallace, a Baltimore-based artist who uses his bike as a paintbrush, and the city as his canvas.

As Wallace traces shapes and forms across Baltimore’s street grid, the GPS technology that tracks his movements fixes the fluid pattern of his pedal strokes onto a map. The results are what Wallace calls GPX images, or ‘virtual geoglyphs’.

These massive images, created over three riding seasons so far, “continue to generate happiness, fitness and imagination through planning the physical activity of ‘digital spray-painting’ my ‘local canvas’ with the help of tracking satellites 12,500 miles above.”

Wallace’s portfolio is by now filled with dozens of GPX images, ranging from pictures of a toilet to the Titanic. They even include a map of the US – traced on the map of Baltimore. How’s that for self-reference? Or for Bawlmer hubris?
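
For the technically curious, the mechanics behind a GPX image are simple enough to sketch. The Python snippet below is only an illustration of the idea, not Wallace's own workflow: the handful of latitude/longitude points is invented, a real ride would supply thousands of track points recorded by the bike-mounted GPS (the <trkpt> elements of a GPX file), and matplotlib is assumed to be available for the plotting.

```python
# Minimal sketch: render a recorded GPS track as a line drawing.
# The coordinates below are made-up placeholders, not one of Wallace's rides;
# a real GPX file would be parsed for its <trkpt> latitude/longitude pairs.
import matplotlib.pyplot as plt

track = [  # (latitude, longitude) samples along the ride
    (39.290, -76.615), (39.292, -76.612), (39.295, -76.612),
    (39.295, -76.609), (39.292, -76.609), (39.290, -76.606),
    (39.290, -76.615),  # close the loop back at the start
]

lats = [point[0] for point in track]
lons = [point[1] for point in track]

plt.figure(figsize=(6, 6))
plt.plot(lons, lats, linewidth=2)   # longitude maps to x, latitude to y
plt.axis("off")                     # hide the axes so only the "drawing" remains
plt.savefig("gpx_image.png", dpi=150)
```

Stripping away the axes and the base map is what turns a humdrum route log into something that reads as a drawing rather than a journey.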

[div class=attrib]Read the entire article after the jump.[end-div]

Corporatespeak: Lingua Franca of the Internet

Author Lewis Lapham reminds us of the phrase made (in)famous by Emperor Charles V:

“I speak Spanish to God, Italian to women, French to men, and German to my horse.”

So, what of the language of the internet? Again, Lapham offers a fitting and damning summary, this time courtesy of a lesser mortal, critic George Steiner:

“The true catastrophe of Babel is not the scattering of tongues. It is the reduction of human speech to a handful of planetary, ‘multinational’ tongues…Anglo-American standardized vocabularies” and grammar shaped by “military technocratic megalomania” and “the imperatives of commercial greed.”

More from the keyboard of Lewis Lapham on how the communicative promise of the internet is being usurped by commerce and the “lowest common denominator”.

[div class=attrib]From TomDispatch:[end-div]

But in which language does one speak to a machine, and what can be expected by way of response? The questions arise from the accelerating datastreams out of which we’ve learned to draw the breath of life, posed in consultation with the equipment that scans the flesh and tracks the spirit, cues the ATM, the GPS, and the EKG, arranges the assignations on Match.com and the high-frequency trades at Goldman Sachs, catalogs the pornography and drives the car, tells us how and when and where to connect the dots and thus recognize ourselves as human beings.

Why then does it come to pass that the more data we collect—from Google, YouTube, and Facebook—the less likely we are to know what it means?

The conundrum is in line with the late Marshall McLuhan’s noticing 50 years ago the presence of “an acoustic world,” one with “no continuity, no homogeneity, no connections, no stasis,” a new “information environment of which humanity has no experience whatever.” He published Understanding Media in 1964, proceeding from the premise that “we become what we behold,” that “we shape our tools, and thereafter our tools shape us.”

Media were to be understood as “make-happen agents” rather than as “make-aware agents,” not as art or philosophy but as systems comparable to roads and waterfalls and sewers. Content follows form; new means of communication give rise to new structures of feeling and thought.

To account for the transference of the idioms of print to those of the electronic media, McLuhan examined two technological revolutions that overturned the epistemological status quo. First, in the mid-15th century, Johannes Gutenberg’s invention of moveable type, which deconstructed the illuminated wisdom preserved on manuscript in monasteries, encouraged people to organize their perceptions of the world along the straight lines of the printed page. Second, in the 19th and 20th centuries, the applications of electricity (telegraph, telephone, radio, movie camera, television screen, eventually the computer) favored a sensibility that runs in circles, compressing or eliminating the dimensions of space and time, narrative dissolving into montage, the word replaced with the icon and the rebus.

Within a year of its publication, Understanding Media acquired the standing of Holy Scripture and made of its author the foremost oracle of the age. The New York Herald Tribune proclaimed him “the most important thinker since Newton, Darwin, Freud, Einstein, and Pavlov.” Although never at a loss for Delphic aphorism—”The electric light is pure information”; “In the electric age, we wear all mankind as our skin”—McLuhan assumed that he had done nothing more than look into the window of the future at what was both obvious and certain.

[div class=attrib]Read the entire article following the jump.[end-div]

Language as a Fluid Construct

Peter Ludlow, professor of philosophy at Northwestern University, has authored a number of fascinating articles on the philosophy of language and linguistics. Here he discusses his view of language as a dynamic, living organism. Literalists take note.

[div class=attrib]From the New York Times:[end-div]

There is a standard view about language that one finds among philosophers, language departments, pundits and politicians.  It is the idea that a language like English is a semi-stable abstract object that we learn to some degree or other and then use in order to communicate or express ideas and perform certain tasks.  I call this the static picture of language, because, even though it acknowledges some language change, the pace of change is thought to be slow, and what change there is, is thought to be the hard fought product of conflict.  Thus, even the “revisionist” picture of language sketched by Gary Gutting in a recent Stone column counts as static on my view, because the change is slow and it must overcome resistance.

Recent work in philosophy, psychology and artificial intelligence has suggested an alternative picture that rejects the idea that languages are stable abstract objects that we learn and then use.  According to the alternative “dynamic” picture, human languages are one-off things that we build “on the fly” on a conversation-by-conversation basis; we can call these one-off fleeting languages microlanguages.  Importantly, this picture rejects the idea that words are relatively stable things with fixed meanings that we come to learn. Rather, word meanings themselves are dynamic — they shift from microlanguage to microlanguage.

Shifts of meaning do not merely occur between conversations; they also occur within conversations — in fact conversations are often designed to help this shifting take place.  That is, when we engage in conversation, much of what we say does not involve making claims about the world but involves instructing our communicative partners how to adjust word meanings for the purposes of our conversation.

Suppose I tell my friend that I don’t care where I teach so long as the school is in a city.  My friend suggests that I apply to the University of Michigan and I reply “Ann Arbor is not a city.”  In doing this, I am not making a claim about the world so much as instructing my friend (for the purposes of our conversation) to adjust the meaning of “city” from official definitions to one in which places like Ann Arbor do not count as cities.

Word meanings are dynamic, but they are also underdetermined.  What this means is that there is no complete answer to what does and doesn’t fall within the range of a term like “red” or “city” or “hexagonal.”  We may sharpen the meaning and we may get clearer on what falls in the range of these terms, but we never completely sharpen the meaning.

This isn’t just the case for words like “city” but for all words: words for things, like “person” and “tree”; words for abstract ideas, like “art” and “freedom”; and words for crimes, like “rape” and “murder.” Indeed, I would argue that this is also the case with mathematical and logical terms like “parallel line” and “entailment.”  The meanings of these terms remain open to some degree or other, and are sharpened as needed when we make advances in mathematics and logic.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Leif Parsons / New York Times.[end-div]

Loneliness in the Age of Connectedness

Online social networks are a boon to researchers. As never before, social scientists are probing our connections, our innermost thoughts now made public, our networks of friends, and our loneliness. Some academics blame the likes of Facebook for making our increasingly shallow “friendships” a disposable and tradable commodity, and for ironically facilitating isolation from more intimate and deeper connections. Others see Facebook merely as a mirror — we have, quite simply, made ourselves lonely, and our social networks instantly and starkly expose our isolation for all to see and “like”.

An insightful article by novelist Stephen Marche over at The Atlantic examines our self-imposed loneliness.

[div class=attrib]From the Atlantic:[end-div]

Yvette Vickers, a former Playboy playmate and B-movie star, best known for her role in Attack of the 50 Foot Woman, would have been 83 last August, but nobody knows exactly how old she was when she died. According to the Los Angeles coroner’s report, she lay dead for the better part of a year before a neighbor and fellow actress, a woman named Susan Savage, noticed cobwebs and yellowing letters in her mailbox, reached through a broken window to unlock the door, and pushed her way through the piles of junk mail and mounds of clothing that barricaded the house. Upstairs, she found Vickers’s body, mummified, near a heater that was still running. Her computer was on too, its glow permeating the empty space.

The Los Angeles Times posted a story headlined “Mummified Body of Former Playboy Playmate Yvette Vickers Found in Her Benedict Canyon Home,” which quickly went viral. Within two weeks, by Technorati’s count, Vickers’s lonesome death was already the subject of 16,057 Facebook posts and 881 tweets. She had long been a horror-movie icon, a symbol of Hollywood’s capacity to exploit our most basic fears in the silliest ways; now she was an icon of a new and different kind of horror: our growing fear of loneliness. Certainly she received much more attention in death than she did in the final years of her life. With no children, no religious group, and no immediate social circle of any kind, she had begun, as an elderly woman, to look elsewhere for companionship. Savage later told Los Angeles magazine that she had searched Vickers’s phone bills for clues about the life that led to such an end. In the months before her grotesque death, Vickers had made calls not to friends or family but to distant fans who had found her through fan conventions and Internet sites.

Vickers’s web of connections had grown broader but shallower, as has happened for many of us. We are living in an isolation that would have been unimaginable to our ancestors, and yet we have never been more accessible. Over the past three decades, technology has delivered to us a world in which we need not be out of contact for a fraction of a moment. In 2010, at a cost of $300 million, 800 miles of fiber-optic cable was laid between the Chicago Mercantile Exchange and the New York Stock Exchange to shave three milliseconds off trading times. Yet within this world of instant and absolute communication, unbounded by limits of time or space, we suffer from unprecedented alienation. We have never been more detached from one another, or lonelier. In a world consumed by ever more novel modes of socializing, we have less and less actual society. We live in an accelerating contradiction: the more connected we become, the lonelier we are. We were promised a global village; instead we inhabit the drab cul-de-sacs and endless freeways of a vast suburb of information.

At the forefront of all this unexpectedly lonely interactivity is Facebook, with 845 million users and $3.7 billion in revenue last year. The company hopes to raise $5 billion in an initial public offering later this spring, which will make it by far the largest Internet IPO in history. Some recent estimates put the company’s potential value at $100 billion, which would make it larger than the global coffee industry—one addiction preparing to surpass the other. Facebook’s scale and reach are hard to comprehend: last summer, Facebook became, by some counts, the first Web site to receive 1 trillion page views in a month. In the last three months of 2011, users generated an average of 2.7 billion “likes” and comments every day. On whatever scale you care to judge Facebook—as a company, as a culture, as a country—it is vast beyond imagination.

Despite its immense popularity, or more likely because of it, Facebook has, from the beginning, been under something of a cloud of suspicion. The depiction of Mark Zuckerberg, in The Social Network, as a bastard with symptoms of Asperger’s syndrome, was nonsense. But it felt true. It felt true to Facebook, if not to Zuckerberg. The film’s most indelible scene, the one that may well have earned it an Oscar, was the final, silent shot of an anomic Zuckerberg sending out a friend request to his ex-girlfriend, then waiting and clicking and waiting and clicking—a moment of superconnected loneliness preserved in amber. We have all been in that scene: transfixed by the glare of a screen, hungering for response.

When you sign up for Google+ and set up your Friends circle, the program specifies that you should include only “your real friends, the ones you feel comfortable sharing private details with.” That one little phrase, Your real friends—so quaint, so charmingly mothering—perfectly encapsulates the anxieties that social media have produced: the fears that Facebook is interfering with our real friendships, distancing us from each other, making us lonelier; and that social networking might be spreading the very isolation it seemed designed to conquer.

Facebook arrived in the middle of a dramatic increase in the quantity and intensity of human loneliness, a rise that initially made the site’s promise of greater connection seem deeply attractive. Americans are more solitary than ever before. In 1950, less than 10 percent of American households contained only one person. By 2010, nearly 27 percent of households had just one person. Solitary living does not guarantee a life of unhappiness, of course. In his recent book about the trend toward living alone, Eric Klinenberg, a sociologist at NYU, writes: “Reams of published research show that it’s the quality, not the quantity of social interaction, that best predicts loneliness.” True. But before we begin the fantasies of happily eccentric singledom, of divorcées dropping by their knitting circles after work for glasses of Drew Barrymore pinot grigio, or recent college graduates with perfectly articulated, Steampunk-themed, 300-square-foot apartments organizing croquet matches with their book clubs, we should recognize that it is not just isolation that is rising sharply. It’s loneliness, too. And loneliness makes us miserable.

We know intuitively that loneliness and being alone are not the same thing. Solitude can be lovely. Crowded parties can be agony. We also know, thanks to a growing body of research on the topic, that loneliness is not a matter of external conditions; it is a psychological state. A 2005 analysis of data from a longitudinal study of Dutch twins showed that the tendency toward loneliness has roughly the same genetic component as other psychological problems such as neuroticism or anxiety.

[div class=attrib]Kindly read the entire article after the momentary jump.[end-div]

[div class=attrib]Photograph courtesy of Phillip Toledano / The Atlantic.[end-div]

Wedding Photography

If you’ve been through a marriage or other formal ceremony, you probably have an album of images that beautifully captured the day. You, your significant other, family and select friends will browse through the visual memories every so often. Doubtless you will have hired, for quite a handsome sum, a professional photographer and/or videographer to record all the important instants. However, somewhere you, or your photographer, will have a selection of “outtakes” that should never see the light of day, such as those described below.

[div class=attrib]From the Daily Telegraph:[end-div]

Thomas and Anneka Geary paid professional photographers Ian McCloskey and Nikki Carter £750 to cover what should have been the best day of their lives.

But they were stunned when the pictures arrived and included out of focus shots of the couple, the back of guests’ heads and a snap of the bride’s mother whose face was completely obscured by her hat.

Astonishingly, the photographers even failed to take a single frame of the groom’s parents.

One snap of the couple signing the marriage register also appears to feature a ghostly hand clutching a toy motorbike where the snappers tried to edit out Anneka’s three-year-old nephew Harry who was standing in the background.

The pictures of the evening do, attended by 120 guests, were also taken without flash because one of the photographers complained of being epileptic.

[div class=attrib]Read the entire article and browse through more images after the jump.[end-div]

[div class=attrib]Image: Tom, 32, a firefighter for Warwickshire Fire Service, said: “We received a CD from the wedding photographers but at first we thought it was a joke. Just about all of the pictures were out of focus or badly lit or just plain weird.” Courtesy of Daily Telegraph, Westgate Photography / SWNS.[end-div]

The Evolutionary Benefits of Middle Age

David Bainbridge, author of “Middle Age: A Natural History”, examines the benefits of middle age. Yes, really. For those of us in “middle age” it’s not surprising to see that this period is not limited to decline, disease and senility. Rather, it’s a pre-programmed redistribution of physical and mental resources designed to cope with our ever-increasing life spans.

[div class=attrib]From David Bainbridge over at New Scientist:[end-div]

As a 42-year-old man born in England, I can expect to live for about another 38 years. In other words, I can no longer claim to be young. I am, without doubt, middle-aged.

To some people that is a depressing realization. We are used to dismissing our fifth and sixth decades as a negative chapter in our lives, perhaps even a cause for crisis. But recent scientific findings have shown just how important middle age is for every one of us, and how crucial it has been to the success of our species. Middle age is not just about wrinkles and worry. It is not about getting old. It is an ancient, pivotal episode in the human life span, preprogrammed into us by natural selection, an exceptional characteristic of an exceptional species.

Compared with other animals, humans have a very unusual pattern to our lives. We take a very long time to grow up, we are long-lived, and most of us stop reproducing halfway through our life span. A few other species have some elements of this pattern, but only humans have distorted the course of their lives in such a dramatic way. Most of that distortion is caused by the evolution of middle age, which adds two decades that most other animals simply do not get.

An important clue that middle age isn’t just the start of a downward spiral is that it does not bear the hallmarks of general, passive decline. Most body systems deteriorate very little during this stage of life. Those that do, deteriorate in ways that are very distinctive, are rarely seen in other species and are often abrupt.

For example, our ability to focus on nearby objects declines in a predictable way: Farsightedness is rare at 35 but universal at 50. Skin elasticity also decreases reliably and often surprisingly abruptly in early middle age. Patterns of fat deposition change in predictable, stereotyped ways. Other systems, notably cognition, barely change.

Each of these changes can be explained in evolutionary terms. In general, it makes sense to invest in the repair and maintenance only of body systems that deliver an immediate fitness benefit — that is, those that help to propagate your genes. As people get older, they no longer need spectacular visual acuity or mate-attracting, unblemished skin. Yet they do need their brains, and that is why we still invest heavily in them during middle age.

As for fat — that wonderfully efficient energy store that saved the lives of many of our hard-pressed ancestors — its role changes when we are no longer gearing up to produce offspring, especially in women. As the years pass, less fat is stored in depots ready to meet the demands of reproduction — the breasts, hips and thighs — or under the skin, where it gives a smooth, youthful appearance. Once our babymaking days are over, fat is stored in larger quantities and also stored more centrally, where it is easiest to carry about. That way, if times get tough we can use it for our own survival, thus freeing up food for our younger relatives.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Middle Age Couple Laughing. Courtesy of Cindi Matthews / Flickr.[end-div]

Why Do Some Videos Go Viral, and Others Not?

Some online videos and stories are seen by tens or hundreds of millions, yet others never see the light of day. Advertisers and reality-star wannabes search daily for the secret sauce that determines the huge success of one internet meme over many others. However, much to the frustration of the many agents of the “next big thing”, several fascinating new studies point to nothing more than simple randomness.

[div class=attrib]From the New Scientist:[end-div]

WHAT causes some photos, videos, and Twitter posts to spread across the internet like wildfire while others fall by the wayside? The answer may have little to do with the quality of the information. What goes viral may be completely arbitrary, according to a controversial new study of online social networks.

By analysing 120 million retweets – repostings of users’ messages on Twitter – by 12.5 million users of the social network, researchers at Indiana University, Bloomington, learned the mechanisms by which memes compete for user interest, and how information spreads.

Using this insight, the team built a computer simulation designed to mimic Twitter. In the simulation, each tweet or message was assigned the same value and retweets were performed at random. Despite this, some tweets became incredibly popular and were persistently reposted, while others were quickly forgotten.

The reason for this, says team member Filippo Menczer, is that the simulated users had a limited attention span and could only view a portion of the total number of tweets – as is the case in the real world. Tweets selected for retweeting would be more likely to be seen by a user and re-posted. After a few iterations, a tweet becomes significantly more prevalent than those not retweeted. Many users see the message and retweet it further.
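
A stripped-down version of that mechanism fits in a few lines of Python. This is a toy model, not the Indiana University team's actual code: every meme is given identical appeal, each simulated user holds only a short list of recently seen memes (the limited attention span Menczer describes), and the specific numbers, such as a screen of five items and a 10 percent chance of posting something new, are arbitrary assumptions.

```python
# Toy version of the random-retweet model described above (an assumption-laden
# sketch, not the researchers' simulation). No meme is intrinsically better
# than any other; popularity differences emerge from limited attention alone.
import random
from collections import Counter

def simulate(num_users=200, screen_size=5, steps=20000, seed=1):
    random.seed(seed)
    screens = [[] for _ in range(num_users)]  # each user's short attention window
    repost_counts = Counter()
    next_meme_id = 0

    for _ in range(steps):
        user = random.randrange(num_users)
        if not screens[user] or random.random() < 0.1:
            meme = next_meme_id                  # occasionally a brand-new meme is posted
            next_meme_id += 1
        else:
            meme = random.choice(screens[user])  # otherwise retweet something seen recently
        repost_counts[meme] += 1

        # The post lands on a few random followers' screens, pushing out the
        # oldest item -- this is the "limited attention" at work.
        for follower in random.sample(range(num_users), k=3):
            screens[follower].append(meme)
            if len(screens[follower]) > screen_size:
                screens[follower].pop(0)

    return repost_counts

if __name__ == "__main__":
    print(simulate().most_common(5))
```

Run it and a few meme IDs accumulate far more reposts than the rest while most disappear after one or two appearances, even though nothing distinguishes the winners but luck.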

“When a meme starts to get popular it displaces other memes; you start to pay attention to the popular meme and don’t pay attention to other things because you have only so much attention,” Menczer says. “It’s similar to when a big news story breaks, you don’t hear about other things that happened on that day.”

Katherine Milkman of the University of Pennsylvania in Philadelphia disagrees. “[Menczer’s study] says that all of the things that catch on could be truly random but it doesn’t say they have to be,” says Milkman, who co-authored a paper last year examining how emotions affect meme sharing.

Milkman’s study analysed 7000 articles that appeared in the New York Times over a three-month period. It found that articles that aroused readers’ emotions were more likely to end up on the website’s “most emailed” list. “Anything that gets you fired up, whether positive or negative, will lead you to share it more,” Milkman says.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Fooled by Randomness: The Hidden Role of Chance in Life and in the Markets is a book by Nassim Nicholas Taleb. Courtesy of Wikipedia.[end-div]

Childhood Memory

[div class=attrib]From Slate:[end-div]

Last August, I moved across the country with a child who was a few months shy of his third birthday. I assumed he’d forget his old life—his old friends, his old routine—within a couple of months. Instead, over a half-year later, he remembers it in unnerving detail: the Laundromat below our apartment, the friends he ran around naked with, my wife’s co-workers. I just got done with a stint pretending to be his long-abandoned friend Iris—at his direction.

We assume children don’t remember much, because we don’t remember much about being children. As far as I can tell, I didn’t exist before the age of 5 or so—which is how old I am in my earliest memory, wandering around the Madison, Wis. farmers market in search of cream puffs. But developmental research now tells us that Isaiah’s memory isn’t extraordinary. It’s ordinary. Children remember.

Up until the 1980s, almost no one would have believed that Isaiah still remembers Iris. It was thought that babies and young toddlers lived in a perpetual present: All that existed was the world in front of them at that moment. When Jean Piaget conducted his famous experiments on object permanence—in which once an object was covered up, the baby seemed to forget about it—Piaget concluded that the baby had been unable to store the memory of the object: out of sight, out of mind.

The paradigm of the perpetual present has now itself been forgotten. Even infants are aware of the past, as many remarkable experiments have shown. Babies can’t speak but they can imitate, and if shown a series of actions with props, even 6-month-old infants will repeat a three-step sequence a day later. Nine-month-old infants will repeat it a month later.

The conventional wisdom for older children has been overturned, too. Once, children Isaiah’s age were believed to have memories of the past but nearly no way to organize those memories. According to Patricia Bauer, a professor of psychology at Emory who studies early memory, the general consensus was that a 3-year-old child’s memory was a jumble of disorganized information, like your email inbox without any sorting function: “You can’t sort them by name, you can’t sort them by date, it’s just all your email messages.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Summer school memories. Retouched New York World-Telegram photograph by Walter Albertin. Courtesy of Wikimedia.[end-div]

Creativity and Failure at School

[div class=attrib]From the Wall Street Journal:[end-div]

Most of our high schools and colleges are not preparing students to become innovators. To succeed in the 21st-century economy, students must learn to analyze and solve problems, collaborate, persevere, take calculated risks and learn from failure. To find out how to encourage these skills, I interviewed scores of innovators and their parents, teachers and employers. What I learned is that young Americans learn how to innovate most often despite their schooling—not because of it.

Though few young people will become brilliant innovators like Steve Jobs, most can be taught the skills needed to become more innovative in whatever they do. A handful of high schools, colleges and graduate schools are teaching young people these skills—places like High Tech High in San Diego, the New Tech high schools (a network of 86 schools in 16 states), Olin College in Massachusetts, the Institute of Design (d.school) at Stanford and the MIT Media Lab. The culture of learning in these programs is radically at odds with the culture of schooling in most classrooms.

In most high-school and college classes, failure is penalized. But without trial and error, there is no innovation. Amanda Alonzo, a 32-year-old teacher at Lynbrook High School in San Jose, Calif., who has mentored two Intel Science Prize finalists and 10 semifinalists in the last two years—more than any other public school science teacher in the U.S.—told me, “One of the most important things I have to teach my students is that when you fail, you are learning.” Students gain lasting self-confidence not by being protected from failure but by learning that they can survive it.

The university system today demands and rewards specialization. Professors earn tenure based on research in narrow academic fields, and students are required to declare a major in a subject area. Though expertise is important, Google’s director of talent, Judy Gilbert, told me that the most important thing educators can do to prepare students for work in companies like hers is to teach them that problems can never be understood or solved in the context of a single academic discipline. At Stanford’s d.school and MIT’s Media Lab, all courses are interdisciplinary and based on the exploration of a problem or new opportunity. At Olin College, half the students create interdisciplinary majors like “Design for Sustainable Development” or “Mathematical Biology.”

Learning in most conventional education settings is a passive experience: The students listen. But at the most innovative schools, classes are “hands-on,” and students are creators, not mere consumers. They acquire skills and knowledge while solving a problem, creating a product or generating a new understanding. At High Tech High, ninth graders must develop a new business concept—imagining a new product or service, writing a business and marketing plan, and developing a budget. The teams present their plans to a panel of business leaders who assess their work. At Olin College, seniors take part in a yearlong project in which students work in teams on a real engineering problem supplied by one of the college’s corporate partners.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of NY Daily News.[end-div]

Coke or Pepsi?

Most people come down on one side or the other; there’s really no middle ground when it comes to the soda (or pop) wars. But, while the choice of drink itself may seem trivial, the combined annual revenues of these food and beverage behemoths are far from it — close to $100 billion. The infographic below dissects this seriously big business.

On Being a Billionaire For a Day

New York Times writer Kevin Roose recently lived the life of a billionaire for a day. His report on masquerading as a member of the 0.01 percent of the 0.1 percent of the 1 percent makes for fascinating and disturbing reading.

[div class=attrib]From the New York Times:[end-div]

I HAVE a major problem: I just glanced at my $45,000 Chopard watch, and it’s telling me that my Rolls-Royce may not make it to the airport in time for my private jet flight.

Yes, I know my predicament doesn’t register high on the urgency scale. It’s not exactly up there with malaria outbreaks in the Congo or street riots in Athens. But it’s a serious issue, because my assignment today revolves around that plane ride.

“Step on it, Mike,” I instruct my chauffeur, who nods and guides the $350,000 car into the left lane of the West Side Highway.

Let me back up a bit. As a reporter who writes about Wall Street, I spend a fair amount of time around extreme wealth. But my face is often pressed up against the gilded window. I’ve never eaten at Per Se, or gone boating on the French Riviera. I live in a pint-size Brooklyn apartment, rarely take cabs and feel like sending Time Warner to The Hague every time my cable bill arrives.

But for the next 24 hours, my goal is to live like a billionaire. I want to experience a brief taste of luxury — the chauffeured cars, the private planes, the V.I.P. access and endless privilege — and then go back to my normal life.

The experiment illuminates a paradox. In the era of the Occupy Wall Street movement, when the global financial elite has been accused of immoral and injurious conduct, we are still obsessed with the lives of the ultrarich. We watch them on television shows, follow their exploits in magazines and parse their books and public addresses for advice. In addition to the long-running list by Forbes, Bloomberg now maintains a list of billionaires with rankings that update every day.

Really, I wondered, what’s so great about billionaires? What privileges and perks do a billion dollars confer? And could I tap into the psyches of the ultrawealthy by walking a mile in their Ferragamo loafers?

At 6 a.m., Mike, a chauffeur with Flyte Tyme Worldwide, picked me up at my apartment. He opened the Rolls-Royce’s doors to reveal a spotless white interior, with lamb’s wool floor mats, seatback TVs and a football field’s worth of legroom. The car, like the watch, was lent to me by the manufacturer for the day while The New York Times made payments toward the other services.

Mike took me to my first appointment, a power breakfast at the Core club in Midtown. “Core,” as the cognoscenti call it, is a members-only enclave with hefty dues — $15,000 annually, plus a $50,000 initiation fee — and a membership roll that includes brand-name financiers like Stephen A. Schwarzman of the Blackstone Group and Daniel S. Loeb of Third Point.

Over a spinach omelet, Jennie Enterprise, the club’s founder, told me about the virtues of having a cloistered place for “ultrahigh net worth individuals” to congregate away from the bustle of the boardroom.

“They want someplace that respects their privacy,” she said. “They want a place that they can seamlessly transition from work to play, that optimizes their time.”

After breakfast, I rush back to the car for a high-speed trip to Teterboro Airport in New Jersey, where I’m meeting a real-life billionaire for a trip on his private jet. The billionaire, a hedge fund manager, was scheduled to go down to Georgia and offered to let me interview him during the two-hour jaunt on the condition that I not reveal his identity.

[div class=attrib]Read the entire article after the Learjet.[end-div]

[div class=attrib]Image: Waited On: Mr. Roose exits the Rolls-Royce looking not unlike many movers and shakers in Manhattan. Courtesy of New York Times.[end-div]

Inward Attention and Outward Attention

New studies show that our brains use two fundamentally different neurological pathways: one when we focus on our external environment and another when we pay attention to our internal world. Researchers believe this could have important consequences, from new methods for managing stress to treatments for some types of mental illness.

[div class=attrib]From Scientific American:[end-div]

What’s the difference between noticing the rapid beat of a popular song on the radio and noticing the rapid rate of your heart when you see your crush? Between noticing the smell of fresh baked bread and noticing that you’re out of breath? Both require attention. However, the direction of that attention differs: it is either turned outward, as in the case of noticing a stop sign or a tap on your shoulder, or turned inward, as in the case of feeling full or feeling love.

Scientists have long held that attention – regardless to what – involves mostly the prefrontal cortex, that frontal region of the brain responsible for complex thought and unique to humans and advanced mammals. A recent study by Norman Farb from the University of Toronto published in Cerebral Cortex, however, suggests a radically new view: there are different ways of paying attention. While the prefrontal cortex may indeed be specialized for attending to external information, older and more buried parts of the brain including the “insula” and “posterior cingulate cortex” appear to be specialized in observing our internal landscape.

Most of us prioritize externally oriented attention. When we think of attention, we often think of focusing on something outside of ourselves. We “pay attention” to work, the TV, our partner, traffic, or anything that engages our senses. However, a whole other world exists that most of us are far less aware of: an internal world, with its varied landscape of emotions, feelings, and sensations. Yet it is often the internal world that determines whether we are having a good day or not, whether we are happy or unhappy. That’s why we can feel angry despite beautiful surroundings or feel perfectly happy despite being stuck in traffic. For this reason, perhaps, this newly discovered pathway of attention may hold the key to greater well-being.

Although this internal world of feelings and sensations dominates perception in babies, it becomes increasingly foreign and distant as we learn to prioritize the outside world.  Because we don’t pay as much attention to our internal world, it often takes us by surprise. We tend to tune into our body only when it rings an alarm bell: that we’re extremely thirsty, hungry, exhausted or in pain. A flush of anger, a choked-up feeling of sadness, or the warmth of love in our chest often appears to come out of the blue.

In a collaboration with professors Zindel Segal and Adam Anderson at the University of Toronto, the study compared exteroceptive (externally focused) attention to interoceptive (internally focused) attention in the brain. Participants were instructed to either focus on the sensation of their breath (interoceptive attention) or to focus their attention on words on a screen (exteroceptive attention).  Contrary to the conventional assumption that all attention relies upon the frontal lobe of the brain, the researchers found that this was true of only exteroceptive attention; interoceptive attention used evolutionarily older parts of the brain more associated with sensation and integration of physical experience.

[div class=attrib]Read the entire article after the jump.[end-div]

The Benefits of Bilingualism

[div class=attrib]From the New York Times:[end-div]

SPEAKING two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.

This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development.

They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.

Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one marked with a blue square and the other marked with a red circle.

In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.

The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while driving.

Why does the tussle between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage stemmed primarily from an ability for inhibition that was honed by the exercise of suppressing one language system: this suppression, it was thought, would help train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.

The key difference between bilinguals and monolinguals may be more basic: a heightened ability to monitor the environment. “Bilinguals have to switch languages quite often — you may talk to your father in one language and to your mother in another language,” says Albert Costa, a researcher at the University of Pompeu Fabra in Spain. “It requires keeping track of changes around you in the same way that we monitor our surroundings when driving.” In a study comparing German-Italian bilinguals with Italian monolinguals on monitoring tasks, Mr. Costa and his colleagues found that the bilingual subjects not only performed better, but they also did so with less activity in parts of the brain involved in monitoring, indicating that they were more efficient at it.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image courtesy of Futurity.org.[end-div]

Saucepan Lids No Longer Understand Cockney

You may not “adam and eve it”, but it seems that fewer and fewer Londoners now take to their “jam jars” for a drive down the “frog and toad” to their neighborhood “rub a dub dub”.

[div class=attrib]From the Daily Telegraph:[end-div]

The slang is dying out amid London’s diverse, multi-cultural society, new research has revealed.

A study of 2,000 adults, including half from the capital, found the world famous East End lingo which has been mimicked and mocked for decades is on the wane.

The survey, commissioned by The Museum of London, revealed almost 80 per cent of Londoners do not understand phrases such as ‘donkey’s ears’ – slang for years.

Other examples of rhyming slang which baffled participants included ‘mother hubbard’, which means cupboard, and ‘bacon and eggs’ which means legs.

Significantly, Londoners’ own knowledge of the jargon is now almost as bad as those who live outside of the capital.

Yesterday, Alex Werner, head of history collections at the Museum of London, said: “For many people, Cockney rhyming slang is intrinsic to the identity of London.

“However this research suggests that the Cockney dialect itself may not be enjoying the same level of popularity.

“The origins of Cockney slang reflect the diverse, immigrant community of London’s East End in the 19th century, so perhaps it’s no surprise that other forms of slang are taking over as the cultural influences on the city change.”

The term ‘cokenay’ was used in The Reeve’s Tale, the third story in Geoffrey Chaucer’s The Canterbury Tales, to describe a child who was “tenderly brought up” and “effeminate”.

By the early 16th century the reference was commonly used as a derogatory term to describe town-dwellers. Later still, it was used to indicate those born specifically within earshot of the ringing of Bow-bell at St Mary-le-Bow church in east London.

Research by The Museum of London found that just 20 per cent of the 2,000 people questioned knew that ‘rabbit and pork’ meant talk.

It also emerged that very few of those polled understood the meaning of tommy tucker (supper), watch the custard and jelly (telly) or spend time with the teapot lids (kids).

Instead, the report found that most Londoners now have a grasp of just a couple of Cockney phrases such as tea leaf (thief), and apples and pears (stairs).

The most-used cockney slang was found to be the phrase ‘porky pies’ with 13 per cent of those questioned still using it. One in 10 used the term ‘cream crackered’.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Tesco UK.[end-div]

The End of the World for Doomsday Predictions

Apparently the world is due to end, again, this time on December 21, 2012. This latest prediction comes from certain scholars of all things ancient Mayan. Now, of course, the world did not end as per Harold Camping’s most recent predictions, so let’s hope, or not, that the Mayans get it right for the sake of humanity.

The infographic below, courtesy of xerxy, brings many of these failed predictions of death, destruction and apocalypse into living color.

Are you a Spammer?

Infographic week continues here at theDiagonal with a visual guide to amateur email spammers. You know you may be one if you’ve ever sent an email titled “Read now: this will make your Friday!” to friends, family and office colleagues. You may be a serial offender if you use the “forward this email” button more than a couple of times a day.

[div class=attrib]Infographic courtesy of OnlineITDegree.[end-div]