Category Archives: Idea Soup

The Rise of Neurobollocks

For readers of thediagonal in North America, “neurobollocks” would roughly translate to “neurobullshit”.

So what is this growing “neuro-trend”, why is there an explosion in “neuro-babble” and all things with a “neuro-” prefix, and is Malcolm Gladwell to blame?

[div class=attrib]From the New Statesman:[end-div]

An intellectual pestilence is upon us. Shop shelves groan with books purporting to explain, through snazzy brain-imaging studies, not only how thoughts and emotions function, but how politics and religion work, and what the correct answers are to age-old philosophical controversies. The dazzling real achievements of brain research are routinely pressed into service for questions they were never designed to answer. This is the plague of neuroscientism – aka neurobabble, neurobollocks, or neurotrash – and it’s everywhere.

In my book-strewn lodgings, one literally trips over volumes promising that “the deepest mysteries of what makes us who we are are gradually being unravelled” by neuroscience and cognitive psychology. (Even practising scientists sometimes make such grandiose claims for a general audience, perhaps urged on by their editors: that quotation is from the psychologist Elaine Fox’s interesting book on “the new science of optimism”, Rainy Brain, Sunny Brain, published this summer.) In general, the “neural” explanation has become a gold standard of non-fiction exegesis, adding its own brand of computer-assisted lab-coat bling to a whole new industry of intellectual quackery that affects to elucidate even complex sociocultural phenomena. Chris Mooney’s The Republican Brain: the Science of Why They Deny Science – and Reality disavows “reductionism” yet encourages readers to treat people with whom they disagree more as pathological specimens of brain biology than as rational interlocutors.

The New Atheist polemicist Sam Harris, in The Moral Landscape, interprets brain and other research as showing that there are objective moral truths, enthusiastically inferring – almost as though this were the point all along – that science proves “conservative Islam” is bad.

Happily, a new branch of the neuroscience-explains-everything genre may be created at any time by the simple expedient of adding the prefix “neuro” to whatever you are talking about. Thus, “neuroeconomics” is the latest in a long line of rhetorical attempts to sell the dismal science as a hard one; “molecular gastronomy” has now been trumped in the scientised gluttony stakes by “neurogastronomy”; students of Republican and Democratic brains are doing “neuropolitics”; literature academics practise “neurocriticism”. There is “neurotheology”, “neuromagic” (according to Sleights of Mind, an amusing book about how conjurors exploit perceptual bias) and even “neuromarketing”. Hoping it’s not too late to jump on the bandwagon, I have decided to announce that I, too, am skilled in the newly minted fields of neuroprocrastination and neuroflâneurship.

Illumination is promised on a personal as well as a political level by the junk enlightenment of the popular brain industry. How can I become more creative? How can I make better decisions? How can I be happier? Or thinner? Never fear: brain research has the answers. It is self-help armoured in hard science. Life advice is the hook for nearly all such books. (Some cram the hard sell right into the title – such as John B Arden’s Rewire Your Brain: Think Your Way to a Better Life.) Quite consistently, their recommendations boil down to a kind of neo-Stoicism, drizzled with brain-juice. In a self-congratulatory egalitarian age, you can no longer tell people to improve themselves morally. So self-improvement is couched in instrumental, scientifically approved terms.

The idea that a neurological explanation could exhaust the meaning of experience was already being mocked as “medical materialism” by the psychologist William James a century ago. And today’s ubiquitous rhetorical confidence about how the brain works papers over a still-enormous scientific uncertainty. Paul Fletcher, professor of health neuroscience at the University of Cambridge, says that he gets “exasperated” by much popular coverage of neuroimaging research, which assumes that “activity in a brain region is the answer to some profound question about psychological processes. This is very hard to justify given how little we currently know about what different regions of the brain actually do.” Too often, he tells me in an email correspondence, a popular writer will “opt for some sort of neuro-flapdoodle in which a highly simplistic and questionable point is accompanied by a suitably grand-sounding neural term and thus acquires a weightiness that it really doesn’t deserve. In my view, this is no different to some mountebank selling quacksalve by talking about the physics of water molecules’ memories, or a beautician talking about action liposomes.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Amazon.[end-div]

Power and Baldness

Since behavioral scientists and psychologists first began roaming the globe, we have come to know how and (sometimes) why visual appearance is so important in human interactions. Of course, anecdotally, humans have known this for thousands of years — that image is everything. After all, it was not Mary Kay or L’Oreal who brought us make-up but the ancient Egyptians. Yet it is still fascinating to see how markedly the perception of an individual can change with a basic alteration, and only at the surface. Witness the profound difference in characteristics that we project onto a male with male pattern baldness (wimp) when he shaves his head (tough guy). And, of course, corporations can now assign a monetary value to the shaven look. As for comb-overs, well, that is another topic entirely.

[div class=attrib]From the Wall Street Journal:[end-div]

Up for a promotion? If you’re a man, you might want to get out the clippers.

Men with shaved heads are perceived to be more masculine, dominant and, in some cases, to have greater leadership potential than those with longer locks or with thinning hair, according to a recent study out of the University of Pennsylvania’s Wharton School.

That may explain why the power-buzz look has caught on among business leaders in recent years. Venture capitalist and Netscape founder Marc Andreessen, 41 years old, DreamWorks Animation Chief Executive Jeffrey Katzenberg, 61, and Amazon.com Inc. CEO Jeffrey Bezos, 48, all sport some variant of the close-cropped look.

Some executives say the style makes them appear younger—or at least, makes their age less evident—and gives them more confidence than a comb-over or monk-like pate.

“I’m not saying that shaving your head makes you successful, but it starts the conversation that you’ve done something active,” says tech entrepreneur and writer Seth Godin, 52, who has embraced the bare look for two decades. “These are people who decide to own what they have, as opposed to trying to pretend to be something else.”

Wharton management lecturer Albert Mannes conducted three experiments to test people’s perceptions of men with shaved heads. In one of the experiments, he showed 344 subjects photos of the same men in two versions: one showing the man with hair and the other showing him with his hair digitally removed, so his head appears shaved.

In all three tests, the subjects reported finding the men with shaved heads as more dominant than their hirsute counterparts. In one test, men with shorn heads were even perceived as an inch taller and about 13% stronger than those with fuller manes. The paper, “Shorn Scalps and Perceptions of Male Dominance,” was published online, and will be included in a coming issue of the journal Social Psychological and Personality Science.

The study found that men with thinning hair were viewed as the least attractive and powerful of the bunch, a finding that tracks with other studies showing that people perceive men with typical male-pattern baldness—which affects roughly 35 million Americans—as older and less attractive. For those men, the solution could be as cheap and simple as a shave.

According to Wharton’s Dr. Mannes—who says he was inspired to conduct the research after noticing that people treated him more deferentially when he shaved off his own thinning hair—head shavers may seem powerful because the look is associated with hypermasculine images, such as the military, professional athletes and Hollywood action heroes like Bruce Willis. (Male-pattern baldness, by contrast, conjures images of “Seinfeld” character George Costanza.)

New York image consultant Julie Rath advises her clients to get closely cropped when they start thinning up top. “There’s something really strong, powerful and confident about laying it all bare,” she says, describing the thinning or combed-over look as “kind of shlumpy.”

The look is catching on. A 2010 study from razor maker Gillette, a unit of Procter & Gamble Co., found that 13% of respondents said they shaved their heads, citing reasons as varied as fashion, sports and already thinning hair, according to a company spokesman. HeadBlade Inc., which sells head-shaving accessories, says revenues have grown 30% a year in the past decade.

Shaving his head gave 60-year-old Stephen Carley, CEO of restaurant chain Red Robin Gourmet Burgers Inc., a confidence boost when he was working among 20-somethings at tech start-ups in the 1990s. With his thinning hair shorn, “I didn’t feel like the grandfather in the office anymore.” He adds that the look gave him “the impression that it was much harder to figure out how old I was.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Comb-over patent, 1977. Courtesy of Wikipedia.[end-div]

QTWTAIN: Are there Nazis living on the moon?

QTWTAIN is a Twitterspeak acronym for a Question To Which The Answer Is No.

QTWTAINs are a relatively recent journalistic phenomenon. They are often used as headlines to great effect by media organizations to grab a reader’s attention. But importantly, QTWTAINs imply that something ridiculous is true — by posing a headline as a question, no evidence seems to be required. Here’s an example of a recent headline:

“Europe: Are there Nazis living on the moon?”

Author and journalist John Rentoul has done all connoisseurs of QTWTAINs a great service by collecting an outstanding selection from hundreds of his favorites into a new book, Questions to Which the Answer is No. Rentoul tells his story in the excerpt below.

[div class=attrib]From the Independent:[end-div]

I have an unusual hobby. I collect headlines in the form of questions to which the answer is no. This is a specialist art form that has long been a staple of “prepare to be amazed” journalism. Such questions allow newspapers, television programmes and websites to imply that something preposterous is true without having to provide the evidence.

If you see a question mark after a headline, ask yourself why it is not expressed as a statement, such as “Church of England threatened by excess of cellulite” or “Revealed: Marlene Dietrich plotted to murder Hitler” or, “This penguin is a communist”.

My collection started with a bishop, a grudge against Marks & Spencer and a theft in broad daylight. The theft was carried out by me: I had been inspired by Oliver Kamm, a friend and hero of mine, who wrote about Great Historical Questions to Which the Answer is No on his blog. Then I came across this long headline in Britain’s second-best-selling newspaper three years ago: “He’s the outcast bishop who denies the Holocaust – yet has been welcomed back by the Pope. But are Bishop Williamson’s repugnant views the result of a festering grudge against Marks & Spencer?” Thus was an internet meme born.

Since then readers of The Independent blog and people on Twitter with nothing better to do have supplied me with a constant stream of QTWTAIN. If this game had a serious purpose, which it does not, it would be to make fun of conspiracy theories. After a while, a few themes recurred: flying saucers, yetis, Jesus, the murder of John F Kennedy, the death of Marilyn Monroe and reincarnation.

An enterprising PhD student could use my series as raw material for a thesis entitled: “A Typology of Popular Irrationalism in Early 21st-Century Media”. But that would be to take it too seriously. The proper use of the series is as a drinking game, to be followed by a rousing chorus of “Jerusalem”, which consists largely of questions to which the answer is no.

My only rule in compiling the series is that the author or publisher of the question has to imply that the answer is yes (“Does Nick Clegg Really Expect Us to Accept His Apology?” for example, would be ruled out of order). So far I have collected 841 of them, and the best have been selected for a book published this week. I hope you like them.

Is the Loch Ness monster on Google Earth?

Daily Telegraph, 26 August 2009

A picture of something that actually looked like a giant squid had been spotted by a security guard as he browsed the digital planet. A similar question had been asked by the Telegraph six months earlier, on 19 February, about a different picture: “Has the Loch Ness Monster emigrated to Borneo?”

Would Boudicca have been a Liberal Democrat?

This one is cheating, because Paul Richards, who asked it in an article in Progress magazine, 12 March 2010, did not imply that the answer was yes. He was actually making a point about the misuse of historical conjecture, comparing Douglas Carswell, the Conservative MP, who suggested that the Levellers were early Tories, to the spiritualist interviewed by The Sun in 1992, who was asked how Winston Churchill, Joseph Stalin, Karl Marx and Chairman Mao would have voted (Churchill was for John Major; the rest for Neil Kinnock, naturally).

Is Tony Blair a Mossad agent?

A question asked by Peza, who appears to be a cat, on an internet forum on 9 April 2010. One reader had a good reply: “Peza, are you drinking that vodka-flavoured milk?”

Could Angelina Jolie be the first female US President?

Daily Express, 24 June 2009

An awkward one this, because one of my early QTWTAIN was “Is the Express a newspaper?” I had formulated an arbitrary rule that its headlines did not count. But what are rules for, if not for changing?

[div class=attrib]Read the entire article after the jump?[end-div]

[div class=attrib]Book Cover: Questions to Which the Answer is No, by John Rentoul. Courtesy of the Independent / John Rentoul.[end-div]

Brilliant! The Brits are Coming

Following decades of one-way cultural osmosis from the United States to the UK, it seems that the trend may be reversing. Well, at least in the linguistic department. Although it may be a while before “blimey” enters the American lexicon, other words and phrases, such as “spot on”, “chat up”, “ginger” to describe hair color, and “gormless”, are steadily crossing the Atlantic.

[div class=attrib]From the BBC:[end-div]

There is little that irks British defenders of the English language more than Americanisms, which they see creeping insidiously into newspaper columns and everyday conversation. But bit by bit British English is invading America too.

“Spot on – it’s just ludicrous!” snaps Geoffrey Nunberg, a linguist at the University of California at Berkeley.

“You are just impersonating an Englishman when you say spot on.”

“Will do – I hear that from Americans. That should be put into quarantine,” he adds.

And don’t get him started on the chattering classes – its overtones of a distinctly British class system make him quiver.

But not everyone shares his revulsion at the drip, drip, drip of Britishisms – to use an American term – crossing the Atlantic.

“I enjoy seeing them,” says Ben Yagoda, professor of English at the University of Delaware, and author of the forthcoming book, How to Not Write Bad.

“It’s like a birdwatcher. If I find an American saying one, it makes my day!”

Last year Yagoda set up a blog dedicated to spotting the use of British terms in American English.

So far he has found more than 150 – from cheeky to chat-up via sell-by date, and the long game – an expression which appears to date back to 1856, and comes not from golf or chess, but the card game whist. President Barack Obama has used it in at least one speech.

Yagoda notices changes in pronunciation too – for example his students sometimes use “that sort of London glottal stop”, dropping the T in words like “important” or “Manhattan”.

Kory Stamper, Associate Editor for Merriam-Webster, whose dictionaries are used by many American publishers and news organisations, agrees that more and more British words are entering the American vocabulary.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ngram graph showing online usage of the phrase “chat up”. Courtesy of Google / BBC.[end-div]

Childhood Injuries on the Rise: Blame Parental Texting

The long-term downward trend in the number of injuries to young children has ended. Sadly, urgent care and emergency room doctors are now seeing more children aged 0-14 years with unintentional injuries. While the exact causes are yet to be determined, there is a growing body of anecdotal evidence that points to distraction among parents and supervisors — it’s the texting, stupid!

The great irony is that should your child suffer an injury while you are using your smartphone, you’ll be able to contact the emergency room much more quickly — courtesy of the very same smartphone.

[div class=attrib]From the Wall Street Journal:[end-div]

One sunny July afternoon in a San Francisco park, tech recruiter Phil Tirapelle was tapping away on his cellphone while walking with his 18-month-old son. As he was texting his wife, his son wandered off in front of a policeman who was breaking up a domestic dispute.

“I was looking down at my mobile, and the police officer was looking forward,” and his son “almost got trampled over,” he says. “One thing I learned is that multitasking makes you dumber.”

Yet a few minutes after the incident, he still had his phone out. “I’m a hypocrite. I admit it,” he says. “We all are.”

Is high-tech gadgetry diminishing the ability of adults to give proper supervision to very young children? Faced with an unending litany of newly proclaimed threats to their kids, harried parents might well roll their eyes at this suggestion. But many emergency-room doctors are worried: They see the growing use of hand-held electronic devices as a plausible explanation for the surprising reversal of a long slide in injury rates for young children. There have even been a few extreme cases of death and near drowning.

Nonfatal injuries to children under age five rose 12% between 2007 and 2010, after falling for much of the prior decade, according to the most recent data from the Centers for Disease Control and Prevention, based on emergency-room records. The number of Americans 13 and older who own a smartphone such as an iPhone or BlackBerry has grown from almost 9 million in mid-2007, when Apple introduced its device, to 63 million at the end of 2010 and 114 million in July 2012, according to research firm comScore.

Child-safety experts say injury rates had been declining since at least the 1970s, thanks to everything from safer playgrounds to baby gates on staircases to fences around backyard swimming pools. “It was something we were always fairly proud of,” says Dr. Jeffrey Weiss, a pediatrician at Phoenix Children’s Hospital who serves on an American Academy of Pediatrics working group for injury, violence and poison prevention. “The injuries were going down and down and down.” The recent uptick, he says, is “pretty striking.”

Childhood-injury specialists say there appear to be no formal studies or statistics to establish a connection between so-called device distraction and childhood injury. “What you have is an association,” says Dr. Gary Smith, founder and director of the Center for Injury Research and Policy of the Research Institute at Nationwide Children’s Hospital. “Being able to prove causality is the issue…. It certainly is a question that begs to be asked.”

It is well established that using a smartphone while driving or even crossing a street increases the risk of accident. More than a dozen pediatricians, emergency-room physicians, academic researchers and police interviewed by The Wall Street Journal say that a similar factor could be at play in injuries to young children.

“It’s very well understood within the emergency-medicine community that utilizing devices—hand-held devices—while you are assigned to watch your kids—that resulting injuries could very well be because you are utilizing those tools,” says Dr. Wally Ghurabi, medical director of the emergency center at the Santa Monica-UCLA Medical Center and Orthopaedic Hospital.

Adds Dr. Rahul Rastogi, an emergency-room physician at Kaiser Permanente in Oregon: “We think we’re multitasking and not really feeling like we are truly distracted. But in reality we are.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Declining and Disparate Life Expectancy in the U.S.

Social scientists are not certain of the causes, but the sobering numbers speak for themselves: life expectancy for white women without a high school diploma is 74 years, while that for white women with at least a college degree is 84 years; for white men, the comparable life expectancies are 66 years versus 80 years.

[div class=attrib]From the New York Times:[end-div]

For generations of Americans, it was a given that children would live longer than their parents. But there is now mounting evidence that this enduring trend has reversed itself for the country’s least-educated whites, an increasingly troubled group whose life expectancy has fallen by four years since 1990.

Researchers have long documented that the most educated Americans were making the biggest gains in life expectancy, but now they say mortality data show that life spans for some of the least educated Americans are actually contracting. Four studies in recent years identified modest declines, but a new one that looks separately at Americans lacking a high school diploma found disturbingly sharp drops in life expectancy for whites in this group. Experts not involved in the new research said its findings were persuasive.

The reasons for the decline remain unclear, but researchers offered possible explanations, including a spike in prescription drug overdoses among young whites, higher rates of smoking among less educated white women, rising obesity, and a steady increase in the number of the least educated Americans who lack health insurance.

The steepest declines were for white women without a high school diploma, who lost five years of life between 1990 and 2008, said S. Jay Olshansky, a public health professor at the University of Illinois at Chicago and the lead investigator on the study, published last month in Health Affairs. By 2008, life expectancy for black women without a high school diploma had surpassed that of white women of the same education level, the study found.

White men lacking a high school diploma lost three years of life. Life expectancy for both blacks and Hispanics of the same education level rose, the data showed. But blacks over all do not live as long as whites, while Hispanics live longer than both whites and blacks.

“We’re used to looking at groups and complaining that their mortality rates haven’t improved fast enough, but to actually go backward is deeply troubling,” said John G. Haaga, head of the Population and Social Processes Branch of the National Institute on Aging, who was not involved in the new study.

The five-year decline for white women rivals the catastrophic seven-year drop for Russian men in the years after the collapse of the Soviet Union, said Michael Marmot, director of the Institute of Health Equity in London.

[div class=attrib]Read the entire article after the jump.[end-div]

Your Proximity to Fast Food

A striking map shows how close or far you are from a McDonald’s. If you love fast food, then the Eastern U.S. is the place for you. On the other hand, if you crave McDistance, then you may want to move to the Nevada desert, the wilds of Idaho, the Rocky Mountains or the plains of the Dakotas. The map is based on 2009 data.

[div class=attrib]Read more details about this cool map after the jump.[end-div]

[div class=attrib]Map courtesy of Guardian / Stephen Von Worley, Data Pointed.[end-div]

Bicyclist Tribes

If you ride a bike (as in, bicycle), you will find that you probably belong to a specific tribe of bicyclists — and you’re being observed by bicyclist watchers! Read on to find out if you’re a Roadie or a Beach Cruiser or if you belong to one of the other tribes. Of course, some are quite simply in an exclusive “maillot jaune” tribe of their own.

[div class=attrib]From Wall Street Journal:[end-div]

Bird watching is a fine hobby for those with the time and inclination to traipse into nature, but the thrill of spotting different species of bicyclists can be just as rewarding. Why travel to Argentina to find a black-breasted plovercrest when one can spy a similarly plumed “Commuter” at the neighborhood Starbucks? No need to squint into binoculars or get up at the crack of dawn, either—bicyclists are out and about at all hours.

Bicyclist-watching has become much more interesting in recent years as the number of two-wheeled riders has grown. High gas prices, better bicycles, concern about the environment, looking cool—they’re all contributing factors. And with proliferation has come specialization. People don’t just “ride” bikes anymore: They commute or race or cruise, with each activity spawning corresponding gear and attitudes. Those in the field categorize cyclists into groups known as “bike tribes.” Instead of ducks, hawks and water fowl, bicyclologists might speak of Roadies, Cyclocrossers and Beach Cruisers.

To identify a bike tribe, note distinguishing marks, patterns and habits. Start with the dominant color and materials of a cyclist’s clothing. For example, garish jerseys and Lycra shorts indicate a Roadie, while padded gloves, mud-spattered jackets and black cleats are the territory of Cyclocrossers. Migration patterns are revealing. Observe the speed of travel and the treatment of other cyclists. Does the cyclist insist on riding amid cars even when wide bicycle paths are available? Probably a Roadie. Is the cyclist out in the pouring rain? Sounds like a Commuter. The presence of juveniles is telling, too; only a few tribes travel with offspring.

The Roadie

No bike tribe is more common in the United States than the Roadie. Their mien is sportiness and “performance” their goal. Roadies love passing other bike riders; they get annoyed when they have to dodge pedestrians walking with dogs or small children; they often ride in the middle of the road. They tend to travel in packs and spend time in small bicycle shops.

The Commuter

Commuters view a bicycle first and foremost as a means of transportation. They don’t ride without a destination. It’s easy to confuse Commuters with other tribes because others will sometimes use their bicycles to get to work. Even more challenging, Commuters come in all shapes and sizes and ride all different types of bicycles. But there are some distinguishing behaviors. Commuters almost always travel alone. They tend to wear drabber clothing than other tribes. Some adopt a smug, I’m-saving-the-world attitude, which is apparent in the way they glare at motorists. Commuters are most visible during rush hour.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Bradley Wiggins, Winner 2012 Tour de France.[end-div]

Social Media and Vanishing History

Social media is great for notifying members in one’s circle of events in the here and now. Of course, most events turn out to be rather trivial, of the “what I ate for dinner” kind. However, social media also has a role in spreading word of more momentous social and political events; the Arab Spring comes to mind.

But, while Twitter and its peers may be a boon for those who live in the present moment and need to transmit their current status, it seems that our social networks are letting go of the past. Will history become lost and irrelevant to the Twitter generation?

A terrifying thought.

[div class=attrib]From Technology Review:[end-div]

On 25 January 2011, a popular uprising began in Egypt that  led to the overthrow of the country’s brutal president and to the first truly free elections. One of the defining features of this uprising and of others in the Arab Spring was the way people used social media to organise protests and to spread news.

Several websites have since begun the task of curating this content, which is an important record of events and how they unfolded. That led Hany SalahEldeen and Michael Nelson at Old Dominion University in Norfolk, Virginia, to take a deeper look at the material to see how much of the shared content was still live.

What they found has serious implications. SalahEldeen and Nelson say a significant proportion of the websites that this social media points to has disappeared. And the same pattern occurs for other culturally significant events, such as the H1N1 virus outbreak, Michael Jackson’s death and the Syrian uprising.

In other words, our history, as recorded by social media, is slowly leaking away.

Their method is straightforward. SalahEldeen and Nelson looked for tweets on six culturally significant events that occurred between June 2009 and March 2012. They then filtered the URLs these tweets pointed to and checked to see whether the content was still available on the web, either in its original form or in an archived form.

They found that the older the social media, the more likely its content was to be missing. In fact, they found an almost linear relationship between time and the percentage lost.

The numbers are startling. They say that 11 per cent of the social media content had disappeared within a year and 27 per cent within 2 years. Beyond that, SalahEldeen and Nelson say the world loses 0.02 per cent of its culturally significant social media material every day.

That’s a sobering thought.
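As a rough illustration of the kind of link-liveness check the researchers describe, here is a minimal Python sketch. It is not their actual pipeline; the sample URLs are hypothetical stand-ins, and it assumes the requests library and the Wayback Machine availability endpoint.

    import requests  # assumption: the requests library is installed

    # Hypothetical stand-ins for URLs extracted from old tweets (not the study's data).
    shared_urls = [
        "http://example.com/egypt-protest-photo",
        "http://example.com/h1n1-coverage",
    ]

    def is_live(url):
        """Return True if the URL still resolves to a successful response."""
        try:
            resp = requests.head(url, allow_redirects=True, timeout=10)
            return resp.status_code < 400
        except requests.RequestException:
            return False

    def is_archived(url):
        """Ask the Wayback Machine availability API whether an archived copy exists."""
        resp = requests.get("https://archive.org/wayback/available",
                            params={"url": url}, timeout=10)
        return bool(resp.json().get("archived_snapshots"))

    # Content counts as lost only if it is neither live nor archived.
    missing = [u for u in shared_urls if not is_live(u) and not is_archived(u)]
    print("lost: {:.0%}".format(len(missing) / len(shared_urls)))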

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Movie poster for the 2002 film “The Man Without a Past” (Finnish: Mies vailla menneisyyttä), a Finnish comedy-drama directed by Aki Kaurismäki. Courtesy of Wikipedia.[end-div]

The How and Why of Supersized Sodas

Apparently the Great Depression in the United States is to blame for the mega-sized soda drinks that many now consume on a daily basis, except in New York City of course (on September 13, 2012, the city’s Board of Health voted to ban the sale of sugary drinks larger than 16 oz in restaurants).

[div class=attrib]From Wired:[end-div]

The New York City Board of Health voted Thursday to ban the sale of sugary soft drinks larger than 16 ounces at restaurants, a move that has sparked intense debate between public health advocates and beverage industry lobbyists. When did sodas get so big in the first place?

In the 1930s. At the beginning of the Great Depression, the 6-ounce Coca-Cola bottle was the undisputed king of soft drinks. The situation began to change in 1934, when the smallish Pepsi-Cola company began selling 12-ounce bottles for the same nickel price as 6 ounces of Coke. The move was brilliant. Distribution, bottling, and advertising accounted for most of the company’s costs, so adding six free ounces hardly mattered. In addition, the 12-ounce size enabled Pepsi-Cola to use the same bottles as beer-makers, cutting container costs. The company pursued a similar strategy at the nation’s soda fountains, selling enough syrup to make 10 ounces for the same price as 6 ounces’ worth of Coca-Cola. Pepsi sales soared, and the company soon produced a jingle about their supersize bottles: “Pepsi-Cola hits the spot, 12 full ounces, that’s a lot. Twice as much for a nickel, too. Pepsi-Cola is the drink for you.” Pepsi’s value-for-volume gambit kicked off a decades-long industry trend.

Coke was slow to respond at first, according to author Mark Pendergrast, who chronicled the company’s history in For God, Country, and Coca-Cola: The Definitive History of the Great American Soft Drink and the Company That Makes It. President Robert Woodruff held firm to the 6-ounce size, even as his subordinates warned him that Pepsi was onto something. By the 1950s, industry observers predicted that Coca-Cola might lose its dominant position, and top company executives were threatening to resign if Woodruff didn’t bend on bottle size. In 1955, 10- and 12-ounce “King Size” Coke bottles hit the market, along with a 26-ounce “Family Size.” Although the new flexibility helped Coca-Cola regain its footing, the brave new world of giant bottles was hard to accept for some. Company vice president Ed Forio noted that “bringing out another bottle was like being unfaithful to your wife.”

The trend toward larger sizes occurred in all sectors of the market. When Coca-Cola partnered with McDonald’s in the 1950s, the original fountain soda at the restaurant chain more closely approximated the classic Coke bottle at seven ounces. The largest cup size grew to 16 ounces in the 1960s and hit 21 ounces by 1974.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Big Gulp. Courtesy of Chicago Tribune.[end-div]

GDP of States Versus Countries

A nifty or neat (depending upon your location) map courtesy of Frank Jacobs over at Strange Maps. This one shows countries in place of U.S. states where the GDP (Gross Domestic Product) is similar. For instance, Canada replaces Texas in the United States map since Canada’s entire GDP matches the economy of Texas. The map is based on data for 2007.

[div class=attrib]Read the entire article after the jump.[end-div]

Social Outcast = Creative Wunderkind

A study forthcoming in the Journal of Experimental Psychology correlates social ostracism and rejection with creativity. Businesses seeking creative individuals take note: perhaps your next great hire is a social misfit.

[div class=attrib]From Fast Company:[end-div]

Are you a recovering high school geek who still can’t get the girl? Are you always the last person picked for your company’s softball team? When you watched Office Space, did you feel a special kinship to the stapler-obsessed Milton Waddams? If you answered yes to any of these questions, do not despair. Researchers at Johns Hopkins and Cornell have recently found that the socially rejected might also be society’s most creatively powerful people.

The study, which is forthcoming in the Journal of Experimental Psychology, is called “Outside Advantage: Can Social Rejection Fuel Creative Thought?” It found that people who already have a strong “self-concept”–i.e. are independently minded–become creatively fecund in the face of rejection. “We were inspired by the stories of highly creative individuals like Steve Jobs and Lady Gaga,” says the study’s lead author, Hopkins professor Sharon Kim. “And we wanted to find a silver lining in all the popular press about bullying. There are benefits to being different.”

The study consisted of 200 Cornell students and set out to identify the relationship between the strength of an individual’s self-concept and their level of creativity. First, Kim tested the strength of each student’s self-concept by assessing his or her “need for uniqueness.” In other words, how important it is for each individual to feel separate from the crowd. Next, students were told that they’d either been included in or rejected from a hypothetical group project. Finally, they were given a simple, but creatively demanding, task: Draw an alien from a planet unlike earth.

If you’re curious about your own general creativity level (at least by the standards of Kim’s study), go ahead and sketch an alien right now…Okay, got your alien? Now give yourself a point for every non-human characteristic you’ve included in the drawing. If your alien has two eyes between the nose and forehead, you don’t get any points. If your alien has two eyes below the mouth, or three eyes that breathe fire, you get a point. If your alien doesn’t even have eyes or a mouth, give yourself a bunch of points. In short, the more dissimilar your alien is to a human, the higher your creativity score.

Kim found that people with a strong self-concept who were rejected produced more creative aliens than people from any other group, including people with a strong self-concept who were accepted. “If you’re in a mindset where you don’t care what others think,” she explained, “you’re open to ideas that you may not be open to if you’re concerned about what other people are thinking.”

This may seem like an obvious conclusion, but Kim pointed out that most companies don’t encourage the kind of freedom and independence that readers of Fast Company probably expect. “The benefits of being different is not a message everyone is getting,” she said.

But Kim also discovered something unexpected. People with a weak self-concept could be influenced toward a stronger one and, thus, toward a more creative mindset. In one part of the study, students were asked to read a short story in which all the pronouns were either singular (I/me) or plural (we/us) and then to circle all the pronouns. They were then “accepted” or “rejected” and asked to draw their aliens.

Kim found that all of the students who read stories with singular pronouns and were rejected produced more creative aliens. Even the students who originally had a weaker self-concept. Once these group-oriented individuals focused on individual-centric prose, they became more individualized themselves. And that made them more creative.

This finding doesn’t prove that you can teach someone to have a strong self-concept but it suggests that you can create a professional environment that facilitates independent and creative thought.

[div class=attrib]Read the entire article after the jump.[end-div]

Work as Punishment (and For the Sake of Leisure)

Gary Gutting, professor of philosophy at the University of Notre Dame, reminds us that work is punishment for Adam’s sin, according to the Book of Genesis. No doubt, many who hold other faiths, as well as those who don’t, may tend to agree with this basic notion.

So, what on earth is work for?

Gutting goes on to remind us that Aristotle and Bertrand Russell had it right: that work is for the sake of leisure.

[div class=attrib]From the New York Times:[end-div]

Is work good or bad?  A fatuous question, it may seem, with unemployment such a pressing national concern.  (Apart from the names of the two candidates, “jobs” was the politically relevant word most used by speakers at the Republican and Democratic conventions.) Even apart from current worries, the goodness of work is deep in our culture. We applaud people for their work ethic, judge our economy by its productivity and even honor work with a national holiday.

But there’s an underlying ambivalence: we celebrate Labor Day by not working, the Book of Genesis says work is punishment for Adam’s sin, and many of us count the days to the next vacation and see a contented retirement as the only reason for working.

We’re ambivalent about work because in our capitalist system it means work-for-pay (wage-labor), not for its own sake.  It is what philosophers call an instrumental good, something valuable not in itself but for what we can use it to achieve.  For most of us, a paying job is still utterly essential — as masses of unemployed people know all too well.  But in our economic system, most of us inevitably see our work as a means to something else: it makes a living, but it doesn’t make a life.

What, then, is work for? Aristotle has a striking answer: “we work to have leisure, on which happiness depends.” This may at first seem absurd. How can we be happy just doing nothing, however sweetly (dolce far niente)?  Doesn’t idleness lead to boredom, the life-destroying ennui portrayed in so many novels, at least since “Madame Bovary”?

Everything depends on how we understand leisure. Is it mere idleness, simply doing nothing?  Then a life of leisure is at best boring (a lesson of Voltaire’s “Candide”), and at worst terrifying (leaving us, as Pascal says, with nothing to distract from the thought of death).  No, the leisure Aristotle has in mind is productive activity enjoyed for its own sake, while work is done for something else.

We can pass by for now the question of just what activities are truly enjoyable for their own sake — perhaps eating and drinking, sports, love, adventure, art, contemplation? The point is that engaging in such activities — and sharing them with others — is what makes a good life. Leisure, not work, should be our primary goal.

Bertrand Russell, in his classic essay “In Praise of Idleness,” agrees. “A great deal of harm,” he says, “is being done in the modern world by belief in the virtuousness of work.” Instead, “the road to happiness and prosperity lies in an organized diminution of work.” Before the technological breakthroughs of the last two centuries, leisure could be only “the prerogative of small privileged classes,” supported by slave labor or a near equivalent. But this is no longer necessary: “The morality of work is the morality of slaves, and the modern world has no need of slavery.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bust of Aristotle. Marble, Roman copy after a Greek bronze original by Lysippos from 330 BC; the alabaster mantle is a modern addition. Courtesy of Wikipedia.[end-div]

Innovation Before Its Time

Product-driven companies, inventors from all backgrounds and market researchers have long studied how some innovations take off while others fizzle. So, why do some innovations gain traction? Given two similar but competing inventions, what factors lead to one eclipsing the other? Why do some pioneering ideas and inventions fail, only to succeed in the hands of a different instigator years, sometimes decades, later? Answers to these questions would undoubtedly make many inventors household names, but as is the case in most human endeavors, the process of innovation is murky and more of an art than a science.

Author and columnist Matt Ridley offers some possible answers to the conundrum.

[div class=attrib]From the Wall Street Journal:[end-div]

Bill Moggridge, who invented the laptop computer in 1982, died last week. His idea of using a hinge to attach a screen to a keyboard certainly caught on big, even if the first model was heavy, pricey and equipped with just 340 kilobytes of memory. But if Mr. Moggridge had never lived, there is little doubt that somebody else would have come up with the idea.

The phenomenon of multiple discovery is well known in science. Innovations famously occur to different people in different places at the same time. Whether it is calculus (Newton and Leibniz), or the planet Neptune (Adams and Le Verrier), or the theory of natural selection (Darwin and Wallace), or the light bulb (Edison, Swan and others), the history of science is littered with disputes over bragging rights caused by acts of simultaneous discovery.

As Kevin Kelly argues in his book “What Technology Wants,” there is an inexorability about technological evolution, expressed in multiple discovery, that makes it look as if technological innovation is an autonomous process with us as its victims rather than its directors.

Yet some inventions seem to have occurred to nobody until very late. The wheeled suitcase is arguably such a, well, case. Bernard Sadow applied for a patent on wheeled baggage in 1970, after a Eureka moment when he was lugging his heavy bags through an airport while a local worker effortlessly pushed a large cart past. You might conclude that Mr. Sadow was decades late. There was little to stop his father or grandfather from putting wheels on bags.

Mr. Sadow’s bags ran on four wheels, dragged on a lead like a dog. Seventeen years later a Northwest Airlines pilot, Robert Plath, invented the idea of two wheels on a suitcase held vertically, plus a telescopic handle to pull it with. This “Rollaboard,” now ubiquitous, also feels as if it could have been invented much earlier.

Or take the can opener, invented in the 1850s, eight decades after the can. Early 19th-century soldiers and explorers had to make do with stabbing bayonets into food cans. “Why doesn’t somebody come up with a wheeled cutter?” they must have muttered (or not) as they wrenched open the cans.

Perhaps there’s something that could be around today but hasn’t been invented and that will seem obvious to future generations. Or perhaps not. It’s highly unlikely that brilliant inventions are lying on the sidewalk ignored by the millions of entrepreneurs falling over each other to innovate. Plenty of terrible ideas are tried every day.

Understanding why inventions take so long may require mentally revisiting a long-ago time. For a poorly paid Napoleonic soldier who already carried a decent bayonet, adding a can opener to his limited kitbag was probably a waste of money and space. Indeed, going back to wheeled bags, if you consider the abundance of luggage porters with carts in the 1960s, the ease of curbside drop-offs at much smaller airports and the heavy iron casters then available, 1970 seems about the right date for the first invention of rolling luggage.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Joseph Swan, inventor of the incandescent light bulb, which was first publicly demonstrated on 18 December 1878. Courtesy of Wikipedia.[end-div]

Let the Wealthy Fund Innovation?

Nathan Myhrvold, former CTO of Microsoft, suggests that the wealthy should “think big” by funding large-scale and long-term innovation. Arguably, this would be a much preferred alternative to the wealthy using their millions to gain (more) political influence in much of the West, especially the United States. Myhrvold is now a backer of TerraPower, a nuclear energy startup.

[div class=attrib]From Technology Review:[end-div]

For some technologists, it’s enough to build something that makes them financially successful. They retire happily. Others stay with the company they founded for years and years, enthralled with the platform it gives them. Think how different the work Steve Jobs did at Apple in 2010 was from the innovative ride he took in the 1970s.

A different kind of challenge is to start something new. Once you’ve made it, a new venture carries some disadvantages. It will be smaller than your last company, and more frustrating. Startups require a level of commitment not everyone is ready for after tasting success. On the other hand, there’s no better time than that to be an entrepreneur. You’re not gambling your family’s entire future on what happens next. That is why many accomplished technologists are out in the trenches, leading and funding startups in unprecedented areas.

Jeff Bezos has Blue Origin, a company that builds spaceships. Elon Musk has Tesla, an electric-car company, and SpaceX, another rocket-ship company. Bill Gates took on big challenges in the developing world—combating malaria, HIV, and poverty. He is also funding inventive new companies at the cutting edge of technology. I’m involved in some of them, including TerraPower, which we formed to commercialize a promising new kind of nuclear reactor.

There are few technologies more daunting to inventors (and investors) than nuclear power. On top of the logistics, science, and engineering, you have to deal with the regulations and politics. In the 1970s, much of the world became afraid of nuclear energy, and last year’s events in Fukushima haven’t exactly assuaged those fears.

So why would any rational group of people create a nuclear power company? Part of the reason is that Bill and I have been primed to think long-term. We have the experience and resources to look for game-changing ideas—and the confidence to act when we think we’ve found one. Other technologists who fund ambitious projects have similar motivations. Elon Musk and Jeff Bezos are literally reaching for the stars because they believe NASA and its traditional suppliers can’t innovate at the same rate they can.

In the next few decades, we need more technology leaders to reach for some very big advances. If 20 of us were to try to solve energy problems—with carbon capture and storage, or perhaps some other crazy idea—maybe one or two of us would actually succeed. If nobody tries, we’ll all certainly fail.

I believe the world will need to rely on nuclear energy. A looming energy crisis will force us to rework the underpinnings of our energy economy. That happened last in the 19th century, when we moved at unprecedented scale toward gas and oil. The 20th century didn’t require a big switcheroo, but looking into the 21st century, it’s clear that we have a much bigger challenge.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Nathan Myhrvold. Courtesy of AllThingsD.[end-div]

Building Character in Kids

Many parents have known this for a long time: it takes more than a stellar IQ, SAT or ACT score to make a well-rounded kid. Arguably, there are many more important traits that never feature on these quantitative tests. Such qualities as leadership, curiosity, initiative, perseverance, motivation, courage and empathy come to mind.

An excerpt from Paul Tough’s book, “How Children Succeed: Grit, Curiosity and the Hidden Power of Character”, appears below.

[div class=attrib]From the Wall Street Journal:[end-div]

We are living through a particularly anxious moment in the history of American parenting. In the nation’s big cities these days, the competition among affluent parents over slots in favored preschools verges on the gladiatorial. A pair of economists from the University of California recently dubbed this contest for early academic achievement the “Rug Rat Race,” and each year, the race seems to be starting earlier and growing more intense.

At the root of this parental anxiety is an idea you might call the cognitive hypothesis. It is the belief, rarely spoken aloud but commonly held nonetheless, that success in the U.S. today depends more than anything else on cognitive skill—the kind of intelligence that gets measured on IQ tests—and that the best way to develop those skills is to practice them as much as possible, beginning as early as possible.

There is something undeniably compelling about the cognitive hypothesis. The world it describes is so reassuringly linear, such a clear case of inputs here leading to outputs there. Fewer books in the home means less reading ability; fewer words spoken by your parents means a smaller vocabulary; more math work sheets for your 3-year-old means better math scores in elementary school. But in the past decade, and especially in the past few years, a disparate group of economists, educators, psychologists and neuroscientists has begun to produce evidence that calls into question many of the assumptions behind the cognitive hypothesis.

What matters most in a child’s development, they say, is not how much information we can stuff into her brain in the first few years of life. What matters, instead, is whether we are able to help her develop a very different set of qualities, a list that includes persistence, self-control, curiosity, conscientiousness, grit and self-confidence. Economists refer to these as noncognitive skills, psychologists call them personality traits, and the rest of us often think of them as character.

If there is one person at the hub of this new interdisciplinary network, it is James Heckman, an economist at the University of Chicago who in 2000 won the Nobel Prize in economics. In recent years, Mr. Heckman has been convening regular invitation-only conferences of economists and psychologists, all engaged in one form or another with the same questions: Which skills and traits lead to success? How do they develop in childhood? And what kind of interventions might help children do better?

The transformation of Mr. Heckman’s career has its roots in a study he undertook in the late 1990s on the General Educational Development program, better known as the GED, which was at the time becoming an increasingly popular way for high-school dropouts to earn the equivalent of high-school diplomas. The GED’s growth was founded on a version of the cognitive hypothesis, on the belief that what schools develop, and what a high-school diploma certifies, is cognitive skill. If a teenager already has the knowledge and the smarts to graduate from high school, according to this logic, he doesn’t need to waste his time actually finishing high school. He can just take a test that measures that knowledge and those skills, and the state will certify that he is, legally, a high-school graduate, as well-prepared as any other high-school graduate to go on to college or other postsecondary pursuits.

Mr. Heckman wanted to examine this idea more closely, so he analyzed a few large national databases of student performance. He found that in many important ways, the premise behind the GED was entirely valid. According to their scores on achievement tests, GED recipients were every bit as smart as high-school graduates. But when Mr. Heckman looked at their path through higher education, he found that GED recipients weren’t anything like high-school graduates. At age 22, Mr. Heckman found, just 3% of GED recipients were either enrolled in a four-year university or had completed some kind of postsecondary degree, compared with 46% of high-school graduates. In fact, Heckman discovered that when you consider all kinds of important future outcomes—annual income, unemployment rate, divorce rate, use of illegal drugs—GED recipients look exactly like high-school dropouts, despite the fact that they have earned this supposedly valuable extra credential, and despite the fact that they are, on average, considerably more intelligent than high-school dropouts.

These results posed, for Mr. Heckman, a confounding intellectual puzzle. Like most economists, he had always believed that cognitive ability was the single most reliable determinant of how a person’s life would turn out. Now he had discovered a group—GED holders—whose good test scores didn’t seem to have any positive effect on their eventual outcomes. What was missing from the equation, Mr. Heckman concluded, were the psychological traits, or noncognitive skills, that had allowed the high-school graduates to make it through school.

So what can parents do to help their children develop skills like motivation and perseverance? The reality is that when it comes to noncognitive skills, the traditional calculus of the cognitive hypothesis—start earlier and work harder—falls apart. Children can’t get better at overcoming disappointment just by working at it for more hours. And they don’t lag behind in curiosity simply because they didn’t start doing curiosity work sheets at an early enough age.

[div class=attrib]Read the entire article after the jump.[end-div]

Corporate R&D meets Public Innovation

As corporate purse strings have drawn tighter, some companies have looked for innovation beyond the office cubicle.

[div class=attrib]From Technology Review:[end-div]

Where does innovation come from? For one answer, consider the work of MIT professor Eric von Hippel, who has calculated that ordinary U.S. consumers spend $20 billion in time and money trying to improve on household products—for example, modifying a dog-food bowl so it doesn’t slide on the floor. Von Hippel estimates that these backyard Edisons collectively invest more in their efforts than the largest corporation anywhere does in R&D.

The low-tech kludges of consumers might once have had little impact. But one company, Procter & Gamble, has actually found a way to tap into them; it now gets many of its ideas for new Swiffers and toothpaste tubes from the general public. One way it has managed to do so is with the help of InnoCentive, a company in Waltham, Massachusetts, that specializes in organizing prize competitions over the Internet. Volunteer “solvers” can try to earn $500 to $1 million by coming up with answers to a company’s problems.

We like Procter & Gamble’s story because the company has discovered a creative, systematic way to pay for ideas originating far outside of its own development labs. It’s made an innovation in funding innovation, which is the subject of this month’s Technology Review business report.

How we pay for innovation is a question prompted, in part, by the beleaguered state of the venture capital industry. Over the long term, it’s the system that’s most often gotten the economic incentives right. Consider that although fewer than two of every 1,000 new American businesses are venture backed, these account for 11 percent of public companies and 6 percent of U.S. employment, according to Harvard Business School professor Josh Lerner. (Many of those companies, although not all, have succeeded because they’ve brought new technology to market.)

Yet losses since the dot-com boom in the late 1990s have taken a toll. In August, the nation’s largest public pension fund, the California Public Employees Retirement System, said it would basically stop investing with the state’s venture funds, citing returns of 0.0 percent over a decade.

The crisis has partly to do with the size of venture funds—$1 billion isn’t uncommon. That means they need big money plays at a time when entrepreneurs are headed on exactly the opposite course. On the Web, it’s never been cheaper to start a company. You can outsource software development, rent a thousand servers, and order hardware designs from China. That is significant because company founders can often get the money they need from seed accelerators, angel investors, or Internet-based funding mechanisms such as Kickstarter.

“We’re in a period of incredible change in how you fund innovation, especially entrepreneurial innovation,” says Ethan Mollick, a professor of management science at the Wharton School. He sees what’s happening as a kind of democratization—the bets are getting smaller, but also more spread out and numerous. He thinks this could be a good thing. “One of the ways we get more innovation is by taking more draws,” he says.

In an example of the changes ahead, Mollick cites plans by the U.S. Securities and Exchange Commission to allow “crowdfunding”—it will let companies raise $1 million or so directly from the public, every year, over the Internet. (This activity had previously been outlawed as a hazard to gullible investors.) Crowdfunding may lead to a major upset in the way inventions get financed, especially those with popular appeal and modest funding requirements, like new gadget designs.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Louisiana Department of Education.[end-div]

The Power of Lists

Where would you be without lists? Surely your life would be much less organized were it not for the shopping list, gift list, re-gifting list, reading list, items to fix list, resolutions list, medications list, vacation list, work action items list, spouse to-do list, movies to see list, greeting card list, gift wish list, allergies list, school supplies list, and of course the places to visit before you die list. The lists just go on and on.

[div class=attrib]From the New York Times:[end-div]

WITH school starting and vacations ending, this is the month, the season of the list. But face it. We’re living in the era of the list, maybe even its golden age. The Web click has led to the wholesale repackaging of information into lists, which can be complex and wonderful pieces of information architecture. Our technology has imperceptibly infected us with “list thinking.”

Lists are the simplest way to organize information. They are also a symptom of our short attention spans.

The crudest of online lists are galaxies of buttons, replacing real stories. “Listicles,” you might say. They are just one step beyond magazine cover lines like “37 Ways to Drive Your Man Wild in Bed.” Bucket lists have produced competitive list making online. Like competitive birders, people check off books read or travel destinations visited.

But lists can also tell a story. Even the humble shopping list says something about the shopper — and the Netflix queue, a “smart list” built on experience and suggestion algorithms, says much about the subscriber.

Lists can reveal personal dramas. An exhibit of lists at the Morgan Library and Museum showed a passive-aggressive Picasso omitting his bosom buddy, Georges Braque, from a list of recommended artists.

We’ve come a long way from the primitive best-seller lists and hit parade lists, “crowd sourced,” if you will, from sales. We all have our “to-do” lists, and there is a modern, sophisticated form of the list that is as serious as the “best of…” list is frivolous. That is the checklist.

The surgeon Atul Gawande, in his book “The Checklist Manifesto,” explains the utility of the list in assuring orderly procedures and removing error. For all that society has accomplished in such fields as medicine and aviation, he argues, the know-how is often unmanageable — without a checklist.

A 70-page checklist put together by James Lovell, the commander of Apollo 13, helped him navigate the spacecraft back to Earth after an oxygen tank exploded. Capt. Chesley B. Sullenberger safely ditched his Airbus A-320 in the Hudson River after consulting the “engine out” checklist, which advised “Land ASAP” if the engines fail to restart.

At a local fast-food joint, I see checklists for cleanliness, one list for the front of the store and one for restrooms — a set of inspections and cleanups to be done every 30 minutes. The list is mapped on photo views, with numbers of the tasks over the areas in question. A checklist is a kind of story or narrative and has a long history in literature. The heroic list or catalog is a feature of epic poetry, from Homer to Milton. There is the famed catalog of ships and heroes in “The Iliad.”

Homer’s ships are also echoed in a list in Lewis Carroll’s “The Walrus and the Carpenter”: “‘The time has come,’ the walrus said, ‘to talk of many things: Of shoes — and ships — and sealing-wax — of cabbages — and kings.’” This is the prototype of the surrealist list.

There are other sorts of lists in literature. Vladimir Nabokov said he spent a long time working out the list (he called it a poem) of Lolita’s classmates in his famous novel; the names reflect the flavor of suburban America in the 1950s and give sly clues to the plot as well. There are hopeful names like Grace Angel and ominous ones like Aubrey McFate.

[div class=attrib]Read the entire article after the jump.[end-div]

Old Concepts Die Hard

Regardless of how flawed old scientific concepts may be, researchers have found that it is remarkably difficult for people to give them up and accept sound, new reasoning. Even scientists are creatures of habit.

[div class=attrib]From Scientific American:[end-div]

In one sense, science educators have it easy. The things they describe are so intrinsically odd and interesting — invisible fields, molecular machines, principles explaining the unity of life and origins of the cosmos — that much of the pedagogical attention-getting is built right in.  Where they have it tough, though, is in having to combat an especially resilient form of higher ed’s nemesis: the aptly named (if irredeemably clichéd) ‘preconceived idea.’ Worse than simple ignorance, naïve ideas about science lead people to make bad decisions with confidence. And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem.

Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new. According to a recent paper by Andrew Shtulman and Joshua Valcarcel, however, we may not be able to replace old ideas with new ones so cleanly. Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals — even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world. Even long after we learn that these intuitions have no scientific support, they can still subtly persist and influence our thought process. Like old habits, old concepts seem to die hard.

Testing for the persistence of old concepts can’t be done directly. Instead, one has to set up a situation in which old concepts, if present, measurably interfere with mental performance. To do this, Shtulman and Valcarcel designed a task that tested how quickly and accurately subjects verified short scientific statements (for example: “air is composed of matter.”). In a clever twist, the authors interleaved two kinds of statements — “consistent” ones that had the same truth-value under a naive theory and a proper scientific theory, and “inconsistent” ones. For example, the statement “air is composed of matter”  is inconsistent: it’s false under a naive theory (air just seems like empty space, right?), but is scientifically true. By contrast, the statement “people turn food into energy” is consistent: anyone who’s ever eaten a meal knows it’s true, and science affirms this by filling in the details about digestion, respiration and metabolism.

Shtulman and Valcarcel tested 150 college students on a battery of 200 such statements that included an equal and random mix of consistent and inconsistent statements from several domains, including astronomy, evolution, physiology, genetics, waves, and others. The scientists measured participants’ response speed and accuracy, and looked for systematic differences in how consistent vs. inconsistent statements were evaluated.

If scientific concepts, once learned, are fully internalized and don’t conflict with our earlier naive concepts, one would expect consistent and inconsistent statements to be processed similarly. On the other hand, if naive concepts are never fully supplanted, and are quietly threaded into our thought process, it should take longer to evaluate inconsistent statements. In other words, it should take a bit of extra mental work (and time) to go against the grain of a naive theory we once held.

This is exactly what Shtulman and Valcarcel found. While there was some variability between the different domains tested, inconsistent statements took almost a half second longer to verify, on average. Granted, there’s a significant wrinkle in interpreting this result. Specifically, it may simply be the case that scientific concepts that conflict with naive intuition are learned more tenuously than concepts that are consistent with our intuition. Under this view, differences in response times aren’t necessarily evidence of ongoing inner conflict between old and new concepts in our brains — it’s just a matter of some concepts being more accessible than others, depending on how well they were learned.
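
The arithmetic behind that finding is easy to sketch. The snippet below is a minimal illustration on invented numbers, not the authors' actual analysis: given each participant's mean verification time for consistent and for inconsistent statements, it computes the average slowdown and a paired t-statistic.

```python
# Minimal sketch (invented data, not the study's): compare each participant's
# mean verification time for consistent vs. inconsistent statements.
from statistics import mean, stdev
from math import sqrt

# Hypothetical per-participant mean response times, in seconds.
consistent   = [1.92, 2.10, 1.75, 2.31, 2.05, 1.88]
inconsistent = [2.40, 2.55, 2.20, 2.70, 2.62, 2.31]

diffs = [i - c for c, i in zip(consistent, inconsistent)]
mean_slowdown = mean(diffs)                                  # roughly half a second here, by construction
t_stat = mean_slowdown / (stdev(diffs) / sqrt(len(diffs)))   # paired t-statistic

print(f"mean slowdown: {mean_slowdown:.2f} s, paired t = {t_stat:.2f}")
```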

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of New Scientist.[end-div]

Death Cafe

“Death Cafe” sounds like the name of a group of alternative musicians from Denmark. But it’s not. Its rather more literal definition is a coffee shop where customers go to talk about death over a cup of earl grey tea or double shot espresso. And, while it’s not displacing Starbucks (yet), death cafes are a growing trend in Europe, first inspired by the pop-up Cafe Mortels of Switzerland.

[div class=attrib]From the Independent:[end-div]

“Do you have a death wish?” is not a question normally bandied about in seriousness. But have you ever actually asked whether a parent, partner or friend has a wish, or wishes, concerning their death? Burial or cremation? Where would they like to die? It’s not easy to do.

Stiff-upper-lipped Brits have a particular problem talking about death. Anyone who tries invariably gets shouted down with “Don’t talk like that!” or “If you say it, you’ll make it happen.” A survey by the charity Dying Matters reveals that more than 70 per cent of us are uncomfortable talking about death and that less than a third of us have spoken to family members about end-of-life wishes.

But despite this ingrained reluctance, there are signs of burgeoning interest in exploring death. I attended my first death cafe recently and was surprised to discover that the gathering of goths, emos and the terminally ill that I’d imagined turned out to be a collection of fascinating, normal individuals united by a wish to discuss mortality.

At a trendy coffee shop called Cakey Muto in Hackney, east London, taking tea (and scones!) with death turned out to be rather a lot of fun. What is believed to be the first official British death cafe took place in September last year, organised by former council worker Jon Underwood. Since then, around 150 people have attended death cafes in London and the one I visited was the 17th such happening.

“We don’t want to shove death down people’s throats,” Underwood says. “We just want to create an environment where talking about death is natural and comfortable.” He got the idea from the Swiss model (cafe mortel) invented by sociologist Bernard Crettaz, the popularity of which gained momentum in the Noughties and has since spread to France.

Underwood is keen to start a death cafe movement in English-speaking countries and his website (deathcafe.com) includes instructions for setting up your own. He has already inspired the first death cafe in America and groups have sprung up in Northern England too. Last month, he arranged the first death cafe targeting issues around dying for a specific group, the LGBT community, which he says was extremely positive and had 22 attendees.

Back in Cakey Muto, 10 fellow attendees and I eye each other nervously as the cafe door is locked and we seat ourselves in a makeshift circle. Conversation is kicked off by our facilitator, grief specialist Kristie West, who sets some ground rules. “This is a place for people to talk about death,” she says. “I want to make it clear that it is not about grief, even though I’m a grief specialist. It’s also not a debate platform. We don’t want you to air all your views and pick each other apart.”

A number of our party are directly involved in the “death industry”: a humanist-funeral celebrant, an undertaker and a lady who works in a funeral home. Going around the circle explaining our decision to come to a death cafe, what came across from this trio, none of whom knew each other, was their satisfaction in their work.

“I feel more alive than ever since working in a funeral home,” one of the women remarked. “It has helped me recognise that it isn’t a circle between life and death, it is more like a cosmic soup. The dead and the living are sort of floating about together.”

Others in the group include a documentary maker, a young woman whose mother died 18 months ago, a lady who doesn’t say much but was persuaded by her neighbour to come, and a woman who has attended three previous death cafes but still hasn’t managed to admit this new interest to her family or get them to talk about death.

The funeral celebrant tells the circle she’s been thinking a lot about what makes a good or bad death. She describes “the roaring corrosiveness of stepping into a household” where a “bad death” has taken place and the group meditates on what a bad death entails: suddenness, suffering and a difficult relationship between the deceased and bereaved?

“I have seen people have funerals which I don’t think they would have wanted,” says the undertaker, who has 17 years of experience. “It is possible to provide funerals more cheaply, more sensitively and with greater respect for the dead.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Death cafe menu courtesy of Death Cafe.[end-div]

Watch Out Corporate America: Gen-Y is Coming

Social scientists have had Generation-Y, also known as “millennials”, under their microscopes for a while. Born between 1982 and 1999, Gen-Y is now coming of age and becoming a force in the workplace, displacing aging “boomers” as they retire to the hills. So, researchers are now looking at how Gen-Y is faring inside corporate America. Remember, Gen-Y is the “it’s all about me generation”; members are characterized as typically lazy and spoiled, with a grandiose sense of entitlement, inflated self-esteem and deep emotional fragility. Their predecessors, the baby boomers, on the other hand, are often seen as overbearing, work-obsessed, competitive and narrow-minded. A clash of cultures is taking shape in office cubes across the country as these groups, with such differing personalities and philosophies, tussle within the workplace. However, it may not be all bad, as columnist Emily Matchar argues below — corporate America needs the kind of shake-up that Gen-Y promises.

[div class=attrib]From the Washington Post:[end-div]

Have you heard the one about the kid who got his mom to call his boss and ask for a raise? Or about the college student who quit her summer internship because it forbade Facebook in the office?

Yep, we’re talking about Generation Y — loosely defined as those born between 1982 and 1999 — also known as millennials. Perhaps you know them by their other media-generated nicknames: teacup kids, for their supposed emotional fragility; boomerang kids, who always wind up back home; trophy kids — everyone’s a winner!; the Peter Pan generation, who’ll never grow up.

Now this pampered, over-praised, relentlessly self-confident generation (at age 30, I consider myself a sort of older sister to them) is flooding the workplace. They’ll make up 75 percent of the American workforce by 2025 — and they’re trying to change everything.

These are the kids, after all, who text their dads from meetings. They think “business casual” includes skinny jeans. And they expect the company president to listen to their “brilliant idea.”

When will they adapt?

They won’t. Ever. Instead, through their sense of entitlement and inflated self-esteem, they’ll make the modern workplace adapt to them. And we should thank them for it. Because the modern workplace frankly stinks, and the changes wrought by Gen Y will be good for everybody.

Few developed countries demand as much from their workers as the United States. Americans spend more time at the office than citizens of most other developed nations. Annually, we work 408 hours more than the Dutch, 374 hours more than the Germans and 311 hours more than the French. We even work 59 hours more than the stereotypically nose-to-the-grindstone Japanese. Though women make up half of the American workforce, the United States is the only country in the developed world without guaranteed paid maternity leave.

All this hard work is done for less and less reward. Wages have been stagnant for years, benefits shorn, opportunities for advancement blocked. While the richest Americans get richer, middle-class workers are left to do more with less. Because jobs are scarce and we’re used to a hierarchical workforce, we accept things the way they are. Worse, we’ve taken our overwork as a badge of pride. Who hasn’t flushed with a touch of self-importance when turning down social plans because we’re “too busy with work”?

Into this sorry situation strolls the self-esteem generation, printer-fresh diplomas in hand. And they’re not interested in business as usual.

The current corporate culture simply doesn’t make sense to much of middle-class Gen Y. Since the cradle, these privileged kids have been offered autonomy, control and choices (“Green pants or blue pants today, sweetie?”). They’ve been encouraged to show their creativity and to take their extracurricular interests seriously. Raised by parents who wanted to be friends with their kids, they’re used to seeing their elders as peers rather than authority figures. When they want something, they’re not afraid to say so.

[div class=attrib]Read the entire article after the jump.[end-div]

Subjective Objectivism: The Paradox that is Ayn Rand

Ayn Rand: anti-collectivist ideologue, standard-bearer for unapologetic individualism and rugged self-reliance, or a selfish, elitist and hypocritical fantasist?

Political conservatives and libertarians increasingly flock to her writings and support her philosophy of individualism and unfettered capitalism, which she dubbed “objectivism”. On the other hand, liberals see her as a selfish zealot: elitist, narcissistic, even psychopathic.

The truth, of course, is more nuanced and complex, especially when the private Ayn Rand is set against the very public persona. Those who fail to delve into Rand’s traumatic and colorful history fail to grasp the many paradoxes and contradictions that she embodied.

Rand was firmly and vociferously pro-choice, yet she believed that women should submit to the will of great men. She was a devout atheist and outspoken pacifist, yet she believed Native Americans fully deserved their cultural genocide for not grasping capitalism. She viewed homosexuality as disgusting and immoral, but supported non-discrimination protection for homosexuals in the public domain, yet opposed such rights in private, all the while having an extremely colorful private life herself. She was a valiant opponent of government and federal regulation in all forms. Publicly, she viewed Social Security, Medicare and other “big government” programs with utter disdain, their dependents nothing more than weak-minded loafers and “takers”. Privately, later in life, she accepted payments from Social Security and Medicare. Perhaps most paradoxically, Rand derided those who would fake their own reality while being chronically dependent on mind-distorting amphetamines, popping speed as she wrote her keystones of objectivism: The Fountainhead and Atlas Shrugged.

[div class=attrib]From the Guardian:[end-div]

As an atheist Ayn Rand did not approve of shrines but the hushed, air-conditioned headquarters which bears her name acts as a secular version. Her walnut desk occupies a position of honour. She smiles from a gallery of black and white photos, young in some, old in others. A bronze bust, larger than life, tilts her head upward, jaw clenched, expression resolute.

The Ayn Rand Institute in Irvine, California, venerates the late philosopher as a prophet of unfettered capitalism who showed America the way. A decade ago it struggled to have its voice heard. Today its message booms all the way to Washington DC.

It was a transformation which counted Paul Ryan, chairman of the House budget committee, as a devotee. He gave Rand’s novel, Atlas Shrugged, as Christmas presents and hailed her as “the reason I got into public service”.

Then, last week, he was selected as the Republican vice-presidential nominee and his enthusiasm seemed to evaporate. In fact, the backtracking began earlier this year when Ryan said as a Catholic his inspiration was not Rand’s “objectivism” philosophy but Thomas Aquinas’.

The flap has illustrated an acute dilemma for the institute. Once peripheral, it has veered close to mainstream, garnering unprecedented influence. The Tea Party has adopted Rand as a seer and waves placards saying “We should shrug” and “Going Galt”, a reference to an Atlas Shrugged character named John Galt.

Prominent Republicans channel Rand’s arguments in promises to slash taxes and spending and to roll back government. But, like Ryan, many publicly renounce the controversial Russian emigre as a serious influence. Where, then, does that leave the institute, the keeper of her flame?

Given Rand’s association with plutocrats – she depicted captains of industry as “producers” besieged by parasitic “moochers” – the headquarters are unexpectedly modest. Founded in 1985 three years after Rand’s death, the institution moved in 2002 from Marina del Rey, west of Los Angeles, to a drab industrial park in Irvine, 90 minutes south, largely to save money. It shares a nondescript two-storey building with financial services and engineering companies.

There is little hint of Galt, the character who symbolises the power and glory of the human mind, in the bland corporate furnishings. But the quotations and excerpts adorning the walls echo a mission which drove Rand and continues to inspire followers as an urgent injunction.

“The demonstration of a new moral philosophy: the morality of rational self-interest.”

These, said Onkar Ghate, the institute’s vice-president, are relatively good times for Randians. “Our primary mission is to advance awareness of her ideas and promote her philosophy. I must say, it’s going very well.”

On that point, if none other, conservatives and progressives may agree. Thirty years after her death Rand, as a radical intellectual and political force, is going very well indeed. Her novel Atlas Shrugged, a 1,000 page assault on big government, social welfare and altruism first published in 1957, is reportedly selling more than 400,000 copies per year and is being made into a movie trilogy. Its radical author, who also penned The Fountainhead and other novels and essays, is the subject of a recent documentary and spate of books.

To critics who consider Rand’s philosophy that “of the psychopath, a misanthropic fantasy of cruelty, revenge and greed”, her posthumous success is alarming.

Relatively little attention, however, has been paid to the institute which bears her name and works, often behind the scenes, to direct her legacy and shape right-wing debate.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ayn Rand in 1957. Courtesy of Wikipedia.[end-div]

Philosophy and Science Fiction

We excerpt a fascinating article from i09 on the relationship between science fiction and philosophical inquiry. It’s quite remarkable that this genre of literature can provide such a rich vein for philosophers to mine, often more so than reality itself. Then again, it is no coincidence that our greatest authors of science fiction were, and are, amateur philosophers at heart.

[div class=attrib]From i09:[end-div]

People use science fiction to illustrate philosophy all the time. From ethical quandaries to the very nature of existence, science fiction’s most famous texts are tailor-made for exploring philosophical ideas. In fact, many college campuses now offer courses in the philosophy of science fiction.

But science fiction doesn’t just illuminate philosophy — in fact, the genre grew out of philosophy, and the earliest works of science fiction were philosophical texts. Here’s why science fiction has its roots in philosophy, and why it’s the genre of thought experiments about the universe.

Philosophical Thought Experiments As Science Fiction
Science fiction is a genre that uses strange worlds and inventions to illuminate our reality — sort of the opposite of a lot of other writing, which uses the familiar to build a portrait that cumulatively shows how insane our world actually is. People, especially early twenty-first century people, live in a world where strangeness lurks just beyond our frame of vision — but we can’t see it by looking straight at it. When we try to turn and confront the weird and unthinkable that’s always in the corner of our eye, it vanishes. In a sense, science fiction is like a prosthetic sense of peripheral vision.

We’re sort of like the people chained up in the cave, staring at shadows on the wall but never seeing the full picture.

Plato is probably the best-known user of allegories — a form of writing which has a lot in common with science fiction. A lot of allegories are really thought experiments, trying out a set of strange facts to see what principles you derive from them. As plenty of people have pointed out, Plato’s Allegory of the Cave is the template for a million “what is reality” stories, from the works of Philip K. Dick to The Matrix. But you could almost see the cave allegory in itself as a proto-science fiction story, because of the strange worldbuilding that goes into these people who have never seen the “real” world. (Plato also gave us an allegory about the Ring of Gyges, which turns its wearer invisible — sound familiar?).

Later philosophers who ponder the nature of existence also seem to stray into weird science fiction territory — like Descartes, raising the notion that he, Descartes, could have existed since the beginning of the universe (as an alternative to God as a cause for Descartes’ existence.) Sitting in his bread oven, Descartes tries to cut himself off from sensory input to see what he can deduce of the universe.

And by the same token, the philosophy of human nature often seems to depend on conjuring imaginary worlds, whether it be Hobbes’ “nasty, brutish and short” world without laws, or Rousseau’s “state of nature.” A great believer in the importance of science, Hobbes sees humans as essentially mechanistic beings who are programmed to behave in a selfish fashion — and the state is a kind of artificial human that can contain us and give us better programming, in a sense.

So not only can you use something like Star Trek’s Holodeck to point out philosophical notions of the fallibility of the senses, and the possible falseness of reality — philosophy’s own explorations of those sorts of topics are frequently kind of other-worldly. Philosophical thought experiments, like the oft-cited “state of nature,” are also close kin to science fiction world building. As Susan Schneider writes in the book Science Fiction and Philosophy, “if you read science fiction writers like Stanislaw Lem, Isaac Asimov, Arthur C. Clarke and Robert Sawyer, you are already aware that some of the best science fiction tales are in fact long versions of philosophical thought experiments.”

But meanwhile, when people come to list the earliest known works that could be considered “real” science fiction, they always wind up listing philosophical works, written by philosophers.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Front cover art for the book Nineteen Eighty-Four (1984) written by George Orwell. Courtesy of Secker and Warburg (London) / Wikipedia.[end-div]

Is It Good That Money Can Buy (Almost) Anything?

Money is a curious invention. It enables efficient and almost frictionless commerce and it allows us to assign tangible value to our time. Yet it poses enormous societal challenges and ethical dilemmas. For instance, should we bribe our children with money in return for better grades? Should we allow a chronically ill kidney patient to purchase a replacement organ from a donor?

Raghuram Rajan, professor of finance at the University of Chicago, reviews a fascinating new book that attempts to answer some of these questions. The book, “What Money Can’t Buy: The Moral Limits of the Market” is written by noted Harvard philosopher Michael Sandel.

[div class=attrib]From Project Syndicate:[end-div]

In an interesting recent book, What Money Can’t Buy: The Moral Limits of the Market, the Harvard philosopher Michael Sandel points to the range of things that money can buy in modern societies and gently tries to stoke our outrage at the market’s growing dominance. Is he right that we should be alarmed?

While Sandel worries about the corrupting nature of some monetized transactions (do kids really develop a love of reading if they are bribed to read books?), he is also concerned about unequal access to money, which makes trades using money inherently unequal. More generally, he fears that the expansion of anonymous monetary exchange erodes social cohesion, and argues for reducing money’s role in society.

Sandel’s concerns are not entirely new, but his examples are worth reflecting upon. In the United States, some companies pay the unemployed to stand in line for free public tickets to congressional hearings. They then sell the tickets to lobbyists and corporate lawyers who have a business interest in the hearing but are too busy to stand in line.

Clearly, public hearings are an important element of participatory democracy. All citizens should have equal access. So selling access seems to be a perversion of democratic principles.

The fundamental problem, though, is scarcity. We cannot accommodate everyone in the room who might have an interest in a particularly important hearing. So we have to “sell” entry. We can either allow people to use their time (standing in line) to bid for seats, or we can auction seats for money. The former seems fairer, because all citizens seemingly start with equal endowments of time. But is a single mother with a high-pressure job and three young children as well endowed with spare time as a student on summer vacation? And is society better off if she, the chief legal counsel for a large corporation, spends much of her time standing in line?

Whether it is better to sell entry tickets for time or for money thus depends on what we hope to achieve. If we want to increase society’s productive efficiency, people’s willingness to pay with money is a reasonable indicator of how much they will gain if they have access to the hearing. Auctioning seats for money makes sense – the lawyer contributes more to society by preparing briefs than by standing in line.

On the other hand, if it is important that young, impressionable citizens see how their democracy works, and that we build social solidarity by making corporate executives stand in line with jobless teenagers, it makes sense to force people to bid with their time and to make entry tickets non-transferable. But if we think that both objectives – efficiency and solidarity – should play some role, perhaps we should turn a blind eye to hiring the unemployed to stand in line in lieu of busy lawyers, so long as they do not corner all of the seats.

What about the sale of human organs, another example Sandel worries about? Something seems wrong when a lung or a kidney is sold for money. Yet we celebrate the kindness of a stranger who donates a kidney to a young child. So, clearly, it is not the transfer of the organ that outrages us – we do not think that the donor is misinformed about the value of a kidney or is being fooled into parting with it. Nor, I think, do we have concerns about the scruples of the person selling the organ – after all, they are parting irreversibly with something that is dear to them for a price that few of us would accept.

I think part of our discomfort has to do with the circumstances in which the transaction takes place. What kind of society do we live in if people have to sell their organs to survive?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google.[end-div]

How Great Companies Fail

A fascinating case study shows how Microsoft failed its employees through misguided HR (human resources) policies that pitted colleague against colleague.

[div class=attrib]From the Guardian:[end-div]

The idea for today’s off-topic note came to me when I read “Microsoft’s lost decade”, an aptly titled Vanity Fair story. In the piece, Kurt Eichenwald tracks Microsoft’s decline as he revisits a decade of technical missteps and bad business decisions. Predictably, the piece has generated strong retorts from Microsoft’s Ministry of Truth and from Ballmer himself (“It’s not been a lost decade for me!” he barked from the tumbrel).

But I don’t come to bury Caesar – not yet; I’ll wait until actual numbers for Windows 8 and the Surface tablets emerge. Instead, let’s consider the centerpiece of Eichenwald’s article, his depiction of the cultural degeneracy and intramural paranoia that comes of a badly implemented performance review system.

Performance assessments are, of course, an important aspect of a healthy company. In order to maintain fighting weight, an organisation must honestly assay its employees’ contributions and cull the dead wood. This is tournament play, after all, and the coach must “release” players who can’t help get the team to the finals.

But Microsoft’s implementation – “stack ranking”, a bell curve that pits employees and groups against one another like rats in a cage – plunged the company into internecine fights, horse trading, and backstabbing.

…every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor…For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings.
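
The mechanics of such a forced distribution are easy to illustrate. The sketch below is a minimal Python illustration, not Microsoft's actual system; the bucket labels and percentages are invented, but it shows how stack ranking brands part of even a uniformly strong team as weak.

```python
# Minimal sketch of a forced-distribution ("stack") ranking.
# Bucket shares are invented for illustration, not Microsoft's actual quotas.
def stack_rank(scores, buckets=((0.20, "top"), (0.20, "good"), (0.40, "average"),
                                (0.13, "below average"), (0.07, "poor"))):
    ranked = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)
    labels, i = {}, 0
    for share, label in buckets:
        take = max(1, round(share * len(ranked)))   # every bucket must be filled
        for name, _ in ranked[i:i + take]:
            labels[name] = label
        i += take
    return labels

team = {"Ada": 98, "Grace": 96, "Linus": 95, "Alan": 94, "Edsger": 93,
        "Barbara": 92, "Donald": 91, "Ken": 90, "Dennis": 89, "Bjarne": 88}
print(stack_rank(team))   # two of these strong performers still land in the bottom buckets
```

However strong the absolute scores, someone is always declared “poor”, which is exactly the incentive to avoid working alongside other top performers that the excerpt describes.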

Employees quickly realised that it was more important to focus on organisation politics than actual performance:

Every current and former Microsoft employee I interviewed – every one – cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees.

This brought back bad memories of my corpocrat days working for a noted Valley company. When I landed here in 1985, I was dismayed by the pervasive presence of human resources, an éminence grise that cast a shadow across the entire organisation. Humor being the courtesy of despair, engineers referred to HR as the KGB or, for a more literary reference, the Bene Gesserit, monikers that knowingly imputed an efficiency to a department that offered anything but. Granted, there was no bell curve grading, no obligation to sacrifice the bottom 5%, but the politics were stifling nonetheless, the review process a painful charade.

In memory of those shenanigans, I’ve come up with a possible antidote to manipulative reviews, an attempt to deal honestly and pleasantly with the imperfections of life at work. (Someday I’ll write a Note about an equally important task: How to let go of people with decency – and without lawyers.)

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Telegraph / Microsoft.[end-div]

The Demise of Upward Mobility

Robert J. Samuelson paints a sobering picture of the once credible and seemingly attainable American Dream — the generational progress of upward mobility is no longer a given. He is the author of “The Great Inflation and Its Aftermath: The Past and Future of American Affluence”.

[div class=attrib]From Wilson Quarterly:[end-div]

The future of affluence is not what it used to be. Americans have long believed—it’s part of our national character—that our economic well-being will constantly increase. We see ourselves as a striving, inventive, and pragmatic people destined for higher living standards. History is a continuum of progress, from Robert Fulton’s steamboat to Henry Ford’s assembly line to Bill Gates’ software. Every generation will live better than its predecessors.

Well, maybe not.

For millions of younger Americans—say, those 40 and under—living better than their parents is a pipe dream. They won’t. The threat to their hopes does not arise from an impending collapse of technological gains of the sort epitomized by the creations of Fulton, Ford, and Gates. These advances will almost certainly continue, and per capita income—the average for all Americans and a conventional indicator of living standards—will climb. Statistically, American progress will resume. The Great Recession will be a bump, not a dead end.

The trouble is that many of these gains will bypass the young. The increases that might have fattened their paychecks will be siphoned off to satisfy other groups and other needs. Today’s young workers will have to finance Social Security and Medicare for a rapidly growing cohort of older Americans. Through higher premiums for employer-provided health insurance, they will subsidize care for others. Through higher taxes and fees, they will pay to repair aging infrastructure (roads, bridges, water systems) and to support squeezed public services, from schools to police.

The hit to their disposable incomes would matter less if the young were major beneficiaries of the resultant spending. In some cases—outlays for infrastructure and local services—they may be. But these are exceptions. By 2025 Social Security and Medicare will simply reroute income from the nearly four-fifths of the population that will be under 65 to the older one-fifth. And health care spending at all age levels is notoriously skewed: Ten percent of patients account for 65 percent of medical costs, reports the Kaiser Family Foundation. Although insurance provides peace of mind, the money still goes from young to old: Average health spending for those 45 to 64 is triple that for those 18 to 24.

The living standards of younger Americans will almost certainly suffer in comparison to those of their parents in a second crucial way. Our notion of economic progress is tied to financial security, but the young will have less of it. What good are higher incomes if they’re abruptly revoked? Though it wasn’t a second Great Depression, the Great Recession was a close call, shattering faith that modern economic policies made broad collapses impossible. Except for the savage 1980-82 slump, post-World War II recessions had been modest. Only minorities of Americans had suffered. By contrast, the Great Recession hurt almost everyone, through high unemployment, widespread home foreclosures, huge wealth losses in stocks and real estate—and fears of worse. A 2012 Gallup poll found that 68 percent of Americans knew someone who had lost a job.

The prospect of downward mobility is not just dispiriting. It assails the whole post–World War II faith in prosperity. Beginning in the 1950s, commentators celebrated the onrush of abundance as marking a new era in human progress. In his 1958 bestseller The Affluent Society, Harvard economist John Kenneth Galbraith announced the arrival of a “great and unprecedented affluence” that had eradicated the historical “poverty of the masses.”

Economic growth became a secular religion that was its own reward. Perhaps its chief virtue was that it dampened class conflict. In The Great Leap: The Past Twenty-Five Years in America (1966), John Brooks observed, “The middle class was enlarging itself and ever encroaching on the two extremes”—the very rich and the very poor. Business and labor could afford to reconcile because both could now share the fruits of expanding production. We could afford more spending on public services (education, health, environmental protection, culture) without depressing private incomes. Indeed, that was Galbraith’s main theme: Our prosperity could and should support both.

To be sure, there were crises of faith, moments when economic progress seemed delayed or doomed. The longest lapse occurred in the 1970s, when double-digit inflation spawned pessimism and frequent recessions, culminating in the 1980-82 downturn. Monthly unemployment peaked at 10.8 percent. But after Federal Reserve chairman Paul Volcker and President Ronald Reagan took steps to suppress high inflation, faith returned.

Now, it’s again imperiled. A 2011 Gallup poll found that 55 percent of Americans didn’t think their children would live as well as they did, the highest rate ever. We may face a crimped and contentious future.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ascending and Descending by M.C.Escher. Courtesy of M.C.Escher.[end-div]

Are You Cold or Hot? Depends on Your Politics

The United States is gripped by political deadlock. The Do-Nothing Congress consistently gets lower approval ratings than our banks, Paris Hilton, lawyers and BP during the catastrophe in the Gulf of Mexico. This stasis is driven by seemingly intractable ideological beliefs and a no-compromise attitude from both the left and right sides of the aisle.

So, it should come as no surprise that even your opinion of the weather and temperature is colored by your political persuasion.

Daniel Engber over at Slate sifts through some fascinating studies that highlight how our ingrained ideologies determine our worldview, down to even our basic view of the weather and our home thermostat setting.

[div class=attrib]From Slate:[end-div]

A few weeks ago, an academic journal called Weather, Climate and Society posted a curious finding about how Americans perceive the heat and cold. A team of researchers at the University of Oklahoma asked 8,000 adults living across the country to state both their political leanings and their impressions of the local weather. Are you a liberal or a conservative? Have average temperatures where you live been rising, falling, or staying about the same as previous years? Then they compared the answers to actual thermometer readings from each respondent’s ZIP code. Would their sense of how it feels outside be colored by the way they think?

Yes it would, the study found. So much so, in fact, that the people surveyed all but ignored their actual experience. No matter what the weather records showed for a given neighborhood (despite the global trend, it had gotten colder in some places and warmer in others), conservatives and liberals fell into the same two camps. The former said that temperatures were decreasing or had stayed the same, and the latter claimed they were going up. “Actual temperature deviations proved to be a relatively weak predictor of perceptions,” wrote the authors. (Hat tip to Ars Technica for finding the study.)

People’s opinions, then, seem to have an effect on how they feel the air around them. If you believe in climate change and think the world is getting warmer, you’ll be more inclined to sense that warmth on a walk around the block. And if you tend to think instead in terms of crooked scientists and climate conspiracies, then the local weather will seem a little cooler. Either way, the Oklahoma study suggests that the experience of heat and cold derives from “a complex mix of direct observation, ideology, and cultural cognitions.”

It’s easy to see how these factors might play out when people make grand assessments of the weather that rely on several years’ worth of noisy data. But another complex mix of ideology and culture affects how we experience the weather from moment to moment—and how we choose to cope with it. In yesterday’s column, I discussed the environmental case against air conditioning, and the belief that it’s worse to be hypothermic than overheated. But there are other concerns, too, that make their rounds among the anti-A/C brrr-geoisie. Some view air conditioning itself as a threat to their comfort and their health.

The notion that stale, recycled air might be sickening or dangerous has been circulating for as long as we’ve had home cooling. According to historian Marsha E. Ackermann’s Cool Comfort: America’s Romance With Air-Conditioning, the invention of the air conditioner set off a series of debates among high-profile scholars over whether it was better to fill a building with fresh air or to close it off from the elements altogether. One side argued for ventilation even in the most miserable summer weather; the other claimed that a hot, damp breeze could be a hazard to your health. (The precursor to the modern air conditioner, invented by a Floridian named John Gorrie, was designed according to the latter theory. Gorrie thought his device would stave off malaria and yellow fever.)

The cooling industry worked hard to promote the idea that A/C makes us more healthy and productive, and in the years after World War II it gained acceptance as a standard home appliance. Still, marketers worried about a lingering belief in the importance of fresh air, and especially the notion that the “shock effect” of moving too quickly from warm to cold would make you sick. Some of these fears would be realized in a new and deadly form of pneumonia known as Legionnaires’ disease. In the summer of 1976, around 4,000 members of the Pennsylvania State American Legion met for a conference at the fancy, air-conditioned Bellevue Stratford Hotel in Philadelphia, and over the next month, more than 180 Legionnaires took ill. The bacteria responsible for their condition were found to be propagating in the hotel’s cooling tower. Twenty-nine people died from the disease, and we finally had proof that air conditioning posed a mortal danger to America.

A few years later, a new diagnosis began to spread around the country, based on a nebulous array of symptoms including sore throats and headache that seemed to be associated with indoor air. Epidemiologists called the illness “Sick Building Syndrome,” and looked for its source in large-scale heating and cooling ducts. Even today, the particulars of the condition—and the question of whether or not it really exists—have not been resolved. But there is some good evidence for the idea that climate-control systems can breed allergenic mold or other micro-organisms. For a study published in 2004, researchers in France checked the medical records of 920 middle-aged women, and found that the ones who worked in air-conditioned offices (about 15 percent of the total pool) were almost twice as likely to take sick days or make a visit to an ear-nose-throat doctor.

This will come as no surprise to those who already shun the air conditioner and worship in the cult of fresh air. Like the opponents of A/C from a hundred years ago, they blame the sealed environment for creating a miasma of illness and disease. Well, of course it’s unhealthy to keep the windows closed; you need a natural breeze to blow all those spores and germs away. But their old-fashioned plea invites a response that’s just as antique. Why should the air be any fresher in summer than winter (when so few would let it in)? And what about the dangers that “fresh air” might pose in cities where the breeze swirls with soot and dust? A 2009 study in the journal Epidemiology confirmed that air conditioning can help stave off the effects of particulate matter in the environment. Researchers checked the health records of senior citizens who did or didn’t have air conditioners installed in their homes and found that those who were forced to leave their windows open in the summer—and suck down the dirty air outside—were more likely to end up in the hospital for pollution-related cardiovascular disease. Other studies have found similar correlations between a lack of A/C on sooty days and hospitalization for chronic obstructive pulmonary disease and pneumonia.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Crosley Air Conditioning / Treehugger.[end-div]

The Benefits of Self-Deception

Psychologists have long studied the causes and characteristics of deception. In recent times they have had a huge pool of talented liars from which to draw — bankers, mortgage lenders, Enron executives, borrowers, and of course politicians. Now, researchers have begun to look at the art of self-deception, with some interesting results. Self-deception may be a useful tool in influencing others.

[div class=attrib]From the Wall Street Journal:[end-div]

Lying to yourself—or self-deception, as psychologists call it—can actually have benefits. And nearly everybody does it, based on a growing body of research using new experimental techniques.

Self-deception isn’t just lying or faking, but is deeper and more complicated, says Del Paulhus, psychology professor at University of British Columbia and author of a widely used scale to measure self-deceptive tendencies. It involves strong psychological forces that keep us from acknowledging a threatening truth about ourselves, he says.

Believing we are more talented or intelligent than we really are can help us influence and win over others, says Robert Trivers, an anthropology professor at Rutgers University and author of “The Folly of Fools,” a 2011 book on the subject. An executive who talks himself into believing he is a great public speaker may not only feel better as he performs, but increase “how much he fools people, by having a confident style that persuades them that he’s good,” he says.

Researchers haven’t studied large population samples to compare rates of self-deception or compared men and women, but they know based on smaller studies that it is very common. And scientists in many different disciplines are drawn to studying it, says Michael I. Norton, an associate professor at Harvard Business School. “It’s also one of the most puzzling things that humans do.”

Researchers disagree over what exactly happens in the brain during self-deception. Social psychologists say people deceive themselves in an unconscious effort to boost self-esteem or feel better. Evolutionary psychologists, who say different parts of the brain can harbor conflicting beliefs at the same time, say self-deception is a way of fooling others to our own advantage.

In some people, the tendency seems to be an inborn personality trait. Others may develop a habit of self-deception as a way of coping with problems and challenges.

Behavioral scientists in recent years have begun using new techniques in the laboratory to predict when and why people are likely to deceive themselves. For example, they may give subjects opportunities to inflate their own attractiveness, skill or intelligence. Then, they manipulate such variables as subjects’ mood, promises of rewards or opportunities to cheat. They measure how the prevalence of self-deception changes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Truth or Consequences. Courtesy of CBS 1950-51 / Wikia.[end-div]

The Exceptionalism of American Violence

The United States is often cited as the most generous nation on Earth. Unfortunately, it is also one of the most violent, having one of the highest murder rates of any industrialized country. Why this tragic paradox?

In an absorbing article excerpted below, backed by sound research, anthropologist Eric Michael Johnson points to the lack of social capital on a local and national scale. Here, social capital is defined as interpersonal trust that promotes cooperation between citizens and groups for mutual benefit.

So, combine a culture that allows convenient access to very effective weapons with broad inequality, social isolation and distrust, and you get a very sobering picture — a country where roughly 35 people are killed each day by others wielding guns (25,423 firearm homicides over 2006-2007, based on Centers for Disease Control statistics).

[div class=attrib]From Scientific American:[end-div]

The United States is the deadliest wealthy country in the world. Can science help us explain, or even solve, our national crisis?

His tortured and sadistic grin beamed like a full moon on that dark night. “Madness, as you know, is like gravity,” he cackled. “All it takes is a little push.” But once the house lights rose, the terror was lifted for most of us. Few imagined that the fictive evil on screen back in 2008 would later inspire a depraved act of mass murder by a young man sitting with us in the audience, a student of neuroscience whose mind was teetering on the edge. What was it that pushed him over?

In the wake of the tragedy that struck Aurora, Colorado last Friday there remain more questions than answers. Just like last time–in January, 2011 when Congresswoman Gabrielle Giffords and 18 others were shot in Tucson, Arizona or before that in April, 2007 when a deranged gunman attacked students and staff at Virginia Tech–this senseless mass shooting has given rise to a national conversation as we struggle to find meaning in the madness.

While everyone agrees the blame should ultimately be placed on the perpetrator of this violence, the fact remains that the United States has one of the highest murder rates in the industrialized world. Of the 34 countries in the Organisation for Economic Co-operation and Development (OECD), the U.S. ranks fifth in homicides just behind Brazil (highest), Mexico, Russia, and Estonia. Our nation also holds the dubious honor of being responsible for half of the worst mass shootings in the last 30 years. How can we explain why the United States has nearly three times more murders per capita than neighboring Canada and ten times more than Japan? What makes the land of the free such a dangerous place to live?

Diagnosing a Murder

There have been hundreds of thoughtful explorations of this problem in the last week, though three in particular have encapsulated the major issues. Could it be, as science writer David Dobbs argues at Wired, that “an American culture that fetishizes violence,” such as the Batman franchise itself, has contributed to our fall? “Culture shapes the expression of mental dysfunction,” Dobbs writes, “just as it does other traits.”

Perhaps the push arrived with the collision of other factors, as veteran journalist Bill Moyers maintains, when the dark side of human nature encountered political allies who nurture our destructive impulses? “Violence is our alter ego, wired into our Stone Age brains,” he says. “The NRA is the best friend a killer’s instinct ever had.”

But then again maybe there is an economic explanation, as my Scientific American colleague John Horgan believes, citing a hypothesis by McMaster University evolutionary psychologists Martin Daly and his late wife Margo Wilson. “Daly and Wilson found a strong correlation between high Gini scores [a measure of inequality] and high homicide rates in Canadian provinces and U.S. counties,” Horgan writes, “blaming homicides not on poverty per se but on the collision of poverty and affluence, the ancient tug-of-war between haves and have-nots.”
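
To make the Gini reference concrete, here is a purely illustrative Python sketch (not Daly and Wilson’s data or code; the regional numbers below are invented) of how a Gini score is computed and then correlated with homicide rates across regions:

    import numpy as np

    def gini(incomes):
        # Gini coefficient via the relative mean absolute difference.
        x = np.asarray(incomes, dtype=float)
        mad = np.abs(x[:, None] - x[None, :]).mean()
        return mad / (2 * x.mean())

    # Hypothetical regions: Gini scores and homicide rates per 100,000 (made up).
    gini_scores = np.array([0.28, 0.31, 0.35, 0.40, 0.45])
    homicide_rates = np.array([1.5, 1.8, 3.2, 4.9, 6.1])

    r = np.corrcoef(gini_scores, homicide_rates)[0, 1]
    print(f"Pearson correlation: {r:.2f}")  # strongly positive for these toy numbers

A strong positive correlation in real provincial and county data is the pattern Daly and Wilson reported.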

In all three cases, as with other proposed culprits such as the lack of religion in public schools or the popularity of violent video games (both of which are present in other wealthy countries and so can be dismissed), commentators are looking at our society as a whole rather than at the specific details of the murderer’s background. The hope is that, if we can isolate the factor that pushes some people to murder their fellow citizens, perhaps we can alter our social environment and reduce the likelihood that these terrible acts will be repeated in the future. The only problem is: which one could it be?

The Exceptionalism of American Violence

As it turns out, the “social capital” that Sapolsky found made the Forest Troop baboons so peaceful is an important missing factor that helps explain the high homicide rate in the United States. In 1999 Ichiro Kawachi at the Harvard School of Public Health led a study investigating the factors behind American homicide for the journal Social Science and Medicine (pdf here). His diagnosis was dire.

“If the level of crime is an indicator of the health of society,” Kawachi wrote, “then the US provides an illustrative case study as one of the most unhealthy of modern industrialized nations.” The paper outlined what the most significant causal factors were for this exaggerated level of violence by developing what was called “an ecological theory of crime.” Whereas many other analyses of homicide take a criminal justice approach to the problem–such as the number of cops on the beat, harshness of prison sentences, or adoption of the death penalty–Kawachi used a public health perspective that emphasized social relations.

Data were collected for all 50 states and the District of Columbia using the General Social Survey, which measured social capital (defined as interpersonal trust that promotes cooperation between citizens for mutual benefit) along with measures of poverty and relative income inequality, homicide rates, incidence of other crimes–rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft–unemployment, percentage of high school graduates, and average alcohol consumption. Using a statistical method known as principal component analysis, Kawachi was then able to identify which ecologic variables were most associated with particular types of crime.
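
As a rough illustration of the kind of analysis described here (not Kawachi’s code or data; the values below are synthetic stand-ins for the measures named above), a principal component analysis of standardized state-level indicators might look like this in Python:

    import numpy as np
    from sklearn.decomposition import PCA
    from sklearn.preprocessing import StandardScaler

    rng = np.random.default_rng(0)
    n_states = 51  # 50 states plus the District of Columbia

    # Synthetic stand-ins for the ecologic variables measured in the study.
    variables = ["social_capital", "income_inequality", "poverty",
                 "unemployment", "hs_graduation", "alcohol_use"]
    X = rng.normal(size=(n_states, len(variables)))

    X_std = StandardScaler().fit_transform(X)
    pca = PCA(n_components=2).fit(X_std)

    # The loadings show which variables move together across states.
    for i, component in enumerate(pca.components_, start=1):
        print(f"PC{i} loadings:", dict(zip(variables, component.round(2))))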

The results were unambiguous: when income inequality was higher, so was the rate of homicide. Income inequality alone explained 74% of the variance in murder rates and half of the variance in aggravated assaults. However, social capital had an even stronger association and, by itself, accounted for 82% of homicides and 61% of assaults. Other factors such as unemployment, poverty, or the number of high school graduates were only weakly associated, and alcohol consumption had no connection to violent crime at all. A World Bank-sponsored study subsequently confirmed these results on income inequality, concluding that, worldwide, homicide and the unequal distribution of resources are inextricably tied (see Figure 2). However, the World Bank study didn’t measure social capital. According to Kawachi it is this factor that should be considered primary: when the ties that bind a community together are severed, inequality is allowed to run free, and with deadly consequences.
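
A brief aside on what “explained 74% of the variance” means: it is an R-squared figure, and for a single predictor R-squared is simply the square of the correlation coefficient. A toy sketch with invented numbers (not the paper’s data):

    import numpy as np

    inequality = np.array([0.38, 0.41, 0.43, 0.45, 0.47, 0.50])
    homicide_rate = np.array([2.1, 3.0, 3.4, 4.5, 5.2, 6.8])

    r = np.corrcoef(inequality, homicide_rate)[0, 1]

    # Fit a one-variable linear regression and compute R-squared directly.
    slope, intercept = np.polyfit(inequality, homicide_rate, 1)
    predicted = slope * inequality + intercept
    ss_res = ((homicide_rate - predicted) ** 2).sum()
    ss_tot = ((homicide_rate - homicide_rate.mean()) ** 2).sum()
    r_squared = 1 - ss_res / ss_tot

    print(f"r = {r:.2f}, r^2 = {r**2:.2f}, R^2 = {r_squared:.2f}")  # r^2 and R^2 match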

But what about guns? Multiple studies have shown a direct correlation between the number of guns and the number of homicides. The United States is the most heavily armed country in the world with 90 guns for every 100 citizens. Doesn’t this over-saturation of American firepower explain our exaggerated homicide rate? Maybe not. In a follow-up study in 2001 Kawachi looked specifically at firearm prevalence and social capital among U.S. states. The results showed that when social capital and community involvement declined, gun ownership increased (see Figure 3).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Smith & Wesson M&P Victory model revolver. Courtesy of Oleg Volk / Wikipedia.[end-div]

Crony Capitalism

We excerpt below a fascinating article from the WSJ on the increasingly incestuous and damaging relationship between the finance industry and our political institutions.

[div class=attrib]From the Wall Street Journal:[end-div]

Mitt Romney’s résumé at Bain should be a slam dunk. He has been a successful capitalist, and capitalism is the best thing that has ever happened to the material condition of the human race. From the dawn of history until the 18th century, every society in the world was impoverished, with only the thinnest film of wealth on top. Then came capitalism and the Industrial Revolution. Everywhere that capitalism subsequently took hold, national wealth began to increase and poverty began to fall. Everywhere that capitalism didn’t take hold, people remained impoverished. Everywhere that capitalism has been rejected since then, poverty has increased.

Capitalism has lifted the world out of poverty because it gives people a chance to get rich by creating value and reaping the rewards. Who better to be president of the greatest of all capitalist nations than a man who got rich by being a brilliant capitalist?

Yet it hasn’t worked out that way for Mr. Romney. “Capitalist” has become an accusation. The creative destruction that is at the heart of a growing economy is now seen as evil. Americans increasingly appear to accept the mind-set that kept the world in poverty for millennia: If you’ve gotten rich, it is because you made someone else poorer.

What happened to turn the mood of the country so far from our historic celebration of economic success?

Two important changes in objective conditions have contributed to this change in mood. One is the rise of collusive capitalism. Part of that phenomenon involves crony capitalism, whereby the people on top take care of each other at shareholder expense (search on “golden parachutes”).

But the problem of crony capitalism is trivial compared with the collusion engendered by government. In today’s world, every business’s operations and bottom line are affected by rules set by legislators and bureaucrats. The result has been corruption on a massive scale. Sometimes the corruption is retail, whereby a single corporation creates a competitive advantage through the cooperation of regulators or politicians (search on “earmarks”). Sometimes the corruption is wholesale, creating an industrywide potential for profit that would not exist in the absence of government subsidies or regulations (like ethanol used to fuel cars and low-interest mortgages for people who are unlikely to pay them back). Collusive capitalism has become visible to the public and increasingly defines capitalism in the public mind.

Another change in objective conditions has been the emergence of great fortunes made quickly in the financial markets. It has always been easy for Americans to applaud people who get rich by creating products and services that people want to buy. That is why Thomas Edison and Henry Ford were American heroes a century ago, and Steve Jobs was one when he died last year.

When great wealth is generated instead by making smart buy and sell decisions in the markets, it smacks of inside knowledge, arcane financial instruments, opportunities that aren’t accessible to ordinary people, and hocus-pocus. The good that these rich people have done in the process of getting rich is obscure. The benefits of more efficient allocation of capital are huge, but they are really, really hard to explain simply and persuasively. It looks to a large proportion of the public as if we’ve got some fabulously wealthy people who haven’t done anything to deserve their wealth.

The objective changes in capitalism as it is practiced plausibly account for much of the hostility toward capitalism. But they don’t account for the unwillingness of capitalists who are getting rich the old-fashioned way—earning it—to defend themselves.

I assign that timidity to two other causes. First, large numbers of today’s successful capitalists are people of the political left who may think their own work is legitimate but feel no allegiance to capitalism as a system or kinship with capitalists on the other side of the political fence. Furthermore, these capitalists of the left are concentrated where it counts most. The most visible entrepreneurs of the high-tech industry are predominantly liberal. So are most of the people who run the entertainment and news industries. Even leaders of the financial industry increasingly share the politics of George Soros. Whether measured by fundraising data or by the members of Congress elected from the ZIP Codes where they live, the elite centers with the most clout in the culture are filled with people who are embarrassed to identify themselves as capitalists, and it shows in the cultural effect of their work.

Another factor is the segregation of capitalism from virtue. Historically, the merits of free enterprise and the obligations of success were intertwined in the national catechism. McGuffey’s Readers, the books on which generations of American children were raised, have plenty of stories treating initiative, hard work and entrepreneurialism as virtues, but just as many stories praising the virtues of self-restraint, personal integrity and concern for those who depend on you. The freedom to act and a stern moral obligation to act in certain ways were seen as two sides of the same American coin. Little of that has survived.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Industrial Revolution brought about the end of true capitalism. Courtesy: Mansell/Time Life Pictures/Getty Images.[end-div]