Category Archives: Idea Soup

Women Seen as Body Parts; Men Seen as Bodies

Yet another study of gender differences reveals fascinating variation in the way we see and process others: men tend to be perceived as a whole, while women are more likely to be perceived as a collection of parts.

[div class=attrib]From Scientific American:[end-div]

A glimpse at the magazine rack in any supermarket checkout line will tell you that women are frequently the focus of sexual objectification. Now, new research finds that the brain actually processes images of women differently than those of men, contributing to this trend.

Women are more likely to be picked apart by the brain and seen as parts rather than a whole, according to research published online June 29 in the European Journal of Social Psychology. Men, on the other hand, are processed as a whole rather than the sum of their parts.

“Everyday, ordinary women are being reduced to their sexual body parts,” said study author Sarah Gervais, a psychologist at the University of Nebraska, Lincoln. “This isn’t just something that supermodels or porn stars have to deal with.”

Objectification hurts
Numerous studies have found that feeling objectified is bad for women. Being ogled can make women do worse on math tests, and self-sexualization, or scrutiny of one’s own shape, is linked to body shame, eating disorders and poor mood.

But those findings have all focused on the perception of being sexualized or objectified, Gervais told LiveScience. She and her colleagues wondered about the eye of the beholder: Are people really objectifying women more than men?

To find out, the researchers focused on two types of mental processing, global and local. Global processing is how the brain identifies objects as a whole. It tends to be used when recognizing people, where it’s not just important to know the shape of the nose, for example, but also how the nose sits in relation to the eyes and mouth. Local processing focuses more on the individual parts of an object. You might recognize a house by its door alone, for instance, while you’re less likely to recognize a person’s arm without the benefit of seeing the rest of their body.

If women are sexually objectified, people should process their bodies in a more local way, focusing on individual body parts like breasts. To test the idea, Gervais and her colleagues carried out two nearly identical experiments with a total of 227 undergraduate participants. Each person was shown non-sexualized photographs, each of either a young man or young woman, 48 in total. After seeing each original full-body image, the participants saw two side-by-side photographs. One was the original image, while the other was the original with a slight alteration to the chest or waist (chosen because these are sexualized body parts). Participants had to pick which image they’d seen before.

In some cases, the second set of photos zoomed in on the chest or waist only, asking participants to pick the body part they’d seen previously versus the one that had been altered.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: People focus on the parts of a woman’s body when processing her image, according to research published in June in the European Journal of Social Psychology. Courtesy of LiveScience / Yuri Arcurs, Shutterstock.[end-div]

Die Zombie, Die Zombie

Helen Sword cuts through (pun intended) the corporate-speak that continues to encroach upon our writing, particularly in business and academia, with a plea to kill our “zombie nouns”. Her latest book is “Stylish Academic Writing”.

[div class=attrib]From the New York Times:[end-div]

Take an adjective (implacable) or a verb (calibrate) or even another noun (crony) and add a suffix like ity, tion or ism. You’ve created a new noun: implacability, calibration, cronyism. Sounds impressive, right?

Nouns formed from other parts of speech are called nominalizations. Academics love them; so do lawyers, bureaucrats and business writers. I call them “zombie nouns” because they cannibalize active verbs, suck the lifeblood from adjectives and substitute abstract entities for human beings:

The proliferation of nominalizations in a discursive formation may be an indication of a tendency toward pomposity and abstraction.

The sentence above contains no fewer than seven nominalizations, each formed from a verb or an adjective. Yet it fails to tell us who is doing what. When we eliminate or reanimate most of the zombie nouns (tendency becomes tend, abstraction becomes abstract) and add a human subject and some active verbs, the sentence springs back to life:

Writers who overload their sentences with nominalizations tend to sound pompous and abstract.

Only one zombie noun – the key word nominalizations – has been allowed to remain standing.

At their best, nominalizations help us express complex ideas: perception, intelligence, epistemology. At their worst, they impede clear communication. I have seen academic colleagues become so enchanted by zombie nouns like heteronormativity and interpellation that they forget how ordinary people speak. Their students, in turn, absorb the dangerous message that people who use big words are smarter – or at least appear to be – than those who don’t.

In fact, the more abstract your subject matter, the more your readers will appreciate stories, anecdotes, examples and other handholds to help them stay on track. In her book “Darwin’s Plots,” the literary historian Gillian Beer supplements abstract nouns like evidence, relationships and beliefs with vivid verbs (rebuff, overturn, exhilarate) and concrete nouns that appeal to sensory experience (earth, sun, eyes):

Most major scientific theories rebuff common sense. They call on evidence beyond the reach of our senses and overturn the observable world. They disturb assumed relationships and shift what has been substantial into metaphor. The earth now only seems immovable. Such major theories tax, affront, and exhilarate those who first encounter them, although in fifty years or so they will be taken for granted, part of the apparently common-sense set of beliefs which instructs us that the earth revolves around the sun whatever our eyes may suggest.

Her subject matter – scientific theories – could hardly be more cerebral, yet her language remains firmly anchored in the physical world.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of PLOS (The Public Library of Science).[end-div]

Procrastination is a Good Thing

Procrastinators have known this for a long time: success comes from making a decision at the last possible moment.

Procrastinating professor Frank Partnoy expands on this theory in his book, “Wait: The Art and Science of Delay”.

[div class=attrib]From Smithsonian:[end-div]

Sometimes life seems to happen at warp speed. But decisions, says Frank Partnoy, should not. When the financial market crashed in 2008, the former investment banker and corporate lawyer, now a professor of finance and law and co-director of the Center for Corporate and Securities Law at the University of San Diego, turned his attention to literature on decision-making.

“Much recent research about decisions helps us understand what we should do or how we should do it, but it says little about when,” he says.

In his new book, Wait: The Art and Science of Delay, Partnoy claims that when faced with a decision, we should assess how long we have to make it, and then wait until the last possible moment to do so. Should we take his advice on how to “manage delay,” we will live happier lives.

It is not surprising that the author of a book titled Wait is a self-described procrastinator. In what ways do you procrastinate?

I procrastinate in just about every possible way and always have, since my earliest memories going back to when I first started going to elementary school and had these arguments with my mother about making my bed.

My mom would ask me to make my bed before going to school. I would say, no, because I didn’t see the point of making my bed if I was just going to sleep in it again that night. She would say, well, we have guests coming over at 6 o’clock, and they might come upstairs and look at your room. I said, I would make my bed when we know they are here. I want to see a car in the driveway. I want to hear a knock on the door. I know it will take me about one minute to make my bed so at 5:59, if they are here, I will make my bed.

I procrastinated all through college and law school. When I went to work at Morgan Stanley, I was delighted to find that although the pace of the trading floor is frenetic and people are very fast, there were lots of incredibly successful mentors of procrastination.

Now, I am an academic. As an academic, procrastination is practically a job requirement. If I were to say I would be submitting an academic paper by September 1, and I submitted it in August, people would question my character.

It has certainly been drilled into us that procrastination is a bad thing. Yet, you argue that we should embrace it. Why?

Historically, for human beings, procrastination has not been regarded as a bad thing. The Greeks and Romans generally regarded procrastination very highly. The wisest leaders embraced procrastination and would basically sit around and think and not do anything unless they absolutely had to.

The idea that procrastination is bad really started in the Puritanical era with Jonathan Edwards’s sermon against procrastination and then the American embrace of “a stitch in time saves nine,” and this sort of work ethic that required immediate and diligent action.

But if you look at recent studies, managing delay is an important tool for human beings. People are more successful and happier when they manage delay. Procrastination is just a universal state of being for humans. We will always have more things to do than we can possibly do, so we will always be imposing some sort of unwarranted delay on some tasks. The question is not whether we are procrastinating, it is whether we are procrastinating well.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of eHow.[end-div]

Re-resurgence of the United States

Those who have written off the United States in the 21st century may need to think again. A combination of healthy demographics, sound intellectual capital, institutionalized innovation and fracking (yes, fracking) has placed the U.S. on a solid footing for the future, despite current political and economic woes.

[div class=attrib]From the Wilson Quarterly:[end-div]

If the United States were a person, a plausible diagnosis could be made that it suffers from manic depression. The country’s self-perception is highly volatile, its mood swinging repeatedly from euphoria to near despair and back again. Less than a decade ago, in the wake of the deceptively easy triumph over the wretched legions of Saddam Hussein, the United States was the lonely superpower, the essential nation. Its free markets and free thinking and democratic values had demonstrated their superiority over all other forms of human organization. Today the conventional wisdom speaks of inevitable decline and of equally inevitable Chinese triumph; of an American financial system flawed by greed and debt; of a political system deadlocked and corrupted by campaign contributions, negative ads, and lobbyists; of a social system riven by disparities of income, education, and opportunity.

It was ever thus. The mood of justified triumph and national solidarity after global victory in 1945 gave way swiftly to an era of loyalty oaths, political witch-hunts, and Senator Joseph McCarthy’s obsession with communist moles. The Soviet acquisition of the atom bomb, along with the victory of Mao Zedong’s communist armies in China, had by the end of the 1940s infected America with the fear of existential defeat. That was to become a pattern; at the conclusion of each decade of the Cold War, the United States felt that it was falling behind. The successful launch of the Sputnik satellite in 1957 triggered fears that the Soviet Union was winning the technological race, and the 1960 presidential election was won at least in part by John F. Kennedy’s astute if disingenuous claim that the nation was threatened by a widening “missile gap.”

At the end of the 1960s, with cities burning in race riots, campuses in an uproar, and a miserably unwinnable war grinding through the poisoned jungles of Indochina, an American fear of losing the titanic struggle with communism was perhaps understandable. Only the farsighted saw the importance of the contrast between American elections and the ruthless swagger of the Red Army’s tanks crushing the Prague Spring of 1968. At the end of the 1970s, with American diplomats held hostage in Tehran, a Soviet puppet ruling Afghanistan, and glib talk of Soviet troops soon washing their feet in the Indian Ocean, Americans waiting in line for gasoline hardly felt like winners. Yet at the end of the 1980s, what a surprise! The Cold War was over and the good guys had won.

Naturally, there were many explanations for this, from President Ronald Reagan’s resolve to Mikhail Gorbachev’s decency; from American industrial prowess to Soviet inefficiency. The most cogent reason was that the United States back in the late 1940s had crafted a bipartisan grand strategy for the Cold War that proved to be both durable and successful. It forged a tripartite economic alliance of Europe, North America, and Japan, backed up by various regional treaty organizations such as NATO, and counted on scientists, inventors, business leaders, and a prosperous and educated work force to deliver both guns and butter for itself and its allies. State spending on defense and science would keep unemployment at bay while Social Security would ensure that the siren songs of communism had little to offer the increasingly comfortable workers of the West. And while the West waited for its wealth and technologies to attain overwhelming superiority, its troops, missiles, and nuclear deterrent would contain Soviet and Chinese hopes of expansion.

It worked. The Soviet Union collapsed, and the Chinese leadership drew the appropriate lessons. (The Chinese view was that by starting with glasnost and political reform, and ducking the challenge of economic reform, Gorbachev had gotten the dynamics of change the wrong way round.) But by the end of 1991, the Democrat who would win the next year’s New Hampshire primary (Senator Paul Tsongas of Massachusetts) had a catchy new campaign slogan: “The Cold War is over—and Japan won.” With the country in a mild recession and mega-rich Japanese investors buying up landmarks such as Manhattan’s Rockefeller Center and California’s Pebble Beach golf course, Tsongas’s theme touched a national chord. But the Japanese economy has barely grown since, while America’s gross domestic product has almost doubled.

There are, of course, serious reasons for concern about the state of the American economy, society, and body politic today. But remember, the United States is like the weather in Ireland; if you don’t like it, just wait a few minutes and it’s sure to shift. This is a country that has been defined by its openness to change and innovation, and the search for the latest and the new has transformed the country’s productivity and potential. This openness, in effect, was America’s secret weapon that won both World War II and the Cold War. We tend to forget that the Soviet Union fulfilled Nikita Khrushchev’s pledge in 1961 to outproduce the United States in steel, coal, cement, and fertilizer within 20 years. But by 1981 the United States was pioneering a new kind of economy, based on plastics, silicon, and transistors, while the Soviet Union lumbered on building its mighty edifice of obsolescence.

This is the essence of America that the doom mongers tend to forget. Just as we did after Ezra Cornell built the nationwide telegraph system and after Henry Ford developed the assembly line, we are again all living in a future invented in America. No other country produced, or perhaps even could have produced, the transformative combination of Microsoft, Apple, Google, Amazon, and Facebook. The American combination of universities, research, venture capital, marketing, and avid consumers is easy to envy but tough to emulate. It’s not just free enterprise. The Internet itself might never have been born but for the Pentagon’s Defense Advanced Research Projects Agency, and much of tomorrow’s future is being developed at the nanotechnology labs at the Argonne National Laboratory outside Chicago and through the seed money of Department of Energy research grants.

American research labs are humming with new game-changing technologies. One MIT-based team is using viruses to bind and create new materials to build better batteries, while another is using viruses to create catalysts that can turn natural gas into oil and plastics. A University of Florida team is pioneering a practical way of engineering solar cells from plastics rather than silicon. The Center for Bits and Atoms at MIT was at the forefront of the revolution in fabricators, assembling 3-D printers and laser milling and cutting machines into a factory-in-a-box that just needs data, raw materials, and a power source to turn out an array of products. Now that the latest F-18 fighters are flying with titanium parts that were made by a 3-D printer, you know the technology has taken off. Some 23,000 such printers were sold last year, most of them to the kind of garage tinkerers—many of them loosely grouped in the “maker movement” of freelance inventors—who more than 70 years ago created Hewlett-Packard and 35 years ago produced the first Apple personal computer.

The real game changer for America is the combination of two not-so-new technologies: hydraulic fracturing (“fracking”) of underground rock formations and horizontal drilling, which allows one well to spin off many more deep underground. The result has been a “frack gas” revolution. As recently as 2005, the U.S. government assumed that the country had about a 10-year supply of natural gas remaining. Now it knows that there is enough for at least several decades. In 2009, the United States outpaced Russia to become the world’s top natural gas producer. Just a few years ago, the United States had five terminals receiving imported liquefied natural gas (LNG), and permits had been issued to build 17 more. Today, one of the five plants is being converted to export U.S. gas, and the owners of three others have applied to do the same. (Two applications to build brand new export terminals are also pending.) The first export contract, worth $8 billion, was signed with Britain’s BG Group, a multinational oil and gas company. Sometime between 2025 and 2030, America is likely to become self-sufficient in energy again. And since imported energy accounts for about half of the U.S. trade deficit, fracking will be a game changer in more ways than one.

The supply of cheap and plentiful local gas is already transforming the U.S. chemical industry by making cheap feedstock available—ethylene, a key component of plastics, and other crucial chemicals are derived from natural gas in a process called ethane cracking. Many American companies have announced major projects that will significantly boost U.S. petrochemical capacity. In addition to expansions along the Gulf Coast, Shell Chemical plans to build a new ethane cracking plant in Pennsylvania, near the Appalachian Mountains’ Marcellus Shale geologic formation. LyondellBasell Industries is seeking to increase ethylene output at its Texas plants, and Williams Companies is investing $3 billion in Gulf Coast development. In short, billions of dollars will pour into regions of the United States that desperately need investment. The American Chemistry Council projects that over several years the frack gas revolution will create 400,000 new jobs, adding $130 billion to the economy and more than $4 billion in annual tax revenues. The prospect of cheap power also promises to improve the balance sheets of the U.S. manufacturing industry.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Time Flows Uphill

Many people in industrialized countries describe time as flowing like a river: it flows back into the past, and it flows forward into the future. Of course, for bored workers time sometimes stands still, while for kids on summer vacation it flows all too quickly. And for many people over, say, the age of forty, the days often drag but the years fly by.

For some, time flows uphill, and it flows downhill.

[div class=attrib]From New Scientist:[end-div]

“HERE and now”, “Back in the 1950s”, “Going forward”… Western languages are full of spatial metaphors for time, and whether you are, say, British, French or German, you no doubt think of the past as behind you and the future as stretching out ahead. Time is a straight line that runs through your body.

Once thought to be universal, this “embodied cognition of time” is in fact strictly cultural. Over the past decade, encounters with various remote tribal societies have revealed a rich diversity of the ways in which humans relate to time (see “Attitudes across the latitudes”). The latest, coming from the Yupno people of Papua New Guinea, is perhaps the most remarkable. Time for the Yupno flows uphill and is not even linear.

Rafael Núñez of the University of California, San Diego, led his team into the Finisterre mountain range of north-east Papua New Guinea to study the Yupno living in the village of Gua. There are no roads in this remote region. The Yupno have no electricity or even domestic animals to work the land. They live with very little contact with the western world.

Núñez and his colleagues noticed that the tribespeople made spontaneous gestures when speaking about the past, present and future. They filmed and analysed the gestures and found that for the Yupno the past is always downhill, in the direction of the mouth of the local river. The future, meanwhile, is towards the river’s source, which lies uphill from Gua.

This was true regardless of the direction they were facing. For instance, if they were facing downhill when talking about the future, a person would gesture backwards up the slope. But when they turned around to face uphill, they pointed forwards.

Núñez thinks the explanation is historical. The Yupno’s ancestors arrived by sea and climbed up the 2500-metre-high mountain valley, so lowlands may represent the past, and time flows uphill.

But the most unusual aspect of the Yupno timeline is its shape. The village of Gua, the river’s source and its mouth do not lie in a straight line, so the timeline is kinked. “This is the first time ever that a culture has been documented to have everyday notions of time anchored in topographic properties,” says Núñez.

Within the dark confines of their homes, geographical landmarks disappear and the timeline appears to straighten out somewhat. The Yupno always point towards the doorway when talking about the past, and away from the door to indicate the future, regardless of their home’s orientation. That could be because entrances are always raised, says Núñez. You have to climb down – towards the past – to leave the house, so each home has its own timeline.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Persistence of Memory, by Salvador Dalí. Courtesy of Salvador Dalí, Gala-Salvador Dalí Foundation / Artists Rights Society (ARS), Museum of Modern Art New York / Wikipedia.[end-div]

Corporate Corruption: Greed, Lies and Nothing New

The last couple of decades have seen some remarkable cases of corporate excess and corruption. The deep-rooted human inclinations toward greed, deceit and questionable ethics can probably be traced to the dawn of bipedalism. In more recent times, however, misdeeds, particularly in the business world, have grown in daring, scale and impact.

We’ve seen WorldCom overstating its cashflows, Parmalat falsifying accounts, Lehman Brothers (and other investment banks) hiding critical information from investors, Enron cooking all of its books, Bernard Madoff marketing his immense Ponzi scheme, Halliburton overcharging on government contracts, Tyco executives looting their own company, Wells Fargo and other retail banks robo-signing foreclosure documents, investment banks selling questionable products to investors and then betting against them, and, most recently, Barclays and other big banks manipulating interest rates.

These tales of gluttony and wrongdoing are a dream for social scientists; the public, meanwhile, tends to let the fat cats just get fatter and nastier. And where are the regulators, legislators and enforcers of the law? Generally asleep at the wheel, or in bed, so to speak, with their corporate donors. No wonder we all yawn at the latest scandal. Yet some suggest this complacency undermines the very foundations of Western capitalism.

[div class=attrib]From the New York Times:[end-div]

Perhaps the most surprising aspect of the Libor scandal is how familiar it seems. Sure, for some of the world’s leading banks to try to manipulate one of the most important interest rates in contemporary finance is clearly egregious. But is that worse than packaging billions of dollars worth of dubious mortgages into a bond and having it stamped with a Triple-A rating to sell to some dupe down the road while betting against it? Or how about forging documents on an industrial scale to foreclose fraudulently on countless homeowners?

The misconduct of the financial industry no longer surprises most Americans. Only about one in five has much trust in banks, according to Gallup polls, about half the level in 2007. And it’s not just banks that are frowned upon. Trust in big business overall is declining. Sixty-two percent of Americans believe corruption is widespread across corporate America. According to Transparency International, an anticorruption watchdog, nearly three in four Americans believe that corruption has increased over the last three years.

We should be alarmed that corporate wrongdoing has come to be seen as such a routine occurrence. Capitalism cannot function without trust. As the Nobel laureate Kenneth Arrow observed, “Virtually every commercial transaction has within itself an element of trust.”

The parade of financiers accused of misdeeds, booted from the executive suite and even occasionally jailed, is undermining this essential element. Have corporations lost whatever ethical compass they once had? Or does it just look that way because we are paying more attention than we used to?

This is hard to answer because fraud and corruption are impossible to measure precisely. Perpetrators understandably do their best to hide the dirty deeds from public view. And public perceptions of fraud and corruption are often colored by people’s sense of dissatisfaction with their lives.

Last year, the economists Justin Wolfers and Betsey Stevenson from the University of Pennsylvania published a study suggesting that trust in government and business falls when unemployment rises. “Much of the recent decline in confidence — particularly in the financial sector — may simply be a standard response to a cyclical downturn,” they wrote.

And waves of mistrust can spread broadly. After years of dismal employment prospects, Americans are losing trust in a broad range of institutions, including Congress, the Supreme Court, the presidency, public schools, labor unions and the church.

Corporate wrongdoing may be cyclical, too. Fraud is probably more lucrative, as well as easier to hide, amid the general prosperity of economic booms. And the temptation to bend the rules is probably highest toward the end of an economic upswing, when executives must be the most creative to keep the stream of profits rolling in.

The most toxic, no-doc, reverse amortization, liar loans flourished toward the end of the housing bubble. And we typically discover fraud only after the booms have turned to bust. As Warren Buffett famously said, “You only find out who is swimming naked when the tide goes out.”

Company executives are paid to maximize profits, not to behave ethically. Evidence suggests that they behave as corruptly as they can, within whatever constraints are imposed by law and reputation. In 1977, the United States Congress passed the Foreign Corrupt Practices Act, to stop the rampant practice of bribing foreign officials. Business by American multinationals in the most corrupt countries dropped. But they didn’t stop bribing. And American companies have been lobbying against the law ever since.

Extrapolating from frauds that were uncovered during and after the dot-com bubble, the economists Luigi Zingales and Adair Morse of the University of Chicago and Alexander Dyck of the University of Toronto estimated conservatively that in any given year a fraud was being committed by 11 to 13 percent of the large companies in the country.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Mug shot of Charles Ponzi (March 3, 1882 – January 18, 1949). Charles Ponzi was born in Italy and became known as a swindler for his money scheme. His aliases include Charles Ponei, Charles P. Bianchi, Carl and Carlo. Courtesy of Wikipedia.[end-div]

London’s Telephone Box

London’s bright red telephone boxes (booths, for our readers in the United States) are as iconic and recognizable as the Queen or Big Ben looming over the Houses of Parliament. Once as ubiquitous as the distinctive London bobby’s (police officer’s) helmet, many of these red cast-iron chambers have now been made redundant by mobile phones. As a result BT has taken to auctioning some of its telephone boxes for a very good cause — ChildLine’s 25th anniversary. Though not before each is painted or re-imagined by an artist or designer. Check out our five favorites below, and see all of BT’s colorful “ArtBoxes” here.

Accessorize

Proud of its London heritage, Accessorize gives the ArtBox its trademark Union Jack design – customized and embellished in true Accessorize fashion.

Big Ben BT ArtBox

When Mandii first came to London from New Zealand, one of the first sights she wanted to see was Big Ben.

Peekaboo

Take a look and see what you find.

Evoking memories of the childhood game of hide and seek, ‘Peekaboo’ invites you to consider issues of loneliness and neglect, and the role of the ‘finder’, which can be attributed to ChildLine.

Slip

A phonebox troubled by a landslide. Just incredible.

Londontotem

Loving the block colours and character designs. Their jolly spirit is infectious; I mean, just look at their faces! The PhoneBox is like a mini street ornament in London, isn’t it? A proper little totem pole in its own right!

[div class=attrib]Read more about BT’s ArtBox project after the jump.[end-div]

[div class=attrib]Images courtesy of BT.[end-div]

Truthiness 101

Strangely and ironically, it takes a satirist to tell the truth. And, of course, academics now study the phenomenon.

[div class=attrib]From the Washington Post:[end-div]

Nation, our so-called universities are in big trouble, and not just because attending one of them leaves you with more debt than the Greek government. No, we’re talking about something even more unsettling: the academic world’s obsession with Stephen Colbert.

Last we checked, Colbert was a mere TV comedian, or a satirist if you want to get fancy about it. (And, of course, being college professors, they do.) He’s a TV star, like Donald Trump, only less of a caricature.

Yet ever since Colbert’s show, “The Colbert Report,” began airing on Comedy Central in 2005, these ivory-tower eggheads have been devoting themselves to studying all things Colbertian. They’ve sliced and diced his comic stylings more ways than a Ginsu knife. Every academic discipline — well, among the liberal arts, at least — seems to want a piece of him. Political science. Journalism. Philosophy. Race relations. Communications studies. Theology. Linguistics. Rhetoric.

There are dozens of scholarly articles, monographs, treatises and essays about Colbert, as well as books of scholarly articles, monographs and essays. A University of Oklahoma student even earned her doctorate last year by examining him and his “Daily Show” running mate Jon Stewart. It was called “Political Humor and Third-Person Perception.”

The academic cult of Colbert (or is it “the cul of Colbert”?) is everywhere. Here’s a small sample:

• “Is Stephen Colbert America’s Socrates?,” chapter heading in “Stephen Colbert and Philosophy: I Am Philosophy (And So Can You!),” published by Open Court, 2009.

• “The Wørd Made Fresh: A Theological Exploration of Stephen Colbert,” published in Concepts (“an interdisciplinary journal of graduate studies”), Villanova University, 2010.

• “It’s All About Meme: The Art of the Interview and the Insatiable Ego of the Colbert Bump,” chapter heading in “The Stewart/Colbert Effect: Essays on the Real Impacts of Fake News,” published by McFarland Press, 2011.

• “The Irony of Satire: Political Ideology and the Motivation to See What You Want to See in The Colbert Report,” a 2009 study in the International Journal of Press/Politics that its authors described as an investigation of “biased message processing” and “the influence of political ideology on perceptions of Stephen Colbert.” After much study, the authors found “no significant difference between [conservatives and liberals] in thinking Colbert was funny.”

Colbert-ism has insinuated itself into the undergraduate curriculum, too.

Boston University has offered a seminar called “The Colbert Report: American Satire” for the past two years, which explores Colbert’s use of “syllogism, logical fallacy, burlesque, and travesty,” as lecturer Michael Rodriguez described it on the school’s Web site.

This fall, Towson University will roll out a freshman seminar on politics and popular culture, with Colbert as its focus.

All this for a guy who would undoubtedly mock-celebrate the serious study of himself.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Colbert Report. Courtesy of Business Insider / Comedy Central.[end-div]

Extreme Equals Happy, Moderate Equals Unhappy

[div class=attrib]From the New York Times:[end-div]

WHO is happier about life — liberals or conservatives? The answer might seem straightforward. After all, there is an entire academic literature in the social sciences dedicated to showing conservatives as naturally authoritarian, dogmatic, intolerant of ambiguity, fearful of threat and loss, low in self-esteem and uncomfortable with complex modes of thinking. And it was the candidate Barack Obama in 2008 who infamously labeled blue-collar voters “bitter,” as they “cling to guns or religion.” Obviously, liberals must be happier, right?

Wrong. Scholars on both the left and right have studied this question extensively, and have reached a consensus that it is conservatives who possess the happiness edge. Many data sets show this. For example, the Pew Research Center in 2006 reported that conservative Republicans were 68 percent more likely than liberal Democrats to say they were “very happy” about their lives. This pattern has persisted for decades. The question isn’t whether this is true, but why.

Many conservatives favor an explanation focusing on lifestyle differences, such as marriage and faith. They note that most conservatives are married; most liberals are not. (The percentages are 53 percent to 33 percent, according to my calculations using data from the 2004 General Social Survey, and almost none of the gap is due to the fact that liberals tend to be younger than conservatives.) Marriage and happiness go together. If two people are demographically the same but one is married and the other is not, the married person will be 18 percentage points more likely to say he or she is very happy than the unmarried person.

An explanation for the happiness gap more congenial to liberals is that conservatives are simply inattentive to the misery of others. If they recognized the injustice in the world, they wouldn’t be so cheerful. In the words of Jaime Napier and John Jost, New York University psychologists, in the journal Psychological Science, “Liberals may be less happy than conservatives because they are less ideologically prepared to rationalize (or explain away) the degree of inequality in society.” The academic parlance for this is “system justification.”

The data show that conservatives do indeed see the free enterprise system in a sunnier light than liberals do, believing in each American’s ability to get ahead on the basis of achievement. Liberals are more likely to see people as victims of circumstance and oppression, and doubt whether individuals can climb without governmental help. My own analysis using 2005 survey data from Syracuse University shows that about 90 percent of conservatives agree that “While people may begin with different opportunities, hard work and perseverance can usually overcome those disadvantages.” Liberals — even upper-income liberals — are a third less likely to say this.

So conservatives are ignorant, and ignorance is bliss, right? Not so fast, according to a study from the University of Florida psychologists Barry Schlenker and John Chambers and the University of Toronto psychologist Bonnie Le in the Journal of Research in Personality. These scholars note that liberals define fairness and an improved society in terms of greater economic equality. Liberals then condemn the happiness of conservatives, because conservatives are relatively untroubled by a problem that, it turns out, their political counterparts defined.

There is one other noteworthy political happiness gap that has gotten less scholarly attention than conservatives versus liberals: moderates versus extremists.

Political moderates must be happier than extremists, it always seemed to me. After all, extremists actually advertise their misery with strident bumper stickers that say things like, “If you’re not outraged, you’re not paying attention!”

But it turns out that’s wrong. People at the extremes are happier than political moderates. Correcting for income, education, age, race, family situation and religion, the happiest Americans are those who say they are either “extremely conservative” (48 percent very happy) or “extremely liberal” (35 percent). Everyone else is less happy, with the nadir at dead-center “moderate” (26 percent).

What explains this odd pattern? One possibility is that extremists have the whole world figured out, and sorted into good guys and bad guys. They have the security of knowing what’s wrong, and whom to fight. They are the happy warriors.

Whatever the explanation, the implications are striking. The Occupy Wall Street protesters may have looked like a miserable mess. In truth, they were probably happier than the moderates making fun of them from the offices above. And none, it seems, are happier than the Tea Partiers, many of whom cling to guns and faith with great tenacity. Which some moderately liberal readers of this newspaper might find quite depressing.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Psychology Today.[end-div]

Resurgence of Western Marxism

The death knell for Western capitalism has yet to sound. However, increasing economic turmoil, continued shenanigans in the financial industry, burgeoning inequity and acute global political unease are combining to undermine the appeal of capitalism to a growing number of young people. Welcome to Marxism 2.012.

[div class=attrib]From the Guardian:[end-div]

Class conflict once seemed so straightforward. Marx and Engels wrote in the second best-selling book of all time, The Communist Manifesto: “What the bourgeoisie therefore produces, above all, are its own grave-diggers. Its fall and the victory of the proletariat are equally inevitable.” (The best-selling book of all time, incidentally, is the Bible – it only feels like it’s 50 Shades of Grey.)

Today, 164 years after Marx and Engels wrote about grave-diggers, the truth is almost the exact opposite. The proletariat, far from burying capitalism, are keeping it on life support. Overworked, underpaid workers ostensibly liberated by the largest socialist revolution in history (China’s) are driven to the brink of suicide to keep those in the west playing with their iPads. Chinese money bankrolls an otherwise bankrupt America.

The irony is scarcely wasted on leading Marxist thinkers. “The domination of capitalism globally depends today on the existence of a Chinese Communist party that gives de-localised capitalist enterprises cheap labour to lower prices and deprive workers of the rights of self-organisation,” says Jacques Rancière, the French Marxist thinker and professor of philosophy at the University of Paris VIII. “Happily, it is possible to hope for a world less absurd and more just than today’s.”

That hope, perhaps, explains another improbable truth of our economically catastrophic times – the revival in interest in Marx and Marxist thought. Sales of Das Kapital, Marx’s masterpiece of political economy, have soared ever since 2008, as have those of The Communist Manifesto and the Grundrisse (or, to give it its English title, Outlines of the Critique of Political Economy). Their sales rose as British workers bailed out the banks to keep the degraded system going and the snouts of the rich firmly in their troughs while the rest of us struggle in debt, job insecurity or worse. There’s even a Chinese theatre director called He Nian who capitalised on Das Kapital’s renaissance to create an all-singing, all-dancing musical.

And in perhaps the most lovely reversal of the luxuriantly bearded revolutionary theorist’s fortunes, Karl Marx was recently chosen from a list of 10 contenders to appear on a new issue of MasterCard by customers of German bank Sparkasse in Chemnitz. In communist East Germany from 1953 to 1990, Chemnitz was known as Karl Marx Stadt. Clearly, more than two decades after the fall of the Berlin Wall, the former East Germany hasn’t airbrushed its Marxist past. In 2008, Reuters reports, a survey of east Germans found 52% believed the free-market economy was “unsuitable” and 43% said they wanted socialism back. Karl Marx may be dead and buried in Highgate cemetery, but he’s alive and well among credit-hungry Germans. Would Marx have appreciated the irony of his image being deployed on a card to get Germans deeper in debt? You’d think.

Later this week in London, several thousand people will attend Marxism 2012, a five-day festival organised by the Socialist Workers’ Party. It’s an annual event, but what strikes organiser Joseph Choonara is how, in recent years, many more of its attendees are young. “The revival of interest in Marxism, especially for young people comes because it provides tools for analysing capitalism, and especially capitalist crises such as the one we’re in now,” Choonara says.

There has been a glut of books trumpeting Marxism’s relevance. English literature professor Terry Eagleton last year published a book called Why Marx Was Right. French Maoist philosopher Alain Badiou published a little red book called The Communist Hypothesis with a red star on the cover (very Mao, very now) in which he rallied the faithful to usher in the third era of the communist idea (the previous two having gone from the establishment of the French Republic in 1792 to the massacre of the Paris communards in 1871, and from 1917 to the collapse of Mao’s Cultural Revolution in 1976). Isn’t this all a delusion?

Aren’t Marx’s venerable ideas as useful to us as the hand loom would be to shoring up Apple’s reputation for innovation? Isn’t the dream of socialist revolution and communist society an irrelevance in 2012? After all, I suggest to Rancière, the bourgeoisie has failed to produce its own gravediggers. Rancière refuses to be downbeat: “The bourgeoisie has learned to make the exploited pay for its crisis and to use them to disarm its adversaries. But we must not reverse the idea of historical necessity and conclude that the current situation is eternal. The gravediggers are still here, in the form of workers in precarious conditions like the over-exploited workers of factories in the far east. And today’s popular movements – Greece or elsewhere – also indicate that there’s a new will not to let our governments and our bankers inflict their crisis on the people.”

That, at least, is the perspective of a seventysomething Marxist professor. What about younger people of a Marxist temper? I ask Jaswinder Blackwell-Pal, a 22-year-old who has just finished her BA in English and drama at Goldsmiths College, London, why she considers Marxist thought still relevant. “The point is that younger people weren’t around when Thatcher was in power or when Marxism was associated with the Soviet Union,” she says. “We tend to see it more as a way of understanding what we’re going through now. Think of what’s happening in Egypt. When Mubarak fell it was so inspiring. It broke so many stereotypes – democracy wasn’t supposed to be something that people would fight for in the Muslim world. It vindicates revolution as a process, not as an event. So there was a revolution in Egypt, and a counter-revolution and a counter-counter revolution. What we learned from it was the importance of organisation.”

This, surely is the key to understanding Marxism’s renaissance in the west: for younger people, it is untainted by association with Stalinist gulags. For younger people too, Francis Fukuyama’s triumphalism in his 1992 book The End of History – in which capitalism seemed incontrovertible, its overthrow impossible to imagine – exercises less of a choke-hold on their imaginations than it does on those of their elders.

Blackwell-Pal will be speaking Thursday on Che Guevara and the Cuban revolution at the Marxism festival. “It’s going to be the first time I’ll have spoken on Marxism,” she says nervously. But what’s the point thinking about Guevara and Castro in this day and age? Surely violent socialist revolution is irrelevant to workers’ struggles today? “Not at all!” she replies. “What’s happening in Britain is quite interesting. We have a very, very weak government mired in in-fighting. I think if we can really organise we can oust them.” Could Britain have its Tahrir Square, its equivalent to Castro’s 26th of July Movement? Let a young woman dream. After last year’s riots and today with most of Britain alienated from the rich men in its government’s cabinet, only a fool would rule it out.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Portrait of Karl Marx. Courtesy of International Institute of Social History in Amsterdam, Netherlands / Wikipedia.[end-div]

Busyness As Chronic Illness

Apparently, being busy serves as a hedge against existential dread. So, if your roughly 16 or more hours of wakefulness each day are crammed with memos, driving, meetings, widgets, calls, charts, quotas, angry customers, school lunches, deciding, reports, bank statements, kids, budgets, bills, baking, making, fixing, cleaning and mad bosses, then your life must be meaningful, right?

Think again.

Author Tim Kreider muses below on this chronic state of affairs, and hits a nerve when he suggests, “I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.”

[div class=attrib]From the New York Times:[end-div]

If you live in America in the 21st century you’ve probably had to listen to a lot of people tell you how busy they are. It’s become the default response when you ask anyone how they’re doing: “Busy!” “So busy.” “Crazy busy.” It is, pretty obviously, a boast disguised as a complaint. And the stock response is a kind of congratulation: “That’s a good problem to have,” or “Better than the opposite.”

Notice it isn’t generally people pulling back-to-back shifts in the I.C.U. or commuting by bus to three minimum-wage jobs  who tell you how busy they are; what those people are is not busy but tired. Exhausted. Dead on their feet. It’s almost always people whose lamented busyness is purely self-imposed: work and obligations they’ve taken on voluntarily, classes and activities they’ve “encouraged” their kids to participate in. They’re busy because of their own ambition or drive or anxiety, because they’re addicted to busyness and dread what they might have to face in its absence.

Almost everyone I know is busy. They feel anxious and guilty when they aren’t either working or doing something to promote their work. They schedule in time with friends the way students with 4.0 G.P.A.’s  make sure to sign up for community service because it looks good on their college applications. I recently wrote a friend to ask if he wanted to do something this week, and he answered that he didn’t have a lot of time but if something was going on to let him know and maybe he could ditch work for a few hours. I wanted to clarify that my question had not been a preliminary heads-up to some future invitation; this was the invitation. But his busyness was like some vast churning noise through which he was shouting out at me, and I gave up trying to shout back over it.

Even children are busy now, scheduled down to the half-hour with classes and extracurricular activities. They come home at the end of the day as tired as grown-ups. I was a member of the latchkey generation and had three hours of totally unstructured, largely unsupervised time every afternoon, time I used to do everything from surfing the World Book Encyclopedia to making animated films to getting together with friends in the woods to chuck dirt clods directly into one another’s eyes, all of which provided me with important skills and insights that remain valuable to this day. Those free hours became the model for how I wanted to live the rest of my life.

The present hysteria is not a necessary or inevitable condition of life; it’s something we’ve chosen, if only by our acquiescence to it. Not long ago I  Skyped with a friend who was driven out of the city by high rent and now has an artist’s residency in a small town in the south of France. She described herself as happy and relaxed for the first time in years. She still gets her work done, but it doesn’t consume her entire day and brain. She says it feels like college — she has a big circle of friends who all go out to the cafe together every night. She has a boyfriend again. (She once ruefully summarized dating in New York: “Everyone’s too busy and everyone thinks they can do better.”) What she had mistakenly assumed was her personality — driven, cranky, anxious and sad — turned out to be a deformative effect of her environment. It’s not as if any of us wants to live like this, any more than any one person wants to be part of a traffic jam or stadium trampling or the hierarchy of cruelty in high school — it’s something we collectively force one another to do.

Busyness serves as a kind of existential reassurance, a hedge against emptiness; obviously your life cannot possibly be silly or trivial or meaningless if you are so busy, completely booked, in demand every hour of the day. I once knew a woman who interned at a magazine where she wasn’t allowed to take lunch hours out, lest she be urgently needed for some reason. This was an entertainment magazine whose raison d’être was obviated when “menu” buttons appeared on remotes, so it’s hard to see this pretense of indispensability as anything other than a form of institutional self-delusion. More and more people in this country no longer make or do anything tangible; if your job wasn’t performed by a cat or a boa constrictor in a Richard Scarry book I’m not sure I believe it’s necessary. I can’t help but wonder whether all this histrionic exhaustion isn’t a way of covering up the fact that most of what we do doesn’t matter.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Entrepreneur.com.[end-div]

Ignorance [is] the Root and Stem of All Evil

Writing in Classical Greece some 2,400 years ago, Plato gave our contemporary world many important intellectual gifts. His broad interests in justice, mathematics, virtue, epistemology, rhetoric and art laid the foundations for Western philosophy and science. Yet in his quest for deeper and broader knowledge he also had some important things to say about ignorance.

Massimo Pigliucci over at Rationally Speaking gives us his take on Platonic ignorance. His caution is appropriate: in this age of information overload and extreme politicization it is ever more important for us to realize and acknowledge our own ignorance. Spreading falsehoods and presenting opinion as fact to others — transferred ignorance — is rightly identified by Plato as a moral failing. In his own (translated, of course) words, “Ignorance [is] the Root and Stem of All Evil”.

[div class=attrib]From Rationally Speaking:[end-div]

Plato famously maintained that knowledge is “justified true belief,” meaning that to claim the status of knowledge our beliefs (say, that the earth goes around the sun, rather than the other way around) have to be both true (to the extent this can actually be ascertained) and justified (i.e., we ought to be able to explain to others why we hold such beliefs, otherwise we are simply repeating the — possibly true — beliefs of someone else).

It is the “justified” part that is humbling, since a moment’s reflection will show that a large number of things we think we know we actually cannot justify, which means that we are simply trusting someone else’s authority on the matter. (Which is okay, as long as we realize and acknowledge that to be the case.)

I was recently intrigued, however, not by Plato’s well known treatment of knowledge, but by his far less discussed views on the opposite of knowledge: ignorance. The occasion for these reflections was a talk by Katja Maria Vogt of Columbia University, delivered at CUNY’s Graduate Center, where I work. Vogt began by recalling the ancient skeptics’ attitude toward ignorance, as a “conscious positive stand,” meaning that skepticism is founded on one’s realization of his own ignorance. In this sense, of course, Socrates’ contention that he knew nothing becomes neither a self-contradiction (isn’t he saying that he knows that he knows nothing, thereby acknowledging that he knows something?), nor false modesty. Socrates was simply saying that he was aware of having no expertise while at the same time devoting his life to the quest for knowledge.

Vogt was particularly interested in Plato’s concept of “transferred ignorance,” which the ancient philosopher singled out as morally problematic. Transferred ignorance is the case when someone imparts “knowledge” that he is not aware is in fact wrong. Let us say, for instance, that I tell you that vaccines cause autism, and I do so on the basis of my (alleged) knowledge of biology and other pertinent matters, while, in fact, I am no medical researcher and have only vague notions of how vaccines actually work (i.e., imagine my name is Jenny McCarthy).

The problem, for Plato, is that in a sense I would be thinking of myself as smarter than I actually am, which of course carries a feeling of power over others. I wouldn’t simply be mistaken in my beliefs, I would be mistaken in my confidence in those beliefs. It is this willful ignorance (after all, I did not make a serious attempt to learn about biology or medical research) that carries moral implications.

So for Vogt the ancient Greeks distinguished between two types of ignorance: the self-aware, Socratic one (which is actually good) and the self-oblivious one of the overconfident person (which is bad). Need I point out that far too little of the former and too much of the latter permeate current political and social discourse? Of course, I’m sure a historian could easily come up with a plethora of examples of bad ignorance throughout human history, all the way back to the beginning of recorded time, but it does strike me that the increasingly fact-free public discourse on issues varying from economic policies to scientific research has brought Platonic transferred ignorance to never before achieved peaks (or, rather, valleys).

And I suspect that this is precisely because of the lack of appreciation of the moral dimension of transferred or willful ignorance. When politicians or commentators make up “facts” — or disregard actual facts to serve their own ideological agendas — they sometimes seem genuinely convinced that they are doing something good, at the very least for their constituents, and possibly for humanity at large. But how can it be good — in the moral sense — to make false knowledge one’s own, and even to actively spread it to others?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Socrates and Plato in a medieval picture. Courtesy of Wikipedia.[end-div]

Have a Laugh, Blame Twitter

Correlate two sets of totally independent statistics and you get to blame Twitter for most, if not all, of the world’s ills. That’s what Tim Cooley has done with this funny and informative #Blame Twitter infographic below.

Of course, even though the numbers are all verified and trusted, causation is another matter entirely. So, while 144,595 people die each day (on average), they do not (yet) do so as a result of using Twitter, and while our planet loses 1 hectare of forest for every 18,000 tweets, it is not the endless Twittering that is causing deforestation.
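The statistical point behind the joke is easy to demonstrate. Here is a minimal sketch in Python (all numbers invented for illustration) showing that two quantities which merely trend upward over time, such as daily tweet volume and cumulative forest loss, correlate almost perfectly even though neither causes the other.

import numpy as np

rng = np.random.default_rng(0)
days = np.arange(365)

# Two independent, made-up series that both happen to grow over time.
tweets_per_day = 4e8 + 5e5 * days + rng.normal(0, 5e6, days.size)
forest_lost_ha = 13_000 * days + rng.normal(0, 50_000, days.size)  # cumulative loss

# A shared upward trend is enough to produce a near-perfect correlation.
r = np.corrcoef(tweets_per_day, forest_lost_ha)[0, 1]
print(f"Pearson r = {r:.3f}")  # close to 1.0, yet there is no causal link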

[div class=attrib]Infographic courtesy of Tim Cooley.[end-div]

Child Mutilation and Religious Ritual

A court in Germany recently ruled that circumcising boys at birth for religious reasons is unlawful. Quite understandably, the court saw that the practice violates the child’s bodily integrity. Aside from being morally repugnant to many theists and non-believers alike, the practice inflicts pain. So, why do some religions continue to circumcise children?

[div class=attrib]From Slate:[end-div]

A German court ruled on Tuesday that parents may not circumcise their sons at birth for religious reasons, because the procedure violates the child’s right to bodily integrity. Both Muslims and Jews circumcise their male children. Why is Christianity the only Abrahamic religion that doesn’t encourage circumcision?

Because Paul believed faith was more important than foreskin. Shortly after Jesus’ death, his followers had a disagreement over the nature of his message. Some acolytes argued that he offered salvation through Judaism, so gentiles who wanted to join his movement should circumcise themselves like any other Jew. The apostle Paul, however, believed that faith in Jesus was the only requirement for salvation. Paul wrote that Jews who believed in Christ could go on circumcising their children, but he urged gentiles not to circumcise themselves or their sons, because trying to mimic the Jews represented a lack of faith in Christ’s ability to save them. By the time that the Book of Acts was written in the late first or early second century, Paul’s position seems to have become the dominant view of Christian theologians. Gentiles were advised to follow only the limited set of laws—which did not include circumcision—that God gave to Noah after the flood rather than the full panoply of rules followed by the Jews.

Circumcision was uniquely associated with Jews in first-century Rome, even though other ethnic and religious groups practiced it. Romans wrote satirical poems mocking the Jews for taking a day off each week, refusing to eat pork, worshipping a sky god, and removing their sons’ foreskin. It is, therefore, neither surprising that early Christian converts sought advice on whether to adopt the practice of circumcision nor that Paul made it the focus of several of his famous letters.

The early compromise that Paul struck—ethnic Jewish Christians should circumcise, while Jesus’ gentile followers should not—held until Christianity became a legal religion in the fourth century. At that time, the two religions split permanently, and it became something of a heresy to suggest that one could be both Jewish and Christian. As part of the effort to distinguish the two religions, circumcisions became illegal for Christians, and Jews were forbidden from circumcising their slaves.

Although the church officially renounced religious circumcision around 300 years after Jesus’s death, Christians long maintained a fascination with it. In the 600s, Christians began celebrating the day Jesus was circumcised. According to medieval Christian legend, an angel bestowed Jesus’ foreskin upon Emperor Charlemagne in the Church of the Holy Sepulchre, where Christ was supposedly buried. Coptic Christians and a few other Christian groups in Africa resumed religious circumcision long after their European counterparts abandoned the practice.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Apostle Paul. Courtesy of Wikipedia.[end-div]

Media Consolidation

The age of the rambunctious and megalomaniacal newspaper baron has passed, excepting, of course, Rupert Murdoch. And while the colorful personalities of the late-19th and early-20th centuries have mostly disappeared, the 21st century has replaced these aging white men with faceless international corporations, all of which are, of course, run by aging white men.

The infographic below puts the current media landscape in clear perspective; one trend is clear: more and more people are consuming news and entertainment from fewer and fewer sources.

[div class=attrib]Infographic courtesy of Frugal Dad.[end-div]

A View on Innovation

Joi Ito, Director of the MIT Media Lab, muses on the subject of innovation in this article excerpted from the Edge.

[div class=attrib]From the Edge:[end-div]

I grew up in Japan for part of my life, and we were surrounded by Buddhists. If you read some of the interesting books from the Dalai Lama talking about happiness, there’s definitely a difference in the way that Buddhists think about happiness, the world and how it works, versus the West. I think that a lot of science and technology has this somewhat Western view, which is how do you control nature, how do you triumph over nature? Even if you look at the gardens in Europe, a lot of it is about saying: look at what we made this hedge do.

What’s really interesting and important to think about is what happens as we start to realize that the world is complex, and as the science that we use starts to become complex. Timothy Leary used this quote: “Newton’s laws work well when things are normal sized, when they’re moving at a normal speed.” You can predict the motion of objects using Newton’s laws in most circumstances, but when things start to get really fast, really big, and really complex, you find out that Newton’s laws are actually local ordinances, and there’s a bunch of other stuff that comes into play.

One of the things that we haven’t done very well is we’ve been looking at science and technology as trying to make things more efficient, more effective on a local scale, without looking at the system around it. We were looking at objects rather than the system, or looking at the nodes rather than the network. When we talk about big data, when we talk about networks, we understand this.

I’m an Internet guy, and I divide the world into my life before the Internet and after the Internet. I helped build one of the first commercial Internet service providers in Japan, and when we were building that, there was a tremendous amount of resistance. There were lawyers who wrote these big articles about how the Internet was illegal because there was no one in charge. There was a competing standard back then called X.25, which was being built by the telephone companies and the government. It was centrally-planned, huge specifications; it was very much under control.

The Internet was completely distributed. David Weinberger would use the term ‘small pieces loosely joined.’ But it was really a decentralized innovation that was something of a working anarchy. As we all know, the Internet won. What the Internet’s victory amounted to was the triumph of distributed innovation over centralized innovation. It was a triumph of chaos over control. There were a bunch of different reasons: Moore’s law, lowering the cost of innovation—it was this kind of complexity that was going on, the fact that you could change things later, that made this kind of distributed innovation work. What happened when the Internet happened is that the Internet, combined with Moore’s law, kept driving the cost of innovation lower and lower. When you think about the Googles or the Yahoos or the Facebooks of the world, those products, those services were created not in big, huge R&D labs with hundreds of millions of dollars of funding; they were created by kids in dorm rooms.

In the old days, you’d have to have an idea and then you’d write a proposal for a grant or a VC, and then you’d raise the money, you’d plan the thing, you would hire the people and build it. Today, what you do is you build the thing, you raise the money and then you figure out the plan and then you figure out the business model. It’s completely the opposite: you don’t have to ask permission to innovate anymore. What’s really important is this: imagine if somebody came up to you and said, “I’m going to build the most popular encyclopedia in the world, and the trick is anyone can edit it.” You wouldn’t have given the guy a desk; you wouldn’t have given the guy five bucks. But the fact that he could just try that, and that in retrospect it worked, shows that a lot of the greatest innovations we see today are things that wouldn’t have gotten approval, right?

The Internet, the DNA and the philosophy of the Internet, is all about freedom to connect, freedom to hack, and freedom to innovate. It’s really lowering the cost of distribution and innovation. What’s really important about that is to think about how we used to innovate: we would raise money and we would make plans. Well, it’s an interesting coincidence, because the world is now so complex, so fast, so unpredictable, that you can’t. Your plans don’t really work that well. Every single major thing that’s happened, both good and bad, was probably unpredicted, and most of our plans failed.

Today, what you want is resilience and agility, and you want to be able to participate in, and interact with, the disruptive things. Everybody loves the term ‘disruptive innovation.’ Well, how, and where, does disruptive innovation happen? It doesn’t happen in the big planned R&D labs; it happens on the edges of the network. Many important ideas, especially in the consumer Internet space but more and more now in other areas like hardware and biotech, are emerging around the edges.

What does it mean, innovation on the edges? If you sit there and you write a grant proposal, basically what you’re doing is saying, okay, I’m going to build this, so give me money. By definition it’s incremental, because first of all you’ve got to be able to explain what it is you’re going to make, and you’ve got to say it in a way that’s dumbed-down enough that the person who’s giving you money can understand it. By definition, incremental research isn’t going to be very disruptive. Scholarship is somewhat incremental: if you have a peer-reviewed journal, it means five other people have to believe that what you’re doing is an interesting thing. Some of the most interesting innovations happen when the person doing it doesn’t even know what’s going on. True discovery, I think, happens in a very undirected way, when you figure it out as you go along.

Look at YouTube. The first version of YouTube, if you saw it in 2005, was a dating site with video. It obviously didn’t work. The default was: I am male, looking for anyone between 18 and 35, upload video. That didn’t work. They pivoted; it became Flickr for video. That didn’t work. Then eventually they latched onto MySpace and it took off like crazy. But they figured it out as they went along. This sort of discovery as you go along is a really, really important mode of innovation. The problem is, whether you’re talking about departments in academia or about traditional R&D, anything under control is not going to exhibit that behavior.

If you apply that to what I’m trying to do at the Media Lab, the key thing about the Media Lab is that we have undirected funds. So if a kid wants to try something, he doesn’t have to write me a proposal. He doesn’t have to explain to me what he wants to do. He can just go, or she can just go, and do whatever they want, and that’s really important, this undirected research.

The other part that’s really important, as you start to look for opportunities, is what I would call pattern recognition or peripheral vision. There’s a really interesting study: you put a dot on a screen and you put images, colors, around it. If you tell a person to look at the dot, they’ll see the surrounding stuff on the first viewing, but the minute you give somebody a financial incentive to watch the dot (I’ll give you ten bucks to watch the dot), those peripheral images disappear. If you’ve ever gone mushroom hunting, it’s a very similar phenomenon. If you are trying to find mushrooms in a forest, the whole thing is you have to stop looking, and then suddenly your pattern recognition kicks in and the mushrooms pop out. Hunters do the same thing, as do archers looking for animals.

When you focus on something, what you’re actually doing is really only seeing one percent of your field of vision. Your brain is filling everything else in with what you think is there, but it’s actually usually wrong, right? So what really matters is noticing the disruptive things that are happening in your periphery. If you are a newspaper and you’re trying to figure out what the world is like without printing presses, well, if you’re staring at your printing press, you’re not looking at the stuff around you. So what’s really important is: how do you start to look around you?

[div class=attrib]Read the entire article following the jump.[end-div]

Happiness for Pessimists

Pessimists can take heart from Oliver Burkeman’s latest book, “The Antidote”. His research shows that there are valid alternatives to the commonly held belief that positive thinking and goal visualization lead inevitably to happiness. He shows that there is “a long tradition in philosophical and spiritual thought which embraces negativity and bathes in insecurity and failure.” Glass half-empty types, you may have been right all along.

[tube]bOJL7WkaadY[/tube]

Faux Fashion is More Than Skin-Deep

Some innovative research shows that we are generally more inclined to cheat others if we are clad in counterfeit designer clothing or carrying faux accessories.

[div class=attrib]From Scientific American:[end-div]

Let me tell you the story of my debut into the world of fashion. When Jennifer Wideman Green (a friend of mine from graduate school) ended up living in New York City, she met a number of people in the fashion industry. Through her I met Freeda Fawal-Farah, who worked for Harper’s Bazaar. A few months later Freeda invited me to give a talk at the magazine, and because it was such an atypical crowd for me, I agreed.

I found myself on a stage before an auditorium full of fashion mavens. Each woman was like an exhibit in a museum: her jewelry, her makeup, and, of course, her stunning shoes. I talked about how people make decisions, how we compare prices when we are trying to figure out how much something is worth, how we compare ourselves to others, and so on. They laughed when I hoped they would, asked thoughtful questions, and offered plenty of their own interesting ideas. When I finished the talk, Valerie Salembier, the publisher of Harper’s Bazaar, came onstage, hugged and thanked me—and gave me a stylish black Prada overnight bag.

I headed downtown to my next meeting. I had some time to kill, so I decided to take a walk. As I wandered, I couldn’t help thinking about my big black leather bag with its large Prada logo. I debated with myself: should I carry my new bag with the logo facing outward? That way, other people could see and admire it (or maybe just wonder how someone wearing jeans and red sneakers could possibly have procured it). Or should I carry it with the logo facing toward me, so that no one could recognize that it was a Prada? I decided on the latter and turned the bag around.

Even though I was pretty sure that with the logo hidden no one realized it was a Prada bag, and despite the fact that I don’t think of myself as someone who cares about fashion, something felt different to me. I was continuously aware of the brand on the bag. I was wearing Prada! And it made me feel different; I stood a little straighter and walked with a bit more swagger. I wondered what would happen if I wore Ferrari underwear. Would I feel more invigorated? More confident? More agile? Faster?

I continued walking and passed through Chinatown, which was bustling with activity. Not far away, I spotted an attractive young couple in their twenties taking in the scene. A Chinese man approached them. “Handbags, handbags!” he called, tilting his head to indicate the direction of his small shop. After a moment or two, the woman asked the Chinese man, “You have Prada?”

The vendor nodded. I watched as she conferred with her partner. He smiled at her, and they followed the man to his stand.

The Prada they were referring to, of course, was not actually Prada. Nor were the $5 “designer” sunglasses on display in his stand really Dolce&Gabbana. And the Armani perfumes displayed over by the street food stands? Fakes too.

From Ermine to Armani

Going back a way, ancient Roman law included a set of regulations called sumptuary laws, which filtered down through the centuries into the laws of nearly all European nations. Among other things, the laws dictated who could wear what, according to their station and class. For example, in Renaissance England, only the nobility could wear certain kinds of fur, fabrics, laces, decorative beading per square foot, and so on, while those in the gentry could wear decidedly less appealing clothing. (The poorest were generally excluded from the law, as there was little point in regulating musty burlap, wool, and hair shirts.) People who “dressed above their station” were silently, but directly, lying to those around them. And those who broke the law were often hit with fines and other punishments.

What may seem to be an absurd degree of obsessive compulsion on the part of the upper crust was in reality an effort to ensure that people were what they signaled themselves to be; the system was designed to eliminate disorder and confusion. Although our current sartorial class system is not as rigid as it was in the past, the desire to signal success and individuality is as strong today as ever.

When thinking about my experience with the Prada bag, I wondered whether there were other psychological forces related to fakes that go beyond external signaling. There I was in Chinatown holding my real Prada bag, watching the woman emerge from the shop holding her fake one. Despite the fact that I had neither picked out nor paid for mine, it felt to me that there was a substantial difference between the way I related to my bag and the way she related to hers.

More generally, I started wondering about the relationship between what we wear and how we behave, and it made me think about a concept that social scientists call self-signaling. The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people— inferring who we are and what we like from our actions.

For example, imagine that you see a beggar on the street. Rather than ignoring him or giving him money, you decide to buy him a sandwich. The action in itself does not define who you are, your morality, or your character, but you interpret the deed as evidence of your compassionate and charitable character. Now, armed with this “new” information, you start believing more intensely in your own benevolence. That’s self-signaling at work.

The same principle could also apply to fashion accessories. Carrying a real Prada bag—even if no one else knows it is real—could make us think and act a little differently than if we were carrying a counterfeit one. Which brings us to the questions: Does wearing counterfeit products somehow make us feel less legitimate? Is it possible that accessorizing with fakes might affect us in unexpected and negative ways?

Calling All Chloés

I decided to call Freeda and tell her about my recent interest in high fashion. During our conversation, Freeda promised to convince a fashion designer to lend me some items to use in some experiments. A few weeks later, I received a package from the Chloé label containing twenty handbags and twenty pairs of sunglasses. The statement accompanying the package told me that the handbags were estimated to be worth around $40,000 and the sunglasses around $7,000. (The rumor about this shipment quickly traveled around Duke, and I became popular among the fashion-minded crowd.)

With those hot commodities in hand, Francesca Gino, Mike Norton (both professors at Harvard University), and I set about testing whether participants who wore fake products would feel and behave differently from those wearing authentic ones. If our participants felt that wearing fakes would broadcast (even to themselves) a less honorable self-image, we wondered whether they might start thinking of themselves as somewhat less honest. And with this tainted self-concept in mind, would they be more likely to continue down the road of dishonesty?

Using the lure of Chloé accessories, we enlisted many female MBA students for our experiment. We assigned each woman to one of three conditions: authentic, fake or no information. In the authentic condition, we told participants that they would be donning real Chloé designer sunglasses. In the fake condition, we told them that they would be wearing counterfeit sunglasses that looked identical to those made by Chloé (in actuality all the products we used were the real McCoy). Finally, in the no-information condition, we didn’t say anything about the authenticity of the sunglasses.

Once the women donned their sunglasses, we directed them to the hallway, where we asked them to look at different posters and out the windows so that they could later evaluate the quality and experience of looking through their sunglasses. Soon after, we called them into another room for another task.

In this task, the participants were given 20 sets of 12 numbers (3.42, 7.32 and so on), and they were asked to find in each set the two numbers that add up to 10. They had five minutes to solve as many as possible and were paid for each correct answer. We set up the test so that the women could cheat—report that they solved more sets than they did (after shredding their worksheet and all the evidence)—while allowing us to figure out who cheated and by how much (by rigging the shredders so that they only cut the sides of the paper).
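As an aside for readers curious about the mechanics of the task, here is a minimal sketch in Python (my own reconstruction from the description above, not the researchers’ materials) that builds one such set of 12 numbers and finds the pair summing to 10.

import itertools
import random

def make_matrix(n=12, target=10.0):
    """Build one puzzle: n two-decimal numbers, exactly one pair summing to target."""
    a = round(random.uniform(0.01, 9.99), 2)
    pair = [a, round(target - a, 2)]
    fillers = []
    while len(fillers) < n - 2:
        x = round(random.uniform(0.01, 9.99), 2)
        # Reject a candidate that would create a second valid pair.
        if all(round(x + y, 2) != target for y in pair + fillers):
            fillers.append(x)
    matrix = pair + fillers
    random.shuffle(matrix)
    return matrix

def solve(matrix, target=10.0):
    """Return the two numbers that sum to the target, as a participant would."""
    for x, y in itertools.combinations(matrix, 2):
        if round(x + y, 2) == target:
            return x, y
    return None

puzzle = make_matrix()
print(puzzle, "->", solve(puzzle))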

Over the years we have carried out many versions of this experiment, and we repeatedly find that a lot of people cheat by a few questions. This experiment was no different in that regard, but what was particularly interesting was the effect of wearing counterfeits. While “only” 30 percent of the participants in the authentic condition reported solving more matrices than they actually had, 74 percent of those in the fake condition did so. These results gave rise to another interesting question. Did the presumed fakeness of the product make the women cheat more than they naturally would? Or did the genuine Chloé label make them behave more honestly than they would otherwise?

This is why we also had a no-information condition, in which we didn’t mention anything about whether the sunglasses were real or fake. In that condition 42 percent of the women cheated. That result was between the other two, but it was much closer to the authentic condition (in fact, the two conditions were not statistically different from each other). These results suggest that wearing a genuine product does not increase our honesty (or at least not by much). But once we knowingly put on a counterfeit product, moral constraints loosen to some degree, making it easier for us to take further steps down the path of dishonesty.

The moral of the story? If you, your friend, or someone you are dating wears counterfeit products, be careful! Another act of dishonesty may be closer than you expect.

Up to No Good

These results led us to another question: if wearing counterfeits changes the way we view our own behavior, does it also cause us to be more suspicious of others? To find out, we asked another group of participants to put on what we told them were either real or counterfeit Chloé sunglasses. This time, we asked them to fill out a rather long survey with their sunglasses on. In this survey, we included three sets of questions. The questions in set A asked participants to estimate the likelihood that people they know might engage in various ethically questionable behaviors such as standing in the express line with too many groceries. The questions in set B asked them to estimate the likelihood that when people say particular phrases, including “Sorry, I’m late. Traffic was terrible,” they are lying. Set C presented participants with two scenarios depicting someone who has the opportunity to behave dishonestly, and asked them to estimate the likelihood that the person in the scenario would take the opportunity to cheat.

What were the results? You guessed it. When reflecting on the behavior of people they know, participants in the counterfeit condition judged their acquaintances to be more likely to behave dishonestly than did participants in the authentic condition. They also interpreted the list of common excuses as more likely to be lies, and judged the actor in the two scenarios as being more likely to choose the shadier option. We concluded that counterfeit products not only tend to make us more dishonest; they also cause us to view others as less honest.

[div class=attrib]Read the entire article after the jump.[end-div]

Eternal Damnation as Deterrent?

So, you think an all-seeing, all-knowing supreme deity encourages moral behavior and discourages crime? Think again.

[div class=attrib]From New Scientist:[end-div]

There’s nothing like the fear of eternal damnation to encourage low crime rates. But does belief in heaven and a forgiving god encourage lawbreaking? A new study suggests it might – although establishing a clear link between the two remains a challenge.

Azim Shariff at the University of Oregon in Eugene and his colleagues compared global data on people’s beliefs in the afterlife with worldwide crime data collated by the United Nations Office on Drugs and Crime. In total, Shariff’s team looked at data covering the beliefs of 143,000 individuals across 67 countries and from a variety of religious backgrounds.

In most of the countries assessed, people were more likely to report a belief in heaven than in hell. Using that information, the team could calculate the degree to which a country’s rate of belief in heaven outstrips its rate of belief in hell.

Even after the researchers had controlled for a host of crime-related cultural factors – including GDP, income inequality, population density and life expectancy – national crime rates were typically higher in countries with particularly strong beliefs in heaven but weak beliefs in hell.
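In outline, the analysis is a regression of national crime rates on the heaven-minus-hell belief gap alongside country-level controls. The Python sketch below uses synthetic data in place of the study’s real dataset, so the coefficients mean nothing beyond illustrating the model’s shape.

import numpy as np

rng = np.random.default_rng(1)
n = 67  # number of countries in the study

# Synthetic, standardized country-level variables (not the real data).
gdp = rng.normal(0, 1, n)
inequality = rng.normal(0, 1, n)
belief_gap = rng.normal(0, 1, n)   # % believing in heaven minus % believing in hell
crime = 0.5 * belief_gap + 0.3 * inequality + rng.normal(0, 1, n)

# Ordinary least squares: crime ~ intercept + belief_gap + controls.
X = np.column_stack([np.ones(n), belief_gap, gdp, inequality])
coef, *_ = np.linalg.lstsq(X, crime, rcond=None)
print(dict(zip(["intercept", "belief_gap", "gdp", "inequality"], coef.round(2))))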

Licence to steal

“Belief in a benevolent, forgiving god could license people to think they can get away with things,” says Shariff – although he stresses that this conclusion is speculative, and that the results do not necessarily imply causality between religious beliefs and crime rates.

“There are a number of possible causal pathways,” says Richard Sosis, an anthropologist at the University of Connecticut in Storrs, who was not involved in the study. The most likely interpretation is that there are intervening variables at the societal level – societies may have values that are similarly reflected in their legal and religious systems.

In a follow-up study, yet to be published, Shariff and Amber DeBono of Winston-Salem State University in North Carolina primed volunteers who had Christian beliefs by asking them to write variously about God’s forgiving nature, God’s punitive nature, a forgiving human, a punitive human, or a neutral subject. The volunteers were then asked to complete anagram puzzles for a monetary reward of a few cents per anagram.

God helps those who…

Participants were given the opportunity to commit petty theft, with no chance of being caught, by lying about the number of anagrams they had successfully completed. Shariff’s team found that those participants who had written about a forgiving god claimed nearly $2 more than they were entitled to under the rules of the game, whereas those in the other groups awarded themselves less than 50 cents more than they were entitled to.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A detail from the Chapmans’ Hell. Photograph: Andy Butterton/PA. Courtesy of Guardian.[end-div]

College Laundry

If you have attended college you will relate to the following strip that describes your laundry cycle. If you have not yet attended, please write to us if you deviate from your predestined path — a path that all your predecessors have taken.

 

[div class=attrib]Image courtesy of xkcd.com.[end-div]

Our Perception of Time

[div class=attrib]From Evolutionary Philosophy:[end-div]

We have learned to see time as if it appears in chunks – minutes, hours, days, and years. But if time comes in chunks how do we experience past memories in the present? How does the previous moment’s chunk of time connect to the chunk of the present moment?

Wait a minute. It will take an hour. He is five years old. These are all sentences that contain expressions of units of time. We are all tremendously comfortable with the idea that time comes in discrete units – but does it? William James and Charles Sanders Peirce thought not.

If moments of time were truly discrete, separate units lined up like dominoes in a row, how would it be possible to have a memory of a past event? What connects the present moment to all the past moments that have already gone by?

One answer to the question is to suppose the existence of a transcendental self. That is, some self that exists over and above our experience and can connect all the moments together for us. Imagine moments in time that stick together like boxcars of a train. If you are in one boxcar – i.e. inside the present moment – how could you possibly know anything about the boxcar behind you – i.e. the moment past? The only way would be to see from outside of your boxcar – you would have to at least stick your head out of the window to see the boxcar behind you.

If the boxcar represents your experience of the present moment then we are saying that you would have to leave the present moment at least a little bit to be able to see what happened in the moment behind you. How can you leave the present moment? Where do you go if you leave your experience of the present moment? Where is the space that you exist in when you are outside of your experience? It would have to be a space that transcended your experience – a transcendental space outside of reality as we experience it. It would be a supernatural space and the part of you that existed in that space would be a supernatural extra-experiential you.

For those who had been raised in a Christian context this would not be so hard to accept, because this extra-experiential you would sound a great deal like the soul. In fact, Immanuel Kant, who first articulated the idea of a transcendental self, was through his philosophy actively trying to reserve space for the human soul in an intellectual atmosphere that he saw as excessively materialistic.

William James and Charles Sanders Peirce believed in unity and therefore they could not accept the idea of a transcendental ego that would exist in some transcendent realm. In some of their thinking they were anticipating the later developments of quantum theory and non-locality.

William James described how we appear to travel through a river of time – and, like all rivers, the river ahead of us already exists before we arrive there. In the same way the future already exists now, not in a pre-determined sense, but at least as some potentiality. As we arrive at the future moment, our arrival marks the passage from the fluid form that we call the future to the definitive solid form that we experience as the past. We do not create time by passing through it; we simply freeze it in its tracks.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Addiction: Choice or Disease or Victim of Hijacking?

 

The debate concerning human addictions of all colors and forms rages on. Some would have us believe that addiction is a simple choice shaped by our free will; others would argue that addiction is a chronic disease. Yet perhaps there is another, more nuanced, explanation.

[div class=attrib]From the New York Times:[end-div]

Of all the philosophical discussions that surface in contemporary life, the question of free will — mainly, the debate over whether or not we have it — is certainly one of the most persistent.

That might seem odd, as the average person rarely seems to pause to reflect on whether their choices on, say, where they live, whom they marry, or what they eat for dinner, are their own or the inevitable outcome of a deterministic universe. Still, as James Atlas pointed out last month, the spate of “can’t help yourself” books would indicate that people are in fact deeply concerned with how much of their lives they can control. Perhaps that’s because, upon further reflection, we find that our understanding of free will lurks beneath many essential aspects of our existence.

One particularly interesting variation on this question appears in scientific, academic and therapeutic discussions about addiction. Many times, the question is framed as follows: “Is addiction a disease or a choice?”

The argument runs along these lines: If addiction is a disease, then in some ways it is out of our control and forecloses choices. A disease is a medical condition that develops outside of our control; it is, then, not a matter of choice. In the absence of choice, the addicted person is essentially relieved of responsibility. The addict has been overpowered by her addiction.

The counterargument describes addictive behavior as a choice. People whose use of drugs and alcohol leads to obvious problems but who continue to use them anyway are making choices to do so. Since those choices lead to addiction, blame and responsibility clearly rest on the addict’s shoulders. It then becomes more a matter of free will.

Recent scientific studies on the biochemical responses of the brain are currently tipping the scales toward the more deterministic view — of addiction as a disease. The structure of the brain’s reward system combined with certain biochemical responses and certain environments, they appear to show, cause people to become addicted.

In such studies, and in reports of them to news media, the term “the hijacked brain” often appears, along with other language that emphasizes the addict’s lack of choice in the matter. Sometimes the pleasure-reward system has been “commandeered.” Other times it “goes rogue.” These expressions are often accompanied by the conclusion that there are “addicted brains.”

The word “hijacked” is especially evocative; people often have a visceral reaction to it. I imagine that this is precisely why this term is becoming more commonly used in connection with addiction. But it is important to be aware of the effects of such language on our understanding.

When most people think of a hijacking, they picture a person, sometimes wearing a mask and always wielding some sort of weapon, who takes control of a car, plane or train. The hijacker may not himself drive or pilot the vehicle, but the violence involved leaves no doubt who is in charge. Someone can hijack a vehicle for a variety of reasons, but mostly it boils down to needing to escape or wanting to use the vehicle itself as a weapon in a greater plan. Hijacking is a means to an end; it is always and only oriented to the goals of the hijacker. Innocent victims are ripped from their normal lives by the violent intrusion of the hijacker.

In the “hijacked” view of addiction, the brain is the innocent victim of certain substances — alcohol, cocaine, nicotine or heroin, for example — as well as certain behaviors like eating, gambling or sexual activity. The drugs or the neurochemicals produced by the behaviors overpower and redirect the brain’s normal responses, and thus take control of (hijack) it. For addicted people, that martini or cigarette is the weapon-wielding hijacker who is going to compel certain behaviors.

To do this, drugs like alcohol and cocaine and behaviors like gambling light up the brain’s pleasure circuitry, often bringing a burst of euphoria. Other studies indicate that people who are addicted have lower dopamine and serotonin levels in their brains, which means that it takes more of a particular substance or behavior for them to experience pleasure or to reach a certain threshold of pleasure. People tend to want to maximize pleasure; we tend to do things that bring more of it. We also tend to chase it when it subsides, trying hard to recreate the same level of pleasure we have experienced in the past. It is not uncommon to hear addicts talking about wanting to experience the euphoria of a first high. Often they never reach it, but keep trying. All of this lends credence to the description of the brain as hijacked.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of CNN.[end-div]

You as a Data Strip Mine: What Facebook Knows

China, India, Facebook. With its 900 million member-citizens, Facebook is the third-largest country on the planet, ranked by population. This country has some benefits: no taxes, freedom to join and/or leave, and, of course, freedom to assemble and a fair degree of free speech.

However, Facebook is no democracy. In fact, its data privacy policies and personal data mining might well put it in the same league as the Stalinist Soviet Union or Cold War East Germany.

A fascinating article by Tom Simonite excerpted below sheds light on the data collection and data mining initiatives underway or planned at Facebook.

[div class=attrib]From Technology Review:[end-div]

If Facebook were a country, a conceit that founder Mark Zuckerberg has entertained in public, its 900 million members would make it the third largest in the world.

It would far outstrip any regime past or present in how intimately it records the lives of its citizens. Private conversations, family photos, and records of road trips, births, marriages, and deaths all stream into the company’s servers and lodge there. Facebook has collected the most extensive data set ever assembled on human social behavior. Some of your personal information is probably part of it.

And yet, even as Facebook has embedded itself into modern life, it hasn’t actually done that much with what it knows about us. Now that the company has gone public, the pressure to develop new sources of profit (see “The Facebook Fallacy”) is likely to force it to do more with its hoard of information. That stash of data looms like an oversize shadow over what today is a modest online advertising business, worrying privacy-conscious Web users (see “Few Privacy Regulations Inhibit Facebook”) and rivals such as Google. Everyone has a feeling that this unprecedented resource will yield something big, but nobody knows quite what.

Heading Facebook’s effort to figure out what can be learned from all our data is Cameron Marlow, a tall 35-year-old who until recently sat a few feet away from ­Zuckerberg. The group Marlow runs has escaped the public attention that dogs Facebook’s founders and the more headline-grabbing features of its business. Known internally as the Data Science Team, it is a kind of Bell Labs for the social-networking age. The group has 12 researchers—but is expected to double in size this year. They apply math, programming skills, and social science to mine our data for insights that they hope will advance Facebook’s business and social science at large. Whereas other analysts at the company focus on information related to specific online activities, Marlow’s team can swim in practically the entire ocean of personal data that Facebook maintains. Of all the people at Facebook, perhaps even including the company’s leaders, these researchers have the best chance of discovering what can really be learned when so much personal information is compiled in one place.

Facebook has all this information because it has found ingenious ways to collect data as people socialize. Users fill out profiles with their age, gender, and e-mail address; some people also give additional details, such as their relationship status and mobile-phone number. A redesign last fall introduced profile pages in the form of time lines that invite people to add historical information such as places they have lived and worked. Messages and photos shared on the site are often tagged with a precise location, and in the last two years Facebook has begun to track activity elsewhere on the Internet, using an addictive invention called the “Like” button. It appears on apps and websites outside Facebook and allows people to indicate with a click that they are interested in a brand, product, or piece of digital content. Since last fall, Facebook has also been able to collect data on users’ online lives beyond its borders automatically: in certain apps or websites, when users listen to a song or read a news article, the information is passed along to Facebook, even if no one clicks “Like.” Within the feature’s first five months, Facebook catalogued more than five billion instances of people listening to songs online. Combine that kind of information with a map of the social connections Facebook’s users make on the site, and you have an incredibly rich record of their lives and interactions.

“This is the first time the world has seen this scale and quality of data about human communication,” Marlow says with a characteristically serious gaze before breaking into a smile at the thought of what he can do with the data. For one thing, Marlow is confident that exploring this resource will revolutionize the scientific understanding of why people behave as they do. His team can also help Facebook influence our social behavior for its own benefit and that of its advertisers. This work may even help Facebook invent entirely new ways to make money.

Contagious Information

Marlow eschews the collegiate programmer style of Zuckerberg and many others at Facebook, wearing a dress shirt with his jeans rather than a hoodie or T-shirt. Meeting me shortly before the company’s initial public offering in May, in a conference room adorned with a six-foot caricature of his boss’s dog spray-painted on its glass wall, he comes across more like a young professor than a student. He might have become one had he not realized early in his career that Web companies would yield the juiciest data about human interactions.

In 2001, undertaking a PhD at MIT’s Media Lab, Marlow created a site called Blogdex that automatically listed the most “contagious” information spreading on weblogs. Although it was just a research project, it soon became so popular that Marlow’s servers crashed. Launched just as blogs were exploding into the popular consciousness and becoming so numerous that Web users felt overwhelmed with information, it prefigured later aggregator sites such as Digg and Reddit. But Marlow didn’t build it just to help Web users track what was popular online. Blogdex was intended as a scientific instrument to uncover the social networks forming on the Web and study how they spread ideas. Marlow went on to Yahoo’s research labs to study online socializing for two years. In 2007 he joined Facebook, which he considers the world’s most powerful instrument for studying human society. “For the first time,” Marlow says, “we have a microscope that not only lets us examine social behavior at a very fine level that we’ve never been able to see before but allows us to run experiments that millions of users are exposed to.”

Marlow’s team works with managers across Facebook to find patterns that they might make use of. For instance, they study how a new feature spreads among the social network’s users. They have helped Facebook identify users you may know but haven’t “friended,” and recognize those you may want to designate mere “acquaintances” in order to make their updates less prominent. Yet the group is an odd fit inside a company where software engineers are rock stars who live by the mantra “Move fast and break things.” Lunch with the data team has the feel of a grad-student gathering at a top school; the typical member of the group joined fresh from a PhD or junior academic position and prefers to talk about advancing social science than about Facebook as a product or company. Several members of the team have training in sociology or social psychology, while others began in computer science and started using it to study human behavior. They are free to use some of their time, and Facebook’s data, to probe the basic patterns and motivations of human behavior and to publish the results in academic journals—much as Bell Labs researchers advanced both AT&T’s technologies and the study of fundamental physics.

It may seem strange that an eight-year-old company without a proven business model bothers to support a team with such an academic bent, but ­Marlow says it makes sense. “The biggest challenges Facebook has to solve are the same challenges that social science has,” he says. Those challenges include understanding why some ideas or fashions spread from a few individuals to become universal and others don’t, or to what extent a person’s future actions are a product of past communication with friends. Publishing results and collaborating with university researchers will lead to findings that help Facebook improve its products, he adds.

Social Engineering

Marlow says his team wants to divine the rules of online social life to understand what’s going on inside Facebook, not to develop ways to manipulate it. “Our goal is not to change the pattern of communication in society,” he says. “Our goal is to understand it so we can adapt our platform to give people the experience that they want.” But some of his team’s work and the attitudes of Facebook’s leaders show that the company is not above using its platform to tweak users’ behavior. Unlike academic social scientists, Facebook’s employees have a short path from an idea to an experiment on hundreds of millions of people.

In April, influenced in part by conversations over dinner with his med-student girlfriend (now his wife), Zuckerberg decided that he should use social influence within Facebook to increase organ donor registrations. Users were given an opportunity to click a box on their Timeline pages to signal that they were registered donors, which triggered a notification to their friends. The new feature started a cascade of social pressure, and organ donor enrollment increased by a factor of 23 across 44 states.

Marlow’s team is in the process of publishing results from the last U.S. midterm election that show another striking example of Facebook’s potential to direct its users’ influence on one another. Since 2008, the company has offered a way for users to signal that they have voted; Facebook promotes that to their friends with a note to say that they should be sure to vote, too. Marlow says that in the 2010 election his group matched voter registration logs with the data to see which of the Facebook users who got nudges actually went to the polls. (He stresses that the researchers worked with cryptographically “anonymized” data and could not match specific users with their voting records.)
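The article does not spell out how the matching was done, but keyed hashing is one common way to join two datasets on a shared identifier without exposing the identifier itself. The Python sketch below illustrates only that general idea; the key, field names and records are hypothetical and do not represent Facebook’s actual procedure.

import hashlib
import hmac

SECRET_KEY = b"shared-secret"  # hypothetical key held by the matching party

def pseudonymize(identifier: str) -> str:
    """Replace a raw identifier with a keyed hash before matching."""
    return hmac.new(SECRET_KEY, identifier.encode(), hashlib.sha256).hexdigest()

# Hypothetical records keyed by pseudonym rather than by raw e-mail address.
facebook_users = {pseudonymize("alice@example.com"): {"saw_vote_nudge": True}}
voter_rolls = {pseudonymize("alice@example.com"): {"voted_2010": True}}

# The two datasets can be joined on the hashed key alone.
matched = {k: {**facebook_users[k], **voter_rolls[k]}
           for k in facebook_users.keys() & voter_rolls.keys()}
print(matched)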

This is just the beginning. By learning more about how small changes on Facebook can alter users’ behavior outside the site, the company eventually “could allow others to make use of Facebook in the same way,” says Marlow. If the American Heart Association wanted to encourage healthy eating, for example, it might be able to refer to a playbook of Facebook social engineering. “We want to be a platform that others can use to initiate change,” he says.

Advertisers, too, would be eager to know in greater detail what could make a campaign on Facebook affect people’s actions in the outside world, even though they realize there are limits to how firmly human beings can be steered. “It’s not clear to me that social science will ever be an engineering science in a way that building bridges is,” says Duncan Watts, who works on computational social science at Microsoft’s recently opened New York research lab and previously worked alongside Marlow at Yahoo’s labs. “Nevertheless, if you have enough data, you can make predictions that are better than simply random guessing, and that’s really lucrative.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of thejournal.ie / abracapocus_pocuscadabra (Flickr).[end-div]

Zen and the Art of Meditation Messaging

Quite often you will be skimming a book or leafing through the pages of your favorite magazine and recall having “seen” a specific word, even though you do not remember reading that page or section or looking at that particular word. But, without fail, when you retrace your steps and look back you will find that very word, the one you did not consciously “see”. So, what’s going on?

[div class=attrib]From the New Scientist:[end-div]

MEDITATION increases our ability to tap into the hidden recesses of our brain that are usually outside the reach of our conscious awareness.

That’s according to Madelijn Strick of Utrecht University in the Netherlands and colleagues, who tested whether meditation has an effect on our ability to pick up subliminal messages.

The brain registers subliminal messages, but we are often unable to recall them consciously. To investigate, the team recruited 34 experienced practitioners of Zen meditation and randomly assigned them to either a meditation group or a control group. The meditation group was asked to meditate for 20 minutes in a session led by a professional Zen master. The control group was asked to merely relax for 20 minutes.

The volunteers were then asked 20 questions, each with three or four correct answers – for instance: “Name one of the four seasons”. Just before the subjects saw the question on a computer screen, one potential answer – such as “spring” – flashed up for a subliminal 16 milliseconds.

The meditation group gave 6.8 answers, on average, that matched the subliminal words, whereas the control group gave just 4.9 (Consciousness and Cognition, DOI: 10.1016/j.concog.2012.02.010).
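As a rough illustration of how such a group comparison is evaluated, the Python sketch below simulates scores with the reported means of 6.8 and 4.9 and runs a two-sample t-test. The spread and exact group sizes are my assumptions; the real analysis is in the cited paper.

import numpy as np
from scipy import stats

rng = np.random.default_rng(2)

# Simulated counts of answers (out of 20) matching the subliminal words.
# Means follow the article; the standard deviation and 17-per-group split
# are assumptions for illustration only.
meditators = rng.normal(6.8, 2.0, 17)
controls = rng.normal(4.9, 2.0, 17)

t, p = stats.ttest_ind(meditators, controls, equal_var=False)  # Welch's t-test
print(f"t = {t:.2f}, p = {p:.3f}")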

Strick thinks that the explanation lies in the difference between what the brain is paying attention to and what we are conscious of. Meditators are potentially accessing more of what the brain has paid attention to than non-meditators, she says.

“It is a truly exciting development that the second wave of rigorous, scientific meditation research is now yielding concrete results,” says Thomas Metzinger, at Johannes Gutenberg University in Mainz, Germany. “Meditation may be best seen as a process that literally expands the space of conscious experience.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Yoga.am.[end-div]

D-School is the Place

Forget art school, engineering school, law school and B-school (business). For wannabe innovators the current place to be is D-school. Design school, that is.

Design school teaches a problem-solving method known as “design thinking”. Before it was rebranded in corporatespeak, this used to be known as “trial and error”.

Many corporations are finding this approach to be both a challenge and a boon; after all, even in 2012, not many businesses encourage their employees to fail.

[div class=attrib]From the Wall Street Journal:[end-div]

In 2007, Scott Cook, founder of Intuit Inc., the software company behind TurboTax, felt the company wasn’t innovating fast enough. So he decided to adopt an approach to product development that has grown increasingly popular in the corporate world: design thinking.

Loosely defined, design thinking is a problem-solving method that involves close observation of users or customers and a development process of extensive—often rapid—trial and error.

Mr. Cook said the initiative, termed “Design for Delight,” involves field research with customers to understand their “pain points”—an examination of what frustrates them in their offices and homes.

Intuit staffers then “painstorm” to come up with a variety of solutions to address the problems, and experiment with customers to find the best ones.

In one instance, a team of Intuit employees was studying how customers could take pictures of tax forms to reduce typing errors. Some younger customers, taking photos with their smartphones, were frustrated that they couldn’t just complete their taxes on their mobiles. Thus was born the mobile tax app SnapTax in 2010, which has been downloaded more than a million times in the past two years, the company said.

At SAP AG, hundreds of employees across departments work on challenges, such as building a raincoat out of a trash bag or designing a better coffee cup. The hope is that the sessions will train them in the tenets of design thinking, which they can then apply to their own business pursuits, said Carly Cooper, an SAP director who runs many of the sessions.

Last year, when SAP employees talked to sales representatives after closing deals, they found that one of the sales representatives’ biggest concerns was simply when they were going to get paid. The insight led SAP to develop a new mobile product allowing salespeople to check on the status of their commissions.

[div class=attrib]Read the entire article after the jump.[end-div]

The SpeechJammer and Other Innovations to Come

The mind boggles at the situations in which a SpeechJammer (affectionately known as the “Shutup Gun”) might come in handy – raucous parties, boring office meetings, spousal arguments, playdates with whiny children.

[div class=attrib]From the New York Times:[end-div]

When you aim the SpeechJammer at someone, it records that person’s voice and plays it back to him with a delay of a few hundred milliseconds. This seems to gum up the brain’s cognitive processes — a phenomenon known as delayed auditory feedback — and can painlessly render the person unable to speak. Kazutaka Kurihara, one of the SpeechJammer’s creators, sees it as a tool to prevent loudmouths from overtaking meetings and public forums, and he’d like to miniaturize his invention so that it can be built into cellphones. “It’s different from conventional weapons such as samurai swords,” Kurihara says. “We hope it will build a more peaceful world.”
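The signal processing behind delayed auditory feedback is straightforward: feed the speaker’s own voice back a few hundred milliseconds late. The Python sketch below is not Kurihara’s implementation; it simply shows the delay-line arithmetic on a stand-in signal, which in the real device would come from a live directional microphone and speaker.

import numpy as np

SAMPLE_RATE = 16_000      # samples per second
DELAY_MS = 200            # roughly the delay the article describes
DELAY_SAMPLES = SAMPLE_RATE * DELAY_MS // 1000

def delayed_feedback(voice: np.ndarray, mix: float = 0.8) -> np.ndarray:
    """Mix the input with a copy of itself delayed by DELAY_MS milliseconds."""
    delayed = np.zeros_like(voice)
    delayed[DELAY_SAMPLES:] = voice[:-DELAY_SAMPLES]
    return voice + mix * delayed

# One second of a fake "voice" (a 220 Hz tone) stands in for live audio.
t = np.linspace(0, 1, SAMPLE_RATE, endpoint=False)
voice = 0.5 * np.sin(2 * np.pi * 220 * t)
playback = delayed_feedback(voice)
print(playback.shape)  # route this to any audio output to hear the echo effect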

[div class=attrib]Read the entire list of 32 weird and wonderful innovations after the jump.[end-div]

[div class=attrib]Graphic courtesy of Chris Nosenzo / New York Times.[end-div]