Category Archives: Idea Soup

Do We Become More Conservative as We Age?

A popular stereotype suggests that we become increasingly conservative in our values as we age. Thus, one would expect that older voters would be more likely to vote for Republican candidates. However, a recent social study debunks this view.

[div class=attrib]From Discovery:[end-div]

Amidst the bipartisan banter of election season, there persists an enduring belief that people get more conservative as they age — making older people more likely to vote for Republican candidates.

Ongoing research, however, fails to back up the stereotype. While there is some evidence that today’s seniors may be more conservative than today’s youth, that’s not because older folks are more conservative than they used to be. Instead, our modern elders likely came of age at a time when the political situation favored more conservative views.

In fact, studies show that people may actually get more liberal over time when it comes to certain kinds of beliefs. That suggests that we are not pre-determined to get stodgy, set in our ways or otherwise more inflexible in our retirement years.

Contrary to popular belief, old age can be an open-minded and enlightening time.

“Pigeonholing older people into these rigid attitude boxes or conservative boxes is not a good idea,” said Nick Dangelis, a sociologist and gerontologist at the University of Vermont in Burlington.

“Rather, when they were born, what experiences they had growing up, as well as political, social and economic events have a lot to do with how people behave,” he said. “Our results are showing that these have profound effects.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A Board of Elections volunteer watches people cast their ballots during early voting October 23, 2008 in Savannah, Georgia. Courtesy of MSNBC.[end-div]

An Evolutionary Benefit to Self-deception

[div class=attrib]From Scientific American:[end-div]

We lie to ourselves all the time. We tell ourselves that we are better than average — that we are more moral, more capable, less likely to become sick or suffer an accident. It’s an odd phenomenon, and an especially puzzling one to those who think about our evolutionary origins. Self-deception is so pervasive that it must confer some advantage. But how could we be well served by a brain that deceives us? This is one of the topics tackled by Robert Trivers in his new book, “The Folly of Fools,” a colorful survey of deception that includes plane crashes, neuroscience and the transvestites of the animal world. He answered questions from Mind Matters editor Gareth Cook.

Cook: Do you have any favorite examples of deception in the natural world?
Trivers: Tough call. They are so numerous, intricate and bizarre. But you can hardly beat female mimics for general interest. These are males that mimic females in order to achieve closeness to a territory-holding male, who then attracts a real female ready to lay eggs. The territory-holding male imagines that he is in bed (so to speak) with two females, when really he is in bed with one female and another male, who, in turn, steals part of the paternity of the eggs being laid by the female. The internal dynamics of such transvestite threesomes are only just being analyzed. But for pure reproductive artistry one cannot beat the tiny blister beetles that assemble in arrays of hundreds to thousands, linking together to produce the larger illusion of a female solitary bee, which attracts a male bee who flies into the mirage in order to copulate and thereby carries the beetles to their next host.

Cook: At what age do we see the first signs of deception in humans?
Trivers: In the last trimester of pregnancy, that is, while the offspring is still inside its mother. The baby takes over control of the mother’s blood sugar level (raising it), pulse rate (raising it) and blood distribution (withdrawing it from extremities and positioning it above the developing baby). It does so by putting into the maternal blood stream the same chemicals—or close mimics—as those that the mother normally produces to control these variables. You could argue that this benefits mom. She says, my child knows better what it needs than I do so let me give the child control. But it is not in the mother’s best interests to allow the offspring to get everything it wants; the mother must apportion her biological investment among other offspring, past, present and future. The proof is in the inefficiency of the new arrangement, the hallmark of conflict. The offspring produces these chemicals at 1000 times the level that the mother does. This suggests a co-evolutionary struggle in which the mother’s body becomes deafer as the offspring becomes louder.
After birth, the first clear signs of deception come about age 6 months, which is when the child fakes need when there appears to be no good reason. The child will scream and bawl, roll on the floor in apparent agony and yet stop within seconds after the audience leaves the room, only to resume within seconds when the audience is back. Later, the child will hide objects from the view of others and deny that it cares about a punishment when it clearly does.  So-called ‘white lies’, of the sort “The meal you served was delicious” appear after age 5.

[div class=attrib]Read the entire article here.[end-div]

On the Need for Charisma

[div class=attrib]From Project Syndicate:[end-div]

A leadership transition is scheduled in two major autocracies in 2012. Neither is likely to be a surprise. Xi Jinping is set to replace Hu Jintao as President in China, and, in Russia, Vladimir Putin has announced that he will reclaim the presidency from Dmitri Medvedev. Among the world’s democracies, political outcomes this year are less predictable. Nicolas Sarkozy faces a difficult presidential re-election campaign in France, as does Barack Obama in the United States.

In the 2008 US presidential election, the press told us that Obama won because he had “charisma” – the special power to inspire fascination and loyalty. If so, how can his re-election be uncertain just four years later? Can a leader lose his or her charisma? Does charisma originate in the individual, in that person’s followers, or in the situation? Academic research points to all three.

Charisma proves surprisingly hard to identify in advance. A recent survey concluded that “relatively little” is known about who charismatic leaders are. Dick Morris, an American political consultant, reports that in his experience, “charisma is the most elusive of political traits, because it doesn’t exist in reality; only in our perception once a candidate has made it by hard work and good issues.” Similarly, the business press has described many a CEO as “charismatic” when things are going well, only to withdraw the label when profits fall.

Political scientists have tried to create charisma scales that would predict votes or presidential ratings, but they have not proven fruitful. Among US presidents, John F. Kennedy is often described as charismatic, but obviously not for everyone, given that he failed to capture a majority of the popular vote, and his ratings varied during his presidency.

Kennedy’s successor, Lyndon Johnson, lamented that he lacked charisma. That was true of his relations with the public, but Johnson could be magnetic – even overwhelming – in personal contacts. One careful study of presidential rhetoric found that even such famous orators as Franklin Roosevelt and Ronald Reagan could not count on charisma to enact their programs.

Charisma is more easily identified after the fact. In that sense, the concept is circular. It is like the old Chinese concept of the “mandate of heaven”: emperors were said to rule because they had it, and when they were overthrown, it was because they had lost it.

But no one could predict when that would happen. Similarly, success is often used to prove – after the fact – that a modern political leader has charisma. It is much harder to use charisma to predict who will be a successful leader.

[div class=attrib]Read the entire article here.[end-div]

The Unconscious Mind Boosts Creativity

[div class=attrib]From Miller-McCune:[end-div]

New research finds we’re better able to identify genuinely creative ideas when they’ve emerged from the unconscious mind.

Truly creative ideas are both highly prized and, for most of us, maddeningly elusive. If our best efforts produce nothing brilliant, we’re often advised to put aside the issue at hand and give our unconscious minds a chance to work.

Newly published research suggests that is indeed a good idea — but not for the reason you might think.

A study from the Netherlands finds allowing ideas to incubate in the back of the mind is, in a narrow sense, overrated. People who let their unconscious minds take a crack at a problem were no more adept at coming up with innovative solutions than those who consciously deliberated over the dilemma.

But they did perform better on the vital second step of this process: determining which of their ideas was the most creative. That realization provides essential information; without it, how do you decide which solution you should actually try to implement?

Given the value of discerning truly fresh ideas, “we can conclude that the unconscious mind plays a vital role in creative performance,” a research team led by Simone Ritter of the Radboud University Behavioral Science Institute writes in the journal Thinking Skills and Creativity.

In the first of two experiments, 112 university students were given two minutes to come up with creative ideas to an everyday problem: how to make the time spent waiting in line at a cash register more bearable. Half the participants went at it immediately, while the others first spent two minutes performing a distracting task — clicking on circles that appeared on a computer screen. This allowed time for ideas to percolate outside their conscious awareness.

After writing down as many ideas as they could think of, they were asked to choose which of their notions was the most creative. Participants were scored by the number of ideas they came up with, the creativity level of those ideas (as measured by trained raters), and whether their perception of their most innovative idea coincided with that of the raters.

The two groups scored evenly on both the number of ideas generated and the average creativity of those ideas. But those who had been distracted, and thus had ideas spring from their unconscious minds, were better at selecting their most creative concept.

[div class=attrib]Read the entire article here.[end-div]

Stephen Colbert: Seriously Funny

A fascinating article about Stephen Colbert, a funny man with some serious jokes about our broken political process.

[div class=attrib]From the New York Times magazine:[end-div]

There used to be just two Stephen Colberts, and they were hard enough to distinguish. The main difference was that one thought the other was an idiot. The idiot Colbert was the one who made a nice paycheck by appearing four times a week on “The Colbert Report” (pronounced in the French fashion, with both t’s silent), the extremely popular fake news show on Comedy Central. The other Colbert, the non-idiot, was the 47-year-old South Carolinian, a practicing Catholic, who lives with his wife and three children in suburban Montclair, N.J., where, according to one of his neighbors, he is “extremely normal.” One of the pleasures of attending a live taping of “The Colbert Report” is watching this Colbert transform himself into a Republican superhero.

Suburban Colbert comes out dressed in the other Colbert’s guise — dark two-button suit, tasteful Brooks Brothersy tie, rimless Rumsfeldian glasses — and answers questions from the audience for a few minutes. (The questions are usually about things like Colbert’s favorite sport or favorite character from “The Lord of the Rings,” but on one memorable occasion a young black boy asked him, “Are you my father?” Colbert hesitated a moment and then said, “Kareem?”) Then he steps onstage, gets a last dab of makeup while someone sprays his hair into an unmussable Romney-like helmet, and turns himself into his alter ego. His body straightens, as if jolted by a shock. A self-satisfied smile creeps across his mouth, and a manically fatuous gleam steals into his eyes.

Lately, though, there has emerged a third Colbert. This one is a version of the TV-show Colbert, except he doesn’t exist just on screen anymore. He exists in the real world and has begun to meddle in it. In 2008, the old Colbert briefly ran for president, entering the Democratic primary in his native state of South Carolina. (He hadn’t really switched parties, but the filing fee for the Republican primary was too expensive.) In 2010, invited by Representative Zoe Lofgren, he testified before Congress about the problem of illegal-immigrant farmworkers and remarked that “the obvious answer is for all of us to stop eating fruits and vegetables.”

But those forays into public life were spoofs, more or less. The new Colbert has crossed the line that separates a TV stunt from reality and a parody from what is being parodied. In June, after petitioning the Federal Election Commission, he started his own super PAC — a real one, with real money. He has run TV ads, endorsed (sort of) the presidential candidacy of Buddy Roemer, the former governor of Louisiana, and almost succeeded in hijacking and renaming the Republican primary in South Carolina. “Basically, the F.E.C. gave me the license to create a killer robot,” Colbert said to me in October, and there are times now when the robot seems to be running the television show instead of the other way around.

“It’s bizarre,” remarked an admiring Jon Stewart, whose own program, “The Daily Show,” immediately precedes “The Colbert Report” on Comedy Central and is where the Colbert character got his start. “Here is this fictional character who is now suddenly interacting in the real world. It’s so far up its own rear end,” he said, or words to that effect, “that you don’t know what to do except get high and sit in a room with a black light and a poster.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Images courtesy of Google search.[end-div]

Crossword Puzzles and Cognition

[div class=attrib]From the New Scientist:[end-div]

TACKLING a crossword can crowd the tip of your tongue. You know that you know the answers to 3 down and 5 across, but the words just won’t come out. Then, when you’ve given up and moved on to another clue, comes blessed relief. The elusive answer suddenly occurs to you, crystal clear.

The processes leading to that flash of insight can illuminate many of the human mind’s curious characteristics. Crosswords can reflect the nature of intuition, hint at the way we retrieve words from our memory, and reveal a surprising connection between puzzle solving and our ability to recognise a human face.

“What’s fascinating about a crossword is that it involves many aspects of cognition that we normally study piecemeal, such as memory search and problem solving, all rolled into one ball,” says Raymond Nickerson, a psychologist at Tufts University in Medford, Massachusetts. In a paper published earlier this year, he brought profession and hobby together by analysing the mental processes of crossword solving (Psychonomic Bulletin and Review, vol 18, p 217).

1 across: “You stinker!” – audible cry that allegedly marked displacement activity (6)

Most of our mental machinations take place pre-consciously, with the results dropping into our conscious minds only after they have been decided elsewhere in the brain. Intuition plays a big role in solving a crossword, Nickerson observes. Indeed, sometimes your pre-conscious mind may be so quick that it produces the goods instantly.

At other times, you might need to take a more methodical approach and consider possible solutions one by one, perhaps listing synonyms of a word in the clue.

Even if your list doesn’t seem to make much sense, it might reflect the way your pre-conscious mind is homing in on the solution. Nickerson points to work in the 1990s by Peter Farvolden at the University of Toronto in Canada, who gave his subjects four-letter fragments of seven-letter target words (as may happen in some crossword layouts, especially in the US, where many words overlap). While his volunteers attempted to work out the target, they were asked to give any other word that occurred to them in the meantime. The words tended to be associated in meaning with the eventual answer, hinting that the pre-conscious mind solves a problem in steps.

Should your powers of deduction fail you, it may help to let your mind chew over the clue while your conscious attention is elsewhere. Studies back up our everyday experience that a period of incubation can lead you to the eventual “aha” moment. Don’t switch off entirely, though. For verbal problems, a break from the clue seems to be more fruitful if you occupy yourself with another task, such as drawing a picture or reading (Psychological Bulletin, vol 135, p 94).

So if 1 across has you flummoxed, you could leave it and take a nice bath, or better still read a novel. Or just move on to the next clue.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Newspaper crossword puzzle. Courtesy of Polytechnic West.[end-div]

Morality for Atheists

The social standing of atheists seems to be on the rise. Back in December we cited a research study that found atheists to be more reviled than rapists. Well, a more recent study now finds that atheists are less disliked than members of the Tea Party.

With this in mind Louise Antony ponders how it is possible for atheists to acquire morality without the help of God.

[div class=attrib]From the New York Times:[end-div]

I was heartened to learn recently that atheists are no longer the most reviled group in the United States: according to the political scientists Robert Putnam and David Campbell, we’ve been overtaken by the Tea Party.  But even as I was high-fiving my fellow apostates (“We’re number two!  We’re number two!”), I was wondering anew: why do so many people dislike atheists?

I gather that many people believe that atheism implies nihilism — that rejecting God means rejecting morality.  A person who denies God, they reason, must be, if not actively evil, at least indifferent to considerations of right and wrong.  After all, doesn’t the dictionary list “wicked” as a synonym for “godless?”  And isn’t it true, as Dostoevsky said, that “if God is dead, everything is permitted”?

Well, actually — no, it’s not.  (And for the record, Dostoevsky never said it was.)   Atheism does not entail that anything goes.

Admittedly, some atheists are nihilists.  (Unfortunately, they’re the ones who get the most press.)  But such atheists’ repudiation of morality stems more from an antecedent cynicism about ethics than from any philosophical view about the divine.  According to these nihilistic atheists, “morality” is just part of a fairy tale we tell each other in order to keep our innate, bestial selfishness (mostly) under control.  Belief in objective “oughts” and “ought nots,” they say, must fall away once we realize that there is no universal enforcer to dish out rewards and punishments in the afterlife.  We’re left with pure self-interest, more or less enlightened.

This is a Hobbesian view: in the state of nature “[t]he notions of right and wrong, justice and injustice have no place.  Where there is no common power, there is no law: where no law, no injustice.”  But no atheist has to agree with this account of morality, and lots of us do not.  We “moralistic atheists” do not see right and wrong as artifacts of a divine protection racket.  Rather, we find moral value to be immanent in the natural world, arising from the vulnerabilities of sentient beings and from the capacities of rational beings to recognize and to respond to those vulnerabilities and capacities in others.

This view of the basis of morality is hardly incompatible with religious belief.  Indeed, anyone who believes that God made human beings in His image believes something like this — that there is a moral dimension of things, and that it is in our ability to apprehend it that we resemble the divine.  Accordingly, many theists, like many atheists, believe that moral value is inherent in morally valuable things.  Things don’t become morally valuable because God prefers them; God prefers them because they are morally valuable. At least this is what I was taught as a girl, growing up Catholic: that we could see that God was good because of the things He commands us to do.  If helping the poor were not a good thing on its own, it wouldn’t be much to God’s credit that He makes charity a duty.

It may surprise some people to learn that theists ever take this position, but it shouldn’t.  This position is not only consistent with belief in God, it is, I contend, a more pious position than its opposite.  It is only if morality is independent of God that we can make moral sense out of religious worship.  It is only if morality is independent of God that any person can have a moral basis for adhering to God’s commands.

Let me explain why.  First let’s take a cold hard look at the consequences of pinning morality to the existence of God.  Consider the following moral judgments — judgments that seem to me to be obviously true:

• It is wrong to drive people from their homes or to kill them because you want their land.

• It is wrong to enslave people.

• It is wrong to torture prisoners of war.

• Anyone who witnesses genocide, or enslavement, or torture, is morally required to try to stop it.

To say that morality depends on the existence of God is to say that none of these specific moral judgments is true unless God exists.  That seems to me to be a remarkable claim.  If God turned out not to exist — then slavery would be O.K.?  There’d be nothing wrong with torture?  The pain of another human being would mean nothing?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Sam Harris. Courtesy of Salon.[end-div]

The Sheer Joy of Unconnectedness

Seventeenth-century polymath Blaise Pascal had it right when he remarked, “Distraction is the only thing that consoles us for our miseries, and yet it is itself the greatest of our miseries.”

Here in the 21st century we have so many distractions that even our distractions get little attention. Author Pico Iyer shares his prognosis and shows that perhaps the much younger generation may be making some progress “in terms of sensing not what’s new, but what’s essential.”

[div class=attrib]From the New York Times:[end-div]

ABOUT a year ago, I flew to Singapore to join the writer Malcolm Gladwell, the fashion designer Marc Ecko and the graphic designer Stefan Sagmeister in addressing a group of advertising people on “Marketing to the Child of Tomorrow.” Soon after I arrived, the chief executive of the agency that had invited us took me aside. What he was most interested in, he began — I braced myself for mention of some next-generation stealth campaign — was stillness.

A few months later, I read an interview with the perennially cutting-edge designer Philippe Starck. What allowed him to remain so consistently ahead of the curve? “I never read any magazines or watch TV,” he said, perhaps a little hyperbolically. “Nor do I go to cocktail parties, dinners or anything like that.” He lived outside conventional ideas, he implied, because “I live alone mostly, in the middle of nowhere.”

Around the same time, I noticed that those who part with $2,285 a night to stay in a cliff-top room at the Post Ranch Inn in Big Sur pay partly for the privilege of not having a TV in their rooms; the future of travel, I’m reliably told, lies in “black-hole resorts,” which charge high prices precisely because you can’t get online in their rooms.

Has it really come to this?

In barely one generation we’ve moved from exulting in the time-saving devices that have so expanded our lives to trying to get away from them — often in order to make more time. The more ways we have to connect, the more many of us seem desperate to unplug. Like teenagers, we appear to have gone from knowing nothing about the world to knowing too much all but overnight.

Internet rescue camps in South Korea and China try to save kids addicted to the screen.

Writer friends of mine pay good money to get the Freedom software that enables them to disable (for up to eight hours) the very Internet connections that seemed so emancipating not long ago. Even Intel (of all companies) experimented in 2007 with conferring four uninterrupted hours of quiet time every Tuesday morning on 300 engineers and managers. (The average office worker today, researchers have found, enjoys no more than three minutes at a time at his or her desk without interruption.) During this period the workers were not allowed to use the phone or send e-mail, but simply had the chance to clear their heads and to hear themselves think. A majority of Intel’s trial group recommended that the policy be extended to others.

THE average American spends at least eight and a half hours a day in front of a screen, Nicholas Carr notes in his eye-opening book “The Shallows,” in part because the number of hours American adults spent online doubled between 2005 and 2009 (and the number of hours spent in front of a TV screen, often simultaneously, is also steadily increasing).

The average American teenager sends or receives 75 text messages a day, though one girl in Sacramento managed to handle an average of 10,000 every 24 hours for a month. Since luxury, as any economist will tell you, is a function of scarcity, the children of tomorrow, I heard myself tell the marketers in Singapore, will crave nothing more than freedom, if only for a short while, from all the blinking machines, streaming videos and scrolling headlines that leave them feeling empty and too full all at once.

The urgency of slowing down — to find the time and space to think — is nothing new, of course, and wiser souls have always reminded us that the more attention we pay to the moment, the less time and energy we have to place it in some larger context.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Processing large amounts of information may lead our brains to forget exactly where it all came from. Courtesy of NY Daily News / Chamoun/Getty.[end-div]

Levelling the Political Playing Field

Let’s face it, taking money out of politics in the United States, especially since the 2010 Supreme Court Decision (Citizens United v. Federal Election Commission), is akin to asking a hardcore addict to give up his or her favorite substance — it’s unlikely to be easy, if at all possible.

So, another approach might be to “re-distribute” the funds more equitably. Not a new idea — a number of European nations do this today. However, Max Frankel over at the NY Review of Books offers a thoughtful proposal with a new twist.

[div class=attrib]By Max Frankel:[end-div]

Every election year brings vivid reminders of how money distorts our politics, poisons our lawmaking, and inevitably widens the gulf between those who can afford to buy influence and the vast majority of Americans who cannot. In 2012, this gulf will become a chasm: one analysis predicts that campaign spending on presidential, congressional, and state elections may exceed $6 billion and all previous records. The Supreme Court has held that money is, in effect, speech; it talks, and those without big money have become progressively voiceless.

That it may cost as much as a billion dollars to run for President is scandal enough, but the multimillions it now takes to pursue or defend a seat in Congress are even more corrupting. Many of our legislators spend hours of every day begging for contributions from wealthy constituents and from the lobbyists for corporate interests. The access and influence that they routinely sell give the moneyed a seat at the tables where laws are written, to the benefit of those contributors and often to the disadvantage of the rest of us.

And why do the candidates need all that money? Because electoral success requires them to buy endless hours of expensive television time for commercials that advertise their virtues and, more often, roundly assail their opponents with often spurious claims. Of the more than a billion dollars spent on political commercials this year, probably more than half will go for attack ads.

It has long been obvious that television ads dominate electioneering in America. Most of those thirty-second ads are glib at best but much of the time they are unfair smears of the opposition. And we all know that those sordid slanders work—the more negative the better—unless they are instantly answered with equally facile and equally expensive rebuttals.

Other election expenses pale beside the ever larger TV budgets. Campaign staffs, phone and email solicitations, billboards and buttons and such could easily be financed with the small contributions of ordinary voters. But the decisive TV competitions leave politicians at the mercy of self-interested wealthy individuals, corporations, unions, and groups, now often disguised in “Super PACs” that can spend freely on any candidate so long as they are not overtly coordinating with that candidate’s campaign. Even incumbents who face no immediate threat feel a need to keep hoarding huge war chests with which to discourage potential challengers. Senator Charles Schumer of New York, for example, was easily reelected to a third term in 2010 but stands poised five years before his next run with a rapidly growing fund of $10 million.

A rational people looking for fairness in their politics would have long ago demanded that television time be made available at no cost and apportioned equally among rival candidates. But no one expects that any such arrangement is now possible. Political ads are jealously guarded as a major source of income by television stations. And what passes for news on most TV channels gives short shrift to most political campaigns except perhaps to “cover” the advertising combat.

As a political reporter and editor, I concluded long ago that efforts to limit campaign contributions and expenditures have been either disingenuous or futile. Most spending caps are too porous. In fact, they have further distorted campaigns by favoring wealthy candidates whose spending on their own behalf the Supreme Court has exempted from all limitations. And the public has overwhelmingly rejected the use of tax money to subsidize campaigning. In any case, private money that wants to buy political influence tends to behave like water running downhill: it will find a way around most obstacles. Since the court’s decision in the 2010 Citizens United case, big money is now able to find endless new paths, channeling even tax-exempt funds into political pools.

There are no easy ways to repair our entire election system. But I believe that a large degree of fairness could be restored to our campaigns if we level the TV playing field. And given the television industry’s huge stake in paid political advertising, it (and the Supreme Court) would surely resist limiting campaign ads, as many European countries do. With so much campaign cash floating around, there is only one attractive remedy I know of: double the price of political commercials so that every candidate’s purchase of TV time automatically pays for a comparable slot awarded to an opponent. The more you spend, the more your rival benefits as well. The more you attack, the more you underwrite the opponent’s responses. The desirable result would likely be that rival candidates would negotiate an arms control agreement, setting their own limits on their TV budgets and maybe even on their rhetoric.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Alliance for a Just Society.[end-div]

Salad Bar Strategies

It turns out that human behavior at the ubiquitous, self-serve salad bar in your suburban restaurant or hotel is a rather complex affair. There is a method to optimizing the type and quantity of food on one’s plate.

[div class=attrib]From the New Scientist:[end-div]

Competition, greed and skulduggery are the name of the game if you want to eat your fill. Smorgasbord behaviour is surprisingly complex.

A mathematician, an engineer and a psychologist go up to a buffet… No, it’s not the start of a bad joke.

While most of us would dive into the sandwiches without thinking twice, these diners see a groaning table as a welcome opportunity to advance their research.

Look behind the salads, sausage rolls and bite-size pizzas and it turns out that buffets are a microcosm of greed, sexual politics and altruism – a place where our food choices are driven by factors we’re often unaware of. Understand the science and you’ll see buffets very differently next time you fill your plate.

The story starts with Lionel Levine of Cornell University in Ithaca, New York, and Katherine Stange of Stanford University, California. They were sharing food at a restaurant one day, and wondered: do certain choices lead to tastier platefuls when food must be divided up? You could wolf down everything in sight, of course, but these guys are mathematicians, so they turned to a more subtle approach: game theory.

Applying mathematics to a buffet is harder than it sounds, so they started by simplifying things. They modelled two people taking turns to pick items from a shared platter – hardly a buffet, more akin to a polite tapas-style meal. It was never going to generate a strategy for any occasion, but hopefully useful principles would nonetheless emerge. And for their bellies, the potential rewards were great.

First they assumed that each diner would have individual preferences. One might place pork pie at the top and beetroot at the bottom, for example, while others might salivate over sausage rolls. That ranking can be plugged into calculations by giving each food item a score, where higher-ranked foods are worth more points. The most enjoyable buffet meal would be the one that scores highest in total.

In some scenarios, the route to the most enjoyable plate was straightforward. If both people shared the same rankings, they should pick their favourites first. But Levine and Stange also uncovered a counter-intuitive effect: it doesn’t always pay to take the favourite item first. To devise an optimum strategy, they say, you should take into account what your food rival considers to be the worst food on the table.

If that makes your brow furrow, consider this: if you know your fellow diner hates chicken legs, you know that can be the last morsel you aim to eat – even if it’s one of your favourites. In principle, if you had full knowledge of your food rival’s preferences, it would be possible to work backwards from their least favourite and identify the optimum order in which to fill your plate, according to the pair’s calculations, which will appear in American Mathematical Monthly (arxiv.org/abs/1104.0961).

So how do you know what to select first? In reality, the buffet might be long gone before you have worked it out. Even if you have, the researchers’ strategy also assumes that you are at a rather polite buffet, taking turns, so it has its limitations. However, it does provide practical advice in some scenarios. For example, imagine Amanda is up against Brian, who she knows has the opposite ranking of tastes to her own. Amanda loves sausages, hates pickled onions, and is middling about quiche. Brian loves pickled onions, hates sausages, and shares the same view of quiche. Having identified that her favourites are safe, Amanda should prioritise morsels where their taste-rankings match – the quiche, in other words.
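For readers who like to tinker, here is a minimal sketch in Python of the kind of turn-taking game the mathematicians analyse. It is not Levine and Stange’s actual formulation; the scores for Amanda and Brian are illustrative assumptions based on the example above, and the little solver simply searches all alternating-pick orders to find the best plateful each diner can guarantee when both play optimally.

from functools import lru_cache

def optimal_picks(items, scores_a, scores_b):
    # Optimal alternating play, A moves first; returns (total_a, total_b, pick_order).

    @lru_cache(maxsize=None)
    def solve(remaining, a_to_move):
        if not remaining:
            return 0, 0, ()
        best = None
        for item in remaining:
            rest = tuple(x for x in remaining if x != item)
            ta, tb, order = solve(rest, not a_to_move)
            if a_to_move:
                ta += scores_a[item]
            else:
                tb += scores_b[item]
            key = ta if a_to_move else tb  # the mover maximises their own total
            if best is None or key > best[0]:
                best = (key, ta, tb, (item,) + order)
        return best[1], best[2], best[3]

    return solve(tuple(sorted(items)), True)

# Illustrative preferences from the Amanda-and-Brian example (higher = tastier).
amanda = {"sausage": 3, "quiche": 2, "pickled onion": 1}
brian = {"sausage": 1, "quiche": 2, "pickled onion": 3}
print(optimal_picks(amanda.keys(), amanda, brian))
# -> (5, 3, ('quiche', 'pickled onion', 'sausage'))

On this tiny menu the solver has Amanda open with the contested quiche and still end up with her beloved sausage, for the same total she would get by grabbing the sausage first, which is exactly the article’s point: an item your rival hates can safely be left for last.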

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Salad bars. Courtesy of Google search.[end-div]

Social Influence Through Social Media: Not!

Online social networks are an unprecedentedly rich source of material for psychologists, social scientists and observers of human behavior. Now a recent study shows that influence through these networks may not be as powerful or widespread as first thought. The study, “Social Selection and Peer Influence in an Online Social Network,” by Kevin Lewis, Marco Gonzalez and Jason Kaufman is available here.

[div class=attrib]From the Wall Street Journal:[end-div]

Social media gives ordinary people unprecedented power to broadcast their taste in music, movies and books, but for the most part those tastes don’t rub off on other people, a new study of college students finds. Instead, social media appears to strengthen our bonds with people whose tastes already resemble ours.

Researchers followed the Facebook pages and networks of some 1,000 students, at one college, for four years (looking only at public information). The strongest determinant of Facebook friendship was “mere propinquity” — living in the same building, studying the same subject—but people also self-segregated by gender, race, socioeconomic background and place of origin.

When it came to culture, researchers used an algorithm to identify taste “clusters” within the categories of music, movies, and books. They learned that fans of “lite/classic rock” and “classical/jazz” were significantly more likely than chance would predict to form and maintain friendships, as were devotees of films featuring “dark satire” or “raunchy comedy / gore.” But this was the case for no other music or film genre — and for no books.

What’s more, “jazz/classical” was the only taste to spread from people who possessed it to those who lacked it. The researchers suggest that this is because liking jazz and classical music serves as a class marker, one that college-age people want to acquire. (I’d prefer to believe that they adopt those tastes on aesthetic grounds, but who knows?) “Indie/alt” music, in fact, was the opposite of contagious: People whose friends liked that style of music tended to drop that preference themselves, over time.

[div class=attrib]Read the entire article here.[end-div]

Walking Through Doorways and Forgetting

[div class=attrib]From Scientific American:[end-div]

The French poet Paul Valéry once said, “The purpose of psychology is to give us a completely different idea of the things we know best.”  In that spirit, consider a situation many of us will find we know too well:  You’re sitting at your desk in your office at home. Digging for something under a stack of papers, you find a dirty coffee mug that’s been there so long it’s eligible for carbon dating.  Better wash it. You pick up the mug, walk out the door of your office, and head toward the kitchen.  By the time you get to the kitchen, though, you’ve forgotten why you stood up in the first place, and you wander back to your office, feeling a little confused—until you look down and see the cup.

So there’s the thing we know best:  The common and annoying experience of arriving somewhere only to realize you’ve forgotten what you went there to do.  We all know why such forgetting happens: we didn’t pay enough attention, or too much time passed, or it just wasn’t important enough.  But a “completely different” idea comes from a team of researchers at the University of Notre Dame.  The first part of their paper’s title sums it up:  “Walking through doorways causes forgetting.”

Gabriel Radvansky, Sabine Krawietz and Andrea Tamplin seated participants in front of a computer screen running a video game in which they could move around using the arrow keys. In the game, they would walk up to a table with a colored geometric solid sitting on it. Their task was to pick up the object and take it to another table, where they would put the object down and pick up a new one. Whichever object they were currently carrying was invisible to them, as if it were in a virtual backpack.

Sometimes, to get to the next object the participant simply walked across the room. Other times, they had to walk the same distance, but through a door into a new room. From time to time, the researchers gave them a pop quiz, asking which object was currently in their backpack. The quiz was timed so that when they walked through a doorway, they were tested right afterwards. As the title said, walking through doorways caused forgetting: Their responses were both slower and less accurate when they’d walked through a doorway into a new room than when they’d walked the same distance within the same room.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Doorway, Titicaca, Bolivia. Courtesy of M.Gerra Assoc.[end-div]

Irrational Exuberance and Holiday Shopping

‘Tis the season to buy, give, receive and “re-gift” mostly useless and unwanted “stuff”. That’s how many economists would characterize these days of retail madness. Matthew Yglesias over at Slate ponders a more efficient way to redistribute wealth.

[div class=attrib]From Slate:[end-div]

Christmas is not the most wonderful time of the year for economists. The holiday spirit is puzzlingly difficult to model: It plays havoc with the notion of rational utility-maximization. There’s so much waste! Price-insensitive travelers pack airports beyond capacity on Dec. 24 only to leave planes empty on Christmas Day. Even worse are the gifts, which represent an abandonment of our efficient system of monetary exchange in favor of a semi-barbaric form of bartering.

Still, even the most rational and Scroogey of economists must concede that gift-giving is clearly here to stay. What’s needed is a bit of advice: What can economics tell us about efficient gifting so that your loved ones get the most bang for your buck?

We need to start with the basic problem of gift-giving and barter in general: preference heterogeneity. Different people, in other words, want different stuff and they value it differently.

In a system of monetary exchange, everything has more or less one price. In that sense, we can say that a Lexus or a pile of coconuts is “worth” a certain amount: its market price. But I, personally, would have little use for a Lexus. I live in an apartment building near a Metro station and above a supermarket; I walk to work; and driving up to New York to visit my family is much less practical than taking a bus or a train. So while of course I won’t complain if you buy me a Lexus, its value to me will be low relative to its market price. Similarly, I don’t like coconuts and I’m not on the verge of starvation. If you dump a pile of coconuts in my living room, all you’re doing is creating a hassle for me. The market price of coconuts is low, but the utility I would derive from a gift of coconuts is actually negative.

In the case of the Lexus, the sensible thing for me to do would be to sell the car. But this would be a bit of a hassle and would doubtless leave me with less money in my pocket than you spent.

This gap between what something is worth to me and what it actually costs is “deadweight loss.” The deadweight loss can be thought of in monetary terms, or you might think of it as the hassle involved in returning something for store credit. It’s the gap in usefulness between a $20 gift certificate to the Olive Garden and a $20 bill that could, among other things, be used to buy $20 worth of food at Olive Garden. Research suggests that there’s quite a lot of deadweight loss during the holiday season. Joel Waldfogel’s classic paper (later expanded into a short book) suggests that gift exchange carries with it an average deadweight loss of 10 percent to a third of the value of the gifts. The National Retail Federation is projecting total holiday spending of more than $460 billion, implying $46-$152 billion worth of holiday wastage, potentially equivalent to an entire year’s worth of output from Iowa.
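As a back-of-the-envelope check on that range, here is a tiny Python sketch. The inputs are simply the figures quoted above, roughly $460 billion in projected spending and a deadweight loss of 10 percent to about a third of the gifts’ value (the 0.33 upper bound is an assumption chosen to reproduce the article’s $152 billion figure).

# Rough reconstruction of the holiday "wastage" range quoted above.
spending = 460  # billions of dollars, per the cited National Retail Federation projection
low_rate, high_rate = 0.10, 0.33  # "10 percent to a third" of the value of the gifts
print(f"${spending * low_rate:.0f} billion to ${spending * high_rate:.0f} billion wasted")
# -> $46 billion to $152 billion wasted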

Partially rescuing Christmas is the reality that a lot of gift-giving isn’t exchange at all. Rather, it’s a kind of Robin Hood transfer in which we take resources from (relatively) rich parents and grandparents and give them to kids with little or no income. This is welfare enhancing for the same reason that redistributive taxation is welfare enhancing: People with less money need the stuff more.

[div class=attrib]Read the entire article here.[end-div]

The Psychology of Gift Giving

[div class=attrib]From the Wall Street Journal:[end-div]

Many of my economist friends have a problem with gift-giving. They view the holidays not as an occasion for joy but as a festival of irrationality, an orgy of wealth-destruction.

Rational economists fixate on a situation in which, say, your Aunt Bertha spends $50 on a shirt for you, and you end up wearing it just once (when she visits). Her hard-earned cash has evaporated, and you don’t even like the present! One much-cited study estimated that as much as a third of the money spent on Christmas is wasted, because recipients assign a value lower than the retail price to the gifts they receive. Rational economists thus make a simple suggestion: Give cash or give nothing.

But behavioral economics, which draws on psychology as well as on economic theory, is much more appreciative of gift giving. Behavioral economics better understands why people (rightly, in my view) don’t want to give up the mystery, excitement and joy of gift giving.

In this view, gifts aren’t irrational. It’s just that rational economists have failed to account for their genuine social utility. So let’s examine the rational and irrational reasons to give gifts.

Some gifts, of course, are basically straightforward economic exchanges. This is the case when we buy a nephew a package of socks because his mother says he needs them. It is the least exciting kind of gift but also the one that any economist can understand.

A second important kind of gift is one that tries to create or strengthen a social connection. The classic example is when somebody invites us for dinner and we bring something for the host. It’s not about economic efficiency. It’s a way to express our gratitude and to create a social bond with the host.

Another category of gift, which I like a lot, is what I call “paternalistic” gifts—things you think somebody else should have. I like a certain Green Day album or Julian Barnes novel or the book “Predictably Irrational,” and I think that you should like it, too. Or I think that singing lessons or yoga classes will expand your horizons—and so I buy them for you.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Google search results for ‘gifts’.[end-div]

Hitchens Returns to Stardust

Having just posted this article on Christopher Hitchens earlier in the week, we at theDiagonal are compelled to mourn and signal his departure. Christopher Hitchens died on December 15, 2011, of pneumonia, a complication of his esophageal cancer.

His incisive mind, lucid reason, quick wit and forceful skepticism will be sorely missed. Luckily, his written words, of which there are many, will live on.

Richard Dawkins writes of his fellow atheist:

Farewell, great voice. Great voice of reason, of humanity, of humour. Great voice against cant, against hypocrisy, against obscurantism and pretension, against all tyrants including God.

Author Ian McEwan writes of his close friend’s last weeks, an account we excerpt below.

[div class=attrib]From the Guardian:[end-div]

The place where Christopher Hitchens spent his last few weeks was hardly bookish, but he made it his own. Close to downtown Houston, Texas is the medical centre, a cluster of high-rises like La Défense of Paris, or the City of London, a financial district of a sort, where the common currency is illness. This complex is one of the world’s great concentrations of medical expertise and technology. Its highest building, 40 or 50 storeys up, denies the possibility of a benevolent god – a neon sign proclaims from its roof a cancer hospital for children. This “clean-sliced cliff”, as Larkin puts it in his poem about a tower-block hospital, was right across the way from Christopher’s place – which was not quite as high, and adults only.

No man was ever as easy to visit in hospital. He didn’t want flowers and grapes, he wanted conversation, and presence. All silences were useful. He liked to find you still there when he woke from his frequent morphine-induced dozes. He wasn’t interested in being ill, the way most ill people are. He didn’t want to talk about it.

When I arrived from the airport on my last visit, he saw sticking out of my luggage a small book. He held out his hand for it – Peter Ackroyd’s London Under, a subterranean history of the city. Then we began a 10-minute celebration of its author. We had never spoken of him before, and Christopher seemed to have read everything. Only then did we say hello. He wanted the Ackroyd, he said, because it was small and didn’t hurt his wrist to hold. But soon he was making pencilled notes in its margins. By that evening he’d finished it.

He could have written a review, but he was due to turn in a long piece on Chesterton. And so this was how it would go: talk about books and politics, then he dozed while I read or wrote, then more talk, then we both read. The intensive care unit room was crammed with flickering machines and sustaining tubes, but they seemed almost decorative. Books, journalism, the ideas behind both, conquered the sterile space, or warmed it, they raised it to the condition of a good university library. And they protected us from the bleak high-rise view through the plate glass windows, of that world, in Larkin’s lines, whose loves and chances “are beyond the stretch/Of any hand from here!”

In the afternoon I was helping him out of bed, the idea being that he was to take a shuffle round the nurses’ station to exercise his legs. As he leaned his trembling, diminished weight on me, I said, only because I knew he was thinking it, “Take my arm old toad …” He gave me that shifty sideways grin I remembered so well from healthy days. It was the smile of recognition, or one that anticipates in late afternoon an “evening of shame” – that is to say, pleasure, or, one of his favourite terms, “sodality”.

His unworldly fluency never deserted him, his commitment was passionate, and he never deserted his trade. He was the consummate writer, the brilliant friend. In Walter Pater’s famous phrase, he burned “with this hard gem-like flame”. Right to the end.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Christopher Hitchens with Ian McEwan (left) and Martin Amis in Uruguay, posing for a picture which appeared in his memoirs, Hitch 22. Courtesy of Guardian / PR.[end-div]

Why Converse When You Can Text?

The holidays approach, which for many means spending a more than usual amount of time with extended family and distant relatives. So, why talk face-to-face when you could text Great Uncle Aloysius instead?

Dominique Browning suggests lowering the stress levels of family get-togethers through more texting and less face-time.

[div class=attrib]From the New York Times:[end-div]

ADMIT it. The holiday season has just begun, and already we’re overwhelmed by so much … face time. It’s hard, face-to-face emoting, face-to-face empathizing, face-to-face expressing, face-to-face criticizing. Thank goodness for less face time; when it comes to disrupting, if not severing, lifetimes of neurotic relational patterns, technology works even better than psychotherapy.

We look askance at those young adults in a swivet of tech-enabled multifriending, endlessly texting, tracking one another’s movements — always distracted from what they are doing by what they are not doing, always connecting to people they are not with rather than people right in front of them.

But being neither here nor there has real upsides. It’s less strenuous. And it can be more uplifting. Or, at least, safer, which has a lot going for it these days.

Face time — or what used to be known as spending time with friends and family — is exhausting. Maybe that’s why we’re all so quick to abandon it. From grandfathers to tweenies, we’re all taking advantage of the ways in which we can avoid actually talking, much less seeing, one another — but still stay connected.

The last time I had face time with my mother, it started out fine. “What a lovely blouse,” she said, plucking lovingly (as I chose to think) at my velvet sleeve. I smiled, pleased that she was noticing that I had made an effort. “Too bad it doesn’t go with your skirt.” Had we been on Skype, she would never have noticed my (stylishly intentional, I might add, just ask Marni) intriguing mix of textures. And I would have been spared another bout of regressive face time freak-out.

Face time means you can’t search for intriguing recipes while you are listening to a fresh round of news about a friend’s search for a soul mate. You can’t mute yourself out of an endless meeting, or listen to 10 people tangled up in planning while you vacuum the living room. You can’t get “cut off” — Whoops! Sorry! Tunnel! — in the middle of a tedious litany of tax problems your accountant has spotted.

My move away from face time started with my children; they are generally the ones who lead us into the future. It happened gradually. First, they left home. That did it for face time. Then I stopped getting return phone calls to voice mails. That did it for voice time, which I’d used to wean myself from face time. What happened?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: People texting. Courtesy of Mashable.com.[end-div]

Consciousness as Illusion?

Massimo Pigliucci over at Rationally Speaking ponders free will, moral responsibility and consciousness and, as always, presents a well-reasoned and eloquent argument — we do exist!

[div class=attrib]From Rationally Speaking:[end-div]

For some time I have been noticing the emergence of a strange trinity of beliefs among my fellow skeptics and freethinkers: an increasing number of them, it seems, don’t believe that they can make decisions (the free will debate), don’t believe that they have moral responsibility (because they don’t have free will, or because morality is relative — take your pick), and they don’t even believe that they exist as conscious beings because, you know, consciousness is an illusion.

As I have argued recently, there are sensible ways to understand human volition (a much less metaphysically loaded and more sensible term than free will) within a lawful universe (Sean Carroll agrees and, interestingly, so does my sometime opponent Eliezer Yudkowsky). I also devoted an entire series on this blog to a better understanding of what morality is, how it works, and why it ain’t relative (within the domain of social beings capable of self-reflection). Let’s talk about consciousness then.

The oft-heard claim that consciousness is an illusion is an extraordinary one, as it relegates to an entirely epiphenomenal status what is arguably the most distinctive characteristic of human beings, the very thing that seems to shape and give meaning to our lives, and presumably one of the major outcomes of millions of years of evolution pushing for a larger brain equipped with powerful frontal lobes capable of carrying out reasoning and deliberation.

Still, if science tells us that consciousness is an illusion, we must bow to that pronouncement and move on (though we apparently cannot escape the illusion, partly because we have no free will). But what is the extraordinary evidence for this extraordinary claim? To begin with, there are studies of (very few) “split brain” patients which seem to indicate that the two hemispheres of the brain — once separated — display independent consciousness (under experimental circumstances), to the point that they may even try to make the left and right sides of the body act antagonistically to each other.

But there are a couple of obvious issues here that block an easy jump from observations on those patients to grand conclusions about the illusoriness of consciousness. First off, the two hemispheres are still conscious, so at best we have evidence that consciousness is divisible, not that it is an illusion (and that subdivision presumably can proceed no further than n=2). Second, these are highly pathological situations, and though they certainly tell us something interesting about the functioning of the brain, they are informative mostly about what happens when the brain does not function. As a crude analogy, imagine sawing a car in two, noticing that the front wheels now spin independently of the rear wheels, and concluding that the synchronous rotation of the wheels in the intact car is an “illusion.” Not a good inference, is it?

Let’s pursue this illusion thing a bit further. Sometimes people argue that physics tells us that the way we perceive the world is also an illusion. After all, apparently solid objects like tables are made of quarks and the forces that bind them together, and since that’s the fundamental level of reality (well, unless you accept string theory) then clearly our senses are mistaken.

But our senses are not mistaken at all; they simply function at the (biologically) appropriate level of perception of reality. We are macroscopic objects and need to navigate the world as such. It would be highly inconvenient if we could somehow perceive quantum level phenomena directly, and in a very strong sense the solidity of a table is not an illusion at all. It is rather an emergent property of matter that our evolved senses exploit to allow us to sit down and have a nice meal at that table without worrying about the zillions of subnuclear interactions going on about it all the time.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Consciousness Art. Courtesy of Google search.[end-div]

Would You Let An Atheist Teacher Babysit Your Children?

For adults living in North America, the answer may well be no: new research suggests that atheists are distrusted roughly as much as rapists in certain circumstances. Startling as that may seem, the conclusion is backed by some real science, excerpted below.

[div class=attrib]From the Washington Post:[end-div]

A new study finds that atheists are among society’s most distrusted groups, comparable even to rapists in certain circumstances.

Psychologists at the University of British Columbia and the University of Oregon say that their study demonstrates that anti-atheist prejudice stems from moral distrust, not dislike, of nonbelievers.

“It’s pretty remarkable,” said Azim Shariff, an assistant professor of psychology at the University of Oregon and a co-author of the study, which appears in the current issue of Journal of Personality and Social Psychology.

The study, conducted among 350 American adults and 420 Canadian college students, asked participants to consider a fictional driver who damaged a parked car and left the scene, then found a wallet and took the money. Was the driver more likely to be a teacher, an atheist teacher, or a rapist teacher?

The participants, who were from religious and nonreligious backgrounds, most often chose the atheist teacher.

The study is part of an attempt to understand what needs religion fulfills in people. Among the conclusions is a sense of trust in others.

“People find atheists very suspect,” Shariff said. “They don’t fear God so we should distrust them; they do not have the same moral obligations of others. This is a common refrain against atheists. People fear them as a group.”

[div class=attrib]Follow the entire article here.[end-div]

[div class=attrib]Image: Ariane Sherine and Professor Richard Dawkins pose in front of a London bus featuring an atheist advertisement with the slogan “There’s probably no God. Now stop worrying and enjoy your life”. Courtesy Heathcliff  O’Malley / Daily Telegraph.[end-div]

Hitchens on the Desire to Have Died

Christopher Hitchens, incisive, erudite and eloquent as ever.

Author, polemicist par excellence, journalist, atheist, Orwellian (as in, following in George Orwell’s footsteps), and literary critic, Christopher Hitchens shows us how the pen truly is mightier than the sword (though he might well argue to the contrary).

Now fighting oesophageal cancer, Hitchens’s written word continues to provide clarity and insight. We excerpt below part of his recent, very personal essay for Vanity Fair on the miracle (scientific, that is) and madness of modern medicine.

[div class=attrib]From Vanity Fair:[end-div]

Death has this much to be said for it:
You don’t have to get out of bed for it.
Wherever you happen to be
They bring it to you—free.
—Kingsley Amis

Pointed threats, they bluff with scorn
Suicide remarks are torn
From the fool’s gold mouthpiece the hollow horn
Plays wasted words, proves to warn
That he not busy being born is busy dying.
—Bob Dylan, “It’s Alright, Ma (I’m Only Bleeding)”

When it came to it, and old Kingsley suffered from a demoralizing and disorienting fall, he did take to his bed and eventually turned his face to the wall. It wasn’t all reclining and waiting for hospital room service after that—“Kill me, you fucking fool!” he once alarmingly exclaimed to his son Philip—but essentially he waited passively for the end. It duly came, without much fuss and with no charge.

Mr. Robert Zimmerman of Hibbing, Minnesota, has had at least one very close encounter with death, more than one update and revision of his relationship with the Almighty and the Four Last Things, and looks set to go on demonstrating that there are many different ways of proving that one is alive. After all, considering the alternatives …

Before I was diagnosed with esophageal cancer a year and a half ago, I rather jauntily told the readers of my memoirs that when faced with extinction I wanted to be fully conscious and awake, in order to “do” death in the active and not the passive sense. And I do, still, try to nurture that little flame of curiosity and defiance: willing to play out the string to the end and wishing to be spared nothing that properly belongs to a life span. However, one thing that grave illness does is to make you examine familiar principles and seemingly reliable sayings. And there’s one that I find I am not saying with quite the same conviction as I once used to: In particular, I have slightly stopped issuing the announcement that “Whatever doesn’t kill me makes me stronger.”

In fact, I now sometimes wonder why I ever thought it profound. It is usually attributed to Friedrich Nietzsche: Was mich nicht umbringt macht mich stärker. In German it reads and sounds more like poetry, which is why it seems probable to me that Nietzsche borrowed it from Goethe, who was writing a century earlier. But does the rhyme suggest a reason? Perhaps it does, or can, in matters of the emotions. I can remember thinking, of testing moments involving love and hate, that I had, so to speak, come out of them ahead, with some strength accrued from the experience that I couldn’t have acquired any other way. And then once or twice, walking away from a car wreck or a close encounter with mayhem while doing foreign reporting, I experienced a rather fatuous feeling of having been toughened by the encounter. But really, that’s to say no more than “There but for the grace of god go I,” which in turn is to say no more than “The grace of god has happily embraced me and skipped that unfortunate other man.”

Or take an example from an altogether different and more temperate philosopher, nearer to our own time. The late Professor Sidney Hook was a famous materialist and pragmatist, who wrote sophisticated treatises that synthesized the work of John Dewey and Karl Marx. He too was an unrelenting atheist. Toward the end of his long life he became seriously ill and began to reflect on the paradox that—based as he was in the medical mecca of Stanford, California—he was able to avail himself of a historically unprecedented level of care, while at the same time being exposed to a degree of suffering that previous generations might not have been able to afford. Reasoning on this after one especially horrible experience from which he had eventually recovered, he decided that he would after all rather have died:

I lay at the point of death. A congestive heart failure was treated for diagnostic purposes by an angiogram that triggered a stroke. Violent and painful hiccups, uninterrupted for several days and nights, prevented the ingestion of food. My left side and one of my vocal cords became paralyzed. Some form of pleurisy set in, and I felt I was drowning in a sea of slime. In one of my lucid intervals during those days of agony, I asked my physician to discontinue all life-supporting services or show me how to do it.

The physician denied this plea, rather loftily assuring Hook that “someday I would appreciate the unwisdom of my request.” But the stoic philosopher, from the vantage point of continued life, still insisted that he wished he had been permitted to expire. He gave three reasons. Another agonizing stroke could hit him, forcing him to suffer it all over again. His family was being put through a hellish experience. Medical resources were being pointlessly expended. In the course of his essay, he used a potent phrase to describe the position of others who suffer like this, referring to them as lying on “mattress graves.”

If being restored to life doesn’t count as something that doesn’t kill you, then what does? And yet there seems no meaningful sense in which it made Sidney Hook “stronger.” Indeed, if anything, it seems to have concentrated his attention on the way in which each debilitation builds on its predecessor and becomes one cumulative misery with only one possible outcome. After all, if it were otherwise, then each attack, each stroke, each vile hiccup, each slime assault, would collectively build one up and strengthen resistance. And this is plainly absurd. So we are left with something quite unusual in the annals of unsentimental approaches to extinction: not the wish to die with dignity but the desire to have died.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Christopher Hitchens, 2010. Courtesy of Wikipedia.[end-div]

Do We Need Intellectuals in Politics?

The question, as posed by the New York Times, may have been somewhat rhetorical. However, as we can see from the rise of the technocratic classes in Europe, intellectuals still seem to be in reasonably strong demand, albeit no longer revered.

[div class=attrib]From the New York Times:[end-div]

The rise of Newt Gingrich, Ph.D. — along with the apparent anti-intellectualism of many of the other Republican candidates — has once again raised the question of the role of intellectuals in American politics.

In writing about intellectuals, my temptation is to begin by echoing Marianne Moore on poetry: I, too, dislike them.  But that would be a lie: all else equal, I really like intellectuals.  Besides, I’m an intellectual myself, and their self-deprecation is one thing I really do dislike about many intellectuals.

What is an intellectual?  In general, someone seriously devoted to what used to be called the “life of the mind”: thinking pursued not instrumentally, for the sake of practical goals, but simply for the sake of knowing and understanding.  Nowadays, universities are the most congenial spots for intellectuals, although even there corporatism and careerism are increasing threats.

Intellectuals tell us things we need to know: how nature and society work, what happened in our past, how to analyze concepts, how to appreciate art and literature.   They also keep us in conversation with the great minds of our past.  This conversation may not, as some hope, tap into a source of enduring wisdom, but it at least provides a critical standpoint for assessing the limits of our current cultural assumptions.

In his “Republic,” Plato put forward the ideal of a state ruled by intellectuals who combined comprehensive theoretical knowledge with the practical capacity for applying it to concrete problems.  In reality, no one has theoretical expertise in more than a few specialized subjects, and there is no strong correlation between having such knowledge and being able to use it to resolve complex social and political problems.  Even more important, our theoretical knowledge is often highly limited, so that even the best available expert advice may be of little practical value.  An experienced and informed non-expert may well have a better sense of these limits than experts strongly invested in their disciplines.  This analysis supports the traditional American distrust of intellectuals: they are not in general highly suited for political office.

But it does not support the anti-intellectualism that tolerates or even applauds candidates who disdain or are incapable of serious engagement with intellectuals.   Good politicians need not be intellectuals, but they should have intellectual lives.  Concretely, they should have an ability and interest in reading the sorts of articles that appear in, for example, Scientific American, The New York Review of Books, and the science, culture and op-ed sections of major national newspapers — as well as the books discussed in such articles.

It’s often said that what our leaders need is common sense, not fancy theories.  But common-sense ideas that work in individuals’ everyday lives are often useless for dealing with complex problems of society as a whole.  For example, it’s common sense that government payments to the unemployed will lead to more jobs because those receiving the payments will spend the money, thereby increasing demand, which will lead businesses to hire more workers.  But it’s also common sense that if people are paid for not working, they will have less incentive to work, which will increase unemployment.  The trick is to find the amount of unemployment benefits that will strike the most effective balance between stimulating demand and discouraging employment.  This is where our leaders need to talk to economists.

[div class=attrib]Read the entire article here.[end-div]

The Renaissance of Narcissism

In recent years narcissism has been taking a bad rap. So much so that Narcissistic Personality Disorder (NPD) was slated for removal from the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders – DSM-V. The DSM-V is the professional reference guide published by the American Psychiatric Association (APA). Psychiatrists and clinical psychologists had decided that they needed only 5 fundamental types of personality disorder: anti-social, avoidant, borderline, obsessive-compulsive and schizotypal. Hence no need for NPD.

Interestingly, in mid-2010 the APA reversed itself, saving narcissism from the personality-disorders chopping block. While this may be a win for narcissists, who get their “condition” back in the official catalog, some suggest it is a huge mistake. After all, narcissism now seems to have become a culturally fashionable, de rigueur activity rather than a full-blown pathological disorder.

[div class=attrib]From the Telegraph:[end-div]

… You don’t need to be a psychiatrist to see that narcissism has shifted from a pathological condition to a norm, if not a means of survival.

Narcissism appears as a necessity in a society of the spectacle, which runs from Andy Warhol’s “15 minutes of fame” prediction through reality television and self-promotion to YouTube hits.

While the media and social media had a role in normalising narcissism, photography has played along. We exist in and for society, only once we have been photographed. The photographic portrait is no longer linked to milestones like graduation ceremonies and weddings, or exceptional moments such as vacations, parties or even crimes. It has become part of a daily, if not minute-by-minute, staging of the self. Portraits appear to have been eclipsed by self-portraits: Tweeted, posted, shared.

According to Greek mythology, Narcissus was the man who fell in love with his reflection in a pool of water. According to the DSM-IV, 50-70 per cent of those diagnosed with NPD are men. But according to my Canadian upbringing, looking at one’s reflection in a mirror for too long was a weakness particular to the fairer sex and an anti-social taboo.

I recall doubting Cindy Sherman’s Untitled Film Stills (1977-80): wasn’t she just a narcissist taking pictures of herself all day long? At least she was modest enough to use a remote shutter trigger. Digital narcissism has recently gained attention with Gabriela Herman’s portrait series Bloggers (2010-11), which captures bloggers gazing into their glowing screens. Even closer to our narcissistic norm are Wolfram Hahn’s portraits of people taking pictures of themselves (Into the Light, 2009-10).

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Cindy Sherman: the Early Works 1975-77. Courtesy of the Telegraph / Frieze.[end-div]

Hari Seldon, Meet Neuroeconomics

Fans of Isaac Asimov’s groundbreaking Foundation novels will know Hari Seldon as the founder of “psychohistory”. Entirely fictional, psychohistory is a statistical science that makes possible predictions of future behavior of large groups of people, and is based on a mathematical analysis of history and sociology.

Now, 11,000 years or so back into our present reality comes the burgeoning field of “neuroeconomics”. As Slate reports, Seldon’s “psychohistory” may not be as far-fetched or as far away as we think.

[div class=attrib]From Slate:[end-div]

Neuroscience—the science of how the brain, that physical organ inside one’s head, really works—is beginning to change the way we think about how people make decisions. These findings will inevitably change the way we think about how economies function. In short, we are at the dawn of “neuroeconomics.”

Efforts to link neuroscience and economics have occurred mostly in just the last few years, and the growth of neuroeconomics is still in its early stages. But its nascence follows a pattern: Revolutions in science tend to come from completely unexpected places. A field of science can turn barren if no fundamentally new approaches to research are on the horizon. Scholars can become so trapped in their methods—in the language and assumptions of the accepted approach to their discipline—that their research becomes repetitive or trivial.

Then something exciting comes along from someone who was never involved with these methods—some new idea that attracts young scholars and a few iconoclastic old scholars, who are willing to learn a different science and its research methods. At some moment in this process, a scientific revolution is born.

The neuroeconomic revolution has passed some key milestones quite recently, notably the publication last year of neuroscientist Paul Glimcher’s book Foundations of Neuroeconomic Analysis—a pointed variation on the title of Paul Samuelson’s 1947 classic work, Foundations of Economic Analysis, which helped to launch an earlier revolution in economic theory.

Much of modern economic and financial theory is based on the assumption that people are rational, and thus that they systematically maximize their own happiness, or as economists call it, their “utility.” When Samuelson took on the subject in his 1947 book, he did not look into the brain, but relied instead on “revealed preference.” People’s objectives are revealed only by observing their economic activities. Under Samuelson’s guidance, generations of economists have based their research not on any physical structure underlying thought and behavior, but on the assumption of rationality.
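
As an aside (and this is an illustration of the textbook model, not anything from the article excerpted here): the rational-choice assumption Samuelson formalized can be stated compactly. Given prices and a budget, the consumer picks the affordable bundle with the highest utility; revealed preference runs the logic in reverse, inferring preferences from the choices we observe. The minimal sketch below uses a two-good consumer with a Cobb-Douglas utility function, with all numbers and functional forms invented purely for illustration.

```python
# A toy, purely illustrative model of "utility maximization": the consumer
# chooses the affordable bundle of two goods that maximizes a Cobb-Douglas
# utility function. The numbers and the functional form are assumptions
# made up for this sketch.

def utility(x, y, alpha=0.6):
    """Cobb-Douglas utility over quantities x and y of two goods."""
    return (x ** alpha) * (y ** (1 - alpha))

def best_affordable_bundle(budget, price_x, price_y, step=0.1):
    """Grid-search the budget line for the utility-maximizing bundle."""
    best_bundle, best_u = (0.0, 0.0), float("-inf")
    x = 0.0
    while x * price_x <= budget:
        y = (budget - x * price_x) / price_y  # spend whatever is left on good y
        u = utility(x, y)
        if u > best_u:
            best_bundle, best_u = (round(x, 1), round(y, 1)), u
        x += step
    return best_bundle, best_u

bundle, u = best_affordable_bundle(budget=100, price_x=2, price_y=5)
print(f"chosen bundle: {bundle}, utility: {u:.2f}")
# Revealed preference works backward: observe the bundle chosen at given
# prices and income, and infer the preferences that would rationalize it.
```

The neuroeconomic question, in effect, is whether anything in the brain actually computes something like this, rather than simply assuming that it does.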

While Glimcher and his colleagues have uncovered tantalizing evidence, they have yet to find most of the fundamental brain structures. Maybe that is because such structures simply do not exist, and the whole utility-maximization theory is wrong, or at least in need of fundamental revision. If so, that finding alone would shake economics to its foundations.

Another direction that excites neuroscientists is how the brain deals with ambiguous situations, when probabilities are not known or other highly relevant information is not available. It has already been discovered that the brain regions used to deal with problems when probabilities are clear are different from those used when probabilities are unknown. This research might help us to understand how people handle uncertainty and risk in, say, financial markets at a time of crisis.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Hari Seldon, Foundation by Isaac Asimov.[end-div]

Boost Your Brainpower: Chew Gum

So you wish to boost your brain function? Well, forget the folate, B vitamins, omega-3 fatty acids, ginkgo biloba, and the countless array of other supplements. Researchers have found that chewing gum boosts cognitive performance. However, while gum chewers perform significantly better on a battery of psychological tests, the boost is fleeting — lasting, on average, only for the first 20 minutes of testing.

[div class=attrib]From Wired:[end-div]

Why do people chew gum? If an anthropologist from Mars ever visited a typical supermarket, they’d be confounded by those shelves near the checkout aisle that display dozens of flavored gum options. Chewing without eating seems like such a ridiculous habit, the oral equivalent of running on a treadmill. And yet, people have been chewing gum for thousands of years, ever since the ancient Greeks began popping wads of mastic tree resin in their mouth to sweeten the breath. Socrates probably chewed gum.

It turns out there’s an excellent rationale for this long-standing cultural habit: Gum is an effective booster of mental performance, conferring all sorts of benefits without any side effects. The latest investigation of gum chewing comes from a team of psychologists at St. Lawrence University. The experiment went like this: 159 students were given a battery of demanding cognitive tasks, such as repeating random numbers backward and solving difficult logic puzzles. Half of the subjects chewed gum (sugar-free and sugar-added) while the other half were given nothing. Here’s where things get peculiar: Those randomly assigned to the gum-chewing condition significantly outperformed those in the control condition on five out of six tests. (The one exception was verbal fluency, in which subjects were asked to name as many words as possible from a given category, such as “animals.”) The sugar content of the gum had no effect on test performance.

While previous studies achieved similar results — chewing gum is often a better test aid than caffeine — this latest research investigated the time course of the gum advantage. It turns out to be rather short lived, as gum chewers only showed an increase in performance during the first 20 minutes of testing. After that, they performed identically to non-chewers.

What’s responsible for this mental boost? Nobody really knows. It doesn’t appear to depend on glucose, since sugar-free gum generated the same benefits. Instead, the researchers propose that gum enhances performance due to “mastication-induced arousal.” The act of chewing, in other words, wakes us up, ensuring that we are fully focused on the task at hand.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Chewing gum tree, Mexico D.F. Courtesy of mexicolore.[end-div]

It’s Actually 4.74 Degrees of Kevin Bacon

Six degrees of separation is a commonly held urban myth that, on average, everyone on Earth is six connections or fewer away from any other person. That is, through a chain of friend-of-a-friend (of a friend, and so on) relationships you can find yourself linked to the President, the Chinese Premier, a farmer on the steppes of Mongolia, Nelson Mandela, the editor of theDiagonal, and any one of the other 7 billion people on the planet.

The modern notion of degrees of separation stems from Michael Gurevich’s 1961 research at the Massachusetts Institute of Technology on the structure of social networks. Subsequently, the Austrian mathematician Manfred Kochen proposed, in his theory of connectedness for a U.S.-sized population, that “it is practically certain that any two individuals can contact one another by means of at least two intermediaries.” In 1967 the psychologist Stanley Milgram and colleagues tested this idea through acquaintanceship-network experiments on what was then called the Small World Problem. In one example, 296 volunteers were asked to send a message by postcard, through friends and then friends of friends, to a specific person living near Boston. Milgram’s work, published in Psychology Today, suggested that people in the United States were connected by approximately three friendship links, on average. The experiment generated a tremendous amount of publicity, and as a result he is to this day incorrectly credited with originating the idea and quantification of interconnectedness, and even with the phrase “six degrees of separation”.

In fact, the idea was originally articulated in 1929 by the Hungarian author Frigyes Karinthy and later popularized in a play by John Guare. Karinthy believed that the modern world was ‘shrinking’ due to the accelerating interconnectedness of humans, and he hypothesized that any two individuals could be connected through at most five acquaintances. In 1990 Guare unveiled the play “Six Degrees of Separation” (followed by a movie in 1993), which popularized the notion and enshrined it in popular culture. In the play, one of the characters reflects on the idea that any two individuals are connected by at most five others:

I read somewhere that everybody on this planet is separated by only six other people. Six degrees of separation between us and everyone else on this planet. The President of the United States, a gondolier in Venice, just fill in the names. I find it A) extremely comforting that we’re so close, and B) like Chinese water torture that we’re so close because you have to find the right six people to make the right connection… I am bound to everyone on this planet by a trail of six people.

Then, in 1994, along came the Kevin Bacon trivia game, “Six Degrees of Kevin Bacon”, invented as a play on the original concept. The goal of the game is to link any actor to Kevin Bacon through no more than six connections, where two actors are connected if they have appeared in a movie or commercial together.

Now, in 2011, comes a study of the connectedness of Facebook users. Using Facebook’s population of over 700 million users, researchers found that the average number of links from any arbitrarily selected user to another was 4.74; for Facebook users in the U.S., the average number of links was just 4.37. Facebook posted detailed findings on its site, here.

So the Small World Problem popularized by Milgram and colleagues is actually becoming smaller, as Frigyes Karinthy had originally suggested back in 1929. As a result, you may not be as “far” from the Chinese Premier or Nelson Mandela as you previously believed.
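
The statistic behind these headlines is simple to state: model friendships as a graph, count the hops along the shortest chain between two people, and average that count over many pairs. Below is a minimal sketch of the idea in Python, using a tiny, entirely made-up friendship graph and a breadth-first search. The Facebook study necessarily relied on approximate methods suited to hundreds of millions of users rather than an exhaustive pairwise count, so treat this as an illustration of the quantity being measured, not of their method.

```python
from collections import deque
from itertools import combinations

# A tiny, made-up friendship graph used only to illustrate the idea.
friends = {
    "ann": {"bob", "cat"},
    "bob": {"ann", "dan"},
    "cat": {"ann", "dan", "eve"},
    "dan": {"bob", "cat", "fay"},
    "eve": {"cat"},
    "fay": {"dan"},
}

def degrees_of_separation(graph, source, target):
    """Length of the shortest friend-of-a-friend chain, via breadth-first search."""
    if source == target:
        return 0
    seen = {source}
    queue = deque([(source, 0)])
    while queue:
        person, dist = queue.popleft()
        for friend in graph[person]:
            if friend == target:
                return dist + 1
            if friend not in seen:
                seen.add(friend)
                queue.append((friend, dist + 1))
    return None  # no connecting chain exists

# Average over every pair of people -- the quantity the Facebook study reports as 4.74.
pairs = list(combinations(friends, 2))
average = sum(degrees_of_separation(friends, a, b) for a, b in pairs) / len(pairs)
print(f"average degrees of separation: {average:.2f}")
```

On this six-person toy graph the average comes out to 1.80 hops; the striking empirical finding is that the same quantity stays below five even when the graph contains over 700 million people.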

[div class=attrib]Image: Six Degrees of Separation Poster by James McMullan. Courtesy of Wikipedia.[end-div]