Tag Archives: morality

Morality and a Second Language

Frequent readers will know that I’m intrigued by social science research into the human condition. Well, this collection of studies is fascinating. To summarize the general finding: you are less likely to behave ethically if you happen to be thinking in an acquired second language. Put another way, you are more moral when you think in your mother tongue.

Perhaps counter-intuitively, a moral judgement made in a foreign language requires more cognitive processing power than one made in the language of childhood. Consequently, dubious or reprehensible behavior is likely to be judged less wrong in a foreign language than when evaluated in one’s native tongue.

I suppose there is a very valuable lesson here: if you plan to do some shoplifting or rob a bank, then you should evaluate the pros and cons of your criminal enterprise in the second language that you learned in school.

From Scientific American:

What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.

And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?

Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.

In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?

Most people agree that they would. But what if the only way to stop the trolley is by pushing a large stranger off a footbridge into its path? People tend to be very reluctant to say they would do this, even though in both scenarios, one person is sacrificed to save five. But Costa and his colleagues found that posing the dilemma in a language that volunteers had learned as a foreign tongue dramatically increased their stated willingness to shove the sacrificial person off the footbridge, from fewer than 20% of respondents working in their native language to about 50% of those using the foreign one. (Both native Spanish- and English-speakers were included, with English and Spanish as their respective foreign languages; the results were the same for both groups, showing that the effect was about using a foreign language, and not about which particular language—English or Spanish—was used.)

Using a very different experimental setup, Janet Geipel and her colleagues also found that using a foreign language shifted their participants’ moral verdicts. In their study, volunteers read descriptions of acts that appeared to harm no one, but that many people find morally reprehensible—for example, stories in which siblings enjoyed entirely consensual and safe sex, or someone cooked and ate his dog after it had been killed by a car. Those who read the stories in a foreign language (either English or Italian) judged these actions to be less wrong than those who read them in their native tongue.

Read the entire article here.

Your Local Morality Police

Hot on the heels of my recent post on the thought police around the globe comes a more specific look at the morality police in selected Islamic nations.

I’ve written this before, and I’ll write it again: I am constantly reminded of my good fortune at having been born in, and later moved to, nations (the UK and the US, respectively) that value freedom of speech, freedom of association and freedom of religion.

Though, the current electioneering in the US does have me wondering how a Christian evangelical theocracy under a President Cruz would look.

From the BBC:

Police forces tasked with implementing strict state interpretations of Islamic morality exist in several other states, including Saudi Arabia, Sudan and Malaysia.

Many – especially those with an affinity with Western lifestyles – chafe against such restrictions on daily life, but others support the idea, and growing religious conservatism has led to pressure for similar forces to be created in countries that do not have them.

Here are some places where “morality police” forces patrol:


Iran

Name: Gasht-e Ershad (Persian for Guidance Patrols), supported by Basij militia

Who they are: Iran has had various forms of “morality police” since the 1979 Islamic Revolution, but the Gasht-e Ershad are currently the main agency tasked with enforcing Iran’s Islamic code of conduct in public.

Their focus is on ensuring observance of hijab – mandatory rules requiring women to cover their hair and bodies and discouraging cosmetics.


Saudi Arabia

Name: Committee for the Promotion of Virtue and the Prevention of Vice, or Mutawa (Arabic for “particularly obedient to God”)

Who they are: Formed in 1940, the Mutawa is tasked with enforcing Islamic religious law – Sharia – in public places.

This includes rules forbidding unrelated males and females to socialise in public, as well as a dress code that encourages women to wear a veil covering all but their eyes.

Read the entire story here.

The New Morality: Shame Replaces Guilt

I don’t often agree with author and columnist David Brooks, but I think he makes a very important observation regarding the continued evolution of moral relativism. Importantly, he notes that while our collective morality has become increasingly subjective, rather than governed by universal moral principles, it is now driven more by shame than by guilt.

Brooks highlights an insightful essay by Andy Crouch, executive editor of Christianity Today, which lays the blame for the rise of shame over guilt in some part on our immersion in online social networks. But, as Crouch points out, despite our increasingly shame-driven culture in the West, shame and shaming are not new phenomena.

Yet while shame culture has been with us for thousands of years, the contemporary version offers a subtle but key difference. In ancient societies — and still mostly in Eastern cultures — avoidance of shame is about dignity and honor; in our Western world, the new shame culture is about the pursuit of celebrity within the group.

From NYT:

In 1987, Allan Bloom wrote a book called “The Closing of the American Mind.” The core argument was that American campuses were awash in moral relativism. Subjective personal values had replaced universal moral principles. Nothing was either right or wrong. Amid a wave of rampant nonjudgmentalism, life was flatter and emptier.

Bloom’s thesis was accurate at the time, but it’s not accurate anymore. College campuses are today awash in moral judgment.

Many people carefully guard their words, afraid they might transgress one of the norms that have come into existence. Those accused of incorrect thought face ruinous consequences. When a moral crusade spreads across campus, many students feel compelled to post in support of it on Facebook within minutes. If they do not post, they will be noticed and condemned.

Some sort of moral system is coming into place. Some new criteria now exist, which people use to define correct and incorrect action. The big question is: What is the nature of this new moral system?

Last year, Andy Crouch published an essay in Christianity Today that takes us toward an answer.

Crouch starts with the distinction the anthropologist Ruth Benedict popularized, between a guilt culture and a shame culture. In a guilt culture you know you are good or bad by what your conscience feels. In a shame culture you know you are good or bad by what your community says about you, by whether it honors or excludes you. In a guilt culture people sometimes feel they do bad things; in a shame culture social exclusion makes people feel they are bad.

Crouch argues that the omnipresence of social media has created a new sort of shame culture. The world of Facebook, Instagram and the rest is a world of constant display and observation. The desire to be embraced and praised by the community is intense. People dread being exiled and condemned. Moral life is not built on the continuum of right and wrong; it’s built on the continuum of inclusion and exclusion.

This creates a set of common behavior patterns. First, members of a group lavish one another with praise so that they themselves might be accepted and praised in turn.

Second, there are nonetheless enforcers within the group who build their personal power and reputation by policing the group and condemning those who break the group code. Social media can be vicious to those who don’t fit in. Twitter can erupt in instant ridicule for anyone who stumbles.

Third, people are extremely anxious that their group might be condemned or denigrated. They demand instant respect and recognition for their group. They feel some moral wrong has been perpetrated when their group has been disrespected, and react with the most violent intensity.

Crouch describes how video gamers viciously went after journalists, mostly women, who had criticized the misogyny of their games. Campus controversies get so hot so fast because even a minor slight to a group is perceived as a basic identity threat.

The ultimate sin today, Crouch argues, is to criticize a group, especially on moral grounds. Talk of good and bad has to defer to talk about respect and recognition. Crouch writes, “Talk of right and wrong is troubling when it is accompanied by seeming indifference to the experience of shame that accompanies judgments of ‘immorality.’”

He notes that this shame culture is different from the traditional shame cultures, the ones in Asia, for example. In traditional shame cultures the opposite of shame was honor or “face” — being known as a dignified and upstanding citizen. In the new shame culture, the opposite of shame is celebrity — to be attention-grabbing and aggressively unique on some media platform.

Read the entire column here.

Religious Upbringing Reduces Altruism


Ready? This one may come as a shock to some. Yet another body of research shows that children raised in religious families are less likely to be selfless and generous towards others. Yes, that’s right, morality and altruism do not automatically spring forth from religiosity. Increasingly, it looks like altruism is a much deeper human (and animal) trait, and indeed studies show that altruistic behaviors are common in primates and other animals.

From Scientific American:

Organized religion is a cornerstone of spiritual community and culture around the world. Religion, especially religious education, also attracts secular support because many believe that religion fosters morality. A majority of the United States believes that faith in a deity is necessary to being a moral person.

In principle, religion’s emphasis on morality can smooth wrinkles out of the social fabric. Along those lines, believers are often instructed to act selflessly towards others. Islam places an emphasis on charity and alms-giving, Christianity on loving your neighbor as yourself. Taoist ethics, derived from the qualities of water, include the principle of selflessness.

However, new research conducted in six countries around the world suggests that a religious upbringing may actually yield children who are less altruistic. Over 1000 children ages five to twelve took part in the study, from the United States, Canada, Jordan, Turkey, South Africa, and China. By finding that religious-raised children are less altruistic in the laboratory, the study alerts us to the possibility that religion might not have the wholesome effects we expect on the development of morality. The social practice of religion can complicate the precepts of a religious text. But in order to interpret these findings, we have to first look at how to test morality.

In an experiment snappily named the dictator game, a child designated “dictator” is tested for altruistic tendencies. This dictator child is conferred with great power to decide whether to share stickers with others. Researchers present the child with thirty stickers and instruct her to take ten favorite stickers. The researchers carefully mention that there isn’t time to play this game with everyone, setting up the main part of the experiment: to share or not to share. The child is given two envelopes and asked whether she will share stickers with other children at the school who cannot play the game. While the researcher faces the wall, the child can slip some stickers into the donation envelope and some into the other envelope to keep.

As the researchers expected, younger children were less likely to share stickers than older children. Also consistent with previous studies, children from a wealthier socioeconomic status shared more. More surprising was the tendency of children from religious households to share less than those from nonreligious backgrounds. When separated and analyzed by specific religion, the finding remained: children from both Christian and Muslim families on average shared less than nonreligious children. (Other religious designations were not represented in large enough numbers for separate statistical comparison.) Older kids from all backgrounds shared more than younger ones, but the tendency for religious children to share less than similar-aged children became more pronounced with age. The authors think this could be due to cumulative effects of time spent growing up in a religious household. While the large number of subjects strengthens the finding of a real difference between the groups of children, the actual disparity in typical sharing was about one sticker. We need to know if the gap in sticker sharing is meaningful in the real world.

Read the entire article here.

Image: Religious symbols from the top nine organized faiths of the world. From left to right: 1st Row: Christian Cross, Jewish Star of David, Hindu Aumkar 2nd Row: Islamic Star and crescent, Buddhist Wheel of Dharma, Shinto Torii 3rd Row: Sikh Khanda, Bahá’í star, Jain Ahimsa Symbol. Courtesy: Rursus / Wikipedia. Public Domain.

Fictionalism of Free Will and Morality

In a recent opinion column, William Irwin, professor of philosophy at King’s College, summarizes an approach to accepting the notion of free will rather than believing in it. While I’d eventually like to see an explanation for free will and morality in biological and chemical terms — beyond metaphysics — I will (or may, if free will does not exist) for the time being have to content myself with mere acceptance. But my acceptance is not based on the notion that “free will” is pre-determined by a supernatural being — rather, I suspect it’s an illusion, instigated in the dark recesses of our un- or sub-conscious, with our higher reasoning functions rationalizing it post factum in the full light of day. Morality, on the other hand, as Irwin suggests, is a rather different state of mind altogether.

From the NYT:

Few things are more annoying than watching a movie with someone who repeatedly tells you, “That couldn’t happen.” After all, we engage with artistic fictions by suspending disbelief. For the sake of enjoying a movie like “Back to the Future,” I may accept that time travel is possible even though I do not believe it. There seems no harm in that, and it does some good to the extent that it entertains and edifies me.

Philosophy can take us in the other direction, by using reason and rigorous questioning to lead us to disbelieve what we would otherwise believe. Accepting the possibility of time travel is one thing, but relinquishing beliefs in God, free will, or objective morality would certainly be more troublesome. Let’s focus for a moment on morality.

The philosopher Michael Ruse has argued that “morality is a collective illusion foisted upon us by our genes.” If that’s true, why have our genes played such a trick on us? One possible answer can be found in the work of another philosopher, Richard Joyce, who has argued that this “illusion” — the belief in objective morality — evolved to provide a bulwark against weakness of the human will. So a claim like “stealing is morally wrong” is not true, because such beliefs have an evolutionary basis but no metaphysical basis. But let’s assume we want to avoid the consequences of weakness of will that would cause us to act imprudently. In that case, Joyce makes an ingenious proposal: moral fictionalism.

Following a fictionalist account of morality would mean that we would accept moral statements like “stealing is wrong” while not believing they are true. As a result, we would act as if it were true that “stealing is wrong,” but when pushed to give our answer to the theoretical, philosophical question of whether “stealing is wrong,” we would say no. The appeal of moral fictionalism is clear. It is supposed to help us overcome weakness of will and even take away the anxiety of choice, making decisions easier.

Giving up on the possibility of free will in the traditional sense of the term, I could adopt compatibilism, the view that actions can be both determined and free. As long as my decision to order pasta is caused by some part of me — say my higher order desires or a deliberative reasoning process — then my action is free even if that aspect of myself was itself caused and determined by a chain of cause and effect. And my action is free even if I really could not have acted otherwise by ordering the steak.

Unfortunately, not even this will rescue me from involuntary free will fictionalism. Adopting compatibilism, I would still feel as if I have free will in the traditional sense and that I could have chosen steak and that the future is wide open concerning what I will have for dessert. There seems to be a “user illusion” that produces the feeling of free will.

William James famously remarked that his first act of free will would be to believe in free will. Well, I cannot believe in free will, but I can accept it. In fact, if free will fictionalism is involuntary, I have no choice but to accept free will. That makes accepting free will easy and undeniably sincere. Accepting the reality of God or morality, on the other hand, are tougher tasks, and potentially disingenuous.

Read the entire article here.

The Literal Word


I’ve been following the recent story of a county clerk in Kentucky who is refusing to grant marriage licenses to same-sex couples. The clerk cites her profound Christian beliefs for contravening the new law of the land. I’m reminded that most people who ardently follow a faith, as prescribed by the literal word of a God, tend to interpret, cherry-pick and obey what they wish. And those same individuals will fervently ignore many less palatable demands from their God. So, let’s review a few biblical pronouncements, lest we forget what all believers in the Christian bible should be doing.

From the Independent:

Social conservatives who object to marriage licenses for gay couples claim to defend “Christian marriage,” meaning one man paired with one woman for life, which they say is prescribed by God in the Bible.

But in fact, Bible writers give the divine thumbs-up to many kinds of sexual union or marriage. They also use several literary devices to signal God’s approval for one or another sexual liaison: The law or a prophet might prescribe it, Jesus might endorse it, or God might reward it with the greatest of all blessings: boy babies who go on to become powerful men.

While the approved list does include one man coupled with one woman, the Bible explicitly endorses polygamy and sexual slavery, providing detailed regulations for each; and at times it also rewards rape and incest.

Polygamy. Polygamy is the norm in the Old Testament and accepted without reproof by Jesus (Matthew 22:23-32). Biblicalpolygamy.com contains pages dedicated to 40 biblical figures, each of whom had multiple wives.

Sex slaves. The Bible provides instructions on how to acquire several types of sex slaves. For example, if a man buys a Hebrew girl and “she please not her master” he can’t sell her to a foreigner; and he must allow her to go free if he doesn’t provide for her (Exodus 21:8).

War booty. Virgin females are counted, literally, among the booty of war. In the book of Numbers (31:18) God’s servant commands the Israelites to kill all of the used Midianite women along with all boy children, but to keep the virgin girls for themselves. The Law of Moses spells out a ritual to purify a captive virgin before sex. (Deuteronomy 21:10-14).

Incest. Incest is mostly forbidden in the Bible, but God makes exceptions. Abraham and Sarah, much favoured by God, are said to be half-siblings. Lot’s daughters get him drunk and mount him, and God rewards them with male babies who become patriarchs of great nations (Genesis 19).

Brother’s widow. If a brother dies with no children, it becomes a man’s duty to impregnate the brother’s widow. Onan is struck dead by God because he prefers to spill his seed on the ground rather than providing offspring for his brother (Genesis 38:8-10). A New Testament story (Matthew 22:24-28) shows that the tradition has survived.

Wife’s handmaid. After seven childless decades, Abraham’s frustrated wife Sarah says, “Go, sleep with my slave; perhaps I can build a family through her.”  Her slave, Hagar, becomes pregnant. Two generations later, the sister-wives of Jacob repeatedly send their slaves to him, each trying to produce more sons than the other (Genesis 30:1-22).

Read the entire story here.

Image: Biblical engraving: Sarah Offering Hagar to Her Husband, Abraham, c1897. Courtesy of Wikipedia.

Which is Your God?

Is your God the one to be feared from the Old Testament? Or is yours the God who brought forth the angel Moroni? Or are your Gods those revered by Hindus or Ancient Greeks or the Norse? Theists have continuing trouble answering these fundamental questions, much to the consternation, and satisfaction, of atheists.

In a thoughtful interview with Gary Gutting, Louise Antony, a professor of philosophy at the University of Massachusetts, structures these questions in the broader context of morality and social justice.

From the NYT:

Gary Gutting: You’ve taken a strong stand as an atheist, so you obviously don’t think there are any good reasons to believe in God. But I imagine there are philosophers whose rational abilities you respect who are theists. How do you explain their disagreement with you? Are they just not thinking clearly on this topic?

Louise Antony: I’m not sure what you mean by saying that I’ve taken a “strong stand as an atheist.” I don’t consider myself an agnostic; I claim to know that God doesn’t exist, if that’s what you mean.

G.G.: That is what I mean.

L.A.: O.K. So the question is, why do I say that theism is false, rather than just unproven? Because the question has been settled to my satisfaction. I say “there is no God” with the same confidence I say “there are no ghosts” or “there is no magic.” The main issue is supernaturalism — I deny that there are beings or phenomena outside the scope of natural law.

That’s not to say that I think everything is within the scope of human knowledge. Surely there are things not dreamt of in our philosophy, not to mention in our science – but that fact is not a reason to believe in supernatural beings. I think many arguments for the existence of a God depend on the insufficiencies of human cognition. I readily grant that we have cognitive limitations. But when we bump up against them, when we find we cannot explain something — like why the fundamental physical parameters happen to have the values that they have — the right conclusion to draw is that we just can’t explain the thing. That’s the proper place for agnosticism and humility.

But getting back to your question: I’m puzzled why you are puzzled how rational people could disagree about the existence of God. Why not ask about disagreements among theists? Jews and Muslims disagree with Christians about the divinity of Jesus; Protestants disagree with Catholics about the virginity of Mary; Protestants disagree with Protestants about predestination, infant baptism and the inerrancy of the Bible. Hindus think there are many gods while Unitarians think there is at most one. Don’t all these disagreements demand explanation too? Must a Christian Scientist say that Episcopalians are just not thinking clearly? Are you going to ask a Catholic if she thinks there are no good reasons for believing in the angel Moroni?

G.G.: Yes, I do think it’s relevant to ask believers why they prefer their particular brand of theism to other brands. It seems to me that, at some point of specificity, most people don’t have reasons beyond being comfortable with one community rather than another. I think it’s at least sometimes important for believers to have a sense of what that point is. But people with many different specific beliefs share a belief in God — a supreme being who made and rules the world. You’ve taken a strong stand against that fundamental view, which is why I’m asking you about that.

L.A.: Well I’m challenging the idea that there’s one fundamental view here. Even if I could be convinced that supernatural beings exist, there’d be a whole separate issue about how many such beings there are and what those beings are like. Many theists think they’re home free with something like the argument from design: that there is empirical evidence of a purposeful design in nature. But it’s one thing to argue that the universe must be the product of some kind of intelligent agent; it’s quite something else to argue that this designer was all-knowing and omnipotent. Why is that a better hypothesis than that the designer was pretty smart but made a few mistakes? Maybe (I’m just cribbing from Hume here) there was a committee of intelligent creators, who didn’t quite agree on everything. Maybe the creator was a student god, and only got a B- on this project.

In any case though, I don’t see that claiming to know that there is no God requires me to say that no one could have good reasons to believe in God. I don’t think there’s some general answer to the question, “Why do theists believe in God?” I expect that the explanation for theists’ beliefs varies from theist to theist. So I’d have to take things on a case-by-case basis.

I have talked about this with some of my theist friends, and I’ve read some personal accounts by theists, and in those cases, I feel that I have some idea why they believe what they believe. But I can allow there are arguments for theism that I haven’t considered, or objections to my own position that I don’t know about. I don’t think that when two people take opposing stands on any issue that one of them has to be irrational or ignorant.

G.G.: No, they may both be rational. But suppose you and your theist friend are equally adept at reasoning, equally informed about relevant evidence, equally honest and fair-minded — suppose, that is, you are what philosophers call epistemic peers: equally reliable as knowers. Then shouldn’t each of you recognize that you’re no more likely to be right than your peer is, and so both retreat to an agnostic position?

L.A.: Yes, this is an interesting puzzle in the abstract: How could two epistemic peers — two equally rational, equally well-informed thinkers — fail to converge on the same opinions? But it is not a problem in the real world. In the real world, there are no epistemic peers — no matter how similar our experiences and our psychological capacities, no two of us are exactly alike, and any difference in either of these respects can be rationally relevant to what we believe.

G.G.: So is your point that we always have reason to think that people who disagree are not epistemic peers?

L.A.: It’s worse than that. The whole notion of epistemic peers belongs only to the abstract study of knowledge, and has no role to play in real life. Take the notion of “equal cognitive powers”: speaking in terms of real human minds, we have no idea how to seriously compare the cognitive powers of two people.

Read the entire article here.

The Rise and Fall of Morally Potent Obscenity

There was a time in the U.S. when the many would express shock and decry the verbal (or non-verbal) obscenity of the few. It was also easier for parents to shield the sensitive ears and eyes of their children from the infrequent obscenities of pop stars, politicians and others seeking the media spotlight.

Nowadays, we collectively yawn at the antics of the next post-pubescent alumnus of the Disney Channel. Our pop icons, politicians, news anchors and their ilk have made rudeness, vulgarity and narcissism the norm. Most of us no longer seem to be outraged — some are saddened, some are titillated — and then we shift our ever-decreasing attention spans to the next 15-minute teen sensation. The vulgar and the vain are now ever-present. So we become desensitized, and our public figures and wannabe stars seek the next even-bigger thing to get themselves noticed before we look elsewhere.

The essayist Lee Siegel seems to be on to something: he harkens back to a time when vulgarity actually conveyed meaning and could raise a degree of moral indignation in the audience. Now it is just the new norm and a big yawn.

From Lee Siegel / WSJ:

“What’s celebrity sex, Dad?” It was my 7-year-old son, who had been looking over my shoulder at my computer screen. He mispronounced “celebrity” but spoke the word “sex” as if he had been using it all his life. “Celebrity six,” I said, abruptly closing my AOL screen. “It’s a game famous people play in teams of three,” I said, as I ushered him out of my office and downstairs into what I assumed was the safety of the living room.

No such luck. His 3-year-old sister had gotten her precocious little hands on my wife’s iPhone as it was charging on a table next to the sofa. By randomly tapping icons on the screen, she had conjured up an image of Beyoncé barely clad in black leather, caught in a suggestive pose that I hoped would suggest nothing at all to her or her brother.

And so it went on this typical weekend. The eff-word popped out of TV programs we thought were friendly enough to have on while the children played in the next room. Ads depicting all but naked couples beckoned to them from the mainstream magazines scattered around the house. The kids peered over my shoulder as I perused my email inbox, their curiosity piqued by the endless stream of solicitations having to do with one aspect or another of sex, sex, sex!

When did the culture become so coarse? It’s a question that quickly gets you branded as either an unsophisticated rube or some angry culture warrior. But I swear on my hard drive that I’m neither. My favorite movie is “Last Tango in Paris.” I agree (on a theoretical level) with the notorious rake James Goldsmith, who said that when a man marries his mistress, he creates a job vacancy. I once thought of writing a book-length homage to the eff-word in American culture, the apotheosis of which was probably Sir Ben Kingsley pronouncing it with several syllables in an episode of “The Sopranos.”

I’m cool, and I’m down with everything, you bet, but I miss a time when there were powerful imprecations instead of mere obscenity—or at least when sexual innuendo, because it was innuendo, served as a delicious release of tension between our private and public lives. Long before there was twerking, there were Elvis’s gyrations, which shocked people because gyrating hips are more associated with women (thrusting his hips forward would have had a masculine connotation). But Elvis’s physical motions on stage were all allusion, just as his lyrics were:

Touch it, pound it, what good does it do

There’s just no stoppin’ the way I feel for you

Cos’ every minute, every hour you’ll be shaken

By the strength and mighty power of my love

The relative subtlety stimulates the imagination, while casual obscenity drowns it out. And such allusiveness maintains social norms even as they are being violated—that’s sexy. The lyrics of Elvis’s “Power of My Love” gave him authority as a respected social figure, which made his asocial insinuations all the more gratifying.

The same went, in a later era, for the young Madonna: “Two by two their bodies become one.” It’s an electric image because you are actively engaged in completing it. Contrast that with the aging Madonna trash-talking like a kid:

Some girls got an attitude

Fake t— and a nasty mood

Hot s— when she’s in the nude

(In the naughty naked nude)

It’s the difference between locker-room talk and the language of seduction and desire. As Robbie Williams and the Pet Shop Boys observed a few years ago in their song “She’s Madonna”: “She’s got to be obscene to be believed.”

Everyone remembers the Rolling Stones’ “Brown Sugar,” whose sexual and racial provocations were perfectly calibrated for 1971. Few, if any, people can recall their foray into explicit obscenity two years later with “Star Star.” The earlier song was sly and licentious; behind the sexual allusions were the vitality and energy to carry them out. The explicitness of “Star Star” was for bored, weary, repressed squares in the suburbs, with their swingers parties and “key clubs.”

Just as religious vows of abstinence mean nothing without the temptations of desire—which is why St. Augustine spends so much time in his “Confessions” detailing the way he abandoned himself to the “fleshpots of Carthage”—violating a social norm when the social norm is absent yields no real pleasure. The great provocations are also great releases because they exist side by side with the prohibitions that they are provoking. Once you spell it all out, the tension between temptation and taboo disappears.

The open secret of violating a taboo with language that—through its richness, wit or rage—acknowledges the taboo is that it represents a kind of moralizing. In fact, all the magnificent potty mouths—from D.H. Lawrence to Norman Mailer, the Beats, the rockers, the proto-punks, punks and post-punks, Richard Pryor, Sam Kinison, Patti Smith, and up through, say, Sarah Silverman and the creators of “South Park”—have been moralizers. The late Lou Reed’s “I Wanna Be Black” is so full of racial slurs, obscenity and repugnant sexual imagery that I could not find one meaningful phrase to quote in this newspaper. It is also a wryly indignant song that rips into the racism of liberals whose reverence for black culture is a crippling caricature of black culture.

Though many of these vulgar outlaws were eventually warily embraced by the mainstream, to one degree or another, it wasn’t until long after their deaths that society assimilated them, still warily, and sometimes not at all. In their own lifetimes, they mostly existed on the margins or in the depths; you had to seek them out in society’s obscure corners. That was especially the case during the advent of new types of music. Swing, bebop, Sinatra, cool jazz, rock ‘n’ roll—all were specialized, youth-oriented upheavals in sound and style, and they drove the older generation crazy.

These days, with every new ripple in the culture transmitted, commented-on, analyzed, mocked, mashed-up and forgotten on countless universal devices every few minutes, everything is available to everyone instantly, every second, no matter how coarse or abrasive. You used to have to find your way to Lou Reed. Now as soon as some pointlessly vulgar song gets recorded, you hear it in a clothing store.

The shock value of earlier vulgarity partly lay in the fact that a hitherto suppressed impulse erupted into the public realm. Today Twitter, Snapchat, Instagram and the rest have made impulsiveness a new social norm. No one is driving anyone crazy with some new form of expression. You’re a parent and you don’t like it when Kanye West sings: “I sent this girl a picture of my d—. I don’t know what it is with females. But I’m not too good with that s—”? Shame on you.

The fact is that you’re hearing the same language, witnessing the same violence, experiencing the same graphic sexual imagery on cable, or satellite radio, or the Internet, or even on good old boring network TV, where almost explicit sexual innuendo and nakedly explicit violence come fast and furious. Old and young, high and low, the idiom is the same. Everything goes.

Graphic references to sex were once a way to empower the individual. The unfair boss, the dishonest general, the amoral politician might elevate themselves above other mortals and abuse their power, but everyone has a naked body and a sexual capacity with which to throw off balance the enforcers of some oppressive social norm. That is what Montaigne meant when he reminded his readers that “both kings and philosophers defecate.” Making public the permanent and leveling truths of our animal nature, through obscenity or evocations of sex, is one of democracy’s sacred energies. “Even on the highest throne in the world,” Montaigne writes, “we are still sitting on our asses.”

But we’ve lost the cleansing quality of “dirty” speech. Now it’s casual, boorish, smooth and corporate. Everybody is walking around sounding like Howard Stern. The trash-talking Jay-Z and Kanye West are superwealthy businessmen surrounded by bodyguards, media consultants and image-makers. It’s the same in other realms, too. What was once a cable revolution against treacly, morally simplistic network television has now become a formulaic ritual of “complex,” counterintuitive, heroic bad-guy characters like the murderous Walter White on “Breaking Bad” and the lovable serial killer in “Dexter.” And the constant stream of Internet gossip and brainless factoids passing themselves off as information has normalized the grossest references to sex and violence.

Back in the 1990s, growing explicitness and obscenity in popular culture gave rise to the so-called culture wars, in which the right and the left fought over the limits of free speech. Nowadays no one blames the culture for what the culture itself has become. This is, fundamentally, a positive development. Culture isn’t an autonomous condition that develops in isolation from other trends in society.

The JFK assassination, the bloody rampage of Charles Manson and his followers, the incredible violence of the Vietnam War—shocking history-in-the-making that was once hidden now became visible in American living rooms, night after night, through new technology, TV in particular. Culture raced to catch up with the straightforward transcriptions of current events.

And, of course, the tendency of the media, as old as Lord Northcliffe and the first mass-circulation newspapers, to attract business through sex and violence only accelerated. Normalized by TV and the rest of the media, the counterculture of the 1970s was smoothly assimilated into the commercial culture of the 1980s. Recall the 15-year-old Brooke Shields appearing in a commercial for Calvin Klein jeans in 1980, spreading her legs and saying, “Do you know what comes between me and my Calvins? Nothing.” From then on, there was no going back.

Today, our cultural norms are driven in large part by technology, which in turn is often shaped by the lowest impulses in the culture. Behind the Internet’s success in making obscene images commonplace is the dirty little fact that it was the pornography industry that revolutionized the technology of the Internet. Streaming video, technology like Flash, sites that confirm the validity of credit cards were all innovations of the porn business. The Internet and pornography go together like, well, love and marriage. No wonder so much culture seems to aspire to porn’s depersonalization, absolute transparency and intolerance of secrets.

Read the entire article here.

Great Literature and Human Progress

Professor of Philosophy Gregory Currie tackles a thorny issue in his latest article. The question he seeks to answer is, “does great literature make us better?” A poll of most nations would very likely show that the majority of people believe literature does in fact propel us in a forward direction — intellectually, morally, emotionally and culturally. It seems like a no-brainer. But where is the hard evidence?

From the New York Times:

You agree with me, I expect, that exposure to challenging works of literary fiction is good for us. That’s one reason we deplore the dumbing-down of the school curriculum and the rise of the Internet and its hyperlink culture. Perhaps we don’t all read very much that we would count as great literature, but we’re apt to feel guilty about not doing so, seeing it as one of the ways we fall short of excellence. Wouldn’t reading about Anna Karenina, the good folk of Middlemarch and Marcel and his friends expand our imaginations and refine our moral and social sensibilities?

If someone now asks you for evidence for this view, I expect you will have one or both of the following reactions. First, why would anyone need evidence for something so obviously right? Second, what kind of evidence would he want? Answering the first question is easy: if there’s no evidence – even indirect evidence – for the civilizing value of literary fiction, we ought not to assume that it does civilize. Perhaps you think there are questions we can sensibly settle in ways other than by appeal to evidence: by faith, for instance. But even if there are such questions, surely no one thinks this is one of them.

What sort of evidence could we present? Well, we can point to specific examples of our fellows who have become more caring, wiser people through encounters with literature. Indeed, we are such people ourselves, aren’t we?

I hope no one is going to push this line very hard. Everything we know about our understanding of ourselves suggests that we are not very good at knowing how we got to be the kind of people we are. In fact we don’t really know, very often, what sorts of people we are. We regularly attribute our own failures to circumstance and the failures of others to bad character. But we can’t all be exceptions to the rule (supposing it is a rule) that people do bad things because they are bad people.

We are poor at knowing why we make the choices we do, and we fail to recognize the tiny changes in circumstances that can shift us from one choice to another. When it comes to other people, can you be confident that your intelligent, socially attuned and generous friend who reads Proust got that way partly because of the reading? Might it not be the other way around: that bright, socially competent and empathic people are more likely than others to find pleasure in the complex representations of human interaction we find in literature?

There’s an argument we often hear on the other side, illustrated earlier this year by a piece on The New Yorker’s Web site. Reminding us of all those cultured Nazis, Teju Cole notes the willingness of a president who reads novels and poetry to sign weekly drone strike permissions. What, he asks, became of “literature’s vaunted power to inspire empathy?” I find this a hard argument to like, and not merely because I am not yet persuaded by the moral case against drones. No one should be claiming that exposure to literature protects one against moral temptation absolutely, or that it can reform the truly evil among us. We measure the effectiveness of drugs and other medical interventions by thin margins of success that would not be visible without sophisticated statistical techniques; why assume literature’s effectiveness should be any different?

We need to go beyond the appeal to common experience and into the territory of psychological research, which is sophisticated enough these days to make a start in testing our proposition.

Psychologists have started to do some work in this area, and we have learned a few things so far. We know that if you get people to read a short, lowering story about a child murder they will afterward report feeling worse about the world than they otherwise would. Such changes, which are likely to be very short-term, show that fictions press our buttons; they don’t show that they refine us emotionally or in any other way.

We have learned that people are apt to pick up (purportedly) factual information stated or implied as part of a fictional story’s background. Oddly, people are more prone to do that when the story is set away from home: in a study conducted by Deborah Prentice and colleagues and published in 1997, Princeton undergraduates retained more from a story when it was set at Yale than when it was set on their own campus (don’t worry Princetonians, Yalies are just as bad when you do the test the other way around). Television, with its serial programming, is good for certain kinds of learning; according to a study from 2001 undertaken for the Kaiser Foundation, people who regularly watched the show “E.R.” picked up a good bit of medical information on which they sometimes acted. What we don’t have is compelling evidence that suggests that people are morally or socially better for reading Tolstoy.

Not nearly enough research has been conducted; nor, I think, is the relevant psychological evidence just around the corner. Most of the studies undertaken so far don’t draw on serious literature but on short snatches of fiction devised especially for experimental purposes. Very few of them address questions about the effects of literature on moral and social development, far too few for us to conclude that literature either does or doesn’t have positive moral effects.

There is a puzzling mismatch between the strength of opinion on this topic and the state of the evidence. In fact I suspect it is worse than that; advocates of the view that literature educates and civilizes don’t overrate the evidence — they don’t even think that evidence comes into it. While the value of literature ought not to be a matter of faith, it looks as if, for many of us, that is exactly what it is.

Read the entire article here.

Image: The Odyssey, Homer. Book cover. Courtesy of Goodreads.com

Self-Assured Destruction (SAD)

The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.

Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation and unceasing human stupidity, we have reached an era we might dub SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, in the era of SAD, it takes only one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.

From TomDispatch:

What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.

For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.

And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.

The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.

There have been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and first nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.

In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”

Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.

Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.

So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.

Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.

And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.

So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.

The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.

We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.

What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.

So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.

Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.

And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.

Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.

Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.

Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.

At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.

Read the entire article here.

Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.

Child Mutilation and Religious Ritual

A court in Germany recently ruled that parents may not have their sons circumcised at birth for religious reasons. Quite understandably, the court held that the practice violates the child’s bodily integrity. Aside from being morally repugnant to many theists and non-believers alike, the practice inflicts pain. So why do some religions continue to circumcise children?

From Slate:

A German court ruled on Tuesday that parents may not circumcise their sons at birth for religious reasons, because the procedure violates the child’s right to bodily integrity. Both Muslims and Jews circumcise their male children. Why is Christianity the only Abrahamic religion that doesn’t encourage circumcision?

Because Paul believed faith was more important than foreskin. Shortly after Jesus’ death, his followers had a disagreement over the nature of his message. Some acolytes argued that he offered salvation through Judaism, so gentiles who wanted to join his movement should circumcise themselves like any other Jew. The apostle Paul, however, believed that faith in Jesus was the only requirement for salvation. Paul wrote that Jews who believed in Christ could go on circumcising their children, but he urged gentiles not to circumcise themselves or their sons, because trying to mimic the Jews represented a lack of faith in Christ’s ability to save them. By the time that the Book of Acts was written in the late first or early second century, Paul’s position seems to have become the dominant view of Christian theologians. Gentiles were advised to follow only the limited set of laws—which did not include circumcision—that God gave to Noah after the flood rather than the full panoply of rules followed by the Jews.

Circumcision was uniquely associated with Jews in first-century Rome, even though other ethnic and religious groups practiced it. Romans wrote satirical poems mocking the Jews for taking a day off each week, refusing to eat pork, worshipping a sky god, and removing their sons’ foreskin. It is, therefore, neither surprising that early Christian converts sought advice on whether to adopt the practice of circumcision nor that Paul made it the focus of several of his famous letters.

The early compromise that Paul struck—ethnic Jewish Christians should circumcise, while Jesus’ gentile followers should not—held until Christianity became a legal religion in the fourth century. At that time, the two religions split permanently, and it became something of a heresy to suggest that one could be both Jewish and Christian. As part of the effort to distinguish the two religions, circumcisions became illegal for Christians, and Jews were forbidden from circumcising their slaves.

Although the church officially renounced religious circumcision around 300 years after Jesus’s death, Christians long maintained a fascination with it. In the 600s, Christians began celebrating the day Jesus was circumcised. According to medieval Christian legend, an angel bestowed Jesus’ foreskin upon Emperor Charlemagne in the Church of the Holy Sepulchre, where Christ was supposedly buried. Coptic Christians and a few other Christian groups in Africa resumed religious circumcisions long after their European colleagues abandoned it.

Read the entire article after the jump.

Image: Apostle Paul. Courtesy of Wikipedia.

Eternal Damnation as Deterrent?

So, you think an all-seeing, all-knowing supreme deity encourages moral behavior and discourages crime? Think again.

From New Scientist:

There’s nothing like the fear of eternal damnation to encourage low crime rates. But does belief in heaven and a forgiving god encourage lawbreaking? A new study suggests it might – although establishing a clear link between the two remains a challenge.

Azim Shariff at the University of Oregon in Eugene and his colleagues compared global data on people’s beliefs in the afterlife with worldwide crime data collated by the United Nations Office on Drugs and Crime. In total, Shariff’s team looked at data covering the beliefs of 143,000 individuals across 67 countries and from a variety of religious backgrounds.

In most of the countries assessed, people were more likely to report a belief in heaven than in hell. Using that information, the team could calculate the degree to which a country’s rate of belief in heaven outstrips its rate of belief in hell.

Even after the researchers had controlled for a host of crime-related cultural factors – including GDP, income inequality, population density and life expectancy – national crime rates were typically higher in countries with particularly strong beliefs in heaven but weak beliefs in hell.
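The shape of that country-level analysis — regress national crime rates on the degree to which belief in heaven outstrips belief in hell, while controlling for covariates like GDP and income inequality — can be sketched in a few lines. This is a minimal illustration using synthetic data; the variable names, sample, and numbers are invented for the sketch and are not the study’s actual figures.

```python
import numpy as np

# Synthetic country-level data, loosely mirroring the design described above.
rng = np.random.default_rng(0)
n_countries = 67

# Hypothetical belief rates: belief in heaven tends to exceed belief in hell.
belief_heaven = rng.uniform(0.3, 0.95, n_countries)
belief_hell = belief_heaven - rng.uniform(0.0, 0.4, n_countries)
belief_gap = belief_heaven - belief_hell  # heaven-minus-hell gap

# Hypothetical standardized controls (the study also used population
# density, life expectancy, and others).
gdp = rng.normal(0.0, 1.0, n_countries)
inequality = rng.normal(0.0, 1.0, n_countries)

# Simulate a crime rate with a positive loading on the belief gap,
# mimicking the direction of the reported association.
crime = (2.0 * belief_gap + 0.5 * gdp - 0.3 * inequality
         + rng.normal(0.0, 0.5, n_countries))

# Ordinary least squares: intercept + belief gap + controls.
X = np.column_stack([np.ones(n_countries), belief_gap, gdp, inequality])
coefs, *_ = np.linalg.lstsq(X, crime, rcond=None)

print(f"estimated belief-gap coefficient: {coefs[1]:.2f}")
```

With controls in the design matrix, the belief-gap coefficient estimates the association net of those covariates — though, as Shariff notes, even a robust coefficient of this kind cannot by itself establish causality.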

Licence to steal

“Belief in a benevolent, forgiving god could license people to think they can get away with things,” says Shariff – although he stresses that this conclusion is speculative, and that the results do not necessarily imply causality between religious beliefs and crime rates.

“There are a number of possible causal pathways,” says Richard Sosis, an anthropologist at the University of Connecticut in Storrs, who was not involved in the study. The most likely interpretation is that there are intervening variables at the societal level – societies may have values that are similarly reflected in their legal and religious systems.

In a follow-up study, yet to be published, Shariff and Amber DeBono of Winston-Salem State University in North Carolina primed volunteers who had Christian beliefs by asking them to write variously about God’s forgiving nature, God’s punitive nature, a forgiving human, a punitive human, or a neutral subject. The volunteers were then asked to complete anagram puzzles for a monetary reward of a few cents per anagram.

God helps those who…

Participants were given the opportunity to commit petty theft, with no chance of being caught, by lying about the number of anagrams they had successfully completed. Shariff’s team found that those participants who had written about a forgiving god claimed nearly $2 more than they were entitled to under the rules of the game, whereas those in the other groups awarded themselves less than 50 cents more than they were entitled to.

Read the entire article after the jump.

Image: A detail from the Chapmans’ Hell. Photograph: Andy Butterton/PA. Courtesy of Guardian.

Creativity and Immorality

From Scientific American:

In the mid-1990s, Apple Computer was a dying company.  Microsoft’s Windows operating system was overwhelmingly favored by consumers, and Apple’s attempts to win back market share by improving the Macintosh operating system were unsuccessful.  After several years of debilitating financial losses, the company chose to purchase a fledgling software company called NeXT.  Along with purchasing the rights to NeXT’s software, this move allowed Apple to regain the services of one of the company’s founders, the late Steve Jobs.  Under the guidance of Jobs, Apple returned to profitability and is now the largest technology company in the world, with the creativity of Steve Jobs receiving much of the credit.

However, despite the widespread positive image of Jobs as a creative genius, he also had a dark reputation for encouraging censorship, “losing sight of honesty and integrity,” belittling employees, and engaging in other morally questionable actions. These harshly contrasting images of Jobs raise the question of how a CEO held in such near-universal positive regard could also be the one accused of such contemptible behavior.  The answer, it turns out, may have something to do with the very aspect of Jobs that is so widely admired.

In a recent paper published in the Journal of Personality and Social Psychology, researchers at Harvard and Duke Universities demonstrate that creativity can lead people to behave unethically.  In five studies, the authors show that creative individuals are more likely to be dishonest, and that individuals induced to think creatively were more likely to be dishonest. Importantly, they showed that this effect is not explained by any tendency for creative people to be more intelligent, but rather that creativity leads people to more easily come up with justifications for their unscrupulous actions.

In one study, the authors administered a survey to employees at an advertising agency.  The survey asked the employees how likely they were to engage in various kinds of unethical behaviors, such as taking office supplies home or inflating business expense reports.  The employees were also asked to report how much creativity was required for their job.  Further, the authors asked the executives of the company to provide creativity ratings for each department within the company.

Those who said that their jobs required more creativity also tended to self-report a greater likelihood of unethical behavior.  And if the executives said that a particular department required more creativity, the individuals in that department tended to report greater likelihoods of unethical behavior.

The authors hypothesized that it is creativity which causes unethical behavior by allowing people the means to justify their misdeeds, but it is hard to say for certain whether this is correct given the correlational nature of the study.  It could just as easily be true, after all, that unethical behavior leads people to be more creative, or that there is something else which causes both creativity and dishonesty, such as intelligence.  To explore this, the authors set up an experiment in which participants were induced into a creative mindset and then given the opportunity to cheat.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock.[end-div]

Vampire Wedding and the Moral Molecule

Attend a wedding. Gather the hundred or so guests, and take their blood. Take samples, that is. Then, measure the levels of a hormone called oxytocin. This is where neuroeconomist Paul Zak’s story begins: with a molecular messenger thought to be responsible for facilitating trust and empathy in all our intimate relationships.

[div class=attrib]From “The Moral Molecule” by Paul J. Zak, to be published May 10, courtesy of the Wall Street Journal:[end-div]

Could a single molecule—one chemical substance—lie at the very center of our moral lives?

Research that I have done over the past decade suggests that a chemical messenger called oxytocin accounts for why some people give freely of themselves and others are coldhearted louts, why some people cheat and steal and others you can trust with your life, why some husbands are more faithful than others, and why women tend to be nicer and more generous than men. In our blood and in the brain, oxytocin appears to be the chemical elixir that creates bonds of trust not just in our intimate relationships but also in our business dealings, in politics and in society at large.

Known primarily as a female reproductive hormone, oxytocin controls contractions during labor, which is where many women encounter it as Pitocin, the synthetic version that doctors inject in expectant mothers to induce delivery. Oxytocin is also responsible for the calm, focused attention that mothers lavish on their babies while breast-feeding. And it is abundant, too, on wedding nights (we hope) because it helps to create the warm glow that both women and men feel during sex, a massage or even a hug.

Since 2001, my colleagues and I have conducted a number of experiments showing that when someone’s level of oxytocin goes up, he or she responds more generously and caringly, even with complete strangers. As a benchmark for measuring behavior, we relied on the willingness of our subjects to share real money with others in real time. To measure the increase in oxytocin, we took their blood and analyzed it. Money comes in conveniently measurable units, which meant that we were able to quantify the increase in generosity by the amount someone was willing to share. We were then able to correlate these numbers with the increase in oxytocin found in the blood.

Later, to be certain that what we were seeing was true cause and effect, we sprayed synthetic oxytocin into our subjects’ nasal passages—a way to get it directly into their brains. Our conclusion: We could turn the behavioral response on and off like a garden hose. (Don’t try this at home: Oxytocin inhalers aren’t available to consumers in the U.S.)

More strikingly, we found that you don’t need to shoot a chemical up someone’s nose, or have sex with them, or even give them a hug in order to create the surge in oxytocin that leads to more generous behavior. To trigger this “moral molecule,” all you have to do is give someone a sign of trust. When one person extends himself to another in a trusting way—by, say, giving money—the person being trusted experiences a surge in oxytocin that makes her less likely to hold back and less likely to cheat. Which is another way of saying that the feeling of being trusted makes a person more…trustworthy. Which, over time, makes other people more inclined to trust, which in turn…

If you detect the makings of an endless loop that can feed back onto itself, creating what might be called a virtuous circle—and ultimately a more virtuous society—you are getting the idea.

Obviously, there is more to it, because no one chemical in the body functions in isolation, and other factors from a person’s life experience play a role as well. Things can go awry. In our studies, we found that a small percentage of subjects never shared any money; analysis of their blood indicated that their oxytocin receptors were malfunctioning. But for everyone else, oxytocin orchestrates the kind of generous and caring behavior that every culture endorses as the right way to live—the cooperative, benign, pro-social way of living that every culture on the planet describes as “moral.” The Golden Rule is a lesson that the body already knows, and when we get it right, we feel the rewards immediately.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]CPK model of the Oxytocin molecule C43H66N12O12S2. Courtesy of Wikipedia.[end-div]

Your Guide to Online Morality

By most estimates, Facebook has around 800 million registered users. This means that its policies governing what is or is not appropriate user content should bear detailed scrutiny. So, a look at Facebook’s recently publicized guidelines for sexual and violent content shows a somewhat peculiar view of morality. It’s a view that some characterize as typically American prudishness, coupled with a blind eye toward violence.

[div class=attrib]From the Guardian:[end-div]

Facebook bans images of breastfeeding if nipples are exposed – but allows “graphic images” of animals if shown “in the context of food processing or hunting as it occurs in nature”. Equally, pictures of bodily fluids – except semen – are allowed as long as no human is included in the picture; but “deep flesh wounds” and “crushed heads, limbs” are OK (“as long as no insides are showing”), as are images of people using marijuana but not those of “drunk or unconscious” people.

The strange world of Facebook’s image and post approval system has been laid bare by a document leaked from the outsourcing company oDesk to the Gawker website, which indicates that the sometimes arbitrary nature of picture and post approval actually has a meticulous – if faintly gore-friendly and nipple-unfriendly – approach.

For the giant social network, which has 800 million users worldwide and recently set out plans for a stock market flotation which could value it at up to $100bn (£63bn), it is a glimpse of its inner workings – and odd prejudices about sex – that emphasise its American origins.

Facebook has previously faced an outcry from breastfeeding mothers over its treatment of images showing them with their babies. The issue has rumbled on, and now seems to have been embedded in its “Abuse Standards Violations”, which states that banned items include “breastfeeding photos showing other nudity, or nipple clearly exposed”. It also bans “naked private parts” including “female nipple bulges and naked butt cracks” – though “male nipples are OK”.

The guidelines, which have been set out in full, depict a world where sex is banned but gore is acceptable. Obvious sexual activity, even if “naked parts” are hidden, people “using the bathroom”, and “sexual fetishes in any form” are all also banned. The company also bans slurs or racial comments “of any kind” and “support for organisations and people primarily known for violence”. Also banned is anyone who shows “approval, delight, involvement etc in animal or human torture”.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Guardian / Photograph: Dominic Lipinski/PA.[end-div]

Time for An Over-The-Counter Morality Pill?

Stories of people who risk life and limb to help a stranger, and of those who turn a blind eye, are as current as they are ancient. Almost daily, the 24-hour news cycle carries a heartwarming story of someone doing good for another; seemingly just as often comes a story of indifference. Social and psychological researchers have studied this behavior in humans, and in animals, for decades. Only recently, however, has progress been made in identifying some of the underlying factors. Peter Singer, a professor of bioethics at Princeton University, and researcher Agata Sagan recap the current understanding.

All of this leads to a conundrum: would it be ethical to market a “morality” pill that would make us do more good more often?

[div class=attrib]From the New York Times:[end-div]

Last October, in Foshan, China, a 2-year-old girl was run over by a van. The driver did not stop. Over the next seven minutes, more than a dozen people walked or bicycled past the injured child. A second truck ran over her. Eventually, a woman pulled her to the side, and her mother arrived. The child died in a hospital. The entire scene was captured on video and caused an uproar when it was shown by a television station and posted online. A similar event occurred in London in 2004, as have others, far from the lens of a video camera.

Yet people can, and often do, behave in very different ways.

A news search for the words “hero saves” will routinely turn up stories of bystanders braving oncoming trains, swift currents and raging fires to save strangers from harm. Acts of extreme kindness, responsibility and compassion are, like their opposites, nearly universal.

Why are some people prepared to risk their lives to help a stranger when others won’t even stop to dial an emergency number?

Scientists have been exploring questions like this for decades. In the 1960s and early ’70s, famous experiments by Stanley Milgram and Philip Zimbardo suggested that most of us would, under specific circumstances, voluntarily do great harm to innocent people. During the same period, John Darley and C. Daniel Batson showed that even some seminary students on their way to give a lecture about the parable of the Good Samaritan would, if told that they were running late, walk past a stranger lying moaning beside the path. More recent research has told us a lot about what happens in the brain when people make moral decisions. But are we getting any closer to understanding what drives our moral behavior?

Here’s what much of the discussion of all these experiments missed: Some people did the right thing. A recent experiment (about which we have some ethical reservations) at the University of Chicago seems to shed new light on why.

Researchers there took two rats who shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could eat up all of a quantity of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.

Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

[div class=attrib]Read more here.[end-div]

Morality for Atheists

The social standing of atheists seems to be on the rise. Back in December we cited a research study that found atheists to be more reviled than rapists. Well, a more recent study now finds that atheists are less disliked than members of the Tea Party.

With this in mind, Louise Antony ponders how it is possible for atheists to acquire morality without the help of God.

[div class=attrib]From the New York Times:[end-div]

I was heartened to learn recently that atheists are no longer the most reviled group in the United States: according to the political scientists Robert Putnam and David Campbell, we’ve been overtaken by the Tea Party.  But even as I was high-fiving my fellow apostates (“We’re number two!  We’re number two!”), I was wondering anew: why do so many people dislike atheists?

I gather that many people believe that atheism implies nihilism — that rejecting God means rejecting morality.  A person who denies God, they reason, must be, if not actively evil, at least indifferent to considerations of right and wrong.  After all, doesn’t the dictionary list “wicked” as a synonym for “godless?”  And isn’t it true, as Dostoevsky said, that “if God is dead, everything is permitted”?

Well, actually — no, it’s not.  (And for the record, Dostoevsky never said it was.)   Atheism does not entail that anything goes.

Admittedly, some atheists are nihilists.  (Unfortunately, they’re the ones who get the most press.)  But such atheists’ repudiation of morality stems more from an antecedent cynicism about ethics than from any philosophical view about the divine.  According to these nihilistic atheists, “morality” is just part of a fairy tale we tell each other in order to keep our innate, bestial selfishness (mostly) under control.  Belief in objective “oughts” and “ought nots,” they say, must fall away once we realize that there is no universal enforcer to dish out rewards and punishments in the afterlife.  We’re left with pure self-interest, more or less enlightened.

This is a Hobbesian view: in the state of nature “[t]he notions of right and wrong, justice and injustice have no place.  Where there is no common power, there is no law: where no law, no injustice.”  But no atheist has to agree with this account of morality, and lots of us do not.  We “moralistic atheists” do not see right and wrong as artifacts of a divine protection racket.  Rather, we find moral value to be immanent in the natural world, arising from the vulnerabilities of sentient beings and from the capacities of rational beings to recognize and to respond to those vulnerabilities and capacities in others.

This view of the basis of morality is hardly incompatible with religious belief.  Indeed, anyone who believes that God made human beings in His image believes something like this — that there is a moral dimension of things, and that it is in our ability to apprehend it that we resemble the divine.  Accordingly, many theists, like many atheists, believe that moral value is inherent in morally valuable things.  Things don’t become morally valuable because God prefers them; God prefers them because they are morally valuable. At least this is what I was taught as a girl, growing up Catholic: that we could see that God was good because of the things He commands us to do.  If helping the poor were not a good thing on its own, it wouldn’t be much to God’s credit that He makes charity a duty.

It may surprise some people to learn that theists ever take this position, but it shouldn’t.  This position is not only consistent with belief in God, it is, I contend, a more pious position than its opposite.  It is only if morality is independent of God that we can make moral sense out of religious worship.  It is only if morality is independent of God that any person can have a moral basis for adhering to God’s commands.

Let me explain why.  First let’s take a cold hard look at the consequences of pinning morality to the existence of God.  Consider the following moral judgments — judgments that seem to me to be obviously true:

• It is wrong to drive people from their homes or to kill them because you want their land.

• It is wrong to enslave people.

• It is wrong to torture prisoners of war.

•  Anyone who witnesses genocide, or enslavement, or torture, is morally required to try to stop it.

To say that morality depends on the existence of God is to say that none of these specific moral judgments is true unless God exists.  That seems to me to be a remarkable claim.  If God turned out not to exist — then slavery would be O.K.?  There’d be nothing wrong with torture?  The pain of another human being would mean nothing?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Sam Harris. Courtesy of Salon.[end-div]

Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the laws may seem trite and somewhat contradictory, the ethical issue remains, especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.

Colin Allen, writing for the Opinionator column, reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]

Morality 1: Good without gods

[div class=attrib]From QualiaSoup:[end-div]

Some people claim that morality is dependent upon religion, that atheists cannot possibly be moral since god and morality are intertwined (well, in their minds). Unfortunately, this is one way that religious people dehumanise atheists, who have a logical way of thinking about what constitutes moral social behaviour. More than simply an (incorrect) definition in the Oxford dictionary, morality is actually the main subject of many philosophers’ intellectual lives. This video, the first of a multi-part series, begins the discussion by defining morality and then looking at six hypothetical cultures and their beliefs.


Test-tube truths

[div class=attrib]From Eurozine:[end-div]

In his new book, American atheist Sam Harris argues that science can replace theology as the ultimate moral authority. Kenan Malik is sceptical of any such yearning for moral certainty, be it scientific or divine.

“If God does not exist, everything is permitted.” Dostoevsky never actually wrote that line, though so often is it attributed to him that he may as well have. It has become the almost reflexive response of believers when faced with an argument for a godless world. Without religious faith, runs the argument, we cannot anchor our moral truths or truly know right from wrong. Without belief in God we will be lost in a miasma of moral nihilism. In recent years, the riposte of many to this challenge has been to argue that moral codes are not revealed by God but instantiated in nature, and in particular in the brain. Ethics is not a theological matter but a scientific one. Science is not simply a means of making sense of facts about the world, but also about values, because values are in essence facts in another form.

Few people have expressed this argument more forcefully than the neuroscientist Sam Harris. Over the past few years, through books such as The End of Faith and Letter to a Christian Nation, Harris has gained a considerable reputation as a no-holds-barred critic of religion, in particular of Islam, and as an acerbic champion of science. In his new book, The Moral Landscape: How Science Can Determine Human Values, he sets out to demolish the traditional philosophical distinction between is and ought, between the way the world is and the way that it should be, a distinction we most associate with David Hume.

What Hume failed to understand, Harris argues, is that science can bridge the gap between ought and is, by turning moral claims into empirical facts. Values, he argues, are facts about the “states of the world” and “states of the human brain”. We need to think of morality, therefore, as “an undeveloped branch of science”: “Questions about values are really questions about the wellbeing of conscious creatures. Values, therefore, translate into facts that can be scientifically understood: regarding positive and negative social emotions, the effects of specific laws on human relationships, the neurophysiology of happiness and suffering, etc.” Science, and neuroscience in particular, does not simply explain why we might respond in particular ways to equality or to torture but also whether equality is a good, and torture morally acceptable. Where there are disagreements over moral questions, Harris believes, science will decide which view is right “because the discrepant answers people give to them translate into differences in our brains, in the brains of others and in the world at large.”

Harris is nothing if not self-confident. There is a voluminous philosophical literature that stretches back almost to the origins of the discipline on the relationship between facts and values. Harris chooses to ignore most of it. He does not wish to engage “more directly with the academic literature on moral philosophy”, he explains in a footnote, because he did not develop his arguments “by reading the work of moral philosophers” and because he is “convinced that every appearance of terms like ‘metaethics’, ‘deontology’, ‘noncognitivism’, ‘antirealism’, ’emotivism’, etc directly increases the amount of boredom in the universe.”

[div class=attrib]More from theSource here.[end-div]

Sex appeal

[div class=attrib]From Eurozine:[end-div]

Having condemned hyper-sexualized culture, the American religious Right is now wildly pro-sex, as long as it is marital sex. By replacing the language of morality with the secular notion of self-esteem, repression has found its way back onto school curricula – to the detriment of girls and women in particular. “We are living through an assault on female sexual independence”, writes Dagmar Herzog.

“Waves of pleasure flow over me; it feels like sliding down a mountain waterfall,” rhapsodises one delighted woman. Another recalls: “It’s like having a million tiny pleasure balloons explode inside of me all at once.”

These descriptions come not from Cosmopolitan, not from an erotic website, not from a Black Lace novel and certainly not from a porn channel. They are, believe it or not, part of the new philosophy of the Religious Right in America. We’ve always known that sex sells. Well, now it’s being used to sell both God and the Republicans in one extremely suggestive package. And in dressing up the old repressive values in fishnet stockings and flouncy lingerie, the forces of conservatism have beaten the liberals at their own game.

Choose almost any sex-related issue. From pornography and sex education to reproductive rights and treatment for sexually transmitted diseases, Americans have allowed a conservative religious movement not only to dictate the terms of conversation but also to change the nation’s laws and public health policies. And meanwhile American liberals have remained defensive and tongue-tied.

So how did the Religious Right – that avid and vocal movement of politicised conservative evangelical Protestants (joined together also with a growing number of conservative Catholics) – manage so effectively to harness what has traditionally been the province of the permissive left?

Quite simply, it has changed tactics and is now going out of its way to assert, loudly and enthusiastically, that, in contrast to what is generally believed, it is far from being sexually uptight. On the contrary, it is wildly pro-sex, provided it’s marital sex. Evangelical conservatives in particular have begun not only to rail against the evils of sexual misery within marriage (and the way far too many wives feel like not much more than sperm depots for insensitive, emotionally absent husbands), but also, in the most graphically detailed, explicit terms, to eulogise about the prospect of ecstasy.

[div class=attrib]More from theSource here.[end-div]