Tag Archives: judgement

Morality and a Second Language

Frequent readers will know that I’m intrigued by social science research into the human condition. Well, this collection of studies is fascinating. To summarize the general finding: you are less likely to behave ethically if you happen to be thinking in an acquired, second language. Put another way, you are more moral when you think in your mother tongue.

Perhaps counter-intuitively, a moral judgement made in a foreign language requires more cognitive processing power than one made in the language of childhood, and that extra deliberation seems to blunt the emotional response. Consequently, dubious or reprehensible behavior is likely to be judged less wrong in a second language than it would be in one's native tongue.

I suppose there is a very valuable lesson here: if you plan to do some shoplifting or rob a bank then you should evaluate the pros and cons of your criminal enterprise in the second language that you learned in school.

From Scientific American:

What defines who we are? Our habits? Our aesthetic tastes? Our memories? If pressed, I would answer that if there is any part of me that sits at my core, that is an essential part of who I am, then surely it must be my moral center, my deep-seated sense of right and wrong.

And yet, like many other people who speak more than one language, I often have the sense that I’m a slightly different person in each of my languages—more assertive in English, more relaxed in French, more sentimental in Czech. Is it possible that, along with these differences, my moral compass also points in somewhat different directions depending on the language I’m using at the time?

Psychologists who study moral judgments have become very interested in this question. Several recent studies have focused on how people think about ethics in a non-native language—as might take place, for example, among a group of delegates at the United Nations using a lingua franca to hash out a resolution. The findings suggest that when people are confronted with moral dilemmas, they do indeed respond differently when considering them in a foreign language than when using their native tongue.

In a 2014 paper led by Albert Costa, volunteers were presented with a moral dilemma known as the “trolley problem”: imagine that a runaway trolley is careening toward a group of five people standing on the tracks, unable to move. You are next to a switch that can shift the trolley to a different set of tracks, thereby sparing the five people, but resulting in the death of one who is standing on the side tracks. Do you pull the switch?

Most people agree that they would. But what if the only way to stop the trolley is by pushing a large stranger off a footbridge into its path? People tend to be very reluctant to say they would do this, even though in both scenarios, one person is sacrificed to save five. But Costa and his colleagues found that posing the dilemma in a language that volunteers had learned as a foreign tongue dramatically increased their stated willingness to shove the sacrificial person off the footbridge, from fewer than 20% of respondents working in their native language to about 50% of those using the foreign one. (Both native Spanish- and English-speakers were included, with English and Spanish as their respective foreign languages; the results were the same for both groups, showing that the effect was about using a foreign language, and not about which particular language—English or Spanish—was used.)

Using a very different experimental setup, Janet Geipel and her colleagues also found that using a foreign language shifted their participants’ moral verdicts. In their study, volunteers read descriptions of acts that appeared to harm no one, but that many people find morally reprehensible—for example, stories in which siblings enjoyed entirely consensual and safe sex, or someone cooked and ate his dog after it had been killed by a car. Those who read the stories in a foreign language (either English or Italian) judged these actions to be less wrong than those who read them in their native tongue.

Read the entire article here.

I Don’t Know, But I Like What I Like: The New Pluralism

In an insightful opinion piece, excerpted below, a millennial wonders if our fragmented, cluttered, information-rich society has damaged pluralism by turning action into indecision. Even aesthetic preferences come to be so laden with judgmental baggage that expressing a preference for one type of art, or car, or indeed cereal, seems to become an impossible conundrum for many born in the mid-1980s or later. So, a choice becomes a way to alienate those not chosen — when did selecting a cereal become such an onerous exercise in political correctness and moral relativism?

From the New York Times:

Critics of the millennial generation, of which I am a member, consistently use terms like “apathetic,” “lazy” and “narcissistic” to explain our tendency to be less civically and politically engaged. But what these critics seem to be missing is that many millennials are plagued not so much by apathy as by indecision. And it’s not surprising: Pluralism has been a large influence on our upbringing. While we applaud pluralism’s benefits, widespread enthusiasm has overwhelmed desperately needed criticism of its side effects.

By “pluralism,” I mean a cultural recognition of difference: individuals of varying race, gender, religious affiliation, politics and sexual preference, all exalted as equal. In recent decades, pluralism has come to be an ethical injunction, one that calls for people to peacefully accept and embrace, not simply tolerate, differences among individuals. Distinct from the free-for-all of relativism, pluralism encourages us (in concept) to support our own convictions while also upholding an “energetic engagement with diversity,” as Harvard’s Pluralism Project suggested in 1991. Today, paeans to pluralism continue to sound throughout the halls of American universities, private institutions, left-leaning households and influential political circles.

However, pluralism has had unforeseen consequences. The art critic Craig Owens once wrote that pluralism is not a “recognition, but a reduction of difference to absolute indifference, equivalence, interchangeability.” Some millennials who were greeted by pluralism in this battered state are still feeling its effects. Unlike those adults who encountered pluralism with their beliefs close at hand, we entered the world when truth-claims and qualitative judgments were already on trial and seemingly interchangeable. As a result, we continue to struggle when it comes to decisively avowing our most basic convictions.

Those of us born after the mid-1980s whose upbringing included a liberal arts education and the fruits of a fledgling World Wide Web have grown up (and are still growing up) with an endlessly accessible stream of texts, images and sounds from far-reaching times and places, much of which was unavailable to humans for all of history. Our most formative years include not just the birth of the Internet and the ensuing accelerated global exchange of information, but a new orthodoxy of multiculturalist ethics and “political correctness.”

These ideas were reinforced in many humanities departments in Western universities during the 1980s, where facts and claims to objectivity were eagerly jettisoned. Even “the canon” was dislodged from its historically privileged perch, and since then, many liberal-minded professors have avoided opining about “good” literature or “high art” to avoid reinstating an old hegemony. In college today, we continue to learn about the byproducts of absolute truths and intractable forms of ideology, which historically seem inextricably linked to bigotry and prejudice.

For instance, a student in one of my English classes was chastised for his preference for Shakespeare over the Haitian-American writer Edwidge Danticat. The professor challenged the student to apply a more “disinterested” analysis to his reading so as to avoid entangling himself in a misinformed gesture of “postcolonial oppression.” That student stopped raising his hand in class.

I am not trying to tackle the challenge as a whole or indict contemporary pedagogies, but I have to ask: How does the ethos of pluralism inside universities impinge on each student’s ability to make qualitative judgments outside of the classroom, in spaces of work, play, politics or even love?

In 2004, the French sociologist of science Bruno Latour intimated that the skeptical attitude which rebuffs claims to absolute knowledge might have had a deleterious effect on the younger generation: “Good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on.” Latour identified a condition that resonates: Our tenuous claims to truth have not simply been learned in university classrooms or in reading theoretical texts but reinforced by the decentralized authority of the Internet. While trying to form our fundamental convictions in this dizzying digital and intellectual global landscape, some of us are finding it increasingly difficult to embrace qualitative judgments.

Matters of taste in music, art and fashion, for example, can become a source of anxiety and hesitation. While clickable ways of “liking” abound on the Internet, personalized avowals of taste often seem treacherous today. Admittedly, many millennials (and nonmillennials) might feel comfortable simply saying, “I like what I like,” but some of us find ourselves reeling in the face of choice. To affirm a preference for rap over classical music, for instance, implicates the well-meaning millennial in a web of judgments far beyond his control. For the millennial generation, as a result, confident expressions of taste have become more challenging, as aesthetic preference is subjected to relentless scrutiny.

Philosophers and social theorists have long weighed in on this issue of taste. Pierre Bourdieu claimed that an “encounter with a work of art is not ‘love at first sight’ as is generally supposed.” Rather, he thought “tastes” function as “markers of ‘class.’” Theodor Adorno and Max Horkheimer argued that aesthetic preference could be traced along socioeconomic lines and reinforce class divisions. To dislike cauliflower is one thing. But elevating the work of one writer or artist over another has become contested territory.

This assured expression of “I like what I like,” when strained through pluralist-inspired critical inquiry, deteriorates: “I like what I like” becomes “But why do I like what I like? Should I like what I like? Do I like it because someone else wants me to like it? If so, who profits and who suffers from my liking what I like?” and finally, “I am not sure I like what I like anymore.” For a number of us millennials, commitment to even seemingly simple aesthetic judgments has become shot through with indecision.

Read the entire article here.

Business Decision-Making Welcomes Science

It is likely that business will never eliminate gut instinct from the decision-making process. However, as data, now big data, increasingly pervades every crevice of every organization, data-driven decision-making will become the norm. As this happens, more and more businesses are employing data scientists to help filter, categorize, mine and analyze these mountains of data in meaningful ways.

The caveat, of course, is that data, big data and an even bigger reliance on that data all require subject matter expertise and analysts with critical thinking skills and sound judgement — data cannot be used blindly.

From Technology Review:

Throughout history, innovations in instrumentation—the microscope, the telescope, and the cyclotron—have repeatedly revolutionized science by improving scientists’ ability to measure the natural world. Now, with human behavior increasingly reliant on digital platforms like the Web and mobile apps, technology is effectively “instrumenting” the social world as well. The resulting deluge of data has revolutionary implications not only for social science but also for business decision making.

As enthusiasm for “big data” grows, skeptics warn that overreliance on data has pitfalls. Data may be biased and is almost always incomplete. It can lead decision makers to ignore information that is harder to obtain, or make them feel more certain than they should. The risk is that in managing what we have measured, we miss what really matters—as Vietnam-era Secretary of Defense Robert McNamara did in relying too much on his infamous body count, and as bankers did prior to the 2007–2009 financial crisis in relying too much on flawed quantitative models.

The skeptics are right that uncritical reliance on data alone can be problematic. But so is overreliance on intuition or ideology. For every Robert McNamara, there is a Ron Johnson, the CEO whose disastrous tenure as the head of JC Penney was characterized by his dismissing data and evidence in favor of instincts. For every flawed statistical model, there is a flawed ideology whose inflexibility leads to disastrous results.

So if data is unreliable and so is intuition, what is a responsible decision maker supposed to do? While there is no correct answer to this question—the world is too complicated for any one recipe to apply—I believe that leaders across a wide range of contexts could benefit from a scientific mind-set toward decision making.

A scientific mind-set takes as its inspiration the scientific method, which at its core is a recipe for learning about the world in a systematic, replicable way: start with some general question based on your experience; form a hypothesis that would resolve the puzzle and that also generates a testable prediction; gather data to test your prediction; and finally, evaluate your hypothesis relative to competing hypotheses.
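
As a concrete, entirely hypothetical illustration of that four-step recipe (mine, not the article's), the short Python sketch below walks through it with simulated data: a question about two email subject lines, a hypothesis with a testable prediction, gathered (here, simulated) data, and an evaluation against the competing hypothesis that there is no real difference, using a simple permutation test. Every name and number is invented for illustration.

import random

# Step 1 -- question: do recipients open the shorter email subject line more often?
# Step 2 -- hypothesis: the short subject line has a higher open rate.
#           Testable prediction: open_rate(short) - open_rate(long) > 0.

# Step 3 -- gather data. Simulated here; in practice these would come from campaign logs.
random.seed(42)
opens_short = [1 if random.random() < 0.22 else 0 for _ in range(1000)]
opens_long = [1 if random.random() < 0.18 else 0 for _ in range(1000)]

def rate(xs):
    return sum(xs) / len(xs)

observed_diff = rate(opens_short) - rate(opens_long)

# Step 4 -- evaluate against the competing hypothesis ("no real difference")
# with a permutation test: shuffle the labels and count how often chance alone
# produces a gap at least as large as the one observed.
pooled = opens_short + opens_long
n = len(opens_short)
trials, extreme = 10_000, 0
for _ in range(trials):
    random.shuffle(pooled)
    if rate(pooled[:n]) - rate(pooled[n:]) >= observed_diff:
        extreme += 1

print(f"observed difference in open rates: {observed_diff:.3f}")
print(f"approximate p-value under 'no difference': {extreme / trials:.4f}")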

The scientific method is largely responsible for the astonishing increase in our understanding of the natural world over the past few centuries. Yet it has been slow to enter the worlds of politics, business, policy, and marketing, where our prodigious intuition for human behavior can always generate explanations for why people do what they do or how to make them do something different. Because these explanations are so plausible, our natural tendency is to want to act on them without further ado. But if we have learned one thing from science, it is that the most plausible explanation is not necessarily correct. Adopting a scientific approach to decision making requires us to test our hypotheses with data.

While data is essential for scientific decision making, theory, intuition, and imagination remain important as well—to generate hypotheses in the first place, to devise creative tests of the hypotheses that we have, and to interpret the data that we collect. Data and theory, in other words, are the yin and yang of the scientific method—theory frames the right questions, while data answers the questions that have been asked. Emphasizing either at the expense of the other can lead to serious mistakes.

Also important is experimentation, which doesn’t mean “trying new things” or “being creative” but quite specifically the use of controlled experiments to tease out causal effects. In business, most of what we observe is correlation—we do X and Y happens—but often what we want to know is whether or not X caused Y. How many additional units of your new product did your advertising campaign cause consumers to buy? Will expanded health insurance coverage cause medical costs to increase or decline? Simply observing the outcome of a particular choice does not answer causal questions like these: we need to observe the difference between choices.
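
To make that distinction between correlation and causation concrete, here is a small, purely hypothetical Python simulation of the advertising question (my sketch, not the article's). In the observational scenario, customers who already intend to buy are also the ones most likely to see the ad, so the naive comparison overstates the ad's effect; when exposure is instead assigned by coin flip, the difference between the two groups recovers something close to the true lift. All numbers are invented.

import random

random.seed(0)
TRUE_LIFT = 0.05        # assume the ad truly raises purchase probability by 5 points
N = 100_000

def purchase(base, saw_ad):
    p = base + (TRUE_LIFT if saw_ad else 0.0)
    return 1 if random.random() < p else 0

# Observational data: enthusiasts (high baseline intent) are also the most
# likely to encounter the ad, so exposure is confounded with intent.
naive_exposed, naive_unexposed = [], []
for _ in range(N):
    enthusiast = random.random() < 0.3
    base = 0.30 if enthusiast else 0.05
    saw_ad = random.random() < (0.8 if enthusiast else 0.2)
    (naive_exposed if saw_ad else naive_unexposed).append(purchase(base, saw_ad))

# Controlled experiment: a coin flip decides who sees the ad, so the two
# groups differ, on average, only in exposure.
rct_exposed, rct_unexposed = [], []
for _ in range(N):
    enthusiast = random.random() < 0.3
    base = 0.30 if enthusiast else 0.05
    saw_ad = random.random() < 0.5
    (rct_exposed if saw_ad else rct_unexposed).append(purchase(base, saw_ad))

def rate(xs):
    return sum(xs) / len(xs)

print(f"naive (observational) estimate of lift: {rate(naive_exposed) - rate(naive_unexposed):.3f}")
print(f"randomized-experiment estimate of lift: {rate(rct_exposed) - rate(rct_unexposed):.3f}")
print(f"true lift: {TRUE_LIFT:.3f}")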

Replicating the conditions of a controlled experiment is often difficult or impossible in business or policy settings, but increasingly it is being done in “field experiments,” where treatments are randomly assigned to different individuals or communities. For example, MIT’s Poverty Action Lab has conducted over 400 field experiments to better understand aid delivery, while economists have used such experiments to measure the impact of online advertising.

Although field experiments are not an invention of the Internet era—randomized trials have been the gold standard of medical research for decades—digital technology has made them far easier to implement. Thus, as companies like Facebook, Google, Microsoft, and Amazon increasingly reap performance benefits from data science and experimentation, scientific decision making will become more pervasive.

Nevertheless, there are limits to how scientific decision makers can be. Unlike scientists, who have the luxury of withholding judgment until sufficient evidence has accumulated, policy makers or business leaders generally have to act in a state of partial ignorance. Strategic calls have to be made, policies implemented, reward or blame assigned. No matter how rigorously one tries to base one’s decisions on evidence, some guesswork will be required.

Exacerbating this problem is that many of the most consequential decisions offer only one opportunity to succeed. One cannot go to war with half of Iraq and not the other just to see which policy works out better. Likewise, one cannot reorganize the company in several different ways and then choose the best. The result is that we may never know which good plans failed and which bad plans worked.

Read the entire article here.

Image: Screenshot of Iris, Ayasdi’s data-visualization tool. Courtesy of Ayasdi / Wired.