Tag Archives: psychology

Lies By Any Other Name

Certain gestures and facial movements are usually good indicators of a lie in progress. If your boss averts her eyes when she tells you “what a good employee you are”, or if your spouse looks at his fingernails when telling you “how gorgeous your new dress looks”, you can be almost certain that you are being told some half-truths or mistruths. Psychologists have studied these visual indicators for as long as humans have told lies.

Since dishonesty is so widespread and well-studied, it comes as no surprise that there are verbal cues as well — just as telling as sweaty palms. A well-worn verbal clue to insincerity, ironically, is the phrase “to be honest”. Verbal tee-ups such as this are known to behavioral scientists as qualifiers or performatives. There is a growing list.

From the WSJ:

A friend of mine recently started a conversation with these words: “Don’t take this the wrong way…”

I wish I could tell you what she said next. But I wasn’t listening—my brain had stalled. I was bracing for the sentence that would follow that phrase, which experience has taught me probably wouldn’t be good.

Certain phrases just seem to creep into our daily speech. We hear them a few times and suddenly we find ourselves using them. We like the way they sound, and we may find they are useful. They may make it easier to say something difficult or buy us a few extra seconds to collect our next thought.

Yet for the listener, these phrases are confusing. They make it fairly impossible to understand, or even accurately hear, what the speaker is trying to say.

Consider: “I want you to know…” or “I’m just saying…” or “I hate to be the one to tell you this…” Often, these phrases imply the opposite of what the words mean, as with the phrase, “I’m not saying…” as in “I’m not saying we have to stop seeing each other, but…”

Take this sentence: “I want to say that your new haircut looks fabulous.” In one sense, it’s true: The speaker does wish to tell you that your hair looks great. But does he or she really think it is so or just want to say it? It’s unclear.

Language experts have textbook names for these phrases—”performatives,” or “qualifiers.” Essentially, taken alone, they express a simple thought, such as “I am writing to say…” At first, they seem harmless, formal, maybe even polite. But coming before another statement, they often signal that bad news, or even some dishonesty on the part of the speaker, will follow.

“Politeness is another word for deception,” says James W. Pennebaker, chair of the psychology department of the University of Texas at Austin, who studies these phrases. “The point is to formalize social relations so you don’t have to reveal your true self.”

In other words, “if you’re going to lie, it’s a good way to do it—because you’re not really lying. So it softens the blow,” Dr. Pennebaker says.

Of course, it’s generally best not to lie, Dr. Pennebaker notes. But because these sayings so frequently signal untruth, they can be confusing even when used in a neutral context. No wonder they often lead to a breakdown in personal communications.

Some people refer to these phrases as “tee-ups.” That is fitting. What do you do with a golf ball? You put it on a peg at the tee—tee it up—and then give it a giant wallop.

Betsy Schow says she felt like she was “hit in the stomach by a cannonball” the day she was preparing to teach one of her first yoga classes. A good friend—one she’d assumed had shown up to support her—approached her while she was warming up. She was in the downward facing dog pose when she heard her friend say, “I am only telling you this because I love you…”

The friend pointed out that lumps were showing beneath Ms. Schow’s yoga clothes and said people laughed at her behind her back because they thought she wasn’t fit enough to teach yoga. Ms. Schow had recently lost a lot of weight and written a book about it. She says the woman also mentioned that Ms. Schow’s friends felt she was “acting better than they were.” Then the woman offered up the name of a doctor who specializes in liposuction. “Hearing that made me feel sick,” says Ms. Schow, a 32-year-old fitness consultant in Alpine, Utah. “Later, I realized that her ‘help’ was no help at all.”

Tee-ups have probably been around as long as language, experts say. They seem to be used with equal frequency by men and women, although there aren’t major studies of the issue. Their use may be increasing as a result of social media, where people use phrases such as “I am thinking that…” or “As far as I know…” both to avoid committing to a definitive position and to manage the impression they make in print.

“Awareness about image management is increased any time people put things into print, such as in email or on social networks,” says Jessica Moore, department chair and assistant professor at the College of Communication at Butler University, Indianapolis. “Thus people often make caveats to their statements that function as a substitute for vocalized hedges.” And people do this hedging—whether in writing or in speech—largely unconsciously, Dr. Pennebaker says. “We are emotionally distancing ourselves from our statement, without even knowing it,” he says.

So, if tee-ups are damaging our relationships, yet we often don’t even know we’re using them, what can we do? Start by trying to be more aware of what you are saying. Tee-ups should serve as yellow lights. If you are about to utter one, slow down. Proceed with caution. Think about what you are about to say.

“If you are feeling a need to use them a lot, then perhaps you should consider the possibility that you are saying too many unpleasant things to or about other people,” says Ellen Jovin, co-founder of Syntaxis, a communication-skills training firm in New York. She considers some tee-up phrases to be worse than others. “Don’t take this the wrong way…” is “ungracious,” she says. “It is a doomed attempt to evade the consequences of a comment.”

Read the entire article here.

Image: Lies and the Lying Liars Who Tell Them, book cover, by Al Franken. Courtesy of Wikipedia.

Younger Narcissists Use Twitter…

…older narcissists use Facebook.

Online social media and social networks provide a wonderful petri dish in which to study humanity. For those who are online and connected — and that is a significant proportion of the world’s population — their every move, click, purchase, post and like can be collected, aggregated, dissected and analyzed (and sold). These trails through the digital landscape provide fertile ground for psychologists and social scientists of all types to examine our behaviors and motivations in real time. By their very nature, online social networks offer researchers a vast goldmine of data from which to extract rich nuggets of behavioral and cultural trends — a digital trail is easy to find and impossible to erase. A perennial favorite for researchers is the area of narcissism (and we suspect it is a favorite of narcissists as well).

From the Atlantic:

It’s not hard to see why the Internet would be a good cave for a narcissist to burrow into. Generally speaking, they prefer shallow relationships (preferably one-way, with the arrow pointing toward themselves), and need outside sources to maintain their inflated but delicate egos. So, a shallow cave that you can get into, but not out of. The Internet offers both a vast potential audience, and the possibility for anonymity, and if not anonymity, then a carefully curated veneer of self that you can attach your name to.

In 1987, the psychologists Hazel Markus and Paula Nurius claimed that a person has two selves: the “now self” and the “possible self.” The Internet allows a person to become her “possible self,” or at least present a version of herself that is closer to it.

When it comes to studies of online narcissism, and there have been many, social media dominates the discussion. One 2010 study notes that the emergence of the possible self “is most pronounced in anonymous online worlds, where accountability is lacking and the ‘true’ self can come out of hiding.” But non-anonymous social networks like Facebook, which this study was analyzing, “provide an ideal environment for the expression of the ‘hoped-for possible self,’ a subgroup of the possible-self. This state emphasizes realistic socially desirable identities an individual would like to establish given the right circumstances.”

The study, which found that people higher in narcissism were more active on Facebook, points out that you tend to encounter “identity statements” on social networks more than you would in real life. When you’re introduced to someone in person, it’s unlikely that they’ll bust out with a pithy sound bite that attempts to sum up all that they are and all they hope to be, but people do that in their Twitter bio or Facebook “About Me” section all the time.

Science has linked narcissism with high levels of activity on Facebook, Twitter, and Myspace (back in the day). But it’s important to narrow in farther and distinguish what kinds of activity the narcissists are engaging in, since hours of scrolling through your news feed, though time-wasting, isn’t exactly self-centered. And people post online for different reasons. For example, Twitter has been shown to sometimes fulfill a need to connect with others. The trouble with determining what’s normal and what’s narcissism is that both sets of people generally engage in the same online behaviors, they just have different motives for doing so.

A recent study published in Computers in Human Behavior dug into the how and why of narcissists’ social media use, looking at both college students and an older adult population. The researchers measured how often people tweeted or updated their Facebook status, but also why, asking them how much they agreed with statements like “It is important that my followers admire me,” and “It is important that my profile makes others want to be my friend.”

Overall, Twitter use was more correlated with narcissism, but lead researcher Shaun W. Davenport, chair of management and entrepreneurship at High Point University, points out that there was a key difference between generations. Older narcissists were more likely to take to Facebook, whereas younger narcissists were more active on Twitter.

“Facebook has really been around the whole time Generation Y was growing up and they see it more as a tool for communication,” Davenport says. “They use it like other generations use the telephone… For older adults who didn’t grow up using Facebook, it takes more intentional motives [to use it], like narcissism.”

Whereas on Facebook, the friend relationship is reciprocal, you don’t have to follow someone on Twitter who follows you (though it is often polite to do so, if you are the sort of person who thinks of Twitter more as an elegant tea room than, I don’t know, someplace without rules or scruples, like the Wild West or a suburban Chuck E. Cheese). Rather than friend-requesting people to get them to pay attention to you, the primary method to attract Twitter followers is just… tweeting, which partially explains the correlation between number of tweets and narcissism.

Of course, there’s something to be said for quality over quantity—just look at @OneTweetTony and his 2,000+ followers. And you’d think that, even if you gather a lot of followers to you through sheer volume of content spewed, eventually some would tire of your face’s constant presence in their feed and leave you. W. Keith Campbell, head of the University of Georgia’s psychology department and author of The Narcissism Epidemic: Living in the Age of Entitlement, says that people don’t actually make the effort to unfriend or unfollow someone that often, though.

“What you find in real life with narcissists is that they’re very good at gaining friends and becoming leaders, but eventually people see through them and stop liking them,” he says. “Online, people are very good at gaining relationships, but they don’t fall off naturally. If you’re incredibly annoying, they just ignore you, and even then it might be worth it for entertainment value. There’s a reason why, on reality TV, you find high levels of narcissism. It’s entertaining.”

Also like reality TV stars, narcissists like their own images. They show a preference for posting photos on Facebook, but Campbell clarifies that it’s the type of photos that matter—narcissists tend to choose more attractive, attention-seeking photos. In another 2011 study, narcissistic adolescents rated their own profile pictures as “more physically attractive, more fashionable, more glamorous, and more cool than their less narcissistic peers did.”

Though social media is an obvious and much-discussed bastion of narcissism, online role-playing games, the most famous being World of Warcraft, have been shown to hold some attraction as well. A study of 1,471 Korean online gamers showed narcissists to be more likely to be addicted to the games than non-narcissists. The concrete goals and rewards the games offer allow the players to gather prestige: “As you play, your character advances by gaining experience points, ‘leveling-up’ from one level to the next while collecting valuables and weapons and becoming wealthier and stronger,” the study reads. “In this social setting, excellent players receive the recognition and attention of others, and gain power and status.”

And if that power comes through violence, so much the better. Narcissism has been linked to aggression, another reason for the games’ appeal. Offline, narcissists are often bullies, though attempts to link narcissism to cyberbullying have resulted in a resounding “maybe.”

“Narcissists typically have very high self-esteem but it’s very fragile self-esteem, so when someone attacks them, that self-esteem takes a dramatic nosedive,” Davenport says. “They need more wins to combat those losses…so the wins they have in that [virtual] world can boost their self-esteem.”

People can tell when you are attempting to boost your self-esteem through your online presence. A 2008 study had participants rate Facebook pages (which had already been painstakingly coded by researchers) for 37 different personality traits. The Facebook pages’ owners had previously taken the Narcissistic Personality Inventory, and when narcissism was there, the raters picked up on it.

Campbell, one of the researchers on that study, tempers now: “You can detect it, but it’s not perfect,” he says. “It’s sort of like shaving in your car window, you can do it, but it’s not perfect.”

Part of the reason why may be that, as we see more self-promoting behavior online, whether it’s coming from narcissists or not, it becomes more accepted, and thus, widespread.

Though, according to Davenport, the accusation that Generation Y, or—my least favorite term—Millennials, is the most narcissistic generation yet has been backed up by data, he wonders if it’s less a generational problem than just a general shift in our society.

“Some of it is that you see the behavior more on Facebook and Twitter, and some of it is that our society is becoming more accepting of narcissistic behavior,” Davenport says. “I do wonder if at some point the pendulum will swing back a little bit. Because you’re starting to see more published about ‘Is Gen Y more narcissistic?’, ‘What does this mean for the workplace?’, etc. All those questions are starting to become common conversation.”

When asked if our society is moving in a more narcissistic direction, Campbell replied: “President Obama took a selfie at Nelson Mandela’s funeral. Selfie was the word of the year in 2013. So yeah, this stuff becomes far more accepted.”

Read the entire article here.

Images courtesy of Google Search and respective “selfie” owners.

Teens and the Internet: Don’t Panic

Some view online social networks, smartphones and texting as nothing but bad news for the future socialization of our teens. After all, they’re usually hunched heads down, thumbs out, immersed in their own private worlds, oblivious to all else, all the while, paradoxically and simultaneously, publishing and sharing anything and everything with anyone.

Yet others, including Microsoft researcher Danah Boyd, have a more benign view of the technological maelstrom that surrounds our kids. In her book It’s Complicated: The Social Lives of Networked Teens, she argues that teenagers aren’t doing anything different online today than their parents and grandparents often did in person. Parents will take comfort from Boyd’s analysis that today’s teens will become much like their parents: behaving and worrying about many of the same issues that their parents did. Of course, teens will find this very, very uncool indeed.

From Technology Review:

Kids today! They’re online all the time, sharing every little aspect of their lives. What’s wrong with them? Actually, nothing, says Danah Boyd, a Microsoft researcher who studies social media. In a book coming out this winter, It’s Complicated: The Social Lives of Networked Teens, Boyd argues that teenagers aren’t doing much online that’s very different from what kids did at the sock hop, the roller rink, or the mall. They do so much socializing online mostly because they have little choice, Boyd says: parents now generally consider it unsafe to let kids roam their neighborhoods unsupervised. Boyd, 36, spoke with MIT Technology Review’s deputy editor, Brian Bergstein, at Microsoft Research’s offices in Manhattan.

I feel like you might have titled the book Everybody Should Stop Freaking Out.

It’s funny, because one of the early titles was Like, Duh. Because whenever I would show my research to young people, they’d say, “Like, duh. Isn’t this so obvious?” And it opens with the anecdote of a boy who says, “Can you just talk to my mom? Can you tell her that I’m going to be okay?” I found that refrain so common among young people.

You and your colleague Alice Marwick interviewed 166 teenagers for this book. But you’ve studied social media for a long time. What surprised you?

It was shocking how heavily constrained their mobility was. I had known it had gotten worse since I was a teenager, but I didn’t get it—the total lack of freedom to just go out and wander. Young people weren’t even trying to sneak out [of the house at night]. They were trying to get online, because that’s the place where they hung out with their friends.

And I had assumed based on the narratives in the media that bullying was on the rise. I was shocked that data showed otherwise.

Then why do narratives such as “Bullying is more common online” take hold?

It’s made more visible. There is some awful stuff out there, but it frustrates me when a panic distracts us from the reality of what’s going on. One of my frustrations is that there are some massive mental health issues, and we want to blame the technology [that brings them to light] instead of actually dealing with mental health issues.

I take your point that Facebook or Instagram is the equivalent of yesterday’s hangouts. But social media amplify everyday situations in difficult new ways. For example, kids might instantly see on Facebook that they’re missing out on something other kids are doing together.

That can be a blessing or a curse. These interpersonal conflicts ramp up much faster [and] can be much more hurtful. That’s one of the challenges for this cohort of youth: some of them have the social and emotional skills that are necessary to deal with these conflicts; others don’t. It really sucks when you realize that somebody doesn’t like you as much as you like them. Part of it is, then, how do you use that as an opportunity not to just wallow in your self-pity but to figure out how to interact and be like “Hey, let’s talk through what this friendship is like”?

You contend that teenagers are not cavalier about privacy, despite appearances, and adeptly shift sensitive conversations into chat and other private channels.

Many adults assume teens don’t care about privacy because they’re so willing to participate in social media. They want to be in public. But that doesn’t mean that they want to be public. There’s a big difference. Privacy isn’t about being isolated from others. It’s about having the capacity to control a social situation.

So if parents can let go of some common fears, what should they be doing?

One thing that I think is dangerous is that we’re trained that we are the experts at everything that goes on in our lives and our kids’ lives. So the assumption is that we should teach them by telling them. But I think the best way to teach is by asking questions: “Why are you posting that? Help me understand.” Using it as an opportunity to talk. Obviously there comes a point when your teenage child is going to roll their eyes and go, “I am not interested in explaining anything more to you, Dad.”

The other thing is being present. The hardest thing that I saw, overwhelmingly—the most unhealthy environments—were those where the parents were not present. They could be physically present and not actually present.

Read the entire article here.

Merry Christmas and Happy Regression

Setting aside religious significance, the holiday season tends to be a time when most adults focus on children and family, in that order. But interestingly enough, adults, consciously or not, regress to their younger selves during this extended time with parents and family.

From the Guardian:

In a characteristically serene post at Zen Habits, Leo Babauta points out that holiday family gatherings can be “the ultimate mindfulness training ground”: if you can remain centred and calm in the middle of Christmas dinner, you can presumably do so anywhere.

True, I’m sure. But for any of us heading back to childhood homes in the next few days – or, for that matter, reuniting elsewhere with the people we spent our childhoods with – there’s one huge challenge to be overcome. I’m talking, of course, about the ferocious black hole that sucks adult children, and their parents, back into family roles from years or even decades ago, the moment they’ve reassembled under one roof.

Holiday regression is an experience so universal that even therapists who specialise in this sort of stuff tend to counsel Just Dealing With It. “Expect to regress,” writes one. “Regression can be sweet,” ventures another. Forget all the progress you thought you’d made towards becoming a well-functioning and responsible member of society. For a week or so, you might as well be 13 again.

Actually, the concept of regression, like so many handed down from Freud, is probably best thought of as a poetic metaphor; modern psychology provides no real reason to believe that you’re literally returning to an earlier stage of ego development when you start passive-aggressively point-scoring with your sister over the mulled wine. The crucial point about those old family roles is that they work: they’re time-tested ways that your family discovered, over years, that enabled it to hold together as a family. The roughly 20 years between birth and fleeing the nest, as the therapist Marie Hartwell-Walker points out, is “a whole lot of practice for making the family style and our role in it permanent.”

None of that means it’s always – or even usually – enjoyable to play those roles. But they serve a purpose: the family unit’s purpose, if not necessarily your own.

Much as psychotherapists are drawn to family dynamics when it comes to explaining this sort of thing, however, more mundane psychological factors are surely also at play. We’ve learned lots in recent years about the emotion-eliciting qualities of different environments, and their role in the formation of memories. (There’s even been some interesting work on what, exactly, people are hoping to re-experience when they seek out a lost childhood home.) If you’re sleeping in the bedroom you slept in as a child, how could you avoid taking on some of the characteristics of the child who formerly slept there?

Meanwhile, there’s the particular aroma of the family home. Smell, as Marcel Proust knew and recent research confirms, can be a peculiarly powerful trigger for memories. In short: a trip back home will always be a psychological minefield.

Is there anything to be done? One of the more interesting suggestions borrows from the field of “embodied cognition”, which refers to the way our mental lives are lived through, and are influenced by, our bodies. (For example, clenching a fist has been found to enhance willpower; folding your arms aids perseverance.)

Read the entire article here.

What of Consciousness?

As we dig into the traditional holiday fare surrounded by family and friends, it is useful to ponder whether any of it is actually real or whether it is all inside the mind. The in-laws may be a figment of the brain, but the wine probably is real.

From the New Scientist:

Descartes might have been onto something with “I think therefore I am”, but surely “I think therefore you are” is going a bit far? Not for some of the brightest minds of 20th-century physics as they wrestled mightily with the strange implications of the quantum world.

According to prevailing wisdom, a quantum particle such as an electron or photon can only be properly described as a mathematical entity known as a wave function. Wave functions can exist as “superpositions” of many states at once. A photon, for instance, can circulate in two different directions around an optical fibre; or an electron can simultaneously spin clockwise and anticlockwise or be in two positions at once.

When any attempt is made to observe these simultaneous existences, however, something odd happens: we see only one. How do many possibilities become one physical reality?

This is the central question in quantum mechanics, and has spawned a plethora of proposals, or interpretations. The most popular is the Copenhagen interpretation, which says nothing is real until it is observed, or measured. Observing a wave function causes the superposition to collapse.

However, Copenhagen says nothing about what exactly constitutes an observation. John von Neumann broke this silence and suggested that observation is the action of a conscious mind. It’s an idea also put forward by Max Planck, the founder of quantum theory, who said in 1931, “I regard consciousness as fundamental. I regard matter as derivative from consciousness.”

That argument relies on the view that there is something special about consciousness, especially human consciousness. Von Neumann argued that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is somehow different. It is thus able to select out one of the quantum possibilities on offer, making it real – to that mind, at least.

Henry Stapp of the Lawrence Berkeley National Laboratory in California is one of the few physicists that still subscribe to this notion: we are “participating observers” whose minds cause the collapse of superpositions, he says. Before human consciousness appeared, there existed a multiverse of potential universes, Stapp says. The emergence of a conscious mind in one of these potential universes, ours, gives it a special status: reality.

There are many objectors. One problem is that many of the phenomena involved are poorly understood. “There’s a big question in philosophy about whether consciousness actually exists,” says Matthew Donald, a philosopher of physics at the University of Cambridge. “When you add on quantum mechanics it all gets a bit confused.”

Donald prefers an interpretation that is arguably even more bizarre: “many minds”. This idea – related to the “many worlds” interpretation of quantum theory, which has each outcome of a quantum decision happen in a different universe – argues that an individual observing a quantum system sees all the many states, but each in a different mind. These minds all arise from the physical substance of the brain, and share a past and a future, but cannot communicate with each other about the present.

Though it sounds hard to swallow, this and other approaches to understanding the role of the mind in our perception of reality are all worthy of attention, Donald reckons. “I take them very seriously,” he says.

Read the entire article here.

Image courtesy of Google Search.

Me, Myself and I

It’s common sense — the frequency with which you use the personal pronoun “I” tells a lot about you. Now there’s some great research that backs this up, but not in the way you would have expected.

From WSJ:

You probably don’t think about how often you say the word “I.”

You should. Researchers say that your usage of the pronoun says more about you than you may realize.

Surprising new research from the University of Texas suggests that people who often say “I” are less powerful and less sure of themselves than those who limit their use of the word. Frequent “I” users subconsciously believe they are subordinate to the person to whom they are talking.

Pronouns, in general, tell us a lot about what people are paying attention to, says James W. Pennebaker, chair of the psychology department at the University of Texas at Austin and an author on the study. Pronouns signal where someone’s internal focus is pointing, says Dr. Pennebaker, who has pioneered this line of research. Often, people using “I” are being self-reflective. But they may also be self-conscious or insecure, in physical or emotional pain, or simply trying to please.

Dr. Pennebaker and colleagues conducted five studies of the way relative rank is revealed by the use of pronouns. The research was published last month in the Journal of Language and Social Psychology. In each experiment, people deemed to have higher status used “I” less.

The findings go against the common belief that people who say “I” a lot are full of themselves, maybe even narcissists.

“I” is more powerful than you may realize. It drives perceptions in a conversation so much so that marriage therapists have long held that people should use “I” instead of “you” during a confrontation with a partner or when discussing something emotional. (“I feel unheard.” Not: “You never listen.”) The word “I” is considered less accusatory.

“There is a misconception that people who are confident, have power, have high-status tend to use ‘I’ more than people who are low status,” says Dr. Pennebaker, author of “The Secret Life of Pronouns.” “That is completely wrong. The high-status person is looking out at the world and the low-status person is looking at himself.”

So, how often should you use “I”? More—to sound humble (and not critical when speaking to your spouse)? Or less—to come across as more assured and authoritative?

The answer is “mostly more,” says Dr. Pennebaker. (Although he does say you should try and say it at the same rate as your spouse or partner, to keep the power balance in the relationship.)

In the first language-analysis study Dr. Pennebaker led, business-school students were divided into 41 four-person, mixed-sex groups and asked to work as a team to improve customer service for a fictitious company. One person in each group was randomly assigned to be the leader. The result: The leaders used “I” in 4.5% of their words. Non-leaders used the word 5.6%. (The leaders also used “we” more than followers did.)

In the second study, 112 psychology students were assigned to same-sex groups of two. The pairs worked to solve a series of complex problems. All interaction took place online. No one was assigned to a leadership role, but participants were asked at the end of the experiment who they thought had power and status. Researchers found that the higher the person’s perceived power, the less he or she used “I.”

In study three, 50 pairs of people chatted informally face-to-face, asking questions to get to know one another, as if at a cocktail party. When asked which person had more status or power, they tended to agree—and that person had used “I” less.

Study four looked at emails. Nine people turned over their incoming and outgoing emails with about 15 other people. They rated how much status they had in relation to each correspondent. In each exchange, the person with the higher status used “I” less.

The fifth study was the most unusual. Researchers looked at email communication that the U.S. government had collected (and translated) from the Iraqi military, made public for a period of time as the Iraqi Perspectives Project. They randomly selected 40 correspondences. In each case, the person with higher military rank used “I” less.

People curb their use of “I” subconsciously, Dr. Pennebaker says. “If I am the high-status person, I am thinking of what you need to do. If I am the low-status person, I am more humble and am thinking, ‘I should be doing this.’ “

Dr. Pennebaker has found heavy “I” users across many people: Women (who are typically more reflective than men), people who are more at ease with personal topics, younger people, caring people as well as anxious and depressed people. (Surprisingly, he says, narcissists do not use “I” more than others, according to a meta-analysis of a large number of studies.)

And who avoids using “I,” other than the high-powered? People who are hiding the truth. Avoiding the first-person pronoun is distancing.

Read the entire article here.

Night Owls, Beware!

A new batch of research points to a higher incidence of depression in night owls than in early risers. Further studies will be required to establish a true causal link, but the initial evidence suggests that those who stay up late have structural differences in the brain, possibly the result of a form of chronic jet lag.

From the Washington Post:

They say the early bird catches the worm, but night owls may be missing far more than just a tasty snack. Researchers have discovered evidence of structural brain differences that distinguish early risers from people who like to stay up late. The differences might help explain why night owls seem to be at greater risk of depression.

About 10 percent of people are morning people, or larks, and 20 percent are night owls, with the rest falling in between. Your status is called your chronotype.

Previous studies have suggested that night owls experience worse sleep, feel more tiredness during the day and consume greater amounts of tobacco and alcohol. This has prompted some to suggest that they are suffering from a form of chronic jet lag.

Jessica Rosenberg at RWTH Aachen University in Germany and colleagues used a technique called diffusion tensor imaging to scan the brains of 16 larks, 23 night owls and 20 people with intermediate chronotypes. They found a reduction in the integrity of night owls’ white matter — brain tissue largely made up of fatty insulating material that speeds up the transmission of nerve signals — in areas associated with depression.

“We think this could be caused by the fact that late chronotypes suffer from this permanent jet lag,” Rosenberg says, although she cautions that further studies are needed to confirm cause and effect.

Read the entire article here.

Image courtesy of Google search.

Happy Listening to Sad Music

Why do we listen to sad music, and how is it that sad music can be as attractive as its lighter, happier cousin? After all, we tend to want to steer clear of sad situations. New research suggests that the answer is more complex than a desire for catharsis: rather, there is a disconnect between the felt emotion and the perceived emotion.

From the New York Times:

Sadness is an emotion we usually try to avoid. So why do we choose to listen to sad music?

Musicologists and philosophers have wondered about this. Sad music can induce intense emotions, yet the type of sadness evoked by music also seems pleasing in its own way. Why? Aristotle famously suggested the idea of catharsis: that by overwhelming us with an undesirable emotion, music (or drama) somehow purges us of it.

But what if, despite their apparent similarity, sadness in the realm of artistic appreciation is not the same thing as sadness in everyday life?

In a study published this summer in the journal Frontiers in Psychology, my colleagues and I explored the idea that “musical emotion” encompasses both the felt emotion that the music induces in the listener and the perceived emotion that the listener judges the music to express. By isolating these two overlapping sets of emotions and observing how they related to each other, we hoped to gain a better understanding of sad music.

Forty-four people served as participants in our experiment. We asked them to listen to one of three musical excerpts of approximately 30 seconds each. The excerpts were from Mikhail Glinka’s “La Séparation” (F minor), Felix Blumenfeld’s “Sur Mer” (G minor) and Enrique Granados’s “Allegro de Concierto” (C sharp major, though the excerpt was in G major, which we transposed to G minor).

We were interested in the minor key because it is canonically associated with sad music, and we steered clear of well-known compositions to avoid interference from any personal memories related to the pieces.

(Our participants were more or less split between men and women, as well as between musicians and nonmusicians, though these divisions turned out to be immaterial to our findings.)

A participant would listen to an excerpt and then answer a question about his felt emotions: “How did you feel when listening to this music?” Then he would listen to a “happy” version of the excerpt — i.e., transposed into the major key — and answer the same question. Next he would listen to the excerpt, again in both sad and happy versions, each time answering a question about other listeners that was designed to elicit perceived emotion: “How would normal people feel when listening to this music?”

(This is a slight simplification: in the actual study, the order in which the participant answered questions about felt and perceived emotion, and listened to sad and happy excerpts, varied from participant to participant.)

Our participants answered each question by rating 62 emotion-related descriptive words and phrases — from happy to sad, from bouncy to solemn, from heroic to wistful — on a scale from 0 (not at all) to 4 (very much).

We found, as anticipated, that felt emotion did not correspond exactly to perceived emotion. Although the sad music was both perceived and felt as “tragic” (e.g., gloomy, meditative and miserable), the listeners did not actually feel the tragic emotion as much as they perceived it. Likewise, when listening to sad music, the listeners felt more “romantic” emotion (e.g., fascinated, dear and in love) and “blithe” emotion (e.g., merry, animated and feel like dancing) than they perceived.

Read the entire article here.

Image: Detail of Marie-Magdalene, Entombment of Christ, 1672. Courtesy of Wikipedia.

All-Conquering TV

In the almost 90 years since television was invented, it has done more to reshape our world than conquering armies and pandemics. Whether you see TV as a force for good or evil — or, more recently, as a method for delivering absurd banality — you would be hard-pressed to find another human invention that has altered us so profoundly: psychologically, socially and culturally. What would its creator — John Logie Baird — think of his invention now, almost 70 years after his death?

From the Guardian:

Like most people my age – 51 – my childhood was in black and white. That’s because my memory of childhood is in black and white, and that’s because television in the 1960s (and most photography) was black and white. Bill and Ben, the Beatles, the Biafran war, Blue Peter, they were all black and white, and their images form the monochrome memories of my early years.

That’s one of the extraordinary aspects of television – its ability to trump reality. If seeing is believing, then there’s always a troubling doubt until you’ve seen it on television. A mass medium delivered to almost every household, it’s the communal confirmation of experience.

On 30 September it will be 84 years since the world’s first-ever television transmission. In Armchair Nation, his new social history of TV, Joe Moran, professor of English and cultural history at Liverpool John Moores University, recounts the events of that momentous day. A Yorkshire comedian named Sydney Howard performed a comic monologue and someone called Lulu Stanley sang “He’s tall, and dark, and handsome” in what was perhaps the earliest progenitor of The X Factor.

The images were broadcast by the BBC and viewed by a small group of invited guests on a screen about half the size of the average smartphone in the inventor John Logie Baird’s Covent Garden studio. Logie Baird may have been a visionary but even he would have struggled to comprehend just how much the world would be changed by his vision – television, the 20th century’s defining technology.

Every major happening is now captured by television, or it’s not a major happening. Politics and politicians are determined by how they play on television. Public knowledge, charity, humour, fashion trends, celebrity and consumer demand are all subject to its critical influence. More than the aeroplane or the nuclear bomb, the computer or the telephone, TV has determined what we know and how we think, the way we believe and how we perceive ourselves and the world around us (only the motor car is a possible rival and that, strictly speaking, was a 19th-century invention).

Not only did television re-envision our sense of the world, it remains, even in the age of the internet, Facebook and YouTube, the most powerful generator of our collective memories, the most seductive and shocking mirror of society, and the most virulent incubator of social trends. It’s also stubbornly unavoidable.

There is good television, bad television, too much television and even, for some cultural puritans, no television, but whatever the equation, there is always television. It’s ubiquitously there, radiating away in the corner, even when it’s not. Moran quotes a dumbfounded Joey Tribbiani (Matt LeBlanc) from Friends on learning that a new acquaintance doesn’t have a TV set: “But what does your furniture point at?”

Like all the best comic lines, it contains a profound truth. The presence of television is so pervasive that its very absence is a kind of affront to the modern way of life. Not only has television reshaped the layout of our sitting rooms, it has also reshaped the very fabric of our lives.

Just to take Friends as one small example. Before it was first aired back in 1994, the idea of groups of young people hanging out in a coffee bar talking about relationships in a language of comic neurosis was, at least as far as pubcentric Britain was concerned, laughable. Now it’s a high-street fact of life. Would Starbucks and Costa have enjoyed the same success if Joey and friends had not showed the way?

But in 1929 no one had woken up and smelled the coffee. The images were extremely poor quality, the equipment was dauntingly expensive and reception vanishingly limited. In short, it didn’t look like the future. One of the first people to recognise television’s potential – or at least the most unappealing part of it – was Aldous Huxley. Writing in Brave New World, published in 1932, he described a hospice of the future in which every bed had a TV set at its foot. “Television was left on, a running tap, from morning till night.”

All the same, television remained a London-only hobby for a tiny metropolitan elite right up until the Second World War. Then, for reasons of national security, the BBC switched off its television signal and the experiment seemed to come to a bleak end.

It wasn’t until after the war that television was slowly spread out across the country. Some parts of the Scottish islands did not receive a signal until deep into the 1960s, but the nation was hooked. Moran quotes revealing statistics from 1971 about the contemporary British way of life: “Ten per cent of homes still had no indoor lavatory or bath, 31% had no fridge and 62% had no telephone, but only 9% had no TV.”

My family, as it happened, fitted into that strangely incongruous sector that had no inside lavatory or bath but did have a TV. This seems bizarre, if you think about society’s priorities, but it’s a common situation today throughout large parts of the developing world.

I don’t recall much anxiety about the lack of a bath, at least on my part, but I can’t imagine what the sense of social exclusion would have been like, aged nine, if I hadn’t had access to Thunderbirds and The Big Match.

The strongest memory I have of watching television in the early 1970s is in my grandmother’s flat on wintry Saturday afternoons. Invariably the gas fire was roaring, the room was baking, and that inscrutable spectacle of professional wrestling, whose appeal was a mystery to me (if not Roland Barthes), lasted an eternity before the beautifully cadenced poetry of the football results came on.

Read the entire article here.

Image: John Logie Baird. Courtesy of Wikipedia.

Overcoming Right-handedness

When asked about handedness, Nick Moran over at The Millions says, “everybody’s born right-handed, but the best overcome it.” Funny. And perhaps now, it carries more than a ring of truth.

Several meta-studies on the issue of handedness suggest that lefties may indeed have an advantage over their right-handed cousins in a specific kind of creative thinking known as divergent thinking. Divergent thinking is the ability to generate new ideas from a single principle quickly.

At last, left-handers can emerge from the shadow that once branded them as sinister degenerates and criminals. (We recommend you check the etymology of the word “sinister” for yourself.)

From the New Yorker:

Cesare Lombroso, the father of modern criminology, owes his career to a human skull. In 1871, as a young doctor at a mental asylum in Pavia, Italy, he autopsied the brain of Giuseppe Villela, a Calabrese peasant turned criminal, who has been described as an Italian Jack the Ripper. “At the sight of that skull,” Lombroso said, “I seemed to see all at once, standing out clearly illuminated as in a vast plain under a flaming sky, the problem of the nature of the criminal, who reproduces in civilised times characteristics, not only of primitive savages, but of still lower types as far back as the carnivora.”

Lombroso would go on to argue that the key to understanding the essence of criminality lay in organic, physical, and constitutional features—each defect being a throwback to a more primitive and bestial psyche. And while his original insight had come from a skull, certain telltale signs, he believed, could be discerned long before an autopsy. Chief among these was left-handedness.

In 1903, Lombroso summarized his views on the left-handed of the world. “What is sure,” he wrote, “is, that criminals are more often left-handed than honest men, and lunatics are more sensitively left-sided than either of the other two.” Left-handers were more than three times as common in criminal populations as they were in everyday life, he found. The prevalence among swindlers was even higher: up to thirty-three per cent were left-handed—in contrast to the four per cent Lombroso found within the normal population. He ended on a conciliatory note. “I do not dream at all of saying that all left-handed people are wicked, but that left-handedness, united to many other traits, may contribute to form one of the worst characters among the human species.”

Though Lombroso’s science may seem suspect to a modern eye, less-than-favorable views of the left-handed have persisted. In 1977, the psychologist Theodore Blau argued that left-handed children were over-represented among the academically and behaviorally challenged, and were more vulnerable to mental diseases like schizophrenia. “Sinister children,” he called them. The psychologist Stanley Coren, throughout the eighties and nineties, presented evidence that the left-handed lived shorter, more impoverished lives, and that they were more likely to experience delays in mental and physical maturity, among other signs of “neurological insult or physical malfunctioning.” Toward the end of his career, the Harvard University neurologist Norman Geschwind implicated left-handedness in a range of problematic conditions, including migraines, diseases of the immune system, and learning disorders. He attributed the phenomenon, and the related susceptibilities, to higher levels of testosterone in utero, which, he argued, slowed down the development of the brain’s left hemisphere (the one responsible for the right side of the body).

But over the past two decades, the data that seemed compelling have largely been discredited. In 1993, the psychologist Marian Annett, who has spent half a century researching “handedness,” as it is known, challenged the basic foundation of Coren’s findings. The data, she argued, were fundamentally flawed: it wasn’t the case that left-handers led shorter lives. Rather, the older you were, the more likely it was that you had been forced to use your right hand as a young child. The mental-health data have also withered: a 2010 analysis of close to fifteen hundred individuals that included schizophrenic patients and their non-affected siblings found that being left-handed neither increased the risk of developing schizophrenia nor predicted any other cognitive or neural disadvantage. And when a group of neurologists scanned the brains of four hundred and sixty-five adults, they found no effect of handedness on either grey or white matter volume or concentration, either globally or regionally.

Left-handers may, in fact, even derive certain cognitive benefits from their preference. This spring, a group of psychiatrists from the University of Athens invited a hundred university students and graduates—half left-handed and half right—to complete two tests of cognitive ability. In the Trail Making Test, participants had to find a path through a batch of circles as quickly as possible. In the hard version of the test, the circles contain numbers and letters, and participants must move in ascending order while alternating between the two as fast as possible. In the second test, Letter-Number Sequencing, participants hear a group of numbers and letters and must then repeat the whole group, but with numbers in ascending order and letters organized alphabetically. Lefties performed better on both the complex version of the T.M.T.—demonstrating faster and more accurate spatial skills, along with strong executive control and mental flexibility—and on the L.N.S., demonstrating enhanced working memory. And the more intensely they preferred their left hand for tasks, the stronger the effect.

The Athens study points to a specific kind of cognitive benefit, since both the T.M.T. and the L.N.S. are thought to engage, to a large extent, the right hemisphere of the brain. But a growing body of research suggests another, broader benefit: a boost in a specific kind of creativity—namely, divergent thinking, or the ability to generate new ideas from a single principle quickly and effectively. In one demonstration, researchers found that the more marked the left-handed preference in a group of males, the better they were at tests of divergent thought. (The demonstration was led by the very Coren who had originally argued for the left-handers’ increased susceptibility to mental illness.) Left-handers were more adept, for instance, at combining two common objects in novel ways to form a third—for example, using a pole and a tin can to make a birdhouse. They also excelled at grouping lists of words into as many alternate categories as possible. Another recent study has demonstrated an increased cognitive flexibility among the ambidextrous and the left-handed—and lefties have been found to be over-represented among architects, musicians, and art and music students (as compared to those studying science).

Part of the explanation for this creative edge may lie in the greater connectivity of the left-handed brain. In a meta-analysis of forty-three studies, the neurologist Naomi Driesen and the cognitive neuroscientist Naftali Raz concluded that the corpus callosum—the bundle of fibers that connects the brain’s hemispheres—was slightly but significantly larger in left-handers than in right-handers. The explanation could also be a much more prosaic one: in 1989, a group of Connecticut College psychologists suggested that the creativity boost was a result of the environment, since left-handers had to constantly improvise to deal with a world designed for right-handers. In a 2013 review of research into handedness and cognition, a group of psychologists found that the main predictor of cognitive performance wasn’t whether an individual was left-handed or right-handed, but rather how strongly they preferred one hand over another. Strongly handed individuals, both right and left, were at a slight disadvantage compared to those who occupied the middle ground—both the ambidextrous and the left-handed who, through years of practice, had been forced to develop their non-dominant right hand. In those less clear-cut cases, the brain’s hemispheres interacted more and overall performance improved, indicating there may be something to left-handed brains being pushed in a way that a right-handed one never is.

Whatever the ultimate explanation may be, the advantage appears to extend to other types of thinking, too. In a 1986 study of students who had scored in the top of their age group on either the math or the verbal sections of the S.A.T., the prevalence of left-handers among the high achievers—over fifteen per cent, as compared to the roughly ten percent found in the general population—was higher than in any comparison groups, which included their siblings and parents. Among those who had scored in the top in both the verbal and math sections, the percentage of left-handers jumped to nearly seventeen per cent, for males, and twenty per cent, for females. That advantage echoes an earlier sample of elementary-school children, which found increased left-handedness among children with I.Q. scores above a hundred and thirty-one.

Read the entire article here.

Image: Book cover – David Wolman’s new book, A Left Hand Turn Around the World, explores the scientific factors that lead to 10 percent of the human race being left-handed. Courtesy of NPR.

Growing Pains

The majority of us can identify with the awkward and self-conscious years of adolescence. And, interestingly enough, many of us emerge intact on the other side.

From the Telegraph:

Photographer Merilee Allred tries to show us that teenage insecurities don’t have to hold us back as adults in her project ‘Awkward Years’. Bullied as a child, the 35-year-old embarked on the project after a friend didn’t believe Merilee was a self-described ‘queen of the nerds’ as a child. She asked people to pose with unflattering pictures of themselves when they were young to highlight how things can turn out alright.

Check out more pictures from the awkward years here.

Image: Project photographer Merilee Allred. Then: 11 years old, 5th grade, in Billings, Montana. Now: 35 years old, UX Designer residing in Salt Lake City, Utah. Courtesy of Merilee Allred / Telegraph.

Night Owl? You Are Evil

New research — probably conducted by a group of early risers — shows that people who prefer to stay up late, and rise late, are more likely to be narcissistic, insensitive, manipulative and psychopathic.

That said, previous research has suggested that night owls are generally more intelligent and wealthier than their early-rising, but nicer, cousins.

From the Telegraph:

Psychologists have found that people who are often described as “night owls” display more signs of narcissism, Machiavellianism and psychopathic tendencies than those who are “morning larks”.

The scientists suggest that the reason these traits, known as the Dark Triad, are more prevalent in those who do better at night may be linked to our evolutionary past.

They claim that the hours of darkness may have helped to conceal those who adopted a “cheater’s strategy” while living in groups.

Some social animals will use the cover of darkness to steal females away from more dominant males. This behaviour was also recently spotted in rhinos in Africa.

Dr Peter Jonason, a psychologist at the University of Western Sydney, said: “It could be adaptively effective for anyone pursuing a fast life strategy like that embodied in the Dark Triad to occupy and exploit a lowlight environment where others are sleeping and have diminished cognitive functioning.

“Such features of the night may facilitate the casual sex, mate-poaching, and risk-taking the Dark Triad traits are linked to.

“In short, those high on the Dark Triad traits, like many other predators such as lions, African hunting dogs and scorpions, are creatures of the night.”

Dr Jonason and his colleagues, whose research is published in the journal Personality and Individual Differences, surveyed 263 students, asking them to complete a series of standard personality tests designed to measure their scores on the Dark Triad traits.

They were rated on scales for narcissism, the tendency to seek admiration and special treatment; Machiavellianism, a desire to manipulate others; and psychopathy, an inclination towards callousness and insensitivity.

To test each, they were asked to rate their agreement with statements like: “I have a natural talent for influencing people”, “I could beat a lie detector” and “people suffering from incurable diseases should have the choice of being put painlessly to death”.

The volunteers were also asked to complete a questionnaire about how alert they felt at different times of the day and how late they stayed up at night.

The study revealed that those with a darker personality score tended to say they functioned more effectively in the evening.

They also found that those who stayed up later tended to have a higher sense of entitlement and seemed to be more exploitative.

They could find no evidence, however, that the traits were linked to the participants’ gender, ruling out the possibility that the tendency to plot and act in the night time had its roots in sexual evolution.

Previous research has suggested that people who thrive at night tend also to be more intelligent.

Combined with the other darker personality traits, this could be a dangerous mix.

Read the entire article here.

Image: Portrait of Niccolò Machiavelli, by Santi di Tito. Courtesy of Wikipedia.

Rewriting Memories

Important new research suggests that traumatic memories can be rewritten. Timing is critical.

From Technology Review:

It was a Saturday night at the New York Psychoanalytic Institute, and the second-floor auditorium held an odd mix of gray-haired, cerebral Upper East Side types and young, scruffy downtown grad students in black denim. Up on the stage, neuroscientist Daniela Schiller, a riveting figure with her long, straight hair and impossibly erect posture, paused briefly from what she was doing to deliver a mini-lecture about memory.

She explained how recent research, including her own, has shown that memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.

And then Schiller went back to what she had been doing, which was providing a slamming, rhythmic beat on drums and backup vocals for the Amygdaloids, a rock band composed of New York City neuroscientists. During their performance at the institute’s second annual “Heavy Mental Variety Show,” the band blasted out a selection of its greatest hits, including songs about cognition (“Theory of My Mind”), memory (“A Trace”), and psychopathology (“Brainstorm”).

“Just give me a pill,” Schiller crooned at one point, during the chorus of a song called “Memory Pill.” “Wash away my memories …”

The irony is that if research by Schiller and others holds up, you may not even need a pill to strip a memory of its power to frighten or oppress you.

Schiller, 40, has been in the vanguard of a dramatic reassessment of how human memory works at the most fundamental level. Her current lab group at Mount Sinai School of Medicine, her former colleagues at New York University, and a growing army of like-minded researchers have marshaled a pile of data to argue that we can alter the emotional impact of a memory by adding new information to it or recalling it in a different context. This hypothesis challenges 100 years of neuroscience and overturns cultural touchstones from Marcel Proust to best-selling memoirs. It changes how we think about the permanence of memory and identity, and it suggests radical nonpharmacological approaches to treating pathologies like post-traumatic stress disorder, other fear-based anxiety disorders, and even addictive behaviors.

In a landmark 2010 paper in Nature, Schiller (then a postdoc at New York University) and her NYU colleagues, including Joseph E. LeDoux and Elizabeth A. Phelps, published the results of human experiments indicating that memories are reshaped and rewritten every time we recall an event. And, the research suggested, if mitigating information about a traumatic or unhappy event is introduced within a narrow window of opportunity after its recall—during the few hours it takes for the brain to rebuild the memory in the biological brick and mortar of molecules—the emotional experience of the memory can essentially be rewritten.

“When you affect emotional memory, you don’t affect the content,” Schiller explains. “You still remember perfectly. You just don’t have the emotional memory.”

Fear training

The idea that memories are constantly being rewritten is not entirely new. Experimental evidence to this effect dates back at least to the 1960s. But mainstream researchers tended to ignore the findings for decades because they contradicted the prevailing scientific theory about how memory works.

That view began to dominate the science of memory at the beginning of the 20th century. In 1900, two German scientists, Georg Elias Müller and Alfons Pilzecker, conducted a series of human experiments at the University of Göttingen. Their results suggested that memories were fragile at the moment of formation but were strengthened, or consolidated, over time; once consolidated, these memories remained essentially static, permanently stored in the brain like a file in a cabinet from which they could be retrieved when the urge arose.

It took decades of painstaking research for neuroscientists to tease apart a basic mechanism of memory to explain how consolidation occurred at the level of neurons and proteins: an experience entered the neural landscape of the brain through the senses, was initially “encoded” in a central brain apparatus known as the hippocampus, and then migrated—by means of biochemical and electrical signals—to other precincts of the brain for storage. A famous chapter in this story was the case of “H.M.,” a young man whose hippocampus was removed during surgery in 1953 to treat debilitating epileptic seizures; although physiologically healthy for the remainder of his life (he died in 2008), H.M. was never again able to create new long-term memories, other than to learn new motor skills.

Subsequent research also made clear that there is no single thing called memory but, rather, different types of memory that achieve different biological purposes using different neural pathways. “Episodic” memory refers to the recollection of specific past events; “procedural” memory refers to the ability to remember specific motor skills like riding a bicycle or throwing a ball; fear memory, a particularly powerful form of emotional memory, refers to the immediate sense of distress that comes from recalling a physically or emotionally dangerous experience. Whatever the memory, however, the theory of consolidation argued that it was an unchanging neural trace of an earlier event, fixed in long-term storage. Whenever you retrieved the memory, whether it was triggered by an unpleasant emotional association or by the seductive taste of a madeleine, you essentially fetched a timeless narrative of an earlier event. Humans, in this view, were the sum total of their fixed memories. As recently as 2000 in Science, in a review article titled “Memory—A Century of Consolidation,” James L. McGaugh, a leading neuroscientist at the University of California, Irvine, celebrated the consolidation hypothesis for the way that it “still guides” fundamental research into the biological process of long-term memory.

As it turns out, Proust wasn’t much of a neuroscientist, and consolidation theory couldn’t explain everything about memory. This became apparent during decades of research into what is known as fear training.

Schiller gave me a crash course in fear training one afternoon in her Mount Sinai lab. One of her postdocs, Dorothee Bentz, strapped an electrode onto my right wrist in order to deliver a mild but annoying shock. She also attached sensors to several fingers on my left hand to record my galvanic skin response, a measure of physiological arousal and fear. Then I watched a series of images—blue and purple cylinders—flash by on a computer screen. It quickly became apparent that the blue cylinders often (but not always) preceded a shock, and my skin conductivity readings reflected what I’d learned. Every time I saw a blue cylinder, I became anxious in anticipation of a shock. The “learning” took no more than a couple of minutes, and Schiller pronounced my little bumps of anticipatory anxiety, charted in real time on a nearby monitor, a classic response of fear training. “It’s exactly the same as in the rats,” she said.

In the 1960s and 1970s, several research groups used this kind of fear memory in rats to detect cracks in the theory of memory consolidation. In 1968, for example, Donald J. Lewis of Rutgers University led a study showing that you could make the rats lose the fear associated with a memory if you gave them a strong electroconvulsive shock right after they were induced to retrieve that memory; the shock produced an amnesia about the previously learned fear. Giving a shock to animals that had not retrieved the memory, in contrast, did not cause amnesia. In other words, a strong shock timed to occur immediately after a memory was retrieved seemed to have a unique capacity to disrupt the memory itself and allow it to be reconsolidated in a new way. Follow-up work in the 1980s confirmed some of these observations, but they lay so far outside mainstream thinking that they barely received notice.

Moment of silence

At the time, Schiller was oblivious to these developments. A self-described skateboarding “science geek,” she grew up in Rishon LeZion, Israel’s fourth-largest city, on the coastal plain a few miles southeast of Tel Aviv. She was the youngest of four children of a mother from Morocco and a “culturally Polish” father from Ukraine—“a typical Israeli melting pot,” she says. As a tall, fair-skinned teenager with European features, she recalls feeling estranged from other neighborhood kids because she looked so German.

Schiller remembers exactly when her curiosity about the nature of human memory began. She was in the sixth grade, and it was the annual Holocaust Memorial Day in Israel. For a school project, she asked her father about his memories as a Holocaust survivor, and he shrugged off her questions. She was especially puzzled by her father’s behavior at 11 a.m., when a simultaneous eruption of sirens throughout Israel signals the start of a national moment of silence. While everyone else in the country stood up to honor the victims of genocide, he stubbornly remained seated at the kitchen table as the sirens blared, drinking his coffee and reading the newspaper.

“The Germans did something to my dad, but I don’t know what because he never talks about it,” Schiller told a packed audience in 2010 at The Moth, a storytelling event.

During her compulsory service in the Israeli army, she organized scientific and educational conferences, which led to studies in psychology and philosophy at Tel Aviv University; during that same period, she procured a set of drums and formed her own Hebrew rock band, the Rebellion Movement. Schiller went on to receive a PhD in psychobiology from Tel Aviv University in 2004. That same year, she recalls, she saw the movie Eternal Sunshine of the Spotless Mind, in which a young man undergoes treatment with a drug that erases all memories of a former girlfriend and their painful breakup. Schiller heard (mistakenly, it turns out) that the premise of the movie had been based on research conducted by Joe LeDoux, and she eventually applied to NYU for a postdoctoral fellowship.

In science as in memory, timing is everything. Schiller arrived in New York just in time for the second coming of memory reconsolidation in neuroscience.

Altering the story

The table had been set for Schiller’s work on memory modification in 2000, when Karim Nader, a postdoc in LeDoux’s lab, suggested an experiment testing the effect of a drug on the formation of fear memories in rats. LeDoux told Nader in no uncertain terms that he thought the idea was a waste of time and money. Nader did the experiment anyway. It ended up getting published in Nature and sparked a burst of renewed scientific interest in memory reconsolidation (see “Manipulating Memory,” May/June 2009).

The rats had undergone classic fear training—in an unpleasant twist on Pavlovian conditioning, they had learned to associate an auditory tone with an electric shock. But right after the animals retrieved the fearsome memory (the researchers knew they had done so because they froze when they heard the tone), Nader injected a drug that blocked protein synthesis directly into their amygdala, the part of the brain where fear memories are believed to be stored. Surprisingly, that appeared to pave over the fearful association. The rats no longer froze in fear of the shock when they heard the sound cue.

Decades of research had established that long-term memory consolidation requires the synthesis of proteins in the brain’s memory pathways, but no one knew that protein synthesis was required after the retrieval of a memory as well—which implied that the memory was being consolidated then, too. Nader’s experiments also showed that blocking protein synthesis prevented the animals from recalling the fearsome memory only if they received the drug at the right time, shortly after they were reminded of the fearsome event. If Nader waited six hours before giving the drug, it had no effect and the original memory remained intact. This was a big biochemical clue that at least some forms of memories essentially had to be neurally rewritten every time they were recalled.

When Schiller arrived at NYU in 2005, she was asked by Elizabeth Phelps, who was spearheading memory research in humans, to extend Nader’s findings and test the potential of a drug to block fear memories. The drug used in the rodent experiment was much too toxic for human use, but a class of antianxiety drugs known as beta-adrenergic antagonists (or, in common parlance, “beta blockers”) had potential; among these drugs was propranolol, which had previously been approved by the FDA for the treatment of panic attacks and stage fright. Schiller immediately set out to test the effect of propranolol on memory in humans, but she never actually performed the experiment because of prolonged delays in getting institutional approval for what was then a pioneering form of human experimentation. “It took four years to get approval,” she recalls, “and then two months later, they took away the approval again. My entire postdoc was spent waiting for this experiment to be approved.” (“It still hasn’t been approved!” she adds.)

While waiting for the approval that never came, Schiller began to work on a side project that turned out to be even more interesting. It grew out of an offhand conversation with a colleague about some anomalous data described at a meeting of LeDoux’s lab: a group of rats “didn’t behave as they were supposed to” in a fear experiment, Schiller says.

The data suggested that a fear memory could be disrupted in animals even without the use of a drug that blocked protein synthesis. Schiller used the kernel of this idea to design a set of fear experiments in humans, while Marie-H. Monfils, a member of the LeDoux lab, simultaneously pursued a parallel line of experimentation in rats. In the human experiments, volunteers were shown a blue square on a computer screen and then given a shock. Once the blue square was associated with an impending shock, the fear memory was in place. Schiller went on to show that if she repeated the sequence that produced the fear memory the following day but broke the association within a narrow window of time—that is, showed the blue square without delivering the shock—this new information was incorporated into the memory.

Here, too, the timing was crucial. If the blue square that wasn’t followed by a shock was shown within 10 minutes of the initial memory recall, the human subjects reconsolidated the memory without fear. If it happened six hours later, the initial fear memory persisted. Put another way, intervening during the brief window when the brain was rewriting its memory offered a chance to revise the initial memory itself while diminishing the emotion (fear) that came with it. By mastering the timing, the NYU group had essentially created a scenario in which humans could rewrite a fearsome memory and give it an unfrightening ending. And this new ending was robust: when Schiller and her colleagues called their subjects back into the lab a year later, they were able to show that the fear associated with the memory was still blocked.
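
The timing rule at the heart of this protocol is simple enough to sketch in a few lines of code. The snippet below is only an illustration of the logic described above, not the NYU procedure itself; in particular, the window_minutes cutoff is an assumed parameter, chosen so that the 10-minute and six-hour conditions reported in the study fall on opposite sides of it.

```python
# A minimal sketch of the reconsolidation timing rule described above.
# Not the actual NYU protocol; the window length is an assumed parameter.
from dataclasses import dataclass


@dataclass
class FearMemory:
    cue: str              # e.g., "blue square"
    fearful: bool = True  # does recalling the cue still trigger fear?


def extinction_after_recall(memory: FearMemory, minutes_since_recall: float,
                            window_minutes: float = 60.0) -> FearMemory:
    """Present the cue without the shock at some delay after the memory was
    recalled. Inside the (assumed) reconsolidation window, the safe experience
    is written into the old memory; outside it, the fear memory persists."""
    if minutes_since_recall <= window_minutes:
        return FearMemory(cue=memory.cue, fearful=False)  # memory rewritten
    return memory  # window has closed; original fear memory stays intact


m = FearMemory("blue square")
print(extinction_after_recall(m, minutes_since_recall=10).fearful)   # False
print(extinction_after_recall(m, minutes_since_recall=360).fearful)  # True
```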

The study, published in Nature in 2010, made clear that reconsolidation of memory didn’t occur only in rats.

Read the entire article here.

The Myth of Martyrdom

Unfortunately, our world is still populated by a few people who will willingly shed the blood of others while destroying themselves. Understanding the personalities and motivations of these people may one day help eliminate this scourge. In the meantime, psychologists ponder whether they are psychologically normal but politically crazed fanatics, or deeply troubled individuals.

Adam Lankford, a criminal justice professor, asserts that suicide terrorists are merely unhappy, damaged individuals who want to die. In his book, The Myth of Martyrdom, Lankford rejects the popular view of suicide terrorists as calculating, radicalized individuals who will do anything for a cause.

From the New Scientist:

In the aftermath of 9/11, terrorism experts in the US made a bold and counter-intuitive claim: the suicide terrorists were psychologically normal. When it came to their state of mind, they were not so different from US Special Forces agents. Just because they deliberately crashed planes into buildings, that didn’t make them suicidal – it simply meant they were willing to die for a cause they believed in.

This argument was stated over and over and became the orthodoxy. “We’d like to believe these are crazed fanatics,” said CIA terror expert Jerrold Post in 2006. “Not true… as individuals, this is normal behaviour.”

I disagree. Far from being psychologically normal, suicide terrorists are suicidal. They kill themselves to escape crises or unbearable pain. Until we recognise this, attempts to stop the attacks are doomed to fail.

When I began studying suicide terrorists, I had no agenda, just curiosity. My hunch was that the official version was true, but I kept an open mind.

Then I began watching martyrdom videos and reading case studies, letters and diary entries. What I discovered was a litany of fear, failure, guilt, shame and rage. In my book The Myth of Martyrdom, I present evidence that far from being normal, these self-destructive killers have often suffered from serious mental trauma and always demonstrate at least a few behaviours on the continuum of suicidality, such as suicide ideation, a suicide plan or previous suicide attempts.

Why did so many scholars come to the wrong conclusions? One key reason is that they believe what the bombers, their relatives and friends, and their terrorist recruiters say, especially when their accounts are consistent.

In 2007, for example, Ellen Townsend of the University of Nottingham, UK, published an influential article called Suicide Terrorists: Are they suicidal? Her answer was a resounding no (Suicide and Life-Threatening Behavior, vol 37, p 35).

How did she come to this conclusion? By reviewing five empirical reports: three that depended largely upon interviews with deceased suicide terrorists’ friends and family, and two based on interviews of non-suicide terrorists. She took what they said at face value.

I think this was a serious mistake. All of these people have strong incentives to lie.

Take the failed Palestinian suicide bomber Wafa al-Biss, who attempted to blow herself up at an Israeli checkpoint in 2005. Her own account and those of her parents and recruiters tell the same story: that she acted for political and religious reasons.

These accounts are highly suspect. Terrorist leaders have strategic reasons for insisting that attackers are not suicidal, but instead are carrying out glorious martyrdom operations. Traumatised parents want to believe that their children were motivated by heroic impulses. And suicidal people commonly deny that they are suicidal and are often able to hide their true feelings from the world.

This is especially true of fundamentalist Muslims. Suicide is explicitly condemned in Islam and guarantees an eternity in hell. Martyrs, on the other hand, can go to heaven.

Most telling of all, it later emerged that al-Biss had suffered from mental health problems most of her life and had made two previous suicide attempts.

Her case is far from unique. Consider Qari Sami, who blew himself up in a café in Kabul, Afghanistan, in 2005. He walked in – and kept on walking, past crowded tables and into the bathroom at the back where he closed the door and detonated his belt. He killed himself and two others, but could easily have killed more. It later emerged that he was on antidepressants.

Read the entire article here.

Bella Italia: It’s All in the Hands

[tube]DW91Ec4DYkU[/tube]

Italians are famous and infamous for their eloquent and vigorous hand gestures. Psychology professor Isabella Poggi, of Roma Tre University, has cataloged about 250 hand gestures used by Italians in everyday conversation. The gestures are used to reinforce a simple statement or emotion, or to convey quite complex meanings. Italy would not be the same without them.

Our favorite hand gesture is fingers and thumb pinched in the form of a spire, often used to mean “what on earth are you talking about?”; moving the hand slightly up and down while doing this adds emphasis and demands explanation.

For a visual lexicon of the most popular gestures jump here.

From the New York Times:

In the great open-air theater that is Rome, the characters talk with their hands as much as their mouths. While talking animatedly on their cellphones or smoking cigarettes or even while downshifting their tiny cars through rush-hour traffic, they gesticulate with enviably elegant coordination.

From the classic fingers pinched against the thumb that can mean “Whaddya want from me?” or “I wasn’t born yesterday” to a hand circled slowly, indicating “Whatever” or “That’ll be the day,” there is an eloquence to the Italian hand gesture. In a culture that prizes oratory, nothing deflates airy rhetoric more swiftly.

Some gestures are simple: the side of the hand against the belly means hungry; the index finger twisted into the cheek means something tastes good; and tapping one’s wrist is a universal sign for “hurry up.” But others are far more complex. They add an inflection — of fatalism, resignation, world-weariness — that is as much a part of the Italian experience as breathing.

Two open hands can ask a real question, “What’s happening?” But hands placed in prayer become a sort of supplication, a rhetorical question: “What do you expect me to do about it?” Ask when a Roman bus might arrive, and the universal answer is shrugged shoulders, an “ehh” that sounds like an engine turning over and two raised hands that say, “Only when Providence allows.”

To Italians, gesturing comes naturally. “You mean Americans don’t gesture? They talk like this?” asked Pasquale Guarrancino, a Roman taxi driver, freezing up and placing his arms flat against his sides. He had been sitting in his cab talking with a friend outside, each moving his hands in elaborate choreography. Asked to describe his favorite gesture, he said it was not fit for print.

In Italy, children and adolescents gesture. The elderly gesture. Some Italians joke that gesturing may even begin before birth. “In the ultrasound, I think the baby is saying, ‘Doctor, what do you want from me?’ ” said Laura Offeddu, a Roman and an elaborate gesticulator, as she pinched her fingers together and moved her hand up and down.

On a recent afternoon, two middle-aged men in elegant dark suits were deep in conversation outside the Giolitti ice cream parlor in downtown Rome, gesturing even as they held gelato in cones. One, who gave his name only as Alessandro, noted that younger people used a gesture that his generation did not: quotation marks to signify irony.

Sometimes gesturing can get out of hand. Last year, Italy’s highest court ruled that a man who inadvertently struck an 80-year-old woman while gesticulating in a piazza in the southern region Puglia was liable for civil damages. “The public street isn’t a living room,” the judges ruled, saying, “The habit of accompanying a conversation with gestures, while certainly licit, becomes illicit” in some contexts.

In 2008, Umberto Bossi, the colorful founder of the conservative Northern League, raised his middle finger during the singing of Italy’s national anthem. But prosecutors in Venice determined that the gesture, while obscene and the cause of widespread outrage, was not a crime.

Gestures have long been a part of Italy’s political spectacle. Former Prime Minister Silvio Berlusconi is a noted gesticulator. When he greeted President Obama and his wife, Michelle, at a meeting of the Group of 20 leaders in September 2009, he extended both hands, palms facing toward himself, and then pinched his fingers as he looked Mrs. Obama up and down — a gesture that might be interpreted as “va-va-voom.”

In contrast, Giulio Andreotti — Christian Democrat, seven-time prime minister and by far the most powerful politician of the Italian postwar era — was famous for keeping both hands clasped in front of him. The subtle, patient gesture functioned as a kind of deterrent, indicating the tremendous power he could deploy if he chose to.

Isabella Poggi, a professor of psychology at Roma Tre University and an expert on gestures, has identified around 250 gestures that Italians use in everyday conversation. “There are gestures expressing a threat or a wish or desperation or shame or pride,” she said. The only thing differentiating them from sign language is that they are used individually and lack a full syntax, Ms. Poggi added.

Far more than quaint folklore, gestures have a rich history. One theory holds that Italians developed them as an alternative form of communication during the centuries when they lived under foreign occupation — by Austria, France and Spain in the 14th through 19th centuries — as a way of communicating without their overlords understanding.

Another theory, advanced by Adam Kendon, the editor in chief of the journal Gesture, is that in overpopulated cities like Naples, gesturing became a way of competing, of marking one’s territory in a crowded arena. “To get attention, people gestured and used their whole bodies,” Ms. Poggi said, explaining the theory.

Read the entire article here.

Video courtesy of New York Times.

Pretending to be Smart

Have you ever taken a date to a cerebral movie or the opera? Have you ever taken a classic work of literature to read at the beach? If so, you are not alone. But why are you doing it?

From the Telegraph:

Men try to impress their friends almost twice as much as women do by quoting Shakespeare and pretending to like jazz to seem more clever.

A fifth of all adults admitted they have tried to impress others by making out they are more cultured than they really are, but this rises to 41 per cent in London.

Scotland is the least pretentious country as only 14 per cent of the 1,000 UK adults surveyed had faked their intelligence there, according to Ask Jeeves research.

Typical methods of trying to seem cleverer ranged from deliberately reading a ‘serious’ novel on the beach and passing off other people’s witty remarks as one’s own, to talking loudly about politics in front of others.

Two thirds put on the pretensions for friends, while 36 per cent did it to seem smarter in their workplace and 32 per cent tried to impress a potential partner.

One in five swapped their usual holiday read for something more serious on the beach and one in four went to an art gallery to look more cultured.

When it came to music tastes, 20 per cent have pretended to prefer Beethoven to Beyonce and many have referenced operas they have never seen.

A spokesman for Ask Jeeves said: “We were surprised by just how many people think they should go to such lengths in order to impress someone else.

“They obviously think they will make a better impression if they pretend to like Beethoven rather than admit they listen to Beyonce or read The Spectator rather than Loaded.

“Social media and the internet means it is increasingly easy to present this kind of false image about themselves.

“But in the end, if they are really going to be liked then it is going to be for the person they really are rather than the person they are pretending to be.”

Social media also plays a large part with people sharing Facebook posts on politics or re-tweeting clever tweets to raise their intellectual profile.

Men were the biggest offenders, with 26 per cent of men admitting to the acts of pretence compared to 14 per cent of women.

Top things people have done to seem smarter:

Repeated someone else’s joke as your own

Gone to an art gallery

Listened to classical music in front of others

Read a ‘serious’ book on the beach

Re-tweeted a clever tweet

Talked loudly about politics in front of others

Read a ‘serious’ magazine on public transport

Shared an intellectual article on Facebook

Quoted Shakespeare

Pretended to know about wine

Worn glasses with clear lenses

Mentioned an opera you’d ‘seen’

Pretended to like jazz

Read the entire article here.

Image: Opera. Courtesy of the New York Times.

What Makes Us Human

Psychologist Jerome Kagan leaves no stone unturned in his quest to determine what makes us distinctly human. His latest book, The Human Spark: The science of human development, comes up with some fresh conclusions.

From the New Scientist:

What is it that makes humans special, that sets our species apart from all others? It must be something connected with intelligence – but what exactly? People have asked these questions for as long as we can remember. Yet the more we understand the minds of other animals, the more elusive the answers to these questions have become.

The latest person to take up the challenge is Jerome Kagan, a former professor at Harvard University. And not content with pinning down the “human spark” in the title of his new book, he then tries to explain what makes each of us unique.

As a pioneer in the science of developmental psychology, Kagan has an interesting angle. A life spent investigating how a fertilised egg develops into an adult human being provides him with a rich understanding of the mind and how it differs from that of our closest animal cousins.

Human and chimpanzee infants behave in remarkably similar ways for the first four to six months, Kagan notes. It is only during the second year of life that we begin to diverge profoundly. As the toddler’s frontal lobes expand and the connections between the brain sites increase, the human starts to develop the talents that set our species apart. These include “the ability to speak a symbolic language, infer the thoughts and feelings of others, understand the meaning of a prohibited action, and become conscious of their own feelings, intentions and actions”.

Becoming human, as Kagan describes it, is a complex dance of neurobiological changes and psychological advances. All newborns possess the potential to develop the universal human properties “inherent in their genomes”. What makes each of us individual is the unique backdrop of genetics, epigenetics, and the environment against which this development plays out.

Kagan’s research highlighted the role of temperament, which he notes is underpinned by at least 1500 genes, affording huge individual variation. This variation, in turn, influences the way we respond to environmental factors including family, social class, culture and historical era.

But what of that human spark? Kagan seems to locate it in a quartet of qualities: language, consciousness, inference and, especially, morality. This is where things start to get weird. He would like you to believe that morality is uniquely human, which, of course, bolsters his argument. Unfortunately, it also means he has to deny that a rudimentary morality has evolved in other social animals whose survival also depends on cooperation.

Instead, Kagan argues that morality is a distinctive property of our species, just as “fish do not have lungs”. No mention of evolution. So why are we moral, then? “The unique biology of the human brain motivates children and adults to act in ways that will allow them to arrive at the judgement that they are a good person.” That’s it?

Warming to his theme, Kagan argues that in today’s world, where traditional moral standards have been eroded and replaced by a belief in the value of wealth and celebrity, it is increasingly difficult to see oneself as a good person. He thinks this mismatch between our moral imperative and Western culture helps explain the “modern epidemic” of mental illness. Unwittingly, we have created an environment in which the human spark is fading.

Some of Kagan’s ideas are even more outlandish, surely none more so than the assertion that a declining interest in natural sciences may be a consequence of mothers becoming less sexually mysterious than they once were. More worryingly, he doesn’t seem to believe that humans are subject to the same forces of evolution as other animals.

Read the entire article here.

Media Multi-Tasking, School Work and Poor Memory

It’s official — teens can’t stay off social media for more than 15 minutes. It’s no secret that many kids aged between 8 and 18 spend most of their time texting, tweeting and checking their real-time social status. The profound psychological and sociological consequences of this behavior will only start to become apparent ten to fifteen years from now. In the meantime, researchers are finding a general degradation in kids’ memory skills from using social media and multi-tasking while studying.

From Slate:

Living rooms, dens, kitchens, even bedrooms: Investigators followed students into the spaces where homework gets done. Pens poised over their “study observation forms,” the observers watched intently as the students—in middle school, high school, and college, 263 in all—opened their books and turned on their computers.

For a quarter of an hour, the investigators from the lab of Larry Rosen, a psychology professor at California State University–Dominguez Hills, marked down once a minute what the students were doing as they studied. A checklist on the form included: reading a book, writing on paper, typing on the computer—and also using email, looking at Facebook, engaging in instant messaging, texting, talking on the phone, watching television, listening to music, surfing the Web. Sitting unobtrusively at the back of the room, the observers counted the number of windows open on the students’ screens and noted whether the students were wearing earbuds.

Although the students had been told at the outset that they should “study something important, including homework, an upcoming examination or project, or reading a book for a course,” it wasn’t long before their attention drifted: Students’ “on-task behavior” started declining around the two-minute mark as they began responding to arriving texts or checking their Facebook feeds. By the time the 15 minutes were up, they had spent only about 65 percent of the observation period actually doing their schoolwork.
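
Since the headline number is just a tally of once-a-minute observation codes, it can be reproduced with a few lines of arithmetic. The sketch below uses invented per-minute codes (the labels and values are hypothetical, not Rosen’s data) simply to show how an on-task percentage like the 65 percent figure is computed.

```python
# Tallying a hypothetical 15-minute "study observation form".
# The per-minute codes below are invented for illustration, not real data.
observed_minutes = [
    "on_task", "on_task", "texting", "on_task", "facebook",
    "on_task", "texting", "on_task", "music", "on_task",
    "on_task", "texting", "on_task", "on_task", "on_task",
]

on_task = sum(1 for code in observed_minutes if code == "on_task")
share = on_task / len(observed_minutes)
print(f"On task for {on_task} of {len(observed_minutes)} minutes ({share:.0%})")
# Prints: On task for 10 of 15 minutes (67%), close to the roughly 65% Rosen observed.
```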

“We were amazed at how frequently they multitasked, even though they knew someone was watching,” Rosen says. “It really seems that they could not go for 15 minutes without engaging their devices,” he adds. “It was kind of scary, actually.”

Concern about young people’s use of technology is nothing new, of course. But Rosen’s study, published in the May issue of Computers in Human Behavior, is part of a growing body of research focused on a very particular use of technology: media multitasking while learning. Attending to multiple streams of information and entertainment while studying, doing homework, or even sitting in class has become common behavior among young people—so common that many of them rarely write a paper or complete a problem set any other way.

But evidence from psychology, cognitive science, and neuroscience suggests that when students multitask while doing schoolwork, their learning is far spottier and shallower than if the work had their full attention. They understand and remember less, and they have greater difficulty transferring their learning to new contexts. So detrimental is this practice that some researchers are proposing that a new prerequisite for academic and even professional success—the new marshmallow test of self-discipline—is the ability to resist a blinking inbox or a buzzing phone.

The media multitasking habit starts early. In “Generation M2: Media in the Lives of 8- to 18-Year-Olds,” a survey conducted by the Kaiser Family Foundation and published in 2010, almost a third of those surveyed said that when they were doing homework, “most of the time” they were also watching TV, texting, listening to music, or using some other medium. The lead author of the study was Victoria Rideout, then a vice president at Kaiser and now an independent research and policy consultant. Although the study looked at all aspects of kids’ media use, Rideout told me she was particularly troubled by its findings regarding media multitasking while doing schoolwork.

“This is a concern we should have distinct from worrying about how much kids are online or how much kids are media multitasking overall. It’s multitasking while learning that has the biggest potential downside,” she says. “I don’t care if a kid wants to tweet while she’s watching American Idol, or have music on while he plays a video game. But when students are doing serious work with their minds, they have to have focus.”

For older students, the media multitasking habit extends into the classroom. While most middle and high school students don’t have the opportunity to text, email, and surf the Internet during class, studies show the practice is nearly universal among students in college and professional school. One large survey found that 80 percent of college students admit to texting during class; 15 percent say they send 11 or more texts in a single class period.

During the first meeting of his courses, Rosen makes a practice of calling on a student who is busy with his phone. “I ask him, ‘What was on the slide I just showed to the class?’ The student always pulls a blank,” Rosen reports. “Young people have a wildly inflated idea of how many things they can attend to at once, and this demonstration helps drive the point home: If you’re paying attention to your phone, you’re not paying attention to what’s going on in class.” Other professors have taken a more surreptitious approach, installing electronic spyware or planting human observers to record whether students are taking notes on their laptops or using them for other, unauthorized purposes.

Read the entire article here.

Image courtesy of Examiner.

The Advantages of Shyness

Behavioral scientists have confirmed what shy people of the world have known for quite some time — that timidity and introversion can be beneficial traits. Yes, shyness is not a disorder!

Several studies of humans and animals show that shyness and assertiveness are both beneficial, depending on the situational context. Researchers have shown that evolution favors both types of personality and, in fact, often rewards adaptability over pathological extremes at either end of the behavioral spectrum.

From the New Scientist:

“Don’t be shy!” It’s an oft-heard phrase in modern western cultures where go-getters and extroverts appear to have an edge and where raising confident, assertive children sits high on the priority list for many parents. Such attitudes are understandable. Timidity really does hold individuals back. “Shy people start dating later, have sex later, get married later, have children later and get promoted later,” says Bernardo Carducci, director of the Shyness Research Institute at Indiana University Southeast in New Albany. In extreme cases shyness can even be pathological, resulting in anxiety attacks and social phobia.

In recent years it has emerged that we are not the only creatures to experience shyness. In fact, it is one of the most obvious character traits in the animal world, found in a wide variety of species from sea anemones and spiders to birds and sheep. But it is also becoming clear that in the natural world fortune doesn’t always favour the bold. Sometimes the shy, cautious individuals are luckier in love and lifespan. The inescapable conclusion is that there is no one “best” personality – each has benefits in different situations – so evolution favours both.

Should we take a lesson from these findings and re-evaluate what it means to be a shy human? Does shyness have survival value for us too? Some researchers think so and are starting to find that people who are shy, sensitive and even anxious have some surprising advantages over more go-getting types. Perhaps it is time to ditch our negative attitude to shyness and accept that it is as valuable as extroversion. Carducci certainly thinks so. “Think about what it would be like if everybody was very bold,” he says. “What would your daily life be like if everybody you encountered was like Lady Gaga?”

One of the first steps in the rehabilitation of shyness came in the 1990s, from work on salamanders. An interest in optimality – the idea that animals are as efficient as possible in their quest for food, mates and resources – led Andrew Sih at the University of California, Davis, to study the behaviour of sunfish and their prey, larval salamanders. In his experiments, he couldn’t help noticing differences between individual salamanders. Some were bolder and more active than others. They ate more and grew faster than their shyer counterparts, but there was a downside. When sunfish were around, the bold salamanders were just “blundering out there and not actually doing the sort of smart anti-predator behaviour that simple optimality theory predicted they would do”, says Sih. As a result, they were more likely to be gobbled up than their shy counterparts.

Until then, the idea that animals have personalities – consistent differences in behaviour between individuals – was considered controversial. Sih’s research forced a rethink. It also spurred further studies, to the extent that today the so-called “shy-bold continuum” has been identified in more than 100 species. In each of these, individuals range from highly “reactive” to highly “proactive”: reactive types being shy, timid, risk-averse and slow to explore novel environments, whereas proactive types are bold, aggressive, exploratory and risk-prone.

Why would these two personality types exist in nature? Sih’s study holds the key. Bold salamander larvae may risk being eaten, but their fast growth is a distinct advantage in the small streams they normally inhabit, which may dry up before more cautious individuals can reach maturity. In other words, each personality has advantages and disadvantages depending on the circumstances. Since natural environments are complex and constantly changing, natural selection may favour first one and then the other or even both simultaneously.

The idea is illustrated even more convincingly by studies of a small European bird, the great tit. The research, led by John Quinn at University College Cork in Ireland, involved capturing wild birds and putting each separately into a novel environment to assess how proactive or reactive it was. Some hunkered down in the fake tree provided and stayed there for the entire 8-minute trial; others immediately began exploring every nook and cranny of the experimental room. The birds were then released back into the wild, to carry on with the business of surviving and breeding. “If you catch those same individuals a year later, they tend to do more or less the same thing,” says Quinn. In other words, exploration is a consistent personality trait. What’s more, by continuously monitoring the birds, a team led by Niels Dingemanse at the Max Planck Institute for Ornithology in Seewiesen, Germany, observed that in certain years the environment favours bold individuals – more survive and they produce more chicks than other birds – whereas in other years the shy types do best.

A great tit’s propensity to explore is usually similar to that of its parents and a genetic component of risk-taking behaviour has been found in this and other species. Even so, nurture seems to play a part in forming animal personalities too (see “Nurturing Temperament”). Quinn’s team has also identified correlations between exploring and key survival behaviours: the more a bird likes to explore, the more willing it is to disperse, take risks and act aggressively. In contrast, less exploratory individuals were better at solving problems to find food.

Read the entire article following the jump.

Image courtesy of Psychology Today.

Moist and Other Words We Hate

Some words give us the creeps: they raise the hair on the back of our necks, make us squirm and give us an internal shudder. “Moist” is such a word.

From Slate:

The George Saunders story “Escape From Spiderhead,” included in his much praised new book Tenth of December, is not for the squeamish or the faint of heart. The sprawling, futuristic tale delves into several potentially unnerving topics: suicide, sex, psychotropic drugs. It includes graphic scenes of self-mutilation. It employs the phrases “butt-squirm,” “placental blood,” and “thrusting penis.” At one point, Saunders relates a conversation between two characters about the application of medicinal cream to raw, chafed genitals.

Early in the story, there is a brief passage in which the narrator, describing a moment of postcoital amorousness, says, “Everything seemed moist, permeable, sayable.” This sentence doesn’t really stand out from the rest—in fact, it’s one of the less conspicuous sentences in the story. But during a recent reading of “Escape From Spiderhead” in Austin, Texas, Saunders says he encountered something unexpected. “I’d texted a cousin of mine who was coming with her kids (one of whom is in high school) just to let her know there was some rough language,” he recalls. “Afterwards she said she didn’t mind fu*k, but hated—wait for it—moist. Said it made her a little physically ill. Then I went on to Jackson, read there, and my sister Jane was in the audience—and had the same reaction. To moist.”

Mr. Saunders, say hello to word aversion.

It’s about to get really moist in here. But first, some background is in order. The phenomenon of word aversion—seemingly pedestrian, inoffensive words driving some people up the wall—has garnered increasing attention over the past decade or so. In a recent post on Language Log, University of Pennsylvania linguistics professor Mark Liberman defined the concept as “a feeling of intense, irrational distaste for the sound or sight of a particular word or phrase, not because its use is regarded as etymologically or logically or grammatically wrong, nor because it’s felt to be over-used or redundant or trendy or non-standard, but simply because the word itself somehow feels unpleasant or even disgusting.”

So we’re not talking about hating how some people say laxadaisical instead of lackadaisical or wanting to vigorously shake teenagers who can’t avoid using the word like between every other word of a sentence. If you can’t stand the word tax because you dislike paying taxes, that’s something else, too. (When recently asked about whether he harbored any word aversions, Harvard University cognition and education professor Howard Gardner offered up webinar, noting that these events take too much time to set up, often lack the requisite organization, and usually result in “a singularly unpleasant experience.” All true, of course, but that sort of antipathy is not what word aversion is all about.)

Word aversion is marked by strong reactions triggered by the sound, sight, and sometimes even the thought of certain words, according to Liberman. “Not to the things that they refer to, but to the word itself,” he adds. “The feelings involved seem to be something like disgust.”

Participants on various message boards and online forums have noted serious aversions to, for instance, squab, cornucopia, panties, navel, brainchild, crud, slacks, crevice, and fudge, among numerous others. Ointment, one Language Log reader noted in 2007, “has the same mouth-feel as moist, yet it’s somehow worse.” In response to a 2009 post on the subject by Ben Zimmer, one commenter confided: “The word meal makes me wince. Doubly so when paired with hot.” (Nineteen comments later, someone agreed, declaring: “Meal is a repulsive word.”) In many cases, real-life word aversions seem no less bizarre than when the words mattress and tin induce freak-outs on Monty Python’s Flying Circus. (The Monty Python crew knew a thing or two about annoying sounds.)

Jason Riggle, a professor in the department of linguistics at the University of Chicago, says word aversions are similar to phobias. “If there is a single central hallmark to this, it’s probably that it’s a more visceral response,” he says. “The [words] evoke nausea and disgust rather than, say, annoyance or moral outrage. And the disgust response is triggered because the word evokes a highly specific and somewhat unusual association with imagery or a scenario that people would typically find disgusting—but don’t typically associate with the word.” These aversions, Riggle adds, don’t seem to be elicited solely by specific letter combinations or word characteristics. “If we collected enough of [these words], it might be the case that the words that fall in this category have some properties in common,” he says. “But it’s not the case that words with those properties in common always fall in the category.”

So back to moist. If pop cultural references, Internet blog posts, and social media are any indication, moist reigns supreme in its capacity to disgust a great many of us. Aversion to the word has popped up on How I Met Your Mother and Dead Like Me. VH1 declared that using the word moist is enough to make a man “undateable.” In December, Huffington Post’s food section published a piece suggesting five alternatives to the word moist so the site could avoid its usage when writing about various cakes. Readers of The New Yorker flocked to Facebook and Twitter to choose moist as the one word they would most like to be eliminated from the English language. In a survey of 75 Mississippi State University students from 2009, moist placed second only to vomit as the ugliest word in the English language. In a 2011 follow-up survey of 125 students, moist pulled into the ugly-word lead—vanquishing a greatest hits of gross that included phlegm, ooze, mucus, puke, scab, and pus. Meanwhile, there are 7,903 people on Facebook who like the “interest” known as “I Hate the Word Moist.” (More than 5,000 other Facebook users give the thumbs up to three different moist-hatred Facebook pages.)

Being grossed out by the word moist is not beyond comprehension. It’s squishy-seeming, and, to some, specifically evocative of genital regions and undergarments. These qualities are not unusual when it comes to word aversion. Many hated words refer to “slimy things, or gross things, or names for garments worn in potentially sexual areas, or anything to do with food, or suckling, or sexual overtones,” says Riggle. But other averted words are more confounding, notes Liberman. “There is a list of words that seem to have sexual connotations that are among the words that elicit this kind of reaction—moist being an obvious one,” he says. “But there are other words like luggage, and pugilist, and hardscrabble, and goose pimple, and squab, and so on, which I guess you could imagine phonic associations between those words and something sexual, but it certainly doesn’t seem obvious.”

So then the question becomes: What is it about certain words that makes certain people want to hurl?

Riggle thinks the phenomenon may be dependent on social interactions and media coverage. “Given that, as far back as the aughts, there were comedians making jokes about hating [moist], people who were maybe prone to have that kind of reaction to one of these words, surely have had it pointed out to them that it’s an icky word,” he says. “So, to what extent is it really some sort of innate expression that is independently arrived at, and to what extent is it sort of socially transmitted? Disgust is really a very social emotion.”

And in an era of YouTube, Twitter, Vine, BuzzFeed top-20 gross-out lists, and so on, trends, even the most icky ones, spread fast. “There could very well be a viral aspect to this, where either through the media or just through real-world personal connections, the reaction to some particular word—for example, moist—spreads,” says Liberman. “But that’s the sheerest speculation.”

Words do have the power to disgust and repulse, though—that, at least, has been demonstrated in scholarly investigations. Natasha Fedotova, a Ph.D. student studying psychology at the University of Pennsylvania, recently conducted research examining the extent to which individuals connect the properties of an especially repellent thing to the word that represents it. “For instance,” she says, “the word rat, which stands for a disgusting animal, can contaminate an edible object [such as water] if the two touch. This result cannot be explained solely in terms of the tendency of the word to act as a reminder of the disgusting entity because the effect depends on direct physical contact with the word.” Put another way, if you serve people who are grossed out by rats Big Macs on plates that have the word rat written on them, some people will be less likely to want to eat the portion of the burger that touched the word. Humans, in these instances, go so far as to treat gross-out words “as though they can transfer negative properties through physical contact,” says Fedotova.

Product marketers and advertisers are, not surprisingly, well aware of these tendencies, even if they haven’t read about word aversion (and even though they’ve been known to slip up on the word usage front from time to time, to disastrous effect). George Tannenbaum, an executive creative director at the advertising agency R/GA, says those responsible for creating corporate branding strategies know that consumers are an easily skeeved-out bunch. “Our job as communicators and agents is to protect brands from their own linguistic foibles,” he says. “Obviously there are some words that are just ugly sounding.”

Sometimes, because the stakes are so high, Tannenbaum says clients can be risk averse to an extreme. He recalled working on an ad for a health club that included the word pectoral, which the client deemed to be dangerously close to the word pecker. In the end, after much consideration, they didn’t want to risk any pervy connotations. “We took it out,” he says.

Read the entire article following the jump.

Image courtesy of keep-calm-o-matic.

The Benefits of Human Stupidity

Human intelligence is a wonderful thing. At both the individual and collective level it drives our complex communication, our fundamental discoveries and inventions, and impressive and accelerating progress. Intelligence allows us to innovate, to design, to build; and it underlies our superior capacity, over other animals, for empathy, altruism, art, and social and cultural evolution. Yet, despite our intellectual abilities and seemingly limitless potential, we humans still do lots of stupid things. Why is this?

From New Scientist:

“EARTH has its boundaries, but human stupidity is limitless,” wrote Gustave Flaubert. He was almost unhinged by the fact. Colourful fulminations about his fatuous peers filled his many letters to Louise Colet, the French poet who inspired his novel Madame Bovary. He saw stupidity everywhere, from the gossip of middle-class busybodies to the lectures of academics. Not even Voltaire escaped his critical eye. Consumed by this obsession, he devoted his final years to collecting thousands of examples for a kind of encyclopedia of stupidity. He died before his magnum opus was complete, and some attribute his sudden death, aged 58, to the frustration of researching the book.

Documenting the extent of human stupidity may itself seem a fool’s errand, which could explain why studies of human intellect have tended to focus on the high end of the intelligence spectrum. And yet, the sheer breadth of that spectrum raises many intriguing questions. If being smart is such an overwhelming advantage, for instance, why aren’t we all uniformly intelligent? Or are there drawbacks to being clever that sometimes give slower thinkers the upper hand? And why are even the smartest people prone to – well, stupidity?

It turns out that our usual measures of intelligence – particularly IQ – have very little to do with the kind of irrational, illogical behaviours that so enraged Flaubert. You really can be highly intelligent, and at the same time very stupid. Understanding the factors that lead clever people to make bad decisions is beginning to shed light on many of society’s biggest catastrophes, including the recent economic crisis. More intriguingly, the latest research may suggest ways to evade a condition that can plague us all.

The idea that intelligence and stupidity are simply opposing ends of a single spectrum is a surprisingly modern one. The Renaissance theologian Erasmus painted Folly – or Stultitia in Latin – as a distinct entity in her own right, descended from the god of wealth and the nymph of youth; others saw it as a combination of vanity, stubbornness and imitation. It was only in the middle of the 18th century that stupidity became conflated with mediocre intelligence, says Matthijs van Boxsel, a Dutch historian who has written many books about stupidity. “Around that time, the bourgeoisie rose to power, and reason became a new norm with the Enlightenment,” he says. “That put every man in charge of his own fate.”

Modern attempts to study variations in human ability tended to focus on IQ tests that put a single number on someone’s mental capacity. They are perhaps best recognised as a measure of abstract reasoning, says psychologist Richard Nisbett at the University of Michigan in Ann Arbor. “If you have an IQ of 120, calculus is easy. If it’s 100, you can learn it but you’ll have to be motivated to put in a lot of work. If your IQ is 70, you have no chance of grasping calculus.” The measure seems to predict academic and professional success.

Various factors will determine where you lie on the IQ scale. Possibly a third of the variation in our intelligence is down to the environment in which we grow up – nutrition and education, for example. Genes, meanwhile, contribute more than 40 per cent of the differences between two people.

These differences may manifest themselves in our brain’s wiring. Smarter brains seem to have more efficient networks of connections between neurons. That may determine how well someone is able to use their short-term “working” memory to link disparate ideas and quickly access problem-solving strategies, says Jennie Ferrell, a psychologist at the University of the West of England in Bristol. “Those neural connections are the biological basis for making efficient mental connections.”

This variation in intelligence has led some to wonder whether superior brain power comes at a cost – otherwise, why haven’t we all evolved to be geniuses? Unfortunately, evidence is in short supply. For instance, some proposed that depression may be more common among more intelligent people, leading to higher suicide rates, but no studies have managed to support the idea. One of the only studies to report a downside to intelligence found that soldiers with higher IQs were more likely to die during the second world war. The effect was slight, however, and other factors might have skewed the data.

Intellectual wasteland

Alternatively, the variation in our intelligence may have arisen from a process called “genetic drift”, after human civilisation eased the challenges driving the evolution of our brains. Gerald Crabtree at Stanford University in California is one of the leading proponents of this idea. He points out that our intelligence depends on around 2000 to 5000 constantly mutating genes. In the distant past, people whose mutations had slowed their intellect would not have survived to pass on their genes; but Crabtree suggests that as human societies became more collaborative, slower thinkers were able to piggyback on the success of those with higher intellect. In fact, he says, someone plucked from 1000 BC and placed in modern society would be “among the brightest and most intellectually alive of our colleagues and companions” (Trends in Genetics, vol 29, p 1).

This theory is often called the “idiocracy” hypothesis, after the eponymous film, which imagines a future in which the social safety net has created an intellectual wasteland. Although it has some supporters, the evidence is shaky. We can’t easily estimate the intelligence of our distant ancestors, and the average IQ has in fact risen slightly in the immediate past. At the very least, “this disproves the fear that less intelligent people have more children and therefore the national intelligence will fall”, says psychologist Alan Baddeley at the University of York, UK.

In any case, such theories on the evolution of intelligence may need a radical rethink in the light of recent developments, which have led many to speculate that there are more dimensions to human thinking than IQ measures. Critics have long pointed out that IQ scores can easily be skewed by factors such as dyslexia, education and culture. “I would probably soundly fail an intelligence test devised by an 18th-century Sioux Indian,” says Nisbett. Additionally, people with scores as low as 80 can still speak multiple languages and even, in the case of one British man, engage in complex financial fraud. Conversely, high IQ is no guarantee that a person will act rationally – think of the brilliant physicists who insist that climate change is a hoax.

It was this inability to weigh up evidence and make sound decisions that so infuriated Flaubert. Unlike the French writer, however, many scientists avoid talking about stupidity per se – “the term is unscientific”, says Baddeley. However, Flaubert’s understanding that profound lapses in logic can plague the brightest minds is now getting attention. “There are intelligent people who are stupid,” says Dylan Evans, a psychologist and author who studies emotion and intelligence.

Read the entire article after the jump.

Helplessness and Intelligence Go Hand in Hand

From the Wall Street Journal:

Why are children so, well, so helpless? Why did I spend a recent Sunday morning putting blueberry pancake bits on my 1-year-old grandson’s fork and then picking them up again off the floor? And why are toddlers most helpless when they’re trying to be helpful? Augie’s vigorous efforts to sweep up the pancake detritus with a much-too-large broom (“I clean!”) were adorable but not exactly effective.

This isn’t just a caregiver’s cri de coeur—it’s also an important scientific question. Human babies and young children are an evolutionary paradox. Why must big animals invest so much time and energy just keeping the little ones alive? This is especially true of our human young, helpless and needy for far longer than the young of other primates.

One idea is that our distinctive long childhood helps to develop our equally distinctive intelligence. We have both a much longer childhood and a much larger brain than other primates. Restless humans have to learn about more different physical environments than stay-at-home chimps, and with our propensity for culture, we constantly create new social environments. Childhood gives us a protected time to master new physical and social tools, from a whisk broom to a winning comment, before we have to use them to survive.

The usual museum diorama of our evolutionary origins features brave hunters pursuing a rearing mammoth. But a Pleistocene version of the scene in my kitchen, with ground cassava roots instead of pancakes, might be more accurate, if less exciting.

Of course, many scientists are justifiably skeptical about such “just-so stories” in evolutionary psychology. The idea that our useless babies are really useful learners is appealing, but what kind of evidence could support (or refute) it? There’s still controversy, but two recent studies at least show how we might go about proving the idea empirically.

One of the problems with much evolutionary psychology is that it just concentrates on humans, or sometimes on humans and chimps. To really make an evolutionary argument, you need to study a much wider variety of animals. Is it just a coincidence that we humans have both needy children and big brains? Or will we find the same evolutionary pattern in animals who are very different from us? In 2010, Vera Weisbecker of Cambridge University and a colleague found a correlation between brain size and dependence across 52 different species of marsupials, from familiar ones like kangaroos and opossums to more exotic ones like quokkas.

Quokkas are about the same size as Virginia opossums, but baby quokkas nurse for three times as long, their parents invest more in each baby, and their brains are twice as big.

Read the entire article after the jump.

Psst! AIDS Was Created by the U.S. Government

Some believe that AIDS was created by the U.S. Government or bestowed by a malevolent god. Some believe that Neil Armstrong never set foot on the moon, while others believe that Nazis first established a moon base in 1942. Some believe that recent tsunamis were caused by the U.S. military, and that said military is hiding evidence of alien visits in Area 51, Nevada. The latest, of course, is the great conspiracy of climate change, which is apparently fabricated by socialists seeking to destroy the United States. This conspiratorial thinking makes for good reality TV, and it presents wonderful opportunities for psychological research. Why, after all, in the face of overwhelming evidence, widespread consensus and fundamental scientific reasoning, do such ideas and their believers persist?

From Skeptical Science:

There is growing evidence that conspiratorial thinking, also known as conspiracist ideation, is often involved in the rejection of scientific propositions. Conspiracist ideations tend to invoke alternative explanations for the nature or source of the scientific evidence. For example, among people who reject the link between HIV and AIDS, common ideations involve the belief that AIDS was created by the U.S. Government.

My colleagues and I published a paper recently that found evidence for the involvement of conspiracist ideation in the rejection of scientific propositions—from climate change to the link between tobacco and lung cancer, and between HIV and AIDS—among visitors to climate blogs. This was a fairly unsurprising result because it meshed well with previous research and the existing literature on the rejection of science. Indeed, it would have been far more surprising, from a scientific perspective, if the article had not found a link between conspiracist ideation and rejection of science.

Nonetheless, as some readers of this blog may remember, this article engendered considerable controversy.

The article also generated data.

Data, because for social scientists, public statements and publicly expressed ideas constitute data for further research. Cognitive scientists sometimes apply something called “narrative analysis” to understand how people, groups, or societies are organized and how they think.

In the case of the response to our earlier paper, we were struck by the way in which some of the accusations leveled against our paper were, well, somewhat conspiratorial in nature. We therefore decided to analyze the public response to our first paper with the hypothesis in mind that this response might also involve conspiracist ideation. We systematically collected utterances by bloggers and commenters, and we sought to classify them into various hypotheses leveled against our earlier paper. For each hypothesis, we then compared the public statements against a list of criteria for conspiracist ideation that was taken from the previous literature.

This follow-up paper was accepted a few days ago by Frontiers in Psychology, and a preliminary version of the paper is already available, for open access, here.

Read the entire article following the jump.

Image: Area 51 – Warning sign near secret Area 51 base in Nevada. Courtesy of Wikipedia.

You Are Different From Yourself

The next time your spouse tells you that you’re “just not the same person anymore,” there may be some truth to it. After all, we are not who we once thought we would become, nor are we likely to become what we now expect. That’s the overall finding of a recent study of personality change over time in around 20,000 people.

From the Independent:

When we remember our past selves, they seem quite different. We know how much our personalities and tastes have changed over the years. But when we look ahead, somehow we expect ourselves to stay the same, a team of psychologists said Thursday, describing research they conducted of people’s self-perceptions.

They called this phenomenon the “end of history illusion,” in which people tend to “underestimate how much they will change in the future.” According to their research, which involved more than 19,000 people ages 18 to 68, the illusion persists from teenage years into retirement.

“Middle-aged people — like me — often look back on our teenage selves with some mixture of amusement and chagrin,” said one of the authors, Daniel T. Gilbert, a psychologist at Harvard. “What we never seem to realize is that our future selves will look back and think the very same thing about us. At every age we think we’re having the last laugh, and at every age we’re wrong.”

Other psychologists said they were intrigued by the findings, published Thursday in the journal Science, and were impressed with the amount of supporting evidence. Participants were asked about their personality traits and preferences — their favorite foods, vacations, hobbies and bands — in years past and present, and then asked to make predictions for the future. Not surprisingly, the younger people in the study reported more change in the previous decade than did the older respondents.

But when asked to predict what their personalities and tastes would be like in 10 years, people of all ages consistently played down the potential changes ahead.

Thus, the typical 20-year-old woman’s predictions for her next decade were not nearly as radical as the typical 30-year-old woman’s recollection of how much she had changed in her 20s. This sort of discrepancy persisted among respondents all the way into their 60s.

And the discrepancy did not seem to be because of faulty memories, because the personality changes recalled by people jibed quite well with independent research charting how personality traits shift with age. People seemed to be much better at recalling their former selves than at imagining how much they would change in the future.

Why? Dr. Gilbert and his collaborators, Jordi Quoidbach of Harvard and Timothy D. Wilson of the University of Virginia, had a few theories, starting with the well-documented tendency of people to overestimate their own wonderfulness.

“Believing that we just reached the peak of our personal evolution makes us feel good,” Dr. Quoidbach said. “The ‘I wish that I knew then what I know now’ experience might give us a sense of satisfaction and meaning, whereas realizing how transient our preferences and values are might lead us to doubt every decision and generate anxiety.”

Or maybe the explanation has more to do with mental energy: predicting the future requires more work than simply recalling the past. “People may confuse the difficulty of imagining personal change with the unlikelihood of change itself,” the authors wrote in Science.

The phenomenon does have its downsides, the authors said. For instance, people make decisions in their youth — about getting a tattoo, say, or a choice of spouse — that they sometimes come to regret.

And that illusion of stability could lead to dubious financial expectations, as the researchers showed in an experiment asking people how much they would pay to see their favorite bands.

When asked about their favorite band from a decade ago, respondents were typically willing to shell out $80 to attend a concert of the band today. But when they were asked about their current favorite band and how much they would be willing to spend to see the band’s concert in 10 years, the price went up to $129. Even though they realized that favorites from a decade ago like Creed or the Dixie Chicks have lost some of their luster, they apparently expect Coldplay and Rihanna to blaze on forever.

“The end-of-history effect may represent a failure in personal imagination,” said Dan P. McAdams, a psychologist at Northwestern who has done separate research into the stories people construct about their past and future lives. He has often heard people tell complex, dynamic stories about the past but then make vague, prosaic projections of a future in which things stay pretty much the same.

Read the entire article after the jump.

E or I, T or F: 50 Years of Myers-Briggs

Two million people annually take the Myers-Briggs Type Indicator assessment. Over 10,000 businesses and 2,500 colleges in the United States use the test.

It’s very likely that you have taken the test at some point in your life: during high school, to get into university or to secure your first job. The test categorizes people along four discrete axes (or dichotomies) of personality: Extraversion (E) and Introversion (I); Sensing (S) and Intuition (N); Thinking (T) and Feeling (F); Judging (J) and Perceiving (P). If you have a partner, it’s likely that he or she has, at some time or another, (mis-)labeled you as an E or an I, and as a “feeler” rather than a “thinker”, and so on. Countless arguments will have ensued.
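For the combinatorially minded, the arithmetic behind the familiar four-letter shorthand is simple: four binary dichotomies yield 2 to the power of 4, or 16, possible types. The following minimal Python sketch is purely illustrative (it is not part of any official MBTI tooling); it just enumerates the sixteen codes, ISTP and the rest, that the article below refers to.

    from itertools import product

    # The four Myers-Briggs dichotomies described above; a person is assigned
    # one letter from each pair, so the scheme allows 2 ** 4 = 16 type codes.
    DICHOTOMIES = [
        ("E", "I"),  # Extraversion / Introversion
        ("S", "N"),  # Sensing / Intuition
        ("T", "F"),  # Thinking / Feeling
        ("J", "P"),  # Judging / Perceiving
    ]

    # Build every four-letter combination, e.g. "ISTP" or "ENFJ".
    all_types = ["".join(letters) for letters in product(*DICHOTOMIES)]

    print(len(all_types))            # prints: 16
    print(", ".join(sorted(all_types)))

Running it simply confirms that the framework has exactly sixteen pigeonholes into which every test-taker must fall, which is part of both its commercial appeal and the scientific criticism discussed below.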

From the Washington Post:

Some grandmothers pass down cameo necklaces. Katharine Cook Briggs passed down the world’s most widely used personality test.

Chances are you’ve taken the Myers-Briggs Type Indicator, or will. Roughly 2 million people a year do. It has become the gold standard of psychological assessments, used in businesses, government agencies and educational institutions. Along the way, it has spawned a multimillion-dollar business around its simple concept that everyone fits one of 16 personality types.

Now, 50 years after the first time anyone paid money for the test, the Myers-Briggs legacy is reaching the end of the family line. The youngest heirs don’t want it. And it’s not clear whether organizations should, either.

That’s not to say it hasn’t had a major influence.

More than 10,000 companies, 2,500 colleges and universities and 200 government agencies in the United States use the test. From the State Department to McKinsey & Co., it’s a rite of passage. It’s estimated that 50 million people have taken the Myers-Briggs personality test since the Educational Testing Service first added the research to its portfolio in 1962.

The test, whose first research guinea pigs were George Washington University students, has seen financial success commensurate to this cultlike devotion among its practitioners. CPP, the private company that publishes Myers-Briggs, brings in roughly $20 million a year from it and the 800 other products, such as coaching guides, that it has spawned.

Yet despite its widespread use and vast financial success, and although it was derived from the work of Carl Jung, one of the most famous psychologists of the 20th century, the test is highly questioned by the scientific community.

To begin even before its arrival in Washington: Myers-Briggs traces its history to 1921, when Jung, a Swiss psychiatrist, published his theory of personality types in the book “Psychologische Typen.” Jung had become well known for his pioneering work in psychoanalysis and close collaboration with Sigmund Freud, though by the 1920s the two had severed ties.

Psychoanalysis was a young field and one many regarded skeptically. Still, it had made its way across the Atlantic not only to the university offices of scientists but also to the home of a mother in Washington.

Katharine Cook Briggs was a voracious reader of the new psychology books coming out in Europe, and she shared her fascination with Jung’s latest work — in which he developed the concepts of introversion and extroversion — with her daughter, Isabel Myers. They would later use Jung’s work as a basis for their own theory, which would become the Myers-Briggs Type Indicator. MBTI is their framework for classifying personality types along four distinct axes: introversion vs. extroversion, sensing vs. intuition, thinking vs. feeling and judging vs. perceiving. A person, according to their hypothesis, has one dominant preference in each of the four pairs. For example, he might be introverted, a sensor, a thinker and a perceiver. Or, in Myers-Briggs shorthand, an “ISTP.”

Read the entire article following the jump.

Image: Keirsey Temperament Sorter, which utilizes Myers-Briggs dichotomies to group personalities into 16 types. Courtesy of Wikipedia.

Single-tasking is Human

If you’re an office worker, you will relate. Recently, you will have participated in a team meeting or conference call only to have at least one person say, when asked a question, “Sorry, can you please repeat that? I was multitasking.”

Many of us believe, or have been tricked into believing, that doing multiple things at once makes us more productive. Business branded this phenomenon multitasking; after all, if computers could do it, then why not humans? Yet experience shows that humans are woefully inadequate at performing multiple concurrent tasks that require dedicated attention. Of course, humans are experts at walking and chewing gum at the same time, but in most cases such activities require very little involvement from the higher functions of the brain. There is a growing body of anecdotal and experimental evidence showing poorer performance when multiple tasks are done concurrently rather than sequentially. In fact, for quite some time researchers have shown that dealing with multiple streams of information at once is a real problem for our limited brains.

Yet, most businesses seem to demand or reward multitasking behavior. And damagingly, the multitasking epidemic now seems to be the norm in the home as well.

From the WSJ:

In the few minutes it takes to read this article, chances are you’ll pause to check your phone, answer a text, switch to your desktop to read an email from the boss’s assistant, or glance at the Facebook or Twitter messages popping up in the corner of your screen. Off-screen, in your open-plan office, crosstalk about a colleague’s preschooler might lure you away, or a co-worker may stop by your desk for a quick question.

And bosses wonder why it is tough to get any work done.

Distraction at the office is hardly new, but as screens multiply and managers push frazzled workers to do more with less, companies say the problem is worsening and is affecting business.

While some firms make noises about workers wasting time on the Web, companies are realizing the problem is partly their own fault.

Even though digital technology has led to significant productivity increases, the modern workday seems custom-built to destroy individual focus. Open-plan offices and an emphasis on collaborative work leave workers with little insulation from colleagues’ chatter. A ceaseless tide of meetings and internal emails means that workers increasingly scramble to get their “real work” done on the margins, early in the morning or late in the evening. And the tempting lure of social-networking streams and status updates makes it easy for workers to interrupt themselves.

“It is an epidemic,” says Lacy Roberson, a director of learning and organizational development at eBay Inc. At most companies, it’s a struggle “to get work done on a daily basis, with all these things coming at you,” she says.

Office workers are interrupted—or self-interrupt—roughly every three minutes, academic studies have found, with numerous distractions coming in both digital and human forms. Once thrown off track, it can take some 23 minutes for a worker to return to the original task, says Gloria Mark, a professor of informatics at the University of California, Irvine, who studies digital distraction.

Companies are experimenting with strategies to keep workers focused. Some are limiting internal emails—with one company moving to ban them entirely—while others are reducing the number of projects workers can tackle at a time.

Last year, Jamey Jacobs, a divisional vice president at Abbott Vascular, a unit of health-care company Abbott Laboratories, learned that his 200 employees had grown stressed trying to squeeze in more heads-down, focused work amid the daily thrum of email and meetings.

“It became personally frustrating that they were not getting the things they wanted to get done,” he says. At meetings, attendees were often checking email, trying to multitask and in the process obliterating their focus.

Part of the solution for Mr. Jacobs’s team was that oft-forgotten piece of office technology: the telephone.

Mr. Jacobs and productivity consultant Daniel Markovitz found that employees communicated almost entirely over email, whether the matter was mundane, such as cake in the break room, or urgent, like an equipment issue.

The pair instructed workers to let the importance and complexity of their message dictate whether to use cellphones, office phones or email. Truly urgent messages and complex issues merited phone calls or in-person conversations, while email was reserved for messages that could wait.

Workers now pick up the phone more, log fewer internal emails and say they’ve got clarity on what’s urgent and what’s not, although Mr. Jacobs says staff still have to stay current with emails from clients or co-workers outside the group.

Read the entire article after the jump, and learn more in this insightful article on multitasking over at Big Think.

Image courtesy of Big Think.

Blind Loyalty and the Importance of Critical Thinking

Two landmark studies in the 1960s and ’70s put behavioral psychology squarely in the public consciousness. The obedience experiments by Stanley Milgram and the Stanford Prison Experiment demonstrated how regular individuals could be made, quite simply, to obey figures in authority and to subject others to humiliation, suffering and pain.

A re-examination of these experiments, together with several recent similar studies, has prompted a number of psychologists to offer a reinterpretation of the original conclusions. They suggest that humans may not be inherently evil after all. However, we remain dangerously flawed: our willingness to follow those in authority, especially those with whom we identify, makes us susceptible to believing in the virtue of actions that by any standard would be monstrous. It turns out that an open mind able to think critically may be the best antidote.

From the Pacific Standard:

They are among the most famous of all psychological studies, and together they paint a dark portrait of human nature. Widely disseminated in the media, they spread the belief that people are prone to blindly follow authority figures—and will quickly become cruel and abusive when placed in positions of power.

It’s hard to overstate the impact of Stanley Milgram’s obedience experiments of 1961, or the Stanford Prison Experiment of 1971. Yet in recent years, the conclusions derived from those studies have been, if not debunked, radically reinterpreted.

A new perspective—one that views human nature in a more nuanced light—is offered by psychologists Alex Haslam of the University of Queensland, Australia, and Stephen Reicher of the University of St. Andrews in Scotland.

In an essay published in the open-access journal PLoS Biology, they argue that people will indeed comply with the questionable demands of authority figures—but only if they strongly identify with that person, and buy into the rightness of those beliefs.

In other words, we’re not unthinking automatons. Nor are we monsters waiting for permission for our dark sides to be unleashed. However, we are more susceptible to psychological manipulation than we may realize.

In Milgram’s study, members of the general public were placed in the role of “teacher” and told that a “learner” was in a nearby room. Each time the “learner” failed to correctly recall a word as part of a memory experiment, the “teacher” was told to administer an electrical shock.

As the “learner” kept making mistakes, the “teacher” was ordered to give him stronger and stronger jolts of electricity. If a participant hesitated, the experimenter—an authority figure wearing a white coat—instructed him to continue.

Somewhat amazingly, most people did so: 65 percent of participants continued to give stronger and stronger shocks until the experiment ended with the “learner” apparently unconscious. (The torture was entirely fictional; no actual shocks were administered.)

To a world still reeling from the question of why so many Germans obeyed orders and carried out Nazi atrocities, here was a clear answer: We are predisposed to obey authority figures.

The Stanford Prison Experiment, conducted a few years later, was equally unnerving. Students were randomly assigned to assume the role of either prisoner or guard in a “prison” set up in the university’s psychology department. As Haslam and Reicher note, “such was the abuse meted out to the prisoners by the guards that the study had to be terminated after just six days.”

Lead author Philip Zimbardo, who assumed the role of “prison superintendent” with a level of zeal he later found frightening, concluded that brutality was “a natural consequence of being in the uniform of a guard and asserting the power inherent in that role.”

So is all this proof of the “banality of evil,” to use historian Hannah Arendt’s memorable phrase? Not really, argue Haslam and Reicher. They point to their own work on the BBC Prison Study, which mimicked the seminal Stanford study.

They found that participants “did not conform automatically to their assigned role” as prisoner or guard. Rather, there was a period of resistance, which ultimately gave way to a “draconian” new hierarchy. Before becoming brutal, the participants needed time to assume their new identities, and internalize their role in the system.

Once they did so, “the hallmark of the tyrannical regime was not conformity, but creative leadership and engaged followership within a group of true believers,” they write. “This analysis mirrors recent conclusions about the Nazi tyranny.”

Read the entire article after the jump.

Sleep Myths

Chronobiologist Till Roenneberg debunks five commonly held beliefs about sleep. He is the author of “Internal Time: Chronotypes, Social Jet Lag, and Why You’re So Tired.”

From the Washington Post:

If shopping on Black Friday leaves you exhausted, or if your holiday guests keep you up until the wee hours, a long Thanksgiving weekend should offer an opportunity for some serious shut-eye. We spend between a quarter and a third of our lives asleep, but that doesn’t make us experts on how much is too much, how little is too little, or how many hours of rest the kids need to be sharp in school. Let’s tackle some popular myths about Mr. Sandman.

1. You need eight hours of sleep per night.

That’s the cliche. Napoleon, for one, didn’t believe it. His prescription went something like this: “Six hours for a man, seven for a woman and eight for a fool.”

But Napoleon’s formula wasn’t right, either. The ideal amount of sleep is different for everyone and depends on many factors, including age and genetic makeup.

In the past 10 years, my research team has surveyed sleep behavior in more than 150,000 people. About 11 percent slept six hours or less, while only 27 percent clocked eight hours or more. The majority fell in between. Women tended to sleep longer than men, but only by 14 minutes.

Bigger differences are seen when comparing various age groups. Ten-year-olds needed about nine hours of sleep, while adults older than 30, including senior citizens, averaged about seven hours. We recently identified the first gene associated with sleep duration — if you have one variant of this gene, you need more sleep than if you have another.

2. Early to bed and early to rise makes a man healthy, wealthy and wise.

Benjamin Franklin’s proverbial praise of early risers made sense in the second half of the 18th century, when his peers were exposed to much more daylight and to very dark nights. Their body clocks were tightly synchronized to this day-night cycle. This changed as work gradually moved indoors, performed under the far weaker intensity of artificial light during the day and, if desired, all night long.

The timing of sleep — earlier or later — is controlled by our internal clocks, which determine what researchers call our optimal “sleep window.” With the widespread use of electric light, our body clocks have shifted later while the workday has essentially remained the same. We fall asleep according to our (late) body clock, and are awakened early for work by the alarm clock. We therefore suffer from chronic sleep deprivation, and then we try to compensate by sleeping in on free days. Many of us sleep more than an hour longer on weekends than we do on workdays.

Read the entire article following the jump.

Image courtesy of Google search.

Prodigies and the Rest of Us

From the New York Times:

Drew Petersen didn’t speak until he was 3½, but his mother, Sue, never believed he was slow. When he was 18 months old, in 1994, she was reading to him and skipped a word, whereupon Drew reached over and pointed to the missing word on the page. Drew didn’t produce much sound at that stage, but he already cared about it deeply. “Church bells would elicit a big response,” Sue told me. “Birdsong would stop him in his tracks.”

Sue, who learned piano as a child, taught Drew the basics on an old upright, and he became fascinated by sheet music. “He needed to decode it,” Sue said. “So I had to recall what little I remembered, which was the treble clef.” As Drew told me, “It was like learning 13 letters of the alphabet and then trying to read books.” He figured out the bass clef on his own, and when he began formal lessons at 5, his teacher said he could skip the first six months’ worth of material. Within the year, Drew was performing Beethoven sonatas at the recital hall at Carnegie Hall. “I thought it was delightful,” Sue said, “but I also thought we shouldn’t take it too seriously. He was just a little boy.”

On his way to kindergarten one day, Drew asked his mother, “Can I just stay home so I can learn something?” Sue was at a loss. “He was reading textbooks this big, and they’re in class holding up a blowup M,” she said. Drew, who is now 18, said: “At first, it felt lonely. Then you accept that, yes, you’re different from everyone else, but people will be your friends anyway.” Drew’s parents moved him to a private school. They bought him a new piano, because he announced at 7 that their upright lacked dynamic contrast. “It cost more money than we’d ever paid for anything except a down payment on a house,” Sue said. When Drew was 14, he discovered a home-school program created by Harvard; when I met him two years ago, he was 16, studying at the Manhattan School of Music and halfway to a Harvard bachelor’s degree.

Prodigies are able to function at an advanced adult level in some domain before age 12. “Prodigy” derives from the Latin “prodigium,” a monster that violates the natural order. These children have differences so evident as to resemble a birth defect, and it was in that context that I came to investigate them. Having spent 10 years researching a book about children whose experiences differ radically from those of their parents and the world around them, I found that stigmatized differences — having Down syndrome, autism or deafness; being a dwarf or being transgender — are often clouds with silver linings. Families grappling with these apparent problems may find profound meaning, even beauty, in them. Prodigiousness, conversely, looks from a distance like silver, but it comes with banks of clouds; genius can be as bewildering and hazardous as a disability. Despite the past century’s breakthroughs in psychology and neuroscience, prodigiousness and genius are as little understood as autism. “Genius is an abnormality, and can signal other abnormalities,” says Veda Kaplinsky of Juilliard, perhaps the world’s pre-eminent teacher of young pianists. “Many gifted kids have A.D.D. or O.C.D. or Asperger’s. When the parents are confronted with two sides of a kid, they’re so quick to acknowledge the positive, the talented, the exceptional; they are often in denial over everything else.”

We live in ambitious times. You need only to go through the New York preschool application process, as I recently did for my son, to witness the hysteria attached to early achievement, the widespread presumption that a child’s destiny hinges on getting a baby foot on a tall ladder. Parental obsessiveness on this front reflects the hegemony of developmental psychiatry, with its insistence that first experience is formative. We now know that brain plasticity diminishes over time; it is easier to mold a child than to reform an adult. What are we to do with this information? I would hate for my children to feel that their worth is contingent on sustaining competitive advantage, but I’d also hate for them to fall short of their potential. Tiger mothers who browbeat their children into submission overemphasize a narrow category of achievement over psychic health. Attachment parenting, conversely, often sacrifices accomplishment to an ideal of unboundaried acceptance that can be equally pernicious. It’s tempting to propose some universal answer, but spending time with families of remarkably talented children showed me that what works for one child can be disastrous for another.

Children who are pushed toward success and succeed have a very different trajectory from that of children who are pushed toward success and fail. I once told Lang Lang, a prodigy par excellence and now perhaps the most famous pianist in the world, that by American standards, his father’s brutal methods — which included telling him to commit suicide, refusing any praise, browbeating him into abject submission — would count as child abuse. “If my father had pressured me like this and I had not done well, it would have been child abuse, and I would be traumatized, maybe destroyed,” Lang responded. “He could have been less extreme, and we probably would have made it to the same place; you don’t have to sacrifice everything to be a musician. But we had the same goal. So since all the pressure helped me become a world-famous star musician, which I love being, I would say that, for me, it was in the end a wonderful way to grow up.”

While it is true that some parents push their kids too hard and give them breakdowns, others fail to support a child’s passion for his own gift and deprive him of the only life that he would have enjoyed. You can err in either direction. Given that there is no consensus about how to raise ordinary children, it is not surprising that there is none about how to raise remarkable children. Like parents of children who are severely challenged, parents of exceptionally talented children are custodians of young people beyond their comprehension.

Spending time with the Petersens, I was struck not only by their mutual devotion but also by the easy way they avoided the snobberies that tend to cling to classical music. Sue is a school nurse; her husband, Joe, works in the engineering department of Volkswagen. They never expected the life into which Drew has led them, but they have neither been intimidated by it nor brash in pursuing it; it remains both a diligence and an art. “How do you describe a normal family?” Joe said. “The only way I can describe a normal one is a happy one. What my kids do brings a lot of joy into this household.” When I asked Sue how Drew’s talent had affected how they reared his younger brother, Erik, she said: “It’s distracting and different. It would be similar if Erik’s brother had a disability or a wooden leg.”

Prodigiousness manifests most often in athletics, mathematics, chess and music. A child may have a brain that processes chess moves or mathematical equations like some dream computer, which is its own mystery, but how can the mature emotional insight that is necessary to musicianship emerge from someone who is immature? “Young people like romance stories and war stories and good-and-evil stories and old movies because their emotional life mostly is and should be fantasy,” says Ken Noda, a great piano prodigy in his day who gave up public performance and now works at the Metropolitan Opera. “They put that fantasized emotion into their playing, and it is very convincing. I had an amazing capacity for imagining these feelings, and that’s part of what talent is. But it dries up, in everyone. That’s why so many prodigies have midlife crises in their late teens or early 20s. If our imagination is not replenished with experience, the ability to reproduce these feelings in one’s playing gradually diminishes.”

Musicians often talked to me about whether you achieve brilliance on the violin by practicing for hours every day or by reading Shakespeare, learning physics and falling in love. “Maturity, in music and in life, has to be earned by living,” the violinist Yehudi Menuhin once said. Who opens up or blocks access to such living? A musical prodigy’s development hinges on parental collaboration. Without that support, the child would never gain access to an instrument, the technical training that even the most devout genius requires or the emotional nurturance that enables a musician to achieve mature expression. As David Henry Feldman and Lynn T. Goldsmith, scholars in the field, have said, “A prodigy is a group enterprise.”

Read the entire article after the jump.

Image: Portrait of Wolfgang Amadeus Mozart aged six years old, by anonymous. Courtesy of Wikipedia.

The Beauty of Ugliness

The endless pursuit of beauty in human affairs probably predates the historical record. We certainly know that the ancient Egyptians used cosmetics, believing them to offer magical and religious powers in addition to aesthetic value.

Yet, paradoxically, beauty is rather subjective and often fleeting. The French singer, songwriter, composer and bon viveur Serge Gainsbourg once said that “ugliness is superior to beauty because it lasts longer”. Author Stephen Bayley argues in his new book, “Ugly: The Aesthetics of Everything”, that beauty is downright boring.

From the Telegraph:

Beauty is boring. And the evidence is piling up. An article in the journal Psychological Science now confirms what partygoers have known forever: that beauty and charm are no more directly linked than a high IQ and a talent for whistling.

A group of scientists set out to discover whether physically attractive people also have appealing character traits and values, and found, according to Lihi Segal-Caspi, who carried out part of the research, that “beautiful people tend to focus more on conformity and self-promotion than independence and tolerance”.

Certainly, while a room full of beautiful people might be impressively stiff with the whiff of Chanel No 5, the intellectual atmosphere will be carrying a very low charge. If positive at all.

The grizzled and gargoyle-like Parisian chanteur, and legendary lover, Serge Gainsbourg always used to pick up the ugliest girls at parties. This was not simply because predatory male folklore insists that ill-favoured women will be more “grateful”, but because Gainsbourg, a stylish contrarian, knew that the conversation would be better, the uglier the girl.

Beauty is a conformist conspiracy. And the conspirators include the fashion, cosmetics and movie businesses: a terrible Greek chorus of brainless idolatry towards abstract form. The conspirators insist that women – and, nowadays, men, too – should be un-creased, smooth, fat-free, tanned and, with the exception of the skull, hairless. Flawlessly dull. Even Hollywood once acknowledged the weakness of this proposition: Marilyn Monroe was made more attractive still by the addition of a “beauty spot”, a blemish turned into an asset.

The red carpet version of beauty is a feeble, temporary construction. Bodies corrode and erode, sag and bulge, just as cars rust and buildings develop a fine patina over time. This is not to be feared, rather to be understood and enjoyed. Anyone wishing to arrest these processes with the aid of surgery, aerosols, paint, glue, drugs, tape and Lycra must be both very stupid and very vain. Hence the problems encountered in conversation with beautiful people: stupidity and vanity rarely contribute much to wit and creativity.

Fine features may be all very well, but the great tragedy of beauty is that it is so ephemeral. Albert Camus said it “drives us to despair, offering for a minute the glimpse of an eternity that we should like to stretch out over the whole of time”. And Gainsbourg agreed when he said: “Ugliness is superior to beauty because it lasts longer.” A hegemony of beautiful perfection would be intolerable: we need a good measure of ugliness to keep our senses keen. If everything were beautiful, nothing would be.

And yet, despite the evidence against, there has been a conviction that beauty and goodness are somehow inextricably and permanently linked. Political propaganda exploited our primitive fear of ugliness, so we had Second World War American posters of Japanese looking like vampire bats. The Greeks believed that beauty had a moral character: beautiful people – discus-throwers and so on – were necessarily good people. Darwin explained our need for “beauty” in saying that breeding attractive children is a survival characteristic: I may feel the need to fuse my premium genetic material with yours, so that humanity continues in the same fine style.

This became a lazy consensus, described as the “beauty premium” by US economists Markus M Mobius and Tanya S Rosenblat. The “beauty premium” insists that as attractive children grow into attractive adults, they may find it easier to develop agreeable interpersonal communications skills because their audience reacts more favourably to them. In this beauty-related employment theory, short people are less likely to get a good job. As Randy Newman sang: “Short people got no reason to live.” So Darwin’s argument that evolutionary forces favour a certain physical type may be proven in the job market as well as the wider world.

But as soon as you try to grasp the concept of beauty, it disappears.

Read the entire article following the jump.

Image courtesy of Google search.