Tag Archives: psychology

Teenagers and Time

Parents have long known that the sleep-wake cycles of their adolescent offspring are rather different to those of anyone else in the household.

Several new and detailed studies of teenagers tell us why teens are impossible to awaken at 7 am, suddenly awake at 10 pm, and often able to sleep anywhere for stretches of 16 hours.

[div class=attrib]From the Wall Street Journal:[end-div]

Many parents know the scene: The groggy, sleep-deprived teenager stumbles through breakfast and falls asleep over afternoon homework, only to spring to life, wide-eyed and alert, at 10 p.m.—just as Mom and Dad are nodding off.

Fortunately for parents, science has gotten more sophisticated at explaining why, starting at puberty, a teen’s internal sleep-wake clock seems to go off the rails. Researchers are also connecting the dots between the resulting sleep loss and behavior long chalked up to just “being a teenager.” This includes more risk-taking, less self-control, a drop in school performance and a rise in the incidence of depression.

One 2010 study from the University of British Columbia, for example, found that sleep loss can hamper neuron growth in the brain during adolescence, a critical period for cognitive development.

Findings linking sleep loss to adolescent turbulence are “really revelatory,” says Michael Terman, a professor of clinical psychology and psychiatry at Columbia University Medical Center and co-author of “Chronotherapy,” a forthcoming book on resetting the body clock. “These are reactions to a basic change in the way teens’ physiology and behavior is organized.”

Despite such revelations, there are still no clear solutions for the teen-zombie syndrome. Should a parent try to enforce strict wake-up and bedtimes, even though they conflict with the teen’s body clock? Or try to create a workable sleep schedule around that natural cycle? Coupled with a trend toward predawn school start times and peer pressure to socialize online into the wee hours, the result can upset kids’ health, school performance—and family peace.

Jeremy Kern, 16 years old, of San Diego, gets up at 6:30 a.m. for school and tries to fall asleep by 10 p.m. But a heavy load of homework and extracurricular activities, including playing saxophone in his school marching band and in a theater orchestra, often keep him up later.

“I need 10 hours of sleep to not feel tired, and every single day I have to deal with being exhausted,” Jeremy says. He stays awake during early-afternoon classes “by sheer force of will.” And as research shows, sleep loss makes him more emotionally volatile, Jeremy says, like when he recently broke up with his girlfriend: “You are more irrational when you’re sleep deprived. Your emotions are much harder to control.”

Only 7.6% of teens get the recommended 9 to 10 hours of sleep, 23.5% get eight hours and 38.7% are seriously sleep-deprived at six or fewer hours a night, says a 2011 study by the Centers for Disease Control and Prevention.

It’s a biological 1-2-3 punch. First, the onset of puberty brings a median 1.5-hour delay in the body’s release of the sleep-inducing hormone melatonin, says Mary Carskadon, a professor of psychiatry and human behavior at the Brown University medical school and a leading sleep researcher.

Second, “sleep pressure,” or the buildup of the need to sleep as the day wears on, slows during adolescence. That is, kids don’t become sleepy as early. This sleep delay isn’t just a passing impulse: It continues to increase through adolescence, peaking at age 19.5 in girls and age 20.9 in boys, Dr. Carskadon’s research shows.

Finally, teens lose some of their sensitivity to morning light, the kind that spurs awakening and alertness. And they become more reactive to nighttime light, sparking activity later into the evening.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Guardian / Alamy.[end-div]

The Great Blue Monday Fallacy

A yearlong survey of moodiness shows that the so-called Monday Blues may be more a figment of the imagination than fact.

[div class=attrib]From the New York Times:[end-div]

DESPITE the beating that Mondays have taken in pop songs — Fats Domino crooned “Blue Monday, how I hate blue Monday” — the day does not deserve its gloomy reputation.

Two colleagues and I recently published an analysis of a remarkable yearlong survey by the Gallup Organization, which conducted 1,000 live interviews a day, asking people across the United States to recall their mood in the prior day. We scoured the data for evidence that Monday was bluer than Tuesday or Wednesday. We couldn’t find any.

Mood was evaluated with several adjectives measuring positive or negative feelings. Spanish-only speakers were queried in Spanish. Interviewers spoke to people in every state on cellphones and land lines. The data unequivocally showed that Mondays are as pleasant to Americans as the three days that follow, and only a trifle less joyful than Fridays. Perhaps no surprise, people generally felt good on the weekend — though for retirees, the distinction between weekend and weekdays was only modest.

Likewise, day-of-the-week mood was gender-blind. Over all, women assessed their daily moods more negatively than men did, but relative changes from day to day were similar for both sexes.

And yet still, the belief in blue Mondays persists.

Several years ago, in another study, I examined expectations about mood and day of the week: two-thirds of the sample nominated Monday as the “worst” day of the week. Other research has confirmed that this sentiment is widespread, despite the fact that, well, we don’t really feel any gloomier on that day.

The question is, why? Why do we believe something that our own immediate experience indicates simply isn’t true?

As it turns out, the blue Monday mystery highlights a phenomenon familiar to behavioral scientists: that beliefs or judgments about experience can be at odds with actual experience. Indeed, the disconnection between beliefs and experience is common.

Vacations, for example, are viewed more pleasantly after they are over compared with how they were experienced at the time. And motorists who drive fancy cars report having more fun driving than those who own more modest vehicles, though in-car monitoring shows this isn’t the case. The same is often true in reverse as well: we remember pain or symptoms of illness at higher levels than real-time experience suggests, in part because we ignore symptom-free periods in between our aches and pains.

HOW do we make sense of these findings? The human brain has vast, but limited, capacities to store, retrieve and process information. Yet we are often confronted with questions that challenge these capacities. And this is often when the disconnect between belief and experience occurs. When information isn’t available for answering a question — say, when it did not make it into our memories in the first place — we use whatever information is available, even if it isn’t particularly relevant to the question at hand.

When asked about pain for the last week, most people cannot completely remember all of its ups and downs over seven days. However, we are likely to remember it at its worst and may use that as a way of summarizing pain for the entire week. When asked about our current satisfaction with life, we may focus on the first things that come to mind — a recent spat with a spouse or maybe a compliment from the boss at work.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: “I Don’t Like Mondays” single cover. Courtesy of The Boomtown Rats / Ensign Records.[end-div]

Remembering the Future

Memory is a very useful cognitive tool. After all, where would we be if we had no recall of our family, friends, foods, words, tasks and dangers?

But it turns out that memory may also help us imagine the future — another very important human trait.

[div class=attrib]From the New Scientist:[end-div]

WHEN thinking about the workings of the mind, it is easy to imagine memory as a kind of mental autobiography – the private book of you. To relive the trepidation of your first day at school, say, you simply dust off the cover and turn to the relevant pages. But there is a problem with this idea. Why are the contents of that book so unreliable? It is not simply our tendency to forget key details. We are also prone to “remember” events that never actually took place, almost as if a chapter from another book has somehow slipped into our autobiography. Such flaws are puzzling if you believe that the purpose of memory is to record your past – but they begin to make sense if it is for something else entirely.

That is exactly what memory researchers are now starting to realise. They believe that human memory didn’t evolve so that we could remember but to allow us to imagine what might be. This idea began with the work of Endel Tulving, now at the Rotman Research Institute in Toronto, Canada, who discovered a person with amnesia who could remember facts but not episodic memories relating to past events in his life. Crucially, whenever Tulving asked him about his plans for that evening, the next day or the summer, his mind went blank – leading Tulving to suspect that foresight was the flipside of episodic memory.

Subsequent brain scans supported the idea, suggesting that every time we think about a possible future, we tear up the pages of our autobiographies and stitch together the fragments into a montage that represents the new scenario. This process is the key to foresight and ingenuity, but it comes at the cost of accuracy, as our recollections become frayed and shuffled along the way. “It’s not surprising that we confuse memories and imagination, considering that they share so many processes,” says Daniel Schacter, a psychologist at Harvard University.

Over the next 10 pages, we will show how this theory has brought about a revolution in our understanding of memory. Given the many survival benefits of being able to imagine the future, for instance, it is not surprising that other creatures show a rudimentary ability to think in this way (“Do animals ever forget?”). Memory’s role in planning and problem solving, meanwhile, suggests that problems accessing the past may lie behind mental illnesses like depression and post-traumatic stress disorder, offering a new approach to treating these conditions (“Boosting your mental fortress”). Equally, a growing understanding of our sense of self can explain why we are so selective in the events that we weave into our life story – again showing definite parallels with the way we imagine the future (“How the brain spins your life story”). The work might even suggest some dieting tips (“Lost in the here and now”).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Persistence of Memory, 1931. Salvador Dalí. Courtesy of Salvador Dalí, Gala-Salvador Dalí Foundation/Artists Rights Society.[end-div]

Power and Baldness

Since behavioral scientists and psychologists first began roaming the globe, we have come to know how and (sometimes) why visual appearance is so important in human interactions. Of course, anecdotally, humans have known this for thousands of years — that image is everything. After all, it was not Mary Kay or L’Oreal who brought us make-up but the ancient Egyptians. Yet, it is still fascinating to see how markedly the perception of an individual can change with a basic alteration, and only at the surface. Witness the profound difference in characteristics that we project onto a male with male pattern baldness (wimp) when he shaves his head (tough guy). And, of course, corporations can now assign a monetary value to the shaven look. As for comb-overs, well, that is another topic entirely.

[div class=attrib]From the Wall Street Journal:[end-div]

Up for a promotion? If you’re a man, you might want to get out the clippers.

Men with shaved heads are perceived to be more masculine, dominant and, in some cases, to have greater leadership potential than those with longer locks or with thinning hair, according to a recent study out of the University of Pennsylvania’s Wharton School.

That may explain why the power-buzz look has caught on among business leaders in recent years. Venture capitalist and Netscape founder Marc Andreessen, 41 years old, DreamWorks Animation Chief Executive Jeffrey Katzenberg, 61, and Amazon.com Inc. CEO Jeffrey Bezos, 48, all sport some variant of the close-cropped look.

Some executives say the style makes them appear younger—or at least, makes their age less evident—and gives them more confidence than a comb-over or monk-like pate.

“I’m not saying that shaving your head makes you successful, but it starts the conversation that you’ve done something active,” says tech entrepreneur and writer Seth Godin, 52, who has embraced the bare look for two decades. “These are people who decide to own what they have, as opposed to trying to pretend to be something else.”

Wharton management lecturer Albert Mannes conducted three experiments to test people’s perceptions of men with shaved heads. In one of the experiments, he showed 344 subjects photos of the same men in two versions: one showing the man with hair and the other showing him with his hair digitally removed, so his head appears shaved.

In all three tests, the subjects reported finding the men with shaved heads as more dominant than their hirsute counterparts. In one test, men with shorn heads were even perceived as an inch taller and about 13% stronger than those with fuller manes. The paper, “Shorn Scalps and Perceptions of Male Dominance,” was published online, and will be included in a coming issue of the journal Social Psychological and Personality Science.

The study found that men with thinning hair were viewed as the least attractive and powerful of the bunch, a finding that tracks with other studies showing that people perceive men with typical male-pattern baldness—which affects roughly 35 million Americans—as older and less attractive. For those men, the solution could be as cheap and simple as a shave.

According to Wharton’s Dr. Mannes—who says he was inspired to conduct the research after noticing that people treated him more deferentially when he shaved off his own thinning hair—head shavers may seem powerful because the look is associated with hypermasculine images, such as the military, professional athletes and Hollywood action heroes like Bruce Willis. (Male-pattern baldness, by contrast, conjures images of “Seinfeld” character George Costanza.)

New York image consultant Julie Rath advises her clients to get closely cropped when they start thinning up top. “There’s something really strong, powerful and confident about laying it all bare,” she says, describing the thinning or combed-over look as “kind of shlumpy.”

The look is catching on. A 2010 study from razor maker Gillette, a unit of Procter & Gamble Co., found that 13% of respondents said they shaved their heads, citing reasons as varied as fashion, sports and already thinning hair, according to a company spokesman. HeadBlade Inc., which sells head-shaving accessories, says revenues have grown 30% a year in the past decade.

Shaving his head gave 60-year-old Stephen Carley, CEO of restaurant chain Red Robin Gourmet Burgers Inc., a confidence boost when he was working among 20-somethings at tech start-ups in the 1990s. With his thinning hair shorn, “I didn’t feel like the grandfather in the office anymore.” He adds that the look gave him “the impression that it was much harder to figure out how old I was.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Comb-over patent, 1977. Courtesy of Wikipedia.[end-div]

Social Outcast = Creative Wunderkind

A recent study published in the Journal of Experimental Psychology correlates social ostracism and rejection with creativity. Businesses seeking creative individuals take note: perhaps your next great hire is a social misfit.

[div class=attrib]From Fast Company:[end-div]

Are you a recovering high school geek who still can’t get the girl? Are you always the last person picked for your company’s softball team? When you watched Office Space, did you feel a special kinship to the stapler-obsessed Milton Waddams? If you answered yes to any of these questions, do not despair. Researchers at Johns Hopkins and Cornell have recently found that the socially rejected might also be society’s most creatively powerful people.

The study, which is forthcoming in the Journal of Experimental Psychology, is called “Outside Advantage: Can Social Rejection Fuel Creative Thought?” It found that people who already have a strong “self-concept”–i.e. are independently minded–become creatively fecund in the face of rejection. “We were inspired by the stories of highly creative individuals like Steve Jobs and Lady Gaga,” says the study’s lead author, Hopkins professor Sharon Kim. “And we wanted to find a silver lining in all the popular press about bullying. There are benefits to being different.”

The study consisted of 200 Cornell students and set out to identify the relationship between the strength of an individual’s self-concept and their level of creativity. First, Kim tested the strength of each student’s self-concept by assessing his or her “need for uniqueness.” In other words, how important it is for each individual to feel separate from the crowd. Next, students were told that they’d either been included in or rejected from a hypothetical group project. Finally, they were given a simple, but creatively demanding, task: Draw an alien from a planet unlike earth.

If you’re curious about your own general creativity level (at least by the standards of Kim’s study), go ahead and sketch an alien right now… Okay, got your alien? Now give yourself a point for every non-human characteristic you’ve included in the drawing. If your alien has two eyes between the nose and forehead, you don’t get any points. If your alien has two eyes below the mouth, or three eyes that breathe fire, you get a point. If your alien doesn’t even have eyes or a mouth, give yourself a bunch of points. In short, the more dissimilar your alien is to a human, the higher your creativity score.

Kim found that people with a strong self-concept who were rejected produced more creative aliens than people from any other group, including people with a strong self-concept who were accepted. “If you’re in a mindset where you don’t care what others think,” she explained, “you’re open to ideas that you may not be open to if you’re concerned about what other people are thinking.”

This may seem like an obvious conclusion, but Kim pointed out that most companies don’t encourage the kind of freedom and independence that readers of Fast Company probably expect. “The benefits of being different is not a message everyone is getting,” she said.

But Kim also discovered something unexpected. People with a weak self-concept could be influenced toward a stronger one and, thus, toward a more creative mindset. In one part of the study, students were asked to read a short story in which all the pronouns were either singular (I/me) or plural (we/us) and then to circle all the pronouns. They were then “accepted” or “rejected” and asked to draw their aliens.

Kim found that all of the students who read stories with singular pronouns and were rejected produced more creative aliens. Even the students who originally had a weaker self-concept. Once these group-oriented individuals focused on individual-centric prose, they became more individualized themselves. And that made them more creative.

This finding doesn’t prove that you can teach someone to have a strong self-concept but it suggests that you can create a professional environment that facilitates independent and creative thought.

[div class=attrib]Read the entire article after the jump.[end-div]

Building Character in Kids

Many parents have known this for a long time: it takes more than a stellar IQ, SAT or ACT score to make a well-rounded kid. Arguably, there are many more important traits that never feature on these quantitative tests. Such qualities as leadership, curiosity, initiative, perseverance, motivation, courage and empathy come to mind.

Below is an excerpt from Paul Tough’s book, “How Children Succeed: Grit, Curiosity and the Hidden Power of Character”.

[div class=attrib]From the Wall Street Journal:[end-div]

We are living through a particularly anxious moment in the history of American parenting. In the nation’s big cities these days, the competition among affluent parents over slots in favored preschools verges on the gladiatorial. A pair of economists from the University of California recently dubbed this contest for early academic achievement the “Rug Rat Race,” and each year, the race seems to be starting earlier and growing more intense.

At the root of this parental anxiety is an idea you might call the cognitive hypothesis. It is the belief, rarely spoken aloud but commonly held nonetheless, that success in the U.S. today depends more than anything else on cognitive skill—the kind of intelligence that gets measured on IQ tests—and that the best way to develop those skills is to practice them as much as possible, beginning as early as possible.

There is something undeniably compelling about the cognitive hypothesis. The world it describes is so reassuringly linear, such a clear case of inputs here leading to outputs there. Fewer books in the home means less reading ability; fewer words spoken by your parents means a smaller vocabulary; more math work sheets for your 3-year-old means better math scores in elementary school. But in the past decade, and especially in the past few years, a disparate group of economists, educators, psychologists and neuroscientists has begun to produce evidence that calls into question many of the assumptions behind the cognitive hypothesis.

What matters most in a child’s development, they say, is not how much information we can stuff into her brain in the first few years of life. What matters, instead, is whether we are able to help her develop a very different set of qualities, a list that includes persistence, self-control, curiosity, conscientiousness, grit and self-confidence. Economists refer to these as noncognitive skills, psychologists call them personality traits, and the rest of us often think of them as character.

If there is one person at the hub of this new interdisciplinary network, it is James Heckman, an economist at the University of Chicago who in 2000 won the Nobel Prize in economics. In recent years, Mr. Heckman has been convening regular invitation-only conferences of economists and psychologists, all engaged in one form or another with the same questions: Which skills and traits lead to success? How do they develop in childhood? And what kind of interventions might help children do better?

The transformation of Mr. Heckman’s career has its roots in a study he undertook in the late 1990s on the General Educational Development program, better known as the GED, which was at the time becoming an increasingly popular way for high-school dropouts to earn the equivalent of high-school diplomas. The GED’s growth was founded on a version of the cognitive hypothesis, on the belief that what schools develop, and what a high-school diploma certifies, is cognitive skill. If a teenager already has the knowledge and the smarts to graduate from high school, according to this logic, he doesn’t need to waste his time actually finishing high school. He can just take a test that measures that knowledge and those skills, and the state will certify that he is, legally, a high-school graduate, as well-prepared as any other high-school graduate to go on to college or other postsecondary pursuits.

Mr. Heckman wanted to examine this idea more closely, so he analyzed a few large national databases of student performance. He found that in many important ways, the premise behind the GED was entirely valid. According to their scores on achievement tests, GED recipients were every bit as smart as high-school graduates. But when Mr. Heckman looked at their path through higher education, he found that GED recipients weren’t anything like high-school graduates. At age 22, Mr. Heckman found, just 3% of GED recipients were either enrolled in a four-year university or had completed some kind of postsecondary degree, compared with 46% of high-school graduates. In fact, Heckman discovered that when you consider all kinds of important future outcomes—annual income, unemployment rate, divorce rate, use of illegal drugs—GED recipients look exactly like high-school dropouts, despite the fact that they have earned this supposedly valuable extra credential, and despite the fact that they are, on average, considerably more intelligent than high-school dropouts.

These results posed, for Mr. Heckman, a confounding intellectual puzzle. Like most economists, he had always believed that cognitive ability was the single most reliable determinant of how a person’s life would turn out. Now he had discovered a group—GED holders—whose good test scores didn’t seem to have any positive effect on their eventual outcomes. What was missing from the equation, Mr. Heckman concluded, were the psychological traits, or noncognitive skills, that had allowed the high-school graduates to make it through school.

So what can parents do to help their children develop skills like motivation and perseverance? The reality is that when it comes to noncognitive skills, the traditional calculus of the cognitive hypothesis—start earlier and work harder—falls apart. Children can’t get better at overcoming disappointment just by working at it for more hours. And they don’t lag behind in curiosity simply because they didn’t start doing curiosity work sheets at an early enough age.

[div class=attrib]Read the entire article after the jump.[end-div]

Sign First; Lie Less

A recent paper filed with the Proceedings of the National Academy of Sciences (PNAS) shows that we are more likely to be honest if we sign a form before, rather than after, completing it. So, over the coming years look out for Uncle Sam to revise the ubiquitous IRS 1040 form by adding a signature line at the top rather than the bottom of the last page.

[div class=attrib]From Ars Technica:[end-div]

What’s the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they’re comfortable attaching their identity to something that may not be completely true.

Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.

It shouldn’t be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. “Immediately after lying,” the paper’s authors write, “individuals quickly engage in various mental justifications, reinterpretations, and other ‘tricks’ such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied.” By the time they get to the actual request for a signature, they’ve already made their peace with lying: “When signing comes after reporting, the morality train has already left the station.”

The problem isn’t with the signature itself. Lots of studies have shown that focusing the attention on one’s self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that “signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting.”

To test this proposal, they designed a series of forms that required self reporting of personal information, either involving performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study’s location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.

In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.

Although they didn’t have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.

The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like “virtue.” They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.

But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.

Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.

[div class=attrib]Read the entire article after the jump, or follow the article at PNAS, here.[end-div]

[div class=attrib]Image courtesy of University of Illinois at Urbana-Champaign.[end-div]

The Benefits of Self-Deception


Psychologists have long studied the causes and characteristics of deception. In recent times they have had a huge pool of talented liars from which to draw — bankers, mortgage lenders, Enron executives, borrowers, and of course politicians. Now, researchers have begun to look at the art of self-deception, with some interesting results. Self-deception may be a useful tool in influencing others.

[div class=attrib]From the Wall Street Journal:[end-div]

Lying to yourself—or self-deception, as psychologists call it—can actually have benefits. And nearly everybody does it, based on a growing body of research using new experimental techniques.

Self-deception isn’t just lying or faking, but is deeper and more complicated, says Del Paulhus, psychology professor at University of British Columbia and author of a widely used scale to measure self-deceptive tendencies. It involves strong psychological forces that keep us from acknowledging a threatening truth about ourselves, he says.

Believing we are more talented or intelligent than we really are can help us influence and win over others, says Robert Trivers, an anthropology professor at Rutgers University and author of “The Folly of Fools,” a 2011 book on the subject. An executive who talks himself into believing he is a great public speaker may not only feel better as he performs, but increase “how much he fools people, by having a confident style that persuades them that he’s good,” he says.

Researchers haven’t studied large population samples to compare rates of self-deception or compared men and women, but they know based on smaller studies that it is very common. And scientists in many different disciplines are drawn to studying it, says Michael I. Norton, an associate professor at Harvard Business School. “It’s also one of the most puzzling things that humans do.”

Researchers disagree over what exactly happens in the brain during self-deception. Social psychologists say people deceive themselves in an unconscious effort to boost self-esteem or feel better. Evolutionary psychologists, who say different parts of the brain can harbor conflicting beliefs at the same time, say self-deception is a way of fooling others to our own advantage.

In some people, the tendency seems to be an inborn personality trait. Others may develop a habit of self-deception as a way of coping with problems and challenges.

Behavioral scientists in recent years have begun using new techniques in the laboratory to predict when and why people are likely to deceive themselves. For example, they may give subjects opportunities to inflate their own attractiveness, skill or intelligence. Then, they manipulate such variables as subjects’ mood, promises of rewards or opportunities to cheat. They measure how the prevalence of self-deception changes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Truth or Consequences. Courtesy of CBS 1950-51 / Wikia.[end-div]

The Exceptionalism of American Violence

The United States is often cited as the most generous nation on Earth. Unfortunately, it is also one of the most violent, having one of the highest murder rates of any industrialized country. Why this tragic paradox?

In an absorbing article excerpted below, backed by sound research, anthropologist Eric Michael Johnson points to the lack of social capital on a local and national scale. Here, social capital is defined as interpersonal trust that promotes cooperation between citizens and groups for mutual benefit.

So, combine a culture that allows convenient access to very effective weapons with broad inequality, social isolation and distrust, and you get a very sobering picture — a country where around 35 people are killed each day by others wielding guns (25,423 firearm homicides across 2006 and 2007, based on Centers for Disease Control statistics).

[div class=attrib]From Scientific American:[end-div]

The United States is the deadliest wealthy country in the world. Can science help us explain, or even solve, our national crisis?

His tortured and sadistic grin beamed like a full moon on that dark night. “Madness, as you know, is like gravity,” he cackled. “All it takes is a little push.” But once the house lights rose, the terror was lifted for most of us. Few imagined that the fictive evil on screen back in 2008 would later inspire a depraved act of mass murder by a young man sitting with us in the audience, a student of neuroscience whose mind was teetering on the edge. What was it that pushed him over?

In the wake of the tragedy that struck Aurora, Colorado last Friday there remain more questions than answers. Just like last time–in January, 2011 when Congresswoman Gabrielle Giffords and 18 others were shot in Tucson, Arizona or before that in April, 2007 when a deranged gunman attacked students and staff at Virginia Tech–this senseless mass shooting has given rise to a national conversation as we struggle to find meaning in the madness.

While everyone agrees the blame should ultimately be placed on the perpetrator of this violence, the fact remains that the United States has one of the highest murder rates in the industrialized world. Of the 34 countries in the Organisation for Economic Co-operation and Development (OECD), the U.S. ranks fifth in homicides just behind Brazil (highest), Mexico, Russia, and Estonia. Our nation also holds the dubious honor of being responsible for half of the worst mass shootings in the last 30 years. How can we explain why the United States has nearly three times more murders per capita than neighboring Canada and ten times more than Japan? What makes the land of the free such a dangerous place to live?

Diagnosing a Murder

There have been hundreds of thoughtful explorations of this problem in the last week, though three in particular have encapsulated the major issues. Could it be, as science writer David Dobbs argues at Wired, that “an American culture that fetishizes violence,” such as the Batman franchise itself, has contributed to our fall? “Culture shapes the expression of mental dysfunction,” Dobbs writes, “just as it does other traits.”

Perhaps the push arrived with the collision of other factors, as veteran journalist Bill Moyers maintains, when the dark side of human nature encountered political allies who nurture our destructive impulses? “Violence is our alter ego, wired into our Stone Age brains,” he says. “The NRA is the best friend a killer’s instinct ever had.”

But then again maybe there is an economic explanation, as my Scientific American colleague John Horgan believes, citing a hypothesis by McMaster University evolutionary psychologists Martin Daly and his late wife Margo Wilson. “Daly and Wilson found a strong correlation between high Gini scores [a measure of inequality] and high homicide rates in Canadian provinces and U.S. counties,” Horgan writes, “blaming homicides not on poverty per se but on the collision of poverty and affluence, the ancient tug-of-war between haves and have-nots.”

In all three cases, as it was with other culprits such as the lack of religion in public schools or the popularity of violent video games (both of which are found in other wealthy countries and can be dismissed), commentators are looking at our society as a whole rather than specific details of the murderer’s background. The hope is that, if we can isolate the factor which pushes some people to murder their fellow citizens, perhaps we can alter our social environment and reduce the likelihood that these terrible acts will be repeated in the future. The only problem is, which one could it be?

The Exceptionalism of American Violence

As it turns out, the “social capital” Sapolsky found that made the Forest Troop baboons so peaceful is an important missing factor that can explain our high homicide rate in the United States. In 1999 Ichiro Kawachi at the Harvard School of Public Health led a study investigating the factors in American homicide for the journal Social Science and Medicine (pdf here). His diagnosis was dire.

“If the level of crime is an indicator of the health of society,” Kawachi wrote, “then the US provides an illustrative case study as one of the most unhealthy of modern industrialized nations.” The paper outlined what the most significant causal factors were for this exaggerated level of violence by developing what was called “an ecological theory of crime.” Whereas many other analyses of homicide take a criminal justice approach to the problem–such as the number of cops on the beat, harshness of prison sentences, or adoption of the death penalty–Kawachi used a public health perspective that emphasized social relations.

In all 50 states and the District of Columbia data were collected using the General Social Survey that measured social capital (defined as interpersonal trust that promotes cooperation between citizens for mutual benefit), along with measures of poverty and relative income inequality, homicide rates, incidence of other crimes–rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft–unemployment, percentage of high school graduates, and average alcohol consumption. By using a statistical method known as principal component analysis Kawachi was then able to identify which ecologic variables were most associated with particular types of crime.

The results were unambiguous: when income inequality was higher, so was the rate of homicide. Income inequality alone explained 74% of the variance in murder rates and half of the aggravated assaults. However, social capital had an even stronger association and, by itself, accounted for 82% of homicides and 61% of assaults. Other factors such as unemployment, poverty, or number of high school graduates were only weakly associated and alcohol consumption had no connection to violent crime at all. A World Bank sponsored study subsequently confirmed these results on income inequality concluding that, worldwide, homicide and the unequal distribution of resources are inextricably tied. (see Figure 2). However, the World Bank study didn’t measure social capital. According to Kawachi it is this factor that should be considered primary; when the ties that bind a community together are severed inequality is allowed to run free, and with deadly consequences.

But what about guns? Multiple studies have shown a direct correlation between the number of guns and the number of homicides. The United States is the most heavily armed country in the world with 90 guns for every 100 citizens. Doesn’t this over-saturation of American firepower explain our exaggerated homicide rate? Maybe not. In a follow-up study in 2001 Kawachi looked specifically at firearm prevalence and social capital among U.S. states. The results showed that when social capital and community involvement declined, gun ownership increased (see Figure 3).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Smith & Wesson M&P Victory model revolver. Courtesy of Oleg Volk / Wikipedia.[end-div]

Women See Bodies; Men See Body Parts

Yet another study of gender differences shows some fascinating variation in the way men and women see and process images of others. Men tend to be perceived as a whole; women, on the other hand, are more likely to be perceived as a collection of parts.

[div class=attrib]From Scientific American:[end-div]

A glimpse at the magazine rack in any supermarket checkout line will tell you that women are frequently the focus of sexual objectification. Now, new research finds that the brain actually processes images of women differently than those of men, contributing to this trend.

Women are more likely to be picked apart by the brain and seen as parts rather than a whole, according to research published online June 29 in the European Journal of Social Psychology. Men, on the other hand, are processed as a whole rather than the sum of their parts.

“Everyday, ordinary women are being reduced to their sexual body parts,” said study author Sarah Gervais, a psychologist at the University of Nebraska, Lincoln. “This isn’t just something that supermodels or porn stars have to deal with.”

Objectification hurts
Numerous studies have found that feeling objectified is bad for women. Being ogled can make women do worse on math tests, and self-sexualization, or scrutiny of one’s own shape, is linked to body shame, eating disorders and poor mood.

But those findings have all focused on the perception of being sexualized or objectified, Gervais told LiveScience. She and her colleagues wondered about the eye of the beholder: Are people really objectifying women more than men?

To find out, the researchers focused on two types of mental processing, global and local. Global processing is how the brain identifies objects as a whole. It tends to be used when recognizing people, where it’s not just important to know the shape of the nose, for example, but also how the nose sits in relation to the eyes and mouth. Local processing focuses more on the individual parts of an object. You might recognize a house by its door alone, for instance, while you’re less likely to recognize a person’s arm without the benefit of seeing the rest of their body.

If women are sexually objectified, people should process their bodies in a more local way, focusing on individual body parts like breasts. To test the idea, Gervais and her colleagues carried out two nearly identical experiments with a total of 227 undergraduate participants. Each person was shown non-sexualized photographs, each of either a young man or young woman, 48 in total. After seeing each original full-body image, the participants saw two side-by-side photographs. One was the original image, while the other was the original with a slight alteration to the chest or waist (chosen because these are sexualized body parts). Participants had to pick which image they’d seen before.

In some cases, the second set of photos zoomed in on the chest or waist only, asking participants to pick the body part they’d seen previously versus the one that had been altered.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: People focus on the parts of a woman’s body when processing her image, according to research published in June in the European Journal of Social Psychology. Courtesy of LiveScience / Yuri Arcurs, Shutterstock.[end-div]

Procrastination is a Good Thing

Procrastinators have known this for a long time: that success comes from making a decision at the last possible moment.

Procrastinating professor Frank Partnoy expands on this theory, captured in his book, “Wait: The Art and Science of Delay”.

[div class=attrib]From Smithsonian:[end-div]

Sometimes life seems to happen at warp speed. But, decisions, says Frank Partnoy, should not. When the financial market crashed in 2008, the former investment banker and corporate lawyer, now a professor of finance and law and co-director of the Center for Corporate and Securities Law at the University of San Diego, turned his attention to literature on decision-making.

“Much recent research about decisions helps us understand what we should do or how we should do it, but it says little about when,” he says.

In his new book, Wait: The Art and Science of Delay, Partnoy claims that when faced with a decision, we should assess how long we have to make it, and then wait until the last possible moment to do so. Should we take his advice on how to “manage delay,” we will live happier lives.

It is not surprising that the author of a book titled Wait is a self-described procrastinator. In what ways do you procrastinate?

I procrastinate in just about every possible way and always have, since my earliest memories going back to when I first started going to elementary school and had these arguments with my mother about making my bed.

My mom would ask me to make my bed before going to school. I would say, no, because I didn’t see the point of making my bed if I was just going to sleep in it again that night. She would say, well, we have guests coming over at 6 o’clock, and they might come upstairs and look at your room. I said, I would make my bed when we know they are here. I want to see a car in the driveway. I want to hear a knock on the door. I know it will take me about one minute to make my bed so at 5:59, if they are here, I will make my bed.

I procrastinated all through college and law school. When I went to work at Morgan Stanley, I was delighted to find that although the pace of the trading floor is frenetic and people are very fast, there were lots of incredibly successful mentors of procrastination.

Now, I am an academic. As an academic, procrastination is practically a job requirement. If I were to say I would be submitting an academic paper by September 1, and I submitted it in August, people would question my character.

It has certainly been drilled into us that procrastination is a bad thing. Yet, you argue that we should embrace it. Why?

Historically, for human beings, procrastination has not been regarded as a bad thing. The Greeks and Romans generally regarded procrastination very highly. The wisest leaders embraced procrastination and would basically sit around and think and not do anything unless they absolutely had to.

The idea that procrastination is bad really started in the Puritanical era with Jonathan Edwards’s sermon against procrastination and then the American embrace of “a stitch in time saves nine,” and this sort of work ethic that required immediate and diligent action.

But if you look at recent studies, managing delay is an important tool for human beings. People are more successful and happier when they manage delay. Procrastination is just a universal state of being for humans. We will always have more things to do than we can possibly do, so we will always be imposing some sort of unwarranted delay on some tasks. The question is not whether we are procrastinating, it is whether we are procrastinating well.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of eHow.[end-div]

Time Flows Uphill

Many people in industrialized countries describe time as flowing like a river: it flows back into the past, and it flows forward into the future. Of course, for bored workers time sometimes stands still, while for kids on summer vacation time flows all too quickly. And, for many people over, say, the age of forty, days often drag, but the years fly by.

For some, though, time flows uphill into the future and downhill into the past.

[div class=attrib]From New Scientist:[end-div]

“HERE and now”, “Back in the 1950s”, “Going forward”… Western languages are full of spatial metaphors for time, and whether you are, say, British, French or German, you no doubt think of the past as behind you and the future as stretching out ahead. Time is a straight line that runs through your body.

Once thought to be universal, this “embodied cognition of time” is in fact strictly cultural. Over the past decade, encounters with various remote tribal societies have revealed a rich diversity of the ways in which humans relate to time (see “Attitudes across the latitudes”). The latest, coming from the Yupno people of Papua New Guinea, is perhaps the most remarkable. Time for the Yupno flows uphill and is not even linear.

Rafael Núñez of the University of California, San Diego, led his team into the Finisterre mountain range of north-east Papua New Guinea to study the Yupno living in the village of Gua. There are no roads in this remote region. The Yupno have no electricity or even domestic animals to work the land. They live with very little contact with the western world.

Núñez and his colleagues noticed that the tribespeople made spontaneous gestures when speaking about the past, present and future. They filmed and analysed the gestures and found that for the Yupno the past is always downhill, in the direction of the mouth of the local river. The future, meanwhile, is towards the river’s source, which lies uphill from Gua.

This was true regardless of the direction they were facing. For instance, if they were facing downhill when talking about the future, a person would gesture backwards up the slope. But when they turned around to face uphill, they pointed forwards.

Núñez thinks the explanation is historical. The Yupno’s ancestors arrived by sea and climbed up the 2500-metre-high mountain valley, so lowlands may represent the past, and time flows uphill.

But the most unusual aspect of the Yupno timeline is its shape. The village of Gua, the river’s source and its mouth do not lie in a straight line, so the timeline is kinked. “This is the first time ever that a culture has been documented to have everyday notions of time anchored in topographic properties,” says Núñez.

Within the dark confines of their homes, geographical landmarks disappear and the timeline appears to straighten out somewhat. The Yupno always point towards the doorway when talking about the past, and away from the door to indicate the future, regardless of their home’s orientation. That could be because entrances are always raised, says Núñez. You have to climb down – towards the past – to leave the house, so each home has its own timeline.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The Persistence of Memory, by Salvador Dalí. Courtesy of Salvador Dalí, Gala-Salvador Dalí Foundation / Artists Rights Society (ARS), Museum of Modern Art New York / Wikipedia.[end-div]

Faux Fashion is More Than Skin-Deep

Some innovative research shows that we are generally more inclined to cheat others if we are clad in counterfeit designer clothing or carrying faux accessories.

[div class=attrib]From Scientific American:[end-div]

Let me tell you the story of my debut into the world of fashion. When Jennifer Wideman Green (a friend of mine from graduate school) ended up living in New York City, she met a number of people in the fashion industry. Through her I met Freeda Fawal-Farah, who worked for Harper’s Bazaar. A few months later Freeda invited me to give a talk at the magazine, and because it was such an atypical crowd for me, I agreed.

I found myself on a stage before an auditorium full of fashion mavens. Each woman was like an exhibit in a museum: her jewelry, her makeup, and, of course, her stunning shoes. I talked about how people make decisions, how we compare prices when we are trying to figure out how much something is worth, how we compare ourselves to others, and so on. They laughed when I hoped they would, asked thoughtful questions, and offered plenty of their own interesting ideas. When I finished the talk, Valerie Salembier, the publisher of Harper’s Bazaar, came onstage, hugged and thanked me—and gave me a stylish black Prada overnight bag.

I headed downtown to my next meeting. I had some time to kill, so I decided to take a walk. As I wandered, I couldn’t help thinking about my big black leather bag with its large Prada logo. I debated with myself: should I carry my new bag with the logo facing outward? That way, other people could see and admire it (or maybe just wonder how someone wearing jeans and red sneakers could possibly have procured it). Or should I carry it with the logo facing toward me, so that no one could recognize that it was a Prada? I decided on the latter and turned the bag around.

Even though I was pretty sure that with the logo hidden no one realized it was a Prada bag, and despite the fact that I don’t think of myself as someone who cares about fashion, something felt different to me. I was continuously aware of the brand on the bag. I was wearing Prada! And it made me feel different; I stood a little straighter and walked with a bit more swagger. I wondered what would happen if I wore Ferrari underwear. Would I feel more invigorated? More confident? More agile? Faster?

I continued walking and passed through Chinatown, which was bustling with activity. Not far away, I spotted an attractive young couple in their twenties taking in the scene. A Chinese man approached them. “Handbags, handbags!” he called, tilting his head to indicate the direction of his small shop. After a moment or two, the woman asked the Chinese man, “You have Prada?”

The vendor nodded. I watched as she conferred with her partner. He smiled at her, and they followed the man to his stand.

The Prada they were referring to, of course, was not actually Prada. Nor were the $5 “designer” sunglasses on display in his stand really Dolce&Gabbana. And the Armani perfumes displayed over by the street food stands? Fakes too.

From Ermine to Armani

Going back a way, ancient Roman law included a set of regulations called sumptuary laws, which filtered down through the centuries into the laws of nearly all European nations. Among other things, the laws dictated who could wear what, according to their station and class. For example, in Renaissance England, only the nobility could wear certain kinds of fur, fabrics, laces, decorative beading per square foot, and so on, while those in the gentry could wear decisively less appealing clothing. (The poorest were generally excluded from the law, as there was little point in regulating musty burlap, wool, and hair shirts.) People who “dressed above their station” were silently, but directly, lying to those around them. And those who broke the law were often hit with fines and other punishments.

What may seem to be an absurd degree of obsessive compulsion on the part of the upper crust was in reality an effort to ensure that people were what they signaled themselves to be; the system was designed to eliminate disorder and confusion. Although our current sartorial class system is not as rigid as it was in the past, the desire to signal success and individuality is as strong today as ever.

When thinking about my experience with the Prada bag, I wondered whether there were other psychological forces related to fakes that go beyond external signaling. There I was in Chinatown holding my real Prada bag, watching the woman emerge from the shop holding her fake one. Despite the fact that I had neither picked out nor paid for mine, it felt to me that there was a substantial difference between the way I related to my bag and the way she related to hers.

More generally, I started wondering about the relationship between what we wear and how we behave, and it made me think about a concept that social scientists call self-signaling. The basic idea behind self-signaling is that despite what we tend to think, we don’t have a very clear notion of who we are. We generally believe that we have a privileged view of our own preferences and character, but in reality we don’t know ourselves that well (and definitely not as well as we think we do). Instead, we observe ourselves in the same way we observe and judge the actions of other people—inferring who we are and what we like from our actions.

For example, imagine that you see a beggar on the street. Rather than ignoring him or giving him money, you decide to buy him a sandwich. The action in itself does not define who you are, your morality, or your character, but you interpret the deed as evidence of your compassionate and charitable character. Now, armed with this “new” information, you start believing more intensely in your own benevolence. That’s self-signaling at work.

The same principle could also apply to fashion accessories. Carrying a real Prada bag—even if no one else knows it is real—could make us think and act a little differently than if we were carrying a counterfeit one. Which brings us to the questions: Does wearing counterfeit products somehow make us feel less legitimate? Is it possible that accessorizing with fakes might affect us in unexpected and negative ways?

Calling All Chloés

I decided to call Freeda and tell her about my recent interest in high fashion. During our conversation, Freeda promised to convince a fashion designer to lend me some items to use in some experiments. A few weeks later, I received a package from the Chloé label containing twenty handbags and twenty pairs of sunglasses. The statement accompanying the package told me that the handbags were estimated to be worth around $40,000 and the sunglasses around $7,000. (The rumor about this shipment quickly traveled around Duke, and I became popular among the fashion-minded crowd.)

With those hot commodities in hand, Francesca Gino, Mike Norton (both professors at Harvard University), and I set about testing whether participants who wore fake products would feel and behave differently from those wearing authentic ones. If our participants felt that wearing fakes would broadcast (even to themselves) a less honorable self-image, we wondered whether they might start thinking of themselves as somewhat less honest. And with this tainted self-concept in mind, would they be more likely to continue down the road of dishonesty?

Using the lure of Chloé accessories, we enlisted many female MBA students for our experiment. We assigned each woman to one of three conditions: authentic, fake or no information. In the authentic condition, we told participants that they would be donning real Chloé designer sunglasses. In the fake condition, we told them that they would be wearing counterfeit sunglasses that looked identical to those made by Chloé (in actuality all the products we used were the real McCoy). Finally, in the no-information condition, we didn’t say anything about the authenticity of the sunglasses.

Once the women donned their sunglasses, we directed them to the hallway, where we asked them to look at different posters and out the windows so that they could later evaluate the quality and experience of looking through their sunglasses. Soon after, we called them into another room for another task.

In this task, the participants were given 20 sets of 12 numbers (3.42, 7.32 and so on), and they were asked to find in each set the two numbers that add up to 10. They had five minutes to solve as many as possible and were paid for each correct answer. We set up the test so that the women could cheat—report that they solved more sets than they did (after shredding their worksheet and all the evidence)—while allowing us to figure out who cheated and by how much (by rigging the shredders so that they only cut the sides of the paper).
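
The mechanics of the matrix task are simple enough to reproduce. Here is a small, purely illustrative Python sketch of the find-the-pair puzzle; the number ranges and the uniqueness check are my assumptions, not the researchers’ actual materials.

```python
import itertools
import random

def make_puzzle(size=12, target=10.0):
    """Build one puzzle: `size` two-decimal numbers containing exactly one
    pair that sums to `target`. (Hypothetical generator, just to show the
    structure of the task.)"""
    while True:
        numbers = [round(random.uniform(0.5, 9.5), 2) for _ in range(size)]
        pairs = [p for p in itertools.combinations(numbers, 2)
                 if abs(sum(p) - target) < 1e-9]
        if len(pairs) == 1:          # keep only sheets with a unique solution
            return numbers

def solve(numbers, target=10.0):
    """Return the pair of numbers adding up to `target`, or None."""
    for a, b in itertools.combinations(numbers, 2):
        if abs(a + b - target) < 1e-9:
            return a, b
    return None

sheet = [make_puzzle() for _ in range(20)]   # twenty puzzles, as in the study
print(solve(sheet[0]))
```

In the experiment, of course, the interesting data were not the solutions themselves but the gap between how many puzzles participants actually solved and how many they claimed to have solved.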

Over the years we have carried out many versions of this experiment, and we repeatedly find that a lot of people cheat by a few questions. This experiment was no different in that regard, but what was particularly interesting was the effect of wearing counterfeits. While “only” 30 percent of the participants in the authentic condition reported solving more matrices than they actually had, 74 percent of those in the fake condition did so. These results gave rise to another interesting question. Did the presumed fakeness of the product make the women cheat more than they naturally would? Or did the genuine Chloé label make them behave more honestly than they would otherwise?

This is why we also had a no-information condition, in which we didn’t mention anything about whether the sunglasses were real or fake. In that condition 42 percent of the women cheated. That result was between the other two, but it was much closer to the authentic condition (in fact, the two conditions were not statistically different from each other). These results suggest that wearing a genuine product does not increase our honesty (or at least not by much). But once we knowingly put on a counterfeit product, moral constraints loosen to some degree, making it easier for us to take further steps down the path of dishonesty.

The moral of the story? If you, your friend, or someone you are dating wears counterfeit products, be careful! Another act of dishonesty may be closer than you expect.

Up to No Good

These results led us to another question: if wearing counterfeits changes the way we view our own behavior, does it also cause us to be more suspicious of others? To find out, we asked another group of participants to put on what we told them were either real or counterfeit Chloé sunglasses. This time, we asked them to fill out a rather long survey with their sunglasses on. In this survey, we included three sets of questions. The questions in set A asked participants to estimate the likelihood that people they know might engage in various ethically questionable behaviors such as standing in the express line with too many groceries. The questions in set B asked them to estimate the likelihood that when people say particular phrases, including “Sorry, I’m late. Traffic was terrible,” they are lying. Set C presented participants with two scenarios depicting someone who has the opportunity to behave dishonestly, and asked them to estimate the likelihood that the person in the scenario would take the opportunity to cheat.

What were the results? You guessed it. When reflecting on the behavior of people they know, participants in the counterfeit condition judged their acquaintances to be more likely to behave dishonestly than did participants in the authentic condition. They also interpreted the list of common excuses as more likely to be lies, and judged the actor in the two scenarios as being more likely to choose the shadier option. We concluded that counterfeit products not only tend to make us more dishonest; they also cause us to view others as less than honest as well.

[div class=attrib]Read the entire article after the jump.[end-div]

Letting Go of Regrets

[div class=attrib]From Mind Matters over at Scientific American:[end-div]

The poem “Maud Muller” by John Greenleaf Whittier aptly ends with the line, “For of all sad words of tongue or pen, The saddest are these: ‘It might have been!’” What if you had gone for the risky investment that you later found out made someone else rich, or if you had had the guts to ask that certain someone to marry you? Certainly, we’ve all had instances in our lives where hindsight makes us regret not sticking our neck out a bit more.

But new research suggests that when we are older these kinds of ‘if only!’ thoughts about the choices we made may not be so good for our mental health. One of the most important determinants of our emotional well being in our golden years might be whether we learn to stop worrying about what might have been.

In a new paper published in Science, researchers from the University Medical Center Hamburg-Eppendorf in Hamburg, Germany, report evidence from two experiments that suggest one key to aging well might involve learning to let go of regrets about missed opportunities. Stefanie Brassen and her colleagues looked at how healthy young participants (mean age: 25.4 years), healthy older participants (65.8 years), and older participants who had developed depression for the first time later in life (65.6 years) dealt with regret, and found that the young and older depressed patients seemed to hold on to regrets about missed opportunities while the healthy older participants seemed to let them go.

To measure regret over missed opportunities, the researchers adapted an established risk taking task into a clever game in which the participants looked at eight wooden boxes lined up in a row on a computer screen and could choose to reveal the contents of the boxes one at a time, from left to right. Seven of the boxes had gold in them, which the participants would earn if they chose to open them. One box, however, had a devil in it. What happens if they open the box with the devil in it? They lose that round and any gold they earned so far with it.

Importantly, the participants could choose to cash out early and keep any gold they had earned up to that point. Doing this would reveal the location of the devil and, along with it, all of the gold they had missed out on. Sometimes this wouldn’t be a big deal, because the devil would be in the next box. No harm, no foul. But sometimes the devil might be several boxes away. In this case, you might have missed out on a lot of potential earnings, and this had the potential to induce feelings of regret.
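
For readers who like to see the arithmetic, here is a small Python sketch of the gamble the devil game poses. It assumes a risk-neutral player, one unit of gold per safe box, and a devil equally likely to sit behind any unopened box; none of those numbers come from the study itself.

```python
from functools import lru_cache

GOLD_PER_BOX = 1.0   # assumed payoff per safe box; the article gives no amounts
N_BOXES = 8          # seven gold boxes plus one devil

@lru_cache(maxsize=None)
def expected_value(opened):
    """Expected earnings for a risk-neutral player who has safely opened
    `opened` boxes and keeps playing optimally, assuming the devil is equally
    likely to be in any box not yet opened."""
    pot = opened * GOLD_PER_BOX
    remaining = N_BOXES - opened          # unopened boxes, one holding the devil
    if remaining <= 1:
        return pot                        # only the devil is left: cash out
    p_safe = (remaining - 1) / remaining  # chance the next box is not the devil
    ev_continue = p_safe * expected_value(opened + 1)   # hitting the devil pays zero
    return max(pot, ev_continue)          # cash out or press on, whichever pays more

for opened in range(N_BOXES):
    print(opened, round(expected_value(opened), 3))
```

Under these made-up assumptions, the calculation says a risk-neutral player should cash out once four boxes are safely open; the study’s interest, though, is in how people feel when cashing out reveals how much further they could have gone.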

In their first experiment, Brassen and colleagues had all of the participants play this ‘devil game’ during a functional magnetic resonance imaging (fMRI) brain scan. They wanted to test whether young participants, depressed older participants, and healthy older participants responded differently to missed opportunities during the game, and whether these differences might also be reflected in activity in one area of the brain called the ventral striatum (an area known to be very active when we experience regret) and another area of the brain called the anterior cingulate (an area known to be active when controlling our emotions).

Brassen and her colleagues found that for healthy older participants, the area of the brain which is usually active during the experience of regret, the ventral striatum, was much less active during rounds of the game where they missed out on a lot of money, suggesting that the healthily aging brains were not processing regret in the same way the young and depressed older brains were. Also, when they looked at the emotion controlling center of the brain, the anterior cingulate, the researchers found that this area was much more active in the healthy older participants than the other two groups. Interestingly, Brassen and her colleagues found that the bigger the missed opportunity, the greater the activity in this area for healthy older participants, which suggests that their brains were actively mitigating their experience of regret.

[div class=attrib]Read the entire article after the jump.[end-div]

Why Daydreaming is Good

Most of us, editor of theDiagonal included, have known this for a while. We’ve known that letting the mind wander aimlessly is crucial to creativity and problem-solving.

[div class=attrib]From Wired:[end-div]

It’s easy to underestimate boredom. The mental condition, after all, is defined by its lack of stimulation; it’s the mind at its most apathetic. This is why the poet Joseph Brodsky described boredom as a “psychological Sahara,” a cognitive desert “that starts right in your bedroom and spurns the horizon.” The hands of the clock seem to stop; the stream of consciousness slows to a drip. We want to be anywhere but here.

However, as Brodsky also noted, boredom and its synonyms can also become a crucial tool of creativity. “Boredom is your window,” the poet declared. “Once this window opens, don’t try to shut it; on the contrary, throw it wide open.”

Brodsky was right. The secret isn’t boredom per se: It’s how boredom makes us think. When people are immersed in monotony, they automatically lapse into a very special form of brain activity: mind-wandering. In a culture obsessed with efficiency, mind-wandering is often derided as a lazy habit, the kind of thinking we rely on when we don’t really want to think. (Freud regarded mind-wandering as an example of “infantile” thinking.) It’s a sign of procrastination, not productivity.

In recent years, however, neuroscience has dramatically revised our views of mind-wandering. For one thing, it turns out that the mind wanders a ridiculous amount. Last year, the Harvard psychologists Daniel Gilbert and Matthew A. Killingsworth published a fascinating paper in Science documenting our penchant for disappearing down the rabbit hole of our own mind. The scientists developed an iPhone app that contacted 2,250 volunteers at random intervals, asking them about their current activity and levels of happiness. It turns out that people were engaged in mind-wandering 46.9 percent of the time. In fact, the only activity in which their minds were not constantly wandering was love making. They were able to focus for that.

What’s happening inside the brain when the mind wanders? A lot. In 2009, a team led by Kalina Christoff of UBC and Jonathan Schooler of UCSB used “experience sampling” inside an fMRI machine to capture the brain in the midst of a daydream. (This condition is easy to induce: After subjects were given an extremely tedious task, they started to mind-wander within seconds.) Although it’s been known for nearly a decade that mind wandering is a metabolically intense process — your cortex consumes lots of energy when thinking to itself — this study further helped to clarify the sequence of mental events:

Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions—two brain systems that so far have been assumed to work in opposition—suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Two things worth noting here. The first is the reference to the default network. The name is literal: We daydream so easily and effortlessly that it appears to be our default mode of thought. The second is the simultaneous activation in executive and default regions, suggesting that mind wandering isn’t quite as mindless as we’d long imagined. (That’s why it seems to require so much executive activity.) Instead, a daydream seems to exist in the liminal space between sleep dreaming and focused attentiveness, in which we are still awake but not really present.

Last week, a team of Austrian scientists expanded on this result in PLoS ONE. By examining 17 patients with unresponsive wakefulness syndrome (UWS), 8 patients in a minimally conscious state (MCS), and 25 healthy controls, the researchers were able to detect the brain differences along this gradient of consciousness. The key difference was an inability among the most unresponsive patients to “deactivate” their default network. This suggests that these poor subjects were trapped within a daydreaming loop, unable to exercise their executive regions to pay attention to the world outside. (Problems with the deactivation of the default network have also been observed in patients with Alzheimer’s and schizophrenia.) The end result is that their mind’s eye is always focused inwards.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A daydreaming gentleman; from an original 1912 postcard published in Germany. Courtesy of Wikipedia.[end-div]

Whitewashing Prejudice One Word at a Time

[div class=attrib]From Salon:[end-div]

The news of recent research documenting how readers identify with the main characters in stories has mostly been taken as confirmation of the value of literary role models. Lisa Libby, an assistant professor at Ohio State University and co-author of a study published in the Journal of Personality and Social Psychology, explained that subjects who read a short story in which the protagonist overcomes obstacles in order to vote were more likely to vote themselves several days later.

The suggestibility of readers isn’t news. Johann Wolfgang von Goethe’s novel of a sensitive young man destroyed by unrequited love, “The Sorrows of Young Werther,” inspired a rash of suicides by would-be Werthers in the late 1700s. Jack Kerouac has launched a thousand road trips. Still, this is part of science’s job: Running empirical tests on common knowledge — if for no other reason than because common knowledge (and common sense) is often wrong.

A far more unsettling finding is buried in this otherwise up-with-reading news item. The Ohio State researchers gave 70 heterosexual male readers stories about a college student much like themselves. In one version, the character was straight. In another, the character was described as gay early in the story. In a third version the character was gay, but this wasn’t revealed until near the end. In each case, the readers’ “experience-taking” — the name these researchers have given to the act of immersing oneself in the perspective, thoughts and emotions of a story’s protagonist — was measured.

The straight readers were far more likely to take on the experience of the main character if they weren’t told until late in the story that he was different from themselves. This, too, is not so surprising. Human beings are notorious for extending more of their sympathy to people they perceive as being of their own kind. But the researchers also found that readers of the “gay-late” story showed “significantly more favorable attitudes toward homosexuals” than the other two groups of readers, and that they were less likely to attribute stereotypically gay traits, such as effeminacy, to the main character. The “gay-late” story actually reduced their biases (conscious or not) against gays, and made them more empathetic. Similar results were found when white readers were given stories about black characters to read.

What can we do with this information? If we subscribe to the idea that literature ought to improve people’s characters — and that’s the sentiment that seems to be lurking behind the study itself — then perhaps authors and publishers should be encouraged to conceal a main character’s race or sexual orientation from readers until they become invested in him or her. Who knows how much J.K. Rowling’s revelation that Albus Dumbledore is gay, announced after the publication of the final Harry Potter book, has helped to combat homophobia? (Although I confess that I find it hard to believe there were that many homophobic Potter fans in the first place.)

[div class=attrib]Read the entire article after the jump.[end-div]

Cocktail Party Science and Multitasking


The hit drama Mad Men shows us that cocktail parties can be fun — colorful drinks and colorful conversations with a host of very colorful characters. Yet cocktail parties also highlight one of our limitations, the inability to multitask. We are single-threaded animals despite the constant and simultaneous demands on our attention from all directions and on all our senses.

Melinda Beck over at the WSJ Health Journal summarizes recent research that shows the deleterious effects of our attempts to multitask — why it’s so hard and why it’s probably not a good idea anyway, especially while driving.

[div class=attrib]From the Wall Street Journal:[end-div]

You’re at a party. Music is playing. Glasses are clinking. Dozens of conversations are driving up the decibel level. Yet amid all those distractions, you can zero in on the one conversation you want to hear.

This ability to hyper-focus on one stream of sound amid a cacophony of others is what researchers call the “cocktail-party effect.” Now, scientists at the University of California, San Francisco have pinpointed where that sound-editing process occurs in the brain—in the auditory cortex just behind the ear, not in areas of higher thought. The auditory cortex boosts some sounds and turns down others so that when the signal reaches the higher brain, “it’s as if only one person was speaking alone,” says principal investigator Edward Chang.

These findings, published in the journal Nature last week, underscore why people aren’t very good at multitasking—our brains are wired for “selective attention” and can focus on only one thing at a time. That innate ability has helped humans survive in a world buzzing with visual and auditory stimulation. But we keep trying to push the limits with multitasking, sometimes with tragic consequences. Drivers talking on cellphones, for example, are four times as likely to get into traffic accidents as those who aren’t.

Many of those accidents are due to “inattentional blindness,” in which people can, in effect, turn a blind eye to things they aren’t focusing on. Images land on our retinas and are either boosted or played down in the visual cortex before being passed to the brain, just as the auditory cortex filters sounds, as shown in the Nature study last week. “It’s a push-pull relationship—the more we focus on one thing, the less we can focus on others,” says Diane M. Beck, an associate professor of psychology at the University of Illinois.

That people can be completely oblivious to things in their field of vision was demonstrated famously in the “Invisible Gorilla experiment” devised at Harvard in the 1990s. Observers are shown a short video of youths tossing a basketball and asked to count how often the ball is passed by those wearing white. Afterward, the observers are asked several questions, including, “Did you see the gorilla?” Typically, about half the observers failed to notice that someone in a gorilla suit walked through the scene. They’re usually flabbergasted because they’re certain they would have noticed something like that.

“We largely see what we expect to see,” says Daniel Simons, one of the study’s creators and now a professor of psychology at the University of Illinois. As he notes in his subsequent book, “The Invisible Gorilla,” the more attention a task demands, the less attention we can pay to other things in our field of vision. That’s why pilots sometimes fail to notice obstacles on runways and radiologists may overlook anomalies on X-rays, especially in areas they aren’t scrutinizing.

And it isn’t just that sights and sounds compete for the brain’s attention. All the sensory inputs vie to become the mind’s top priority.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Getty Images / Wall Street Journal.[end-div]

The Benefits of Bilingualism

[div class=attrib]From the New York Times:[end-div]

SPEAKING two languages rather than just one has obvious practical benefits in an increasingly globalized world. But in recent years, scientists have begun to show that the advantages of bilingualism are even more fundamental than being able to converse with a wider range of people. Being bilingual, it turns out, makes you smarter. It can have a profound effect on your brain, improving cognitive skills not related to language and even shielding against dementia in old age.

This view of bilingualism is remarkably different from the understanding of bilingualism through much of the 20th century. Researchers, educators and policy makers long considered a second language to be an interference, cognitively speaking, that hindered a child’s academic and intellectual development.

They were not wrong about the interference: there is ample evidence that in a bilingual’s brain both language systems are active even when he is using only one language, thus creating situations in which one system obstructs the other. But this interference, researchers are finding out, isn’t so much a handicap as a blessing in disguise. It forces the brain to resolve internal conflict, giving the mind a workout that strengthens its cognitive muscles.

Bilinguals, for instance, seem to be more adept than monolinguals at solving certain kinds of mental puzzles. In a 2004 study by the psychologists Ellen Bialystok and Michelle Martin-Rhee, bilingual and monolingual preschoolers were asked to sort blue circles and red squares presented on a computer screen into two digital bins — one marked with a blue square and the other marked with a red circle.

In the first task, the children had to sort the shapes by color, placing blue circles in the bin marked with the blue square and red squares in the bin marked with the red circle. Both groups did this with comparable ease. Next, the children were asked to sort by shape, which was more challenging because it required placing the images in a bin marked with a conflicting color. The bilinguals were quicker at performing this task.

The collective evidence from a number of such studies suggests that the bilingual experience improves the brain’s so-called executive function — a command system that directs the attention processes that we use for planning, solving problems and performing various other mentally demanding tasks. These processes include ignoring distractions to stay focused, switching attention willfully from one thing to another and holding information in mind — like remembering a sequence of directions while driving.

Why does the tussle between two simultaneously active language systems improve these aspects of cognition? Until recently, researchers thought the bilingual advantage stemmed primarily from an ability for inhibition that was honed by the exercise of suppressing one language system: this suppression, it was thought, would help train the bilingual mind to ignore distractions in other contexts. But that explanation increasingly appears to be inadequate, since studies have shown that bilinguals perform better than monolinguals even at tasks that do not require inhibition, like threading a line through an ascending series of numbers scattered randomly on a page.

The key difference between bilinguals and monolinguals may be more basic: a heightened ability to monitor the environment. “Bilinguals have to switch languages quite often — you may talk to your father in one language and to your mother in another language,” says Albert Costa, a researcher at the University of Pompeu Fabra in Spain. “It requires keeping track of changes around you in the same way that we monitor our surroundings when driving.” In a study comparing German-Italian bilinguals with Italian monolinguals on monitoring tasks, Mr. Costa and his colleagues found that the bilingual subjects not only performed better, but they also did so with less activity in parts of the brain involved in monitoring, indicating that they were more efficient at it.

[div class=attrib]Read more after the jump.[end-div]

[div class=attrib]Image courtesy of Futurity.org.[end-div]

Male Brain + Female = Jello

[div class=attrib]From Scientific American:[end-div]

In one experiment, just telling a man he would be observed by a female was enough to hurt his psychological performance.

Movies and television shows are full of scenes where a man tries unsuccessfully to interact with a pretty woman. In many cases, the potential suitor ends up acting foolishly despite his best attempts to impress. It seems like his brain isn’t working quite properly and according to new findings, it may not be.

Researchers have begun to explore the cognitive impairment that men experience before and after interacting with women. A 2009 study demonstrated that after a short interaction with an attractive woman, men experienced a decline in mental performance. A more recent study suggests that this cognitive impairment takes hold even when men simply anticipate interacting with a woman they know very little about.

Sanne Nauts and her colleagues at Radboud University Nijmegen in the Netherlands ran two experiments using men and women university students as participants. They first collected a baseline measure of cognitive performance by having the students complete a Stroop test. Developed in 1935 by the psychologist John Ridley Stroop, the test is a common way of assessing our ability to process competing information. The test involves showing people a series of words describing different colors that are printed in different colored inks. For example, the word “blue” might be printed in green ink and the word “red” printed in blue ink. Participants are asked to name, as quickly as they can, the color of the ink that the words are written in. The test is cognitively demanding because our brains can’t help but process the meaning of the word along with the color of the ink. When people are mentally tired, they tend to complete the task at a slower rate.
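
The structure of a Stroop block is easy to mock up. Below is a small, illustrative Python sketch that generates congruent and incongruent trials; the colour list and trial count are arbitrary choices, not the materials used by Nauts and her colleagues.

```python
import random

COLORS = ["red", "blue", "green", "yellow"]   # illustrative colour set

def stroop_trial(congruent):
    """Return (word, ink_colour); the participant must name the ink colour."""
    word = random.choice(COLORS)
    ink = word if congruent else random.choice([c for c in COLORS if c != word])
    return word, ink

# A small mixed block: incongruent trials (word and ink clash) are the ones
# that slow people down, and they slow people down further when mentally fatigued.
for _ in range(10):
    word, ink = stroop_trial(congruent=random.random() < 0.5)
    print(f"word: {word:<7} ink: {ink}")
```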

After completing the Stroop test, participants in Nauts’ study were asked to take part in another supposedly unrelated task. They were asked to read out loud a number of Dutch words while sitting in front of a webcam. The experimenters told them that during this “lip reading task” an observer would watch them over the webcam. The observer was given either a common male or female name. Participants were led to believe that this person would see them over the webcam, but they would not be able to interact with the person. No pictures or other identifying information were provided about the observer—all the participants knew was his or her name. After the lip reading task, the participants took another Stroop test. Women’s performance on the second test did not differ, regardless of the gender of their observer. However, men who thought a woman was observing them ended up performing worse on the second Stroop test. This cognitive impairment occurred even though the men had not interacted with the female observer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock/Iconogenic.[end-div]

Our Children: Independently Dependent

Why can’t our kids tie their own shoes?

Are we raising our children to be self-obsessed, attention-seeking, helpless and dependent groupthinkers? And, why may the phenomenon of “family time” in the U.S. be a key culprit?

These are some of the questions raised by anthropologist Elinor Ochs and her colleagues. Over the last decade they have studied family life across the globe, from the Amazon region to Samoa to middle America.

[div class=attrib]From the Wall Street Journal:[end-div]

Why do American children depend on their parents to do things for them that they are capable of doing for themselves? How do U.S. working parents’ views of “family time” affect their stress levels? These are just two of the questions that researchers at UCLA’s Center on Everyday Lives of Families, or CELF, are trying to answer in their work.

By studying families at home—or, as the scientists say, “in vivo”—rather than in a lab, they hope to better grasp how families with two working parents balance child care, household duties and career, and how this balance affects their health and well-being.

The center, which also includes sociologists, psychologists and archeologists, wants to understand “what the middle class thought, felt and what they did,” says Dr. Ochs. The researchers plan to publish two books this year on their work, and say they hope the findings may help families become closer and healthier.

Ten years ago, the UCLA team recorded video for a week of nearly every moment at home in the lives of 32 Southern California families. They have been picking apart the footage ever since, scrutinizing behavior, comments and even their refrigerators’ contents for clues.

The families, recruited primarily through ads, owned their own homes and had two or three children, at least one of whom was between 7 and 12 years old. About a third of the families had at least one nonwhite member, and two were headed by same-sex couples. Each family was filmed by two cameras and watched all day by at least three observers.

Among the findings: The families had a very child-centered focus, which may help explain the “dependency dilemma” seen among American middle-class families, says Dr. Ochs. Parents intend to develop their children’s independence, yet raise them to be relatively dependent, even when the kids have the skills to act on their own, she says.

In addition, these parents tended to have a very specific, idealized way of thinking about family time, says Tami Kremer-Sadlik, a former CELF research director who is now the director of programs for the division of social sciences at UCLA. These ideals appeared to generate guilt when work intruded on family life, and left parents feeling pressured to create perfect time together. The researchers noted that the presence of the observers may have altered some of the families’ behavior.

How kids develop moral responsibility is an area of focus for the researchers. Dr. Ochs, who began her career in far-off regions of the world studying the concept of “baby talk,” noticed that American children seemed relatively helpless compared with those in other cultures she and colleagues had observed.

In those cultures, young children were expected to contribute substantially to the community, says Dr. Ochs. Children in Samoa serve food to their elders, waiting patiently in front of them before they eat, as shown in one video snippet. Another video clip shows a girl around 5 years of age in Peru’s Amazon region climbing a tall tree to harvest papaya, and helping haul logs thicker than her leg to stoke a fire.

By contrast, the U.S. videos showed Los Angeles parents focusing more on the children, using simplified talk with them, doing most of the housework and intervening quickly when the kids had trouble completing a task.

In 22 of 30 families, children frequently ignored or resisted appeals to help, according to a study published in the journal Ethos in 2009. In the remaining eight families, the children weren’t asked to do much. In some cases, the children routinely asked the parents to do tasks, like getting them silverware. “How am I supposed to cut my food?” Dr. Ochs recalls one girl asking her parents.

Asking children to do a task led to much negotiation, and when parents asked, it sounded often like they were asking a favor, not making a demand, researchers said. Parents interviewed about their behavior said it was often too much trouble to ask.

For instance, one exchange caught on video shows an 8-year-old named Ben sprawled out on a couch near the front door, lifting his white, high-top sneaker to his father, the shoe laced. “Dad, untie my shoe,” he pleads. His father says Ben needs to say “please.”

“Please untie my shoe,” says the child in an identical tone as before. After his father hands the shoe back to him, Ben says, “Please put my shoe on and tie it,” and his father obliges.

[div class=attrib]Read the entire article after the jump:[end-div]

[div class=attrib]Image courtesy of Kyle T. Webster / Wall Street Journal.[end-div]

Daddy’s Girl, Yes; Mother’s Boy, No

Western social norms tolerate a strong bond between father and daughter; it’s OK to be a daddy’s girl. Yet, for a mother’s boy, and for mothers of mothers’ boys, it’s a different story. In fact, a strong bond between mother and son is frequently looked upon with derision. Just check out the “mother’s boy” definition on Wikipedia; there’s no formal entry for “daddy’s girl.”

Why is this, and is it right?

Excerpts below are from the forthcoming book “The Mama’s Boy Myth” by Kate Stone Lombardi.

[div class=attrib]From the Wall Street Journal:[end-div]

My daughter Jeanie and I use Google chat throughout the day to discuss work, what we had for lunch, how we’re avoiding the gym, and emotional issues big and small. We may also catch up by phone in the evening. I can open up to Jeanie about certain things that I wouldn’t share with another soul, and I believe she would say the same about me. We are very close, which you probably won’t find particularly surprising or alarming.

Now switch genders. Suppose I told you that I am very close to my son, Paul. That I love hanging out with him and that we have dozens of inside jokes and shared traditions. Even though we speak frequently, I get a little thrill each time I hear his signature ringtone on my cellphone. Next, I confess that Paul is so sensitive and intuitive that he “gets me” in a very special way.

Are you starting to speculate that something is a little off? Are you getting uncomfortable about the kind of guy my son is growing up to be?

For generations mothers have gotten one message: that keeping their sons close is wrong, possibly even dangerous. A mother who fosters a deep emotional bond with her son, we’ve been told, is setting him up to be weak and effeminate—an archetypal mama’s boy. He’ll never be independent or able to form healthy adult relationships. As the therapist and child-rearing guru Michael Gurian wrote in his 1994 book about mothers and sons, “a mother’s job…is very much to hold back the coming of manhood.” A well-adjusted, loving mother is one who gradually but surely pushes her son away, both emotionally and physically, in order to allow him to become a healthy man.

This was standard operating procedure for our mothers, our grandmothers and even our great-grandmothers. Amazingly, we’re still encouraged to buy this parenting advice today.

Somehow, when so many of our other beliefs about the roles of men and women have been revolutionized, our view of the mother-son relationship has remained frozen in time. We’ve dramatically changed the way we raise our daughters, encouraging them to be assertive, play competitive sports and aim high in their educational and professional ambitions. We don’t fret about “masculinizing” our girls.

As for daughters and their fathers, while a “mama’s boy” may be a reviled creature, people tend to look tolerantly on a “daddy’s girl.” A loving and supportive father is considered essential to a girl’s self-esteem. Fathers are encouraged to be involved in their daughters’ lives, whether it’s coaching their soccer teams or escorting their teenage girls to father-daughter dances. A father who flouts gender stereotypes and teaches his daughter a traditionally masculine task—say, rebuilding a car engine—is considered to be pretty cool. But a mother who does something comparable—like teaching her son to knit or even encouraging him to talk more openly about his feelings—is looked at with contempt. What is she trying to do to that boy?

Many mothers are confused and anxious when it comes to raising boys. Should they defer to their husband when he insists that she stop kissing their first-grade son at school drop-off? If she cuddles her 10-year-old boy when he is hurt, will she turn him into a wimp? If she keeps him too close, will she make him gay? If her teenage boy is crying in his room, should she go in and comfort him, or will this embarrass and shame him? Anthony E. Wolf, a child psychologist and best-selling author, warns us that “strong emotional contact with his mother is especially upsetting to any teenage boy.”

None of these fears, however, is based on any actual science. In fact, research shows that boys suffer when they separate prematurely from their mothers and benefit from closeness in myriad ways throughout their lives.

A study published in Child Development involving almost 6,000 children, age 12 and younger, found that boys who were insecurely attached to their mothers behaved more aggressively and with more hostility later in childhood—kicking and hitting others, yelling, disobeying adults and being generally destructive.

A study of more than 400 middle school boys revealed that sons who were close to their mothers were less likely to define masculinity as being physically tough, stoic and self-reliant. They not only remained more emotionally open, forming stronger friendships, but they also were less depressed and anxious than their more macho classmates. And they were getting better grades.

There is evidence that a strong mother-son bond prevents delinquency in adolescence. And though it has been long established that teenagers who have good communication with their parents are more likely to resist negative peer pressure, new research shows that it is a boy’s mother who is the most influential when it comes to risky behavior, not only with alcohol and drugs but also in preventing both early and unprotected sex.

Finally, there are no reputable scientific studies suggesting that a boy’s sexual orientation can be altered by his mother, no matter how much she loves him.

[div class=attrib]Read the entire article here.[end-div]

Need Creative Inspiration? Take a New Route to Work

[div class=attrib]From Miller-McCune:[end-div]

Want to boost your creativity? Tomorrow morning, pour some milk into an empty bowl, and then add the cereal.

That may sound, well, flaky. But according to a newly published study, preparing a common meal in reverse order may stimulate innovative thinking.

Avoiding conventional behavior at the breakfast table “can help people break their cognitive patterns, and thus lead them to think more flexibly and creatively,” according to a research team led by psychologist Simone Ritter of Radboud University Nijmegen in the Netherlands.

She and her colleagues, including Rodica Ioana Damian of the University of California, Davis, argue that “active involvement in an unusual event” can trigger higher levels of creativity. They note this activity can take many forms, from studying abroad for a semester to coping with the unexpected death of a loved one.

But, writing in the Journal of Experimental Social Psychology, they provide evidence that something simpler will suffice.

The researchers describe an experiment in which Dutch university students were asked to prepare a breakfast sandwich popular in the Netherlands.

Half of them did so in the conventional manner: They put a slice of bread on a plate, buttered the bread and then placed chocolate chips on top. The others — prompted by a script on a computer screen — first put chocolate chips on a plate, then buttered a slice of bread and finally “placed the bread butter-side-down on the dish with the chocolate chips.”

After completing their culinary assignment, they turned their attention to the “Unusual Uses Task,” a widely used measure of creativity. They were given two minutes to generate uses for a brick and another two minutes to come up with as many answers as they could to the question: “What makes sound?”

“Cognitive flexibility” was scored not by counting how many answers they came up with, but rather by the number of categories those answers fell into. For the “What makes sound?” test, a participant whose answers were all animals or machines received a score of one, while someone whose list included “dog,” “car” and “ocean” received a three.
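
The scoring rule is straightforward to express in code. Here is a toy Python sketch of the category-counting idea; the lookup table of categories is invented for illustration, whereas the real study relied on human coders.

```python
# Hypothetical category labels for a few "What makes sound?" answers.
CATEGORY = {
    "dog": "animal", "cat": "animal", "bird": "animal",
    "car": "machine", "train": "machine",
    "ocean": "nature", "wind": "nature",
}

def flexibility_score(answers):
    """Score = number of distinct categories the answers fall into."""
    return len({CATEGORY[a] for a in answers if a in CATEGORY})

print(flexibility_score(["dog", "cat", "bird"]))    # 1 -- all animals
print(flexibility_score(["dog", "car", "ocean"]))   # 3 -- three different categories
```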

“A high cognitive flexibility score indicates an ability to switch between categories, overcome fixedness, and thus think more creatively,” Ritter and her colleagues write.

On both tests, those who made their breakfast treat backwards had higher scores. Breaking their normal sandwich-making pattern apparently opened them up; their minds wandered more freely, allowing for more innovative thought.

[div class=attrib]Read the entire article here.[end-div]

What’s in a Name?

Are you a Leszczynska or a Bob? And, do you wish to be liked? Well, sorry Leszczynska. It turns out that having an easily pronounceable name makes you more likable.

[div class=attrib]From Wired:[end-div]

Though it might seem impossible, and certainly inadvisable, to judge a person by their name, a new study suggests our brains try anyway.

The more pronounceable a person’s name is, the more likely people are to favor them.

“When we can process a piece of information more easily, when it’s easier to comprehend, we come to like it more,” said psychologist Adam Alter of New York University and co-author of a Journal of Experimental Social Psychology study published in December.

Fluency, the idea that the brain favors information that’s easy to use, dates back to the 1960s, when researchers found that people most liked images of Chinese characters if they’d seen them many times before.

Researchers since then have explored other roles that names play, how they affect our judgment and to what degree.

Studies have shown, for example, that people can partly predict a person’s income and education using only their first name. Childhood is perhaps the richest area for name research: Boys with girls’ names are more likely to be suspended from school. And the less popular a name is, the more likely a child is to be delinquent.

In 2005, Alter and his colleagues explored how pronounceability of company names affects their performance in the stock market. Stripped of all obvious influences, they found companies with simpler names and ticker symbols traded better than the stocks of more difficult-to-pronounce companies.

“The effect is often very, very hard to quantify because so much depends on context, but it’s there and measurable,” Alter said. “You can’t avoid it.”

But how much does pronunciation guide our perceptions of people? To find out, Alter and colleagues Simon Laham and Peter Koval of the University of Melbourne carried out five studies.

In the first, they asked 19 female and 16 male college students to rank 50 surnames according to their ease or difficulty of pronunciation, and according to how much they liked or disliked them. In the second, they had 17 female and 7 male students vote for hypothetical political candidates solely on the basis of their names. In the third, they asked 55 female and 19 male students to vote on candidates about whom they knew both names and some political positions.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Dave Mosher/Wired.[end-div]

Salad Bar Strategies

It turns out that human behavior at the ubiquitous, self-serve salad bar in your suburban restaurant or hotel is a rather complex affair. There is a method to optimizing the type and quantity of food on one’s plate.

[div class=attrib]From the New Scientist:[end-div]

Competition, greed and skulduggery are the name of the game if you want to eat your fill. Smorgasbord behaviour is surprisingly complex.

A mathematician, an engineer and a psychologist go up to a buffet… No, it’s not the start of a bad joke.

While most of us would dive into the sandwiches without thinking twice, these diners see a groaning table as a welcome opportunity to advance their research.

Look behind the salads, sausage rolls and bite-size pizzas and it turns out that buffets are a microcosm of greed, sexual politics and altruism – a place where our food choices are driven by factors we’re often unaware of. Understand the science and you’ll see buffets very differently next time you fill your plate.

The story starts with Lionel Levine of Cornell University in Ithaca, New York, and Katherine Stange of Stanford University, California. They were sharing food at a restaurant one day, and wondered: do certain choices lead to tastier platefuls when food must be divided up? You could wolf down everything in sight, of course, but these guys are mathematicians, so they turned to a more subtle approach: game theory.

Applying mathematics to a buffet is harder than it sounds, so they started by simplifying things. They modelled two people taking turns to pick items from a shared platter – hardly a buffet, more akin to a polite tapas-style meal. It was never going to generate a strategy for any occasion, but hopefully useful principles would nonetheless emerge. And for their bellies, the potential rewards were great.

First they assumed that each diner would have individual preferences. One might place pork pie at the top and beetroot at the bottom, for example, while others might salivate over sausage rolls. That ranking can be plugged into calculations by giving each food item a score, where higher-ranked foods are worth more points. The most enjoyable buffet meal would be the one that scores highest in total.

In some scenarios, the route to the most enjoyable plate was straightforward. If both people shared the same rankings, they should pick their favourites first. But Levine and Stange also uncovered a counter-intuitive effect: it doesn’t always pay to take the favourite item first. To devise an optimum strategy, they say, you should take into account what your food rival considers to be the worst food on the table.

If that makes your brow furrow, consider this: if you know your fellow diner hates chicken legs, you know that can be the last morsel you aim to eat – even if it’s one of your favourites. In principle, if you had full knowledge of your food rival’s preferences, it would be possible to work backwards from their least favourite and identify the optimum order in which to fill your plate, according to the pair’s calculations, which will appear in American Mathematical Monthly (arxiv.org/abs/1104.0961).

So how do you know what to select first? In reality, the buffet might be long gone before you have worked it out. Even if you did, the researchers’ strategy also assumes that you are at a rather polite buffet, taking turns, so it has its limitations. However, it does provide practical advice in some scenarios. For example, imagine Amanda is up against Brian, who she knows has the opposite ranking of tastes to her. Amanda loves sausages, hates pickled onions, and is middling about quiche. Brian loves pickled onions, hates sausages, and shares the same view of quiche. Having identified that her favourites are safe, Amanda should prioritise the morsels where their taste rankings match – the quiche, in other words.
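
That backward-looking logic can be checked with a brute-force search over alternating picks. The Python sketch below uses a hypothetical four-item platter; the menu and the scores are made up for illustration and do not come from Levine and Stange’s paper.

```python
from functools import lru_cache

# Hypothetical taste scores (higher = tastier), invented for this example.
AMANDA = {"sausage": 4, "quiche": 3, "spring roll": 2, "pickled onion": 1}
BRIAN  = {"quiche": 4, "spring roll": 3, "pickled onion": 2, "sausage": 1}

@lru_cache(maxsize=None)
def totals(remaining, amandas_turn):
    """Best achievable (Amanda, Brian) score totals when both diners pick
    optimally, taking turns, from the frozenset `remaining`."""
    if not remaining:
        return (0, 0)
    scores = AMANDA if amandas_turn else BRIAN
    me = 0 if amandas_turn else 1
    best = None
    for item in remaining:
        outcome = list(totals(remaining - {item}, not amandas_turn))
        outcome[me] += scores[item]                 # current diner eats this item
        if best is None or outcome[me] > best[me]:
            best = tuple(outcome)
    return best

print(totals(frozenset(AMANDA), True))              # optimal play for both diners
# Compare with the greedy opening of grabbing the favourite sausage first:
rest = totals(frozenset(AMANDA) - {"sausage"}, False)
print((rest[0] + AMANDA["sausage"], rest[1]))
```

With these invented numbers, grabbing the sausage first leaves Amanda with a lower total than starting with the contested quiche, because the sausage, being Brian’s least favourite, is safe to leave for later; that is exactly the counter-intuitive effect described above.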

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: salad bars. Courtesy of Google search.[end-div]

Social Influence Through Social Media: Not!

Online social networks are an unprecedentedly rich source of material for psychologists, social scientists and observers of human behavior. Now a recent study shows that influence through these networks may not be as powerful or widespread as first thought. The study, “Social Selection and Peer Influence in an Online Social Network,” by Kevin Lewis, Marco Gonzalez and Jason Kaufman is available here.

[div class=attrib]From the Wall Street Journal:[end-div]

Social media gives ordinary people unprecedented power to broadcast their taste in movies, books and music, but for the most part those tastes don’t rub off on other people, a new study of college students finds. Instead, social media appears to strengthen our bonds with people whose tastes already resemble ours.

Researchers followed the Facebook pages and networks of some 1,000 students, at one college, for four years (looking only at public information). The strongest determinant of Facebook friendship was “mere propinquity” — living in the same building, studying the same subject—but people also self-segregated by gender, race, socioeconomic background and place of origin.

When it came to culture, researchers used an algorithm to identify taste “clusters” within the categories of music, movies, and books. They learned that fans of “lite/classic rock” and “classical/jazz” were significantly more likely than chance would predict to form and maintain friendships, as were devotees of films featuring “dark satire” or “raunchy comedy / gore.” But this was the case for no other music or film genre — and for no books.

What’s more, “jazz/classical” was the only taste to spread from people who possessed it to those who lacked it. The researchers suggest that this is because liking jazz and classical music serves as a class marker, one that college-age people want to acquire. (I’d prefer to believe that they adopt those tastes on aesthetic grounds, but who knows?) “Indie/alt” music, in fact, was the opposite of contagious: People whose friends liked that style of music tended to drop that preference themselves, over time.

[div class=attrib]Read the entire article here.[end-div]

Walking Through Doorways and Forgetting

[div class=attrib]From Scientific American:[end-div]

The French poet Paul Valéry once said, “The purpose of psychology is to give us a completely different idea of the things we know best.”  In that spirit, consider a situation many of us will find we know too well:  You’re sitting at your desk in your office at home. Digging for something under a stack of papers, you find a dirty coffee mug that’s been there so long it’s eligible for carbon dating.  Better wash it. You pick up the mug, walk out the door of your office, and head toward the kitchen.  By the time you get to the kitchen, though, you’ve forgotten why you stood up in the first place, and you wander back to your office, feeling a little confused—until you look down and see the cup.

So there’s the thing we know best:  The common and annoying experience of arriving somewhere only to realize you’ve forgotten what you went there to do.  We all know why such forgetting happens: we didn’t pay enough attention, or too much time passed, or it just wasn’t important enough.  But a “completely different” idea comes from a team of researchers at the University of Notre Dame.  The first part of their paper’s title sums it up:  “Walking through doorways causes forgetting.”

Gabriel Radvansky, Sabine Krawietz and Andrea Tamplin seated participants in front of a computer screen running a video game in which they could move around using the arrow keys. In the game, they would walk up to a table with a colored geometric solid sitting on it. Their task was to pick up the object and take it to another table, where they would put the object down and pick up a new one. Whichever object they were currently carrying was invisible to them, as if it were in a virtual backpack.

Sometimes, to get to the next object the participant simply walked across the room. Other times, they had to walk the same distance, but through a door into a new room. From time to time, the researchers gave them a pop quiz, asking which object was currently in their backpack. The quiz was timed so that when they walked through a doorway, they were tested right afterwards. As the title said, walking through doorways caused forgetting: Their responses were both slower and less accurate when they’d walked through a doorway into a new room than when they’d walked the same distance within the same room.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Doorway, Titicaca, Bolivia. Courtesy of M.Gerra Assoc.[end-div]

The Psychology of Gift Giving

[div class=attrib]From the Wall Street Journal:[end-div]

Many of my economist friends have a problem with gift-giving. They view the holidays not as an occasion for joy but as a festival of irrationality, an orgy of wealth-destruction.

Rational economists fixate on a situation in which, say, your Aunt Bertha spends $50 on a shirt for you, and you end up wearing it just once (when she visits). Her hard-earned cash has evaporated, and you don’t even like the present! One much-cited study estimated that as much as a third of the money spent on Christmas is wasted, because recipients assign a value lower than the retail price to the gifts they receive. Rational economists thus make a simple suggestion: Give cash or give nothing.
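
The “wasted” figure is simple arithmetic: waste, in this argument, is the gap between what givers spend and what recipients would have paid for the same items. A toy calculation with invented figures shows how a gap of roughly a third can arise.

```python
# Sketch of the deadweight-loss arithmetic behind the claim: if a recipient
# values a gift at less than its retail price, the difference is value that
# simply evaporates. All prices and valuations below are invented.

gifts = [
    {"item": "shirt from Aunt Bertha", "price": 50.0, "recipient_value": 15.0},
    {"item": "novel",                  "price": 20.0, "recipient_value": 18.0},
    {"item": "singing lessons",        "price": 80.0, "recipient_value": 70.0},
]

spent = sum(g["price"] for g in gifts)
valued = sum(g["recipient_value"] for g in gifts)
loss = spent - valued

print(f"spent: ${spent:.2f}, valued by recipients: ${valued:.2f}")
print(f"deadweight loss: ${loss:.2f} ({loss / spent:.0%} of spending)")
```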

But behavioral economics, which draws on psychology as well as on economic theory, is much more appreciative of gift giving. Behavioral economics better understands why people (rightly, in my view) don’t want to give up the mystery, excitement and joy of gift giving.

In this view, gifts aren’t irrational. It’s just that rational economists have failed to account for their genuine social utility. So let’s examine the rational and irrational reasons to give gifts.

Some gifts, of course, are basically straightforward economic exchanges. This is the case when we buy a nephew a package of socks because his mother says he needs them. It is the least exciting kind of gift but also the one that any economist can understand.

A second important kind of gift is one that tries to create or strengthen a social connection. The classic example is when somebody invites us for dinner and we bring something for the host. It’s not about economic efficiency. It’s a way to express our gratitude and to create a social bond with the host.

Another category of gift, which I like a lot, is what I call “paternalistic” gifts—things you think somebody else should have. I like a certain Green Day album or Julian Barnes novel or the book “Predictably Irrational,” and I think that you should like it, too. Or I think that singing lessons or yoga classes will expand your horizons—and so I buy them for you.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Google search results for ‘gifts’.[end-div]

Would You Let An Atheist Teacher Babysit Your Children?

For adults living in North America, the answer may well be that they would sooner trust a rapist teacher with the job than an atheist one. Startling as that may seem, the conclusion is backed by some real science, excerpted below.

[div class=attrib]From the Washington Post:[end-div]

A new study finds that atheists are among society’s most distrusted groups, comparable even to rapists in certain circumstances.

Psychologists at the University of British Columbia and the University of Oregon say that their study demonstrates that anti-atheist prejudice stems from moral distrust, not dislike, of nonbelievers.

“It’s pretty remarkable,” said Azim Shariff, an assistant professor of psychology at the University of Oregon and a co-author of the study, which appears in the current issue of Journal of Personality and Social Psychology.

The study, conducted among 350 American adults and 420 Canadian college students, described a fictional driver who damaged a parked car and left the scene, then found a wallet and took the money. Participants were asked whether the driver was more likely to be a teacher, an atheist teacher, or a rapist teacher.

The participants, who were from religious and nonreligious backgrounds, most often chose the atheist teacher.

The study is part of an attempt to understand what needs religion fulfills in people. One of them, it suggests, is a sense of trust in others.

“People find atheists very suspect,” Shariff said. “They don’t fear God so we should distrust them; they do not have the same moral obligations of others. This is a common refrain against atheists. People fear them as a group.”

[div class=attrib]Follow the entire article here.[end-div]

[div class=attrib]Image: Ariane Sherine and Professor Richard Dawkins pose in front of a London bus featuring an atheist advertisement with the slogan “There’s probably no God. Now stop worrying and enjoy your life”. Courtesy Heathcliff O’Malley / Daily Telegraph.[end-div]