Inside the Weird Teenage Brain

[div class=attrib]From the Wall Street Journal:[end-div]

“What was he thinking?” It’s the familiar cry of bewildered parents trying to understand why their teenagers act the way they do.

How does the boy who can thoughtfully explain the reasons never to drink and drive end up in a drunken crash? Why does the girl who knows all about birth control find herself pregnant by a boy she doesn’t even like? What happened to the gifted, imaginative child who excelled through high school but then dropped out of college, drifted from job to job and now lives in his parents’ basement?

Adolescence has always been troubled, but for reasons that are somewhat mysterious, puberty is now kicking in at an earlier and earlier age. A leading theory points to changes in energy balance as children eat more and move less.

At the same time, first with the industrial revolution and then even more dramatically with the information revolution, children have come to take on adult roles later and later. Five hundred years ago, Shakespeare knew that the emotionally intense combination of teenage sexuality and peer-induced risk could be tragic—witness “Romeo and Juliet.” But, on the other hand, if not for fate, 13-year-old Juliet would have become a wife and mother within a year or two.

Our Juliets (as parents longing for grandchildren will recognize with a sigh) may experience the tumult of love for 20 years before they settle down into motherhood. And our Romeos may be poetic lunatics under the influence of Queen Mab until they are well into graduate school.

What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.

The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again.

The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.

Recent studies in the neuroscientist B.J. Casey’s lab at Cornell University suggest that adolescents aren’t reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

From an evolutionary point of view, this all makes perfect sense. One of the most distinctive evolutionary features of human beings is our unusually long, protected childhood. Human children depend on adults for much longer than those of any other primate. That long protected period also allows us to learn much more than any other animal. But eventually, we have to leave the safe bubble of family life, take what we learned as children and apply it to the real adult world.

Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.

[div class=attrib]Read more here.[end-div]

See the Aurora, then Die

One item that features prominently on so-called “things-to-do-before-you-die” lists is seeing the Aurora Borealis, or Northern Lights.

The recent surge in sunspot activity and solar flares has caused a corresponding uptick in geomagnetic storms here on Earth. The resulting aurorae have been nothing short of spectacular. More images here, courtesy of Smithsonian magazine.

Do We Need Philosophy Outside of the Ivory Tower?

In her song “What I Am”, Edie Brickell reminds us that philosophy is “the talk on a cereal box” and “a walk on the slippery rocks”.

Philosopher Gary Gutting makes the case that the discipline is more important than ever, and that, yes, it belongs in the mainstream consciousness and not just within the confines of academia.

[div class=attrib]From the New York Times:[end-div]

Almost every article that appears in The Stone provokes some comments from readers challenging the very idea that philosophy has anything relevant to say to non-philosophers.  There are, in particular, complaints that philosophy is an irrelevant “ivory-tower” exercise, useless to any except those interested in logic-chopping for its own sake.

There is an important conception of philosophy that falls to this criticism.  Associated especially with earlier modern philosophers, particularly René Descartes, this conception sees philosophy as the essential foundation of the beliefs that guide our everyday life.  For example, I act as though there is a material world and other people who experience it as I do.   But how do I know that any of this is true?  Couldn’t I just be dreaming of a world outside my thoughts?  And, since (at best) I see only other human bodies, what reason do I have to think that there are any minds connected to those bodies?  To answer these questions, it would seem that I need rigorous philosophical arguments for my existence and the existence of other thinking humans.

Of course, I don’t actually need any such arguments, if only because I have no practical alternative to believing that I and other people exist.  As soon as we stop thinking weird philosophical thoughts, we immediately go back to believing what skeptical arguments seem to call into question.  And rightly so, since, as David Hume pointed out, we are human beings before we are philosophers.

But what Hume and, by our day, virtually all philosophers are rejecting is only what I’m calling the foundationalist conception of philosophy. Rejecting foundationalism means accepting that we have every right to hold basic beliefs that are not legitimated by philosophical reflection.  More recently, philosophers as different as Richard Rorty and Alvin Plantinga have cogently argued that such basic beliefs include not only the “Humean” beliefs that no one can do without, but also substantive beliefs on controversial questions of ethics, politics and religion.  Rorty, for example, maintained that the basic principles of liberal democracy require no philosophical grounding (“the priority of democracy over philosophy”).

If you think that the only possible “use” of philosophy would be to provide a foundation for beliefs that need no foundation, then the conclusion that philosophy is of little importance for everyday life follows immediately.  But there are other ways that philosophy can be of practical significance.

Even though basic beliefs on ethics, politics and religion do not require prior philosophical justification, they do need what we might call “intellectual maintenance,” which itself typically involves philosophical thinking.  Religious believers, for example, are frequently troubled by the existence of horrendous evils in a world they hold was created by an all-good God.  Some of their trouble may be emotional, requiring pastoral guidance.  But religious commitment need not exclude a commitment to coherent thought. For instance, often enough believers want to know if their belief in God makes sense given the reality of evil.  The philosophy of religion is full of discussions relevant to this question.  Similarly, you may be an atheist because you think all arguments for God’s existence are obviously fallacious. But if you encounter, say, a sophisticated version of the cosmological argument, or the design argument from fine-tuning, you may well need a clever philosopher to see if there’s anything wrong with it.

[div class=attrib]Read the entire article here.[end-div]

Forget the Groupthink: Rise of the Introvert

Author Susan Cain discusses her intriguing book, "Quiet: The Power of Introverts", in an interview with Gareth Cook over at Mind Matters / Scientific American.

She shows us how social and business interactions and group-driven processes, often led and coordinated by extroverts, may not offer the best conditions for introverts to shine creatively.

[div class=attrib]From Mind Matters:[end-div]

Cook: This may be a stupid question, but how do you define an introvert? How can somebody tell whether they are truly introverted or extroverted?

Cain: Not a stupid question at all! Introverts prefer quiet, minimally stimulating environments, while extroverts need higher levels of stimulation to feel their best. Stimulation comes in all forms – social stimulation, but also lights, noise, and so on. Introverts even salivate more than extroverts do if you place a drop of lemon juice on their tongues! So an introvert is more likely to enjoy a quiet glass of wine with a close friend than a loud, raucous party full of strangers.

It’s also important to understand that introversion is different from shyness. Shyness is the fear of negative judgment, while introversion is simply the preference for less stimulation. Shyness is inherently uncomfortable; introversion is not. The traits do overlap, though psychologists debate to what degree.

Cook: You argue that our culture has an extroversion bias. Can you explain what you mean?

Cain: In our society, the ideal self is bold, gregarious, and comfortable in the spotlight. We like to think that we value individuality, but mostly we admire the type of individual who’s comfortable “putting himself out there.” Our schools, workplaces, and religious institutions are designed for extroverts. Introverts are to extroverts what American women were to men in the 1950s — second-class citizens with gigantic amounts of untapped talent.

In my book, I travel the country – from a Tony Robbins seminar to Harvard Business School to Rick Warren’s powerful Saddleback Church – shining a light on the bias against introversion. One of the most poignant moments was when an evangelical pastor I met at Saddleback confided his shame that “God is not pleased” with him because he likes spending time alone.

Cook: How does this cultural inclination affect introverts?

Cain: Many introverts feel there’s something wrong with them, and try to pass as extroverts. But whenever you try to pass as something you’re not, you lose a part of yourself along the way. You especially lose a sense of how to spend your time. Introverts are constantly going to parties and such when they’d really prefer to be home reading, studying, inventing, meditating, designing, thinking, cooking…or any number of other quiet and worthwhile activities.

According to the latest research, one third to one half of us are introverts – that’s one out of every two or three people you know. But you’d never guess that, right? That’s because introverts learn from an early age to act like pretend-extroverts.

[div class=attrib]Read the entire article here.[end-div]

Our Beautiful Home

A composite image of the beautiful blue planet, taken through NASA’s eyes on January 4, 2012. It’s so gorgeous that theDiagonal’s editor wishes he lived there.

[div class=attrib]Image of Earth from NASA’s Earth observing satellite Suomi NPP. Courtesy of NASA/NOAA/GSFC/Suomi NPP/VIIRS/Norman Kuring.[end-div]

Self-Esteem and Designer Goods

[div class=attrib]From Scientific American:[end-div]

Sellers have long charged a premium for objects that confer some kind of social status, even if they offer few, if any, functional benefits over cheaper products. Designer sunglasses, $200,000 Swiss watches, and many high-end cars often seem to fall into this category. If a marketer can make a mundane item seem like a status symbol—maybe by wrapping it in a fancy package or associating it with wealth, success or beauty—they can charge more for it.

Although this practice may seem like a way to trick consumers out of their hard-earned cash, studies show that people do reap real psychological benefits from the purchase of high status items. Still, some people may gain more than others do, and studies also suggest that buying fancy stuff for yourself is unlikely to be the best way to boost your happiness or self-esteem.

In 2008, two research teams demonstrated that people process social values in the brain’s reward center: the striatum, which also responds to monetary gains. That these two values share a cerebral home suggests we may weigh our reputation in cash terms. Whether we like it or not, attaching a monetary value to social status makes good scientific sense.

Much of what revs up this reward center—food and recreational drugs, for example—is associated with a temporary rush of pleasure or good feeling, rather than long-lasting satisfaction. But when we literally pay for that good feeling, by buying a high-status car or watch, say, the effect may last long enough to unleash profitable behaviors. In a study published last year, researchers at National Sun Yat-Sen University in Taiwan found that the mere use of brand name products seemed to make people feel they deserved higher salaries, in one case, and in the other, would be more attractive to a potential date, reports Roger Dooley in his Neuromarketing blog. Thus, even if the boost of good feeling—and self-worth—is short-lived, it might spawn actions that yield lasting benefits.

Other data suggest that owning fancy things might have more direct psychological benefits. In a study published in 2010, psychologist Ed Diener at the University of Illinois and his colleagues found that standard of living, as measured by household income and ownership of luxury goods, predicted a person’s overall satisfaction with life—although it did not seem to enhance positive emotions.  That rush of pleasure you get from the purchase probably does fade, but a type of self-esteem effect seems to last.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image of luxury goods. Courtesy of Google search.[end-div]

Political and Social Stability and God

theDiagonal has carried several recent articles (here and here) that place atheists in the same category as serial killers and child molesters, particularly in the United States. Why are atheists so reviled?

A study by Will Gervais and Ara Norenzayan at the University of British Columbia shows that it boils down to trust. Simply put, we are more likely to find someone trustworthy if we believe they think God is watching over them.

Interestingly, related research also suggests that atheists are found in greater numbers in populations governed by stable governments with broad social safety nets. Political instability, it seems, drives more citizens to believe in God.

[div class=attrib]From Scientific American:[end-div]

Atheists are one of the most disliked groups in America. Only 45 percent of Americans say they would vote for a qualified atheist presidential candidate, and atheists are rated as the least desirable group for a potential son-in-law or daughter-in-law to belong to. Will Gervais at the University of British Columbia recently published a set of studies looking at why atheists are so disliked. His conclusion: It comes down to trust.

Gervais and his colleagues presented participants with a story about a person who accidentally hits a parked car and then fails to leave behind valid insurance information for the other driver. Participants were asked to choose the probability that the person in question was a Christian, a Muslim, a rapist, or an atheist. They thought it equally probable the culprit was an atheist or a rapist, and unlikely the person was a Muslim or Christian. In a different study, Gervais looked at how atheism influences people’s hiring decisions. People were asked to choose between an atheist or a religious candidate for a job requiring either a high or low degree of trust. For the high-trust job of daycare worker, people were more likely to prefer the religious candidate. For the job of waitress, which requires less trust, the atheists fared much better.

It wasn’t just the highly religious participants who expressed a distrust of atheists. People identifying themselves as having no religious affiliation held similar opinions. Gervais and his colleagues discovered that people distrust atheists because of the belief that people behave better when they think that God is watching over them. This belief may have some truth to it. Gervais and his colleague Ara Norenzayan have found that reminding people about God’s presence has the same effect as telling people they are being watched by others: it increases their feelings of self-consciousness and leads them to behave in more socially acceptable ways.

When we know that somebody believes in the possibility of divine punishment, we seem to assume they are less likely to do something unethical. Based on this logic, Gervais and Norenzayan hypothesized that reminding people about the existence of secular authority figures, such as policemen and judges, might alleviate people’s prejudice towards atheists. In one study, they had people watch either a travel video or a video of a police chief giving an end-of-the-year report. They then asked participants how much they agreed with certain statements about atheists (e.g., “I would be uncomfortable with an atheist teaching my child.”) In addition, they measured participants’ prejudice towards other groups, including Muslims and Jewish people. Their results showed that viewing the video of the police chief resulted in less distrust towards atheists. However, it had no effect on people’s prejudice towards other groups. From a psychological standpoint, God and secular authority figures may be somewhat interchangeable. The existence of either helps us feel more trusting of others.

Gervais and Norenzayan’s findings may shed light on an interesting puzzle: why acceptance towards atheism has grown rapidly in some countries but not others. In many Scandinavian countries, including Norway and Sweden, the number of people who report believing in God has reached an all-time low. This may have something to do with the way these countries have established governments that guarantee a high level of social security for all of their citizens.  Aaron Kay and his colleagues ran a study in Canada which found that political insecurity may push us towards believing in God. They gave participants two versions of a fictitious news story: one describing Canada’s current political situation as stable, the other describing it as potentially unstable. After reading one of the two articles, people’s beliefs in God were measured. People who read the article describing the government as potentially unstable were more likely to agree that God, or some other type of nonhuman entity, is in control of the universe. A common belief in the divine may help people feel more secure. Yet when security is achieved by more secular means, it may remove some of the draw of faith.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: In God We Trust. Courtesy of the Houston Chronicle.[end-div]

Do We Become More Conservative as We Age?

A popular stereotype suggests that we become increasingly conservative in our values as we age. Thus, one would expect that older voters would be more likely to vote for Republican candidates. However, a recent social study debunks this view.

[div class=attrib]From Discovery:[end-div]

Amidst the bipartisan banter of election season, there persists an enduring belief that people get more conservative as they age — making older people more likely to vote for Republican candidates.

Ongoing research, however, fails to back up the stereotype. While there is some evidence that today’s seniors may be more conservative than today’s youth, that’s not because older folks are more conservative than they used to be. Instead, our modern elders likely came of age at a time when the political situation favored more conservative views.

In fact, studies show that people may actually get more liberal over time when it comes to certain kinds of beliefs. That suggests that we are not pre-determined to get stodgy, set in our ways or otherwise more inflexible in our retirement years.

Contrary to popular belief, old age can be an open-minded and enlightening time.

“Pigeonholing older people into these rigid attitude boxes or conservative boxes is not a good idea,” said Nick Dangelis, a sociologist and gerontologist at the University of Vermont in Burlington.

“Rather, when they were born, what experiences they had growing up, as well as political, social and economic events have a lot to do with how people behave,” he said. “Our results are showing that these have profound effects.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A Board of Elections volunteer watches people cast their ballots during early voting October 23, 2008 in Savannah, Georgia. Courtesy of MSNBC.[end-div]

Wikipedia Blackout and Intellectual Curiosity

Perhaps the recent dimming of Wikipedia (and other notable websites) for 24 hours on January 18, in protest of the planned online piracy legislation in the U.S. Congress, wasn’t all that bad.

Many would argue that Wikipedia has been a great boon in democratizing content authorship and disseminating information. So, when it temporarily shuttered its online doors, many shuddered from withdrawal. Yet, this “always on”, instantly available, crowdsourced resource is undermining an important human trait: intellectual curiosity.

When Wikipedia went off-air, many of us, including Jonathan Jones, were forced to search a little deeper and a little longer for facts and information. Doing so reawakened our need to discover, connect, and conceptualize for ourselves, rather than accept unquestioningly the musings of the anonymous masses, just one click away. Yes, we exercised our brains a little harder that day.

[div class=attrib]By Jonathan Jones over at the Guardian:[end-div]

I got really excited this morning. Looking up an artist online – Rembrandt, if you want to know – I noticed something different. As usual, the first item offered was his Wikipedia entry. But after a few seconds, the Rembrandt page dissolved into a darkened screen with a big W and an explanation I was too thrilled to read at that moment. Wikipedia offline? Wikipedia offline! A new dawn for humanity …

Only after a couple of glasses of champagne did I look again and realise that Wikipedia is offline only for 24 hours, in protest against what it sees as assaults on digital freedom.

OK, so I’m slightly hamming that up. Wikipedia is always the first site my search engine offers, for any artist, but I try to ignore it. I detest the way this site claims to offer the world’s knowledge when all it often contains is a half-baked distillation of third-hand information. To call this an encyclopedia is like saying an Airfix model is a real Spitfire. Actually, not even a kit model – more like one made out of matchsticks.

I have a modest proposal for Wikipedia: can it please stay offline for ever? It has already achieved something remarkable, replacing genuine intellectual curiosity and discovery with a world of lazy, instant factoids. Can it take a rest and let civilisation recover?

On its protest page today, the website asks us to “imagine a world without free knowledge”. These words betray a colossal arrogance. Do the creators of Wikipedia really believe they are the world’s only source of “free knowledge”?

Institutions that offer free knowledge have existed for thousands of years. They are called libraries. Public libraries flourished in ancient Greece and Rome, and were revived in the Renaissance. In the 19th century, libraries were built in cities and towns everywhere. What is the difference between a book and Wikipedia? It has a named author or authors, and they are made to work hard, by editors and teams of editors, to get their words into print. Those words, when they appear, vary vastly in value and importance, but the knowledge that can be gleaned – not just from one book but by comparing different books with one another, checking them against each other, reaching your own conclusions – is subtle, rich, beautiful. This knowledge cannot be packaged or fixed; if you keep an open mind, it is always changing.

[div class=attrib]Read the whole article here.[end-div]

Defying Gravity using Science

Gravity-defying feats have long been a favored pastime for magicians and illusionists. Well, science has now caught up to and surpassed our friends with sleight of hand. Check out this astonishing video (after the 10-second ad) of a “quantum-locked”, levitating superconducting disc, courtesy of New Scientist.

[div class=attrib]From the New Scientist:[end-div]

FOR centuries, con artists have convinced the masses that it is possible to defy gravity or walk through walls. Victorian audiences gasped at tricks of levitation involving crinolined ladies hovering over tables. Even before then, fraudsters and deluded inventors were proudly displaying perpetual-motion machines that could do impossible things, such as make liquids flow uphill without consuming energy. Today, magicians still make solid rings pass through each other and become interlinked – or so it appears. But these are all cheap tricks compared with what the real world has to offer.

Cool a piece of metal or a bucket of helium to near absolute zero and, in the right conditions, you will see the metal levitating above a magnet, liquid helium flowing up the walls of its container or solids passing through each other. “We love to observe these phenomena in the lab,” says Ed Hinds of Imperial College, London.

This weirdness is not mere entertainment, though. From these strange phenomena we can tease out all of chemistry and biology, find deliverance from our energy crisis and perhaps even unveil the ultimate nature of the universe. Welcome to the world of superstuff.

This world is a cold one. It only exists within a few degrees of absolute zero, the lowest temperature possible. Though you might think very little would happen in such a frozen place, nothing could be further from the truth. This is a wild, almost surreal world, worthy of Lewis Carroll.

One way to cross its threshold is to cool liquid helium to just above 2 kelvin. The first thing you might notice is that you can set the helium rotating, and it will just keep on spinning. That’s because it is now a “superfluid”, a liquid state with no viscosity.

Another interesting property of a superfluid is that it will flow up the walls of its container. Lift a bucketful of superfluid helium out of a vat of the stuff, and it will flow up the sides of the bucket, over the lip and down the outside, rejoining the fluid it was taken from.

[div class=attrib]Read more here.[end-div]

Handedness Shapes Perception and Morality

A group of new research studies shows that our left- or right-handedness shapes our perception of “goodness” and “badness”.

[div class=attrib]From Scientific American:[end-div]

A series of studies led by psychologist Daniel Casasanto suggests that one thing that may shape our choice is the side of the menu an item appears on. Specifically, Casasanto and his team have shown that for left-handers, the left side of any space connotes positive qualities such as goodness, niceness, and smartness. For right-handers, the right side of any space connotes these same virtues. He calls this idea that “people with different bodies think differently, in predictable ways” the body-specificity hypothesis.

In one of Casasanto’s experiments, adult participants were shown pictures of two aliens side by side and instructed to circle the alien that best exemplified an abstract characteristic. For example, participants may have been asked to circle the “more attractive” or “less honest” alien. Of the participants who showed a directional preference (most participants did), the majority of right-handers attributed positive characteristics more often to the aliens on the right whereas the majority of left-handers attributed positive characteristics more often to aliens on the left.

Handedness was found to predict choice in experiments mirroring real-life situations as well. When participants read near-identical product descriptions on either side of a page and were asked to indicate the products they wanted to buy, most righties chose the item described on the right side while most lefties chose the product on the left. Similarly, when subjects read side-by-side resumes from two job applicants presented in a random order, they were more likely to choose the candidate described on their dominant side.

Follow-up studies on children yielded similar results. In one experiment, children were shown a drawing of a bookshelf with a box to the left and a box to the right. They were then asked to think of a toy they liked and a toy they disliked and choose the boxes in which they would place the toys. Children tended to choose to place their preferred toy in the box to their dominant side and the toy they did not like to their non-dominant side.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Drawing Hands by M. C. Escher, 1948, Lithograph. Courtesy of Wikipedia.[end-div]

An Evolutionary Benefit to Self-deception

[div class=attrib]From Scientific American:[end-div]

We lie to ourselves all the time. We tell ourselves that we are better than average — that we are more moral, more capable, less likely to become sick or suffer an accident. It’s an odd phenomenon, and an especially puzzling one to those who think about our evolutionary origins. Self-deception is so pervasive that it must confer some advantage. But how could we be well served by a brain that deceives us? This is one of the topics tackled by Robert Trivers in his new book, “The Folly of Fools,” a colorful survey of deception that includes plane crashes, neuroscience and the transvestites of the animal world. He answered questions from Mind Matters editor Gareth Cook.

Cook: Do you have any favorite examples of deception in the natural world?
Trivers: Tough call. They are so numerous, intricate and bizarre.  But you can hardly beat female mimics for general interest. These are males that mimic females in order to achieve closeness to a territory-holding male, who then attracts a real female ready to lay eggs. The territory-holding male imagines that he is in bed (so to speak) with two females, when really he is in bed with one female and another male, who, in turn, steals part of the paternity of the eggs being laid by the female. The internal dynamics of such transvestite threesomes is only just being analyzed. But for pure reproductive artistry one can not beat the tiny blister beetles that assemble in arrays of 100’s to 1000’s, linking together to produce the larger illusion of a female solitary bee, which attracts a male bee who flies into the mirage in order to copulate and thereby carries the beetles to their next host.

Cook: At what age do we see the first signs of deception in humans?
Trivers: In the last trimester of pregnancy, that is, while the offspring is still inside its mother. The baby takes over control of the mother’s blood sugar level (raising it), pulse rate (raising it) and blood distribution (withdrawing it from extremities and positioning it above the developing baby). It does so by putting into the maternal blood stream the same chemicals—or close mimics—as those that the mother normally produces to control these variables. You could argue that this benefits mom. She says, my child knows better what it needs than I do so let me give the child control. But it is not in the mother’s best interests to allow the offspring to get everything it wants; the mother must apportion her biological investment among other offspring, past, present and future. The proof is in the inefficiency of the new arrangement, the hallmark of conflict. The offspring produces these chemicals at 1000 times the level that the mother does. This suggests a co-evolutionary struggle in which the mother’s body becomes deafer as the offspring becomes louder.
After birth, the first clear signs of deception come about age 6 months, which is when the child fakes need when there appears to be no good reason. The child will scream and bawl, roll on the floor in apparent agony and yet stop within seconds after the audience leaves the room, only to resume within seconds when the audience is back. Later, the child will hide objects from the view of others and deny that it cares about a punishment when it clearly does.  So-called ‘white lies’, of the sort “The meal you served was delicious” appear after age 5.

[div class=attrib]Read the entire article here.[end-div]

On the Need for Charisma

[div class=attrib]From Project Syndicate:[end-div]

A leadership transition is scheduled in two major autocracies in 2012. Neither is likely to be a surprise. Xi Jinping is set to replace Hu Jintao as President in China, and, in Russia, Vladimir Putin has announced that he will reclaim the presidency from Dmitri Medvedev. Among the world’s democracies, political outcomes this year are less predictable. Nicolas Sarkozy faces a difficult presidential re-election campaign in France, as does Barack Obama in the United States.

In the 2008 US presidential election, the press told us that Obama won because he had “charisma” – the special power to inspire fascination and loyalty. If so, how can his re-election be uncertain just four years later? Can a leader lose his or her charisma? Does charisma originate in the individual, in that person’s followers, or in the situation? Academic research points to all three.

Charisma proves surprisingly hard to identify in advance. A recent survey concluded that “relatively little” is known about who charismatic leaders are. Dick Morris, an American political consultant, reports that in his experience, “charisma is the most elusive of political traits, because it doesn’t exist in reality; only in our perception once a candidate has made it by hard work and good issues.” Similarly, the business press has described many a CEO as “charismatic” when things are going well, only to withdraw the label when profits fall.

Political scientists have tried to create charisma scales that would predict votes or presidential ratings, but they have not proven fruitful. Among US presidents, John F. Kennedy is often described as charismatic, but obviously not for everyone, given that he failed to capture a majority of the popular vote, and his ratings varied during his presidency.

Kennedy’s successor, Lyndon Johnson, lamented that he lacked charisma. That was true of his relations with the public, but Johnson could be magnetic – even overwhelming – in personal contacts. One careful study of presidential rhetoric found that even such famous orators as Franklin Roosevelt and Ronald Reagan could not count on charisma to enact their programs.

Charisma is more easily identified after the fact. In that sense, the concept is circular. It is like the old Chinese concept of the “mandate of heaven”: emperors were said to rule because they had it, and when they were overthrown, it was because they had lost it.

But no one could predict when that would happen. Similarly, success is often used to prove – after the fact – that a modern political leader has charisma. It is much harder to use charisma to predict who will be a successful leader.

[div class=attrib]Read the entire article here.[end-div]

Barcode as Art

The ubiquitous and utilitarian barcode turns 60 years old. Now its upstart and more fashionable sibling, the QR, or quick response, code seems to be stealing the show by finding its way from the product on the grocery store shelf to the world of art and design.

[div class=attrib]From the New York Times:[end-div]

It’s usually cause for celebration when a product turns 60. How could it have survived for so long, unless it is genuinely wanted or needed, or maybe both?

One of the sexagenarians this year, the bar code, has more reasons than most to celebrate. Having been a familiar part of daily life for decades, those black vertical lines have taken on a new role of telling ethically aware consumers whether their prospective purchases are ecologically and socially responsible. Not bad for a 60-year-old.

But a new rival has surfaced. A younger version of the bar code, the QR, or “Quick Response” code, threatens to become as ubiquitous as the original, and is usurping some of its functions. Both symbols are black and white, geometric in style and rectangular in shape, but there the similarities end, because each one has a dramatically different impact on the visual landscape, aesthetically and symbolically.

First, the bar code. The idea of embedding information about a product, including its price, in a visual code that could be decrypted quickly and accurately at supermarket checkouts was hatched in the late 1940s by Bernard Silver and Norman Joseph Woodland, graduate students at the Drexel Institute of Technology in Philadelphia. Their idea was that retailers would benefit from speeding up the checkout process, enabling them to employ fewer staff, and from reducing the expense and inconvenience caused when employees keyed in the wrong prices.

At 8.01 a.m. on June 26, 1974, a packet of Wrigley’s Juicy Fruit chewing gum was sold for 67 cents at a Marsh Supermarket in Troy, Ohio — the first commercial transaction to use a bar code. More than five billion bar-coded products are now scanned at checkouts worldwide every day. Some of those codes will also have been vetted on the cellphones of shoppers who wanted to check the product’s impact on their health and the environment, and the ethical credentials of the manufacturer. They do so by photographing the bar code with their phones and using an application to access information about the product on ethical rating Web sites like GoodGuide.

As for the QR code, it was developed in the mid-1990s by the Japanese carmaker Toyota to track components during the manufacturing process. A mosaic of tiny black squares on a white background, the QR code has greater storage capacity than the original bar code. Soon, Japanese cellphone makers were adding QR readers to camera phones, and people were using them to download text, films and Web links from QR codes on magazines, newspapers, billboards and packaging. The mosaic codes then appeared in other countries and are now common all over the world. Anyone who has downloaded a QR reading application can decrypt them with a camera phone.
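
The storage difference the article describes is easy to see in code. What follows is a minimal sketch, not from the Times piece, using two third-party Python libraries (python-barcode and qrcode, my choice of tools, not the article's): a linear EAN-13 bar code carries only a 12-digit payload plus a computed check digit, while a QR code can hold a full URL or a short block of text. The product number and URL below are hypothetical.

# A minimal sketch (not from the article). Assumes the third-party packages
# python-barcode and qrcode are installed:  pip install python-barcode "qrcode[pil]"

import barcode                          # linear (1D) bar codes such as EAN-13
from barcode.writer import ImageWriter
import qrcode                           # 2D "Quick Response" codes

# An EAN-13 bar code encodes just a 12-digit payload; the 13th check digit
# is computed automatically by the library.
ean = barcode.get("ean13", "590123412345", writer=ImageWriter())
ean.save("product_ean13")               # writes product_ean13.png

# A QR code can carry far more, e.g. a whole (hypothetical) product URL.
qr = qrcode.QRCode(error_correction=qrcode.constants.ERROR_CORRECT_M)
qr.add_data("https://example.com/products/5901234123457")
qr.make(fit=True)
qr.make_image().save("product_qr.png")

Scanning either image with a phone app recovers the encoded payload; the practical difference is simply how much each symbol can hold.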

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Shrink-Wrapped Couples

Once in a while a photographer comes along with a simple yet thoroughly new perspective. Japanese artist Photographer Hal fits this description. His images of young Japanese in a variety of contorted and enclosed situations are sometimes funny and disturbing, but certainly different and provocative.

[div class=attrib]From flavorwire:[end-div]

Japanese artist Photographer Hal has stuffed club kids into bathtubs and other cramped spaces in his work before, but this time he’s chosen to shrink-wrap them like living dolls squirming under plastic. With some nude, and some dressed in candy-colored attire, Hal covers his models with a plastic sheeting that he vacuums the air from in order to distort their features and bond them together. It only takes a few seconds for him to snap several images before releasing them, and the results are humorous and somewhat grotesque.

[div class=attrib]See more of Photographer Hal’s work here.[end-div]

The Unconscious Mind Boosts Creativity

[div class=attrib]From Miller-McCune:[end-div]

New research finds we’re better able to identify genuinely creative ideas when they’ve emerged from the unconscious mind.

Truly creative ideas are both highly prized and, for most of us, maddeningly elusive. If our best efforts produce nothing brilliant, we’re often advised to put aside the issue at hand and give our unconscious minds a chance to work.

Newly published research suggests that is indeed a good idea — but not for the reason you might think.

A study from the Netherlands finds allowing ideas to incubate in the back of the mind is, in a narrow sense, overrated. People who let their unconscious minds take a crack at a problem were no more adept at coming up with innovative solutions than those who consciously deliberated over the dilemma.

But they did perform better on the vital second step of this process: determining which of their ideas was the most creative. That realization provides essential information; without it, how do you decide which solution you should actually try to implement?

Given the value of discerning truly fresh ideas, “we can conclude that the unconscious mind plays a vital role in creative performance,” a research team led by Simone Ritter of the Radboud University Behavioral Science Institute writes in the journal Thinking Skills and Creativity.

In the first of two experiments, 112 university students were given two minutes to come up with creative ideas to an everyday problem: how to make the time spent waiting in line at a cash register more bearable. Half the participants went at it immediately, while the others first spent two minutes performing a distracting task — clicking on circles that appeared on a computer screen. This allowed time for ideas to percolate outside their conscious awareness.

After writing down as many ideas as they could think of, they were asked to choose which of their notions was the most creative.  Participants were scored by the number of ideas they came up with, the creativity level of those ideas (as measured by trained raters), and whether their perception of their most innovative idea coincided with that of the raters.

The two groups scored evenly on both the number of ideas generated and the average creativity of those ideas. But those who had been distracted, and thus had ideas spring from their unconscious minds, were better at selecting their most creative concept.

[div class=attrib]Read the entire article here.[end-div]

Stephen Colbert: Seriously Funny

A fascinating profile of Stephen Colbert, a funny man with some serious jokes about our broken political process.

[div class=attrib]From the New York Times magazine:[end-div]

There used to be just two Stephen Colberts, and they were hard enough to distinguish. The main difference was that one thought the other was an idiot. The idiot Colbert was the one who made a nice paycheck by appearing four times a week on “The Colbert Report” (pronounced in the French fashion, with both t’s silent), the extremely popular fake news show on Comedy Central. The other Colbert, the non-idiot, was the 47-year-old South Carolinian, a practicing Catholic, who lives with his wife and three children in suburban Montclair, N.J., where, according to one of his neighbors, he is “extremely normal.” One of the pleasures of attending a live taping of “The Colbert Report” is watching this Colbert transform himself into a Republican superhero.

Suburban Colbert comes out dressed in the other Colbert’s guise — dark two-button suit, tasteful Brooks Brothersy tie, rimless Rumsfeldian glasses — and answers questions from the audience for a few minutes. (The questions are usually about things like Colbert’s favorite sport or favorite character from “The Lord of the Rings,” but on one memorable occasion a young black boy asked him, “Are you my father?” Colbert hesitated a moment and then said, “Kareem?”) Then he steps onstage, gets a last dab of makeup while someone sprays his hair into an unmussable Romney-like helmet, and turns himself into his alter ego. His body straightens, as if jolted by a shock. A self-satisfied smile creeps across his mouth, and a manically fatuous gleam steals into his eyes.

Lately, though, there has emerged a third Colbert. This one is a version of the TV-show Colbert, except he doesn’t exist just on screen anymore. He exists in the real world and has begun to meddle in it. In 2008, the old Colbert briefly ran for president, entering the Democratic primary in his native state of South Carolina. (He hadn’t really switched parties, but the filing fee for the Republican primary was too expensive.) In 2010, invited by Representative Zoe Lofgren, he testified before Congress about the problem of illegal-immigrant farmworkers and remarked that “the obvious answer is for all of us to stop eating fruits and vegetables.”

But those forays into public life were spoofs, more or less. The new Colbert has crossed the line that separates a TV stunt from reality and a parody from what is being parodied. In June, after petitioning the Federal Election Commission, he started his own super PAC — a real one, with real money. He has run TV ads, endorsed (sort of) the presidential candidacy of Buddy Roemer, the former governor of Louisiana, and almost succeeded in hijacking and renaming the Republican primary in South Carolina. “Basically, the F.E.C. gave me the license to create a killer robot,” Colbert said to me in October, and there are times now when the robot seems to be running the television show instead of the other way around.

“It’s bizarre,” remarked an admiring Jon Stewart, whose own program, “The Daily Show,” immediately precedes “The Colbert Report” on Comedy Central and is where the Colbert character got his start. “Here is this fictional character who is now suddenly interacting in the real world. It’s so far up its own rear end,” he said, or words to that effect, “that you don’t know what to do except get high and sit in a room with a black light and a poster.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Images courtesy of Google search.[end-div]

Crossword Puzzles and Cognition

[div class=attrib]From the New Scientist:[end-div]

TACKLING a crossword can crowd the tip of your tongue. You know that you know the answers to 3 down and 5 across, but the words just won’t come out. Then, when you’ve given up and moved on to another clue, comes blessed relief. The elusive answer suddenly occurs to you, crystal clear.

The processes leading to that flash of insight can illuminate many of the human mind’s curious characteristics. Crosswords can reflect the nature of intuition, hint at the way we retrieve words from our memory, and reveal a surprising connection between puzzle solving and our ability to recognise a human face.

“What’s fascinating about a crossword is that it involves many aspects of cognition that we normally study piecemeal, such as memory search and problem solving, all rolled into one ball,” says Raymond Nickerson, a psychologist at Tufts University in Medford, Massachusetts. In a paper published earlier this year, he brought profession and hobby together by analysing the mental processes of crossword solving (Psychonomic Bulletin and Review, vol 18, p 217).

1 across: “You stinker!” – audible cry that allegedly marked displacement activity (6)

Most of our mental machinations take place pre-consciously, with the results dropping into our conscious minds only after they have been decided elsewhere in the brain. Intuition plays a big role in solving a crossword, Nickerson observes. Indeed, sometimes your pre-conscious mind may be so quick that it produces the goods instantly.

At other times, you might need to take a more methodical approach and consider possible solutions one by one, perhaps listing synonyms of a word in the clue.

Even if your list doesn’t seem to make much sense, it might reflect the way your pre-conscious mind is homing in on the solution. Nickerson points to work in the 1990s by Peter Farvolden at the University of Toronto in Canada, who gave his subjects four-letter fragments of seven-letter target words (as may happen in some crossword layouts, especially in the US, where many words overlap). While his volunteers attempted to work out the target, they were asked to give any other word that occurred to them in the meantime. The words tended to be associated in meaning with the eventual answer, hinting that the pre-conscious mind solves a problem in steps.

Should your powers of deduction fail you, it may help to let your mind chew over the clue while your conscious attention is elsewhere. Studies back up our everyday experience that a period of incubation can lead you to the eventual “aha” moment. Don’t switch off entirely, though. For verbal problems, a break from the clue seems to be more fruitful if you occupy yourself with another task, such as drawing a picture or reading (Psychological Bulletin, vol 135, p 94).

So if 1 across has you flummoxed, you could leave it and take a nice bath, or better still read a novel. Or just move on to the next clue.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Newspaper crossword puzzle. Courtesy of Polytechnic West.[end-div]

Morality for Atheists

The social standing of atheists seems to be on the rise. Back in December we cited a research study that found atheists to be distrusted as much as rapists. Well, a more recent study now finds that atheists are less disliked than members of the Tea Party.

With this in mind, philosopher Louise Antony ponders how atheists can ground morality without the help of God.

[div class=attrib]From the New York Times:[end-div]

I was heartened to learn recently that atheists are no longer the most reviled group in the United States: according to the political scientists Robert Putnam and David Campbell, we’ve been overtaken by the Tea Party.  But even as I was high-fiving my fellow apostates (“We’re number two!  We’re number two!”), I was wondering anew: why do so many people dislike atheists?

I gather that many people believe that atheism implies nihilism — that rejecting God means rejecting morality.  A person who denies God, they reason, must be, if not actively evil, at least indifferent to considerations of right and wrong.  After all, doesn’t the dictionary list “wicked” as a synonym for “godless?”  And isn’t it true, as Dostoevsky said, that “if God is dead, everything is permitted”?

Well, actually — no, it’s not.  (And for the record, Dostoevsky never said it was.)   Atheism does not entail that anything goes.

Admittedly, some atheists are nihilists.  (Unfortunately, they’re the ones who get the most press.)  But such atheists’ repudiation of morality stems more from an antecedent cynicism about ethics than from any philosophical view about the divine.  According to these nihilistic atheists, “morality” is just part of a fairy tale we tell each other in order to keep our innate, bestial selfishness (mostly) under control.  Belief in objective “oughts” and “ought nots,” they say, must fall away once we realize that there is no universal enforcer to dish out rewards and punishments in the afterlife.  We’re left with pure self-interest, more or less enlightened.

This is a Hobbesian view: in the state of nature “[t]he notions of right and wrong, justice and injustice have no place.  Where there is no common power, there is no law: where no law, no injustice.”  But no atheist has to agree with this account of morality, and lots of us do not.  We “moralistic atheists” do not see right and wrong as artifacts of a divine protection racket.  Rather, we find moral value to be immanent in the natural world, arising from the vulnerabilities of sentient beings and from the capacities of rational beings to recognize and to respond to those vulnerabilities and capacities in others.

This view of the basis of morality is hardly incompatible with religious belief.  Indeed, anyone who believes that God made human beings in His image believes something like this — that there is a moral dimension of things, and that it is in our ability to apprehend it that we resemble the divine.  Accordingly, many theists, like many atheists, believe that moral value is inherent in morally valuable things.  Things don’t become morally valuable because God prefers them; God prefers them because they are morally valuable. At least this is what I was taught as a girl, growing up Catholic: that we could see that God was good because of the things He commands us to do.  If helping the poor were not a good thing on its own, it wouldn’t be much to God’s credit that He makes charity a duty.

It may surprise some people to learn that theists ever take this position, but it shouldn’t.  This position is not only consistent with belief in God, it is, I contend, a more pious position than its opposite.  It is only if morality is independent of God that we can make moral sense out of religious worship.  It is only if morality is independent of God that any person can have a moral basis for adhering to God’s commands.

Let me explain why.  First let’s take a cold hard look at the consequences of pinning morality to the existence of God.  Consider the following moral judgments — judgments that seem to me to be obviously true:

• It is wrong to drive people from their homes or to kill them because you want their land.

• It is wrong to enslave people.

• It is wrong to torture prisoners of war.

• Anyone who witnesses genocide, or enslavement, or torture, is morally required to try to stop it.

To say that morality depends on the existence of God is to say that none of these specific moral judgments is true unless God exists.  That seems to me to be a remarkable claim.  If God turned out not to exist — then slavery would be O.K.?  There’d be nothing wrong with torture?  The pain of another human being would mean nothing?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Sam Harris. Courtesy of Salon.[end-div]

The Sheer Joy of Unconnectedness

Seventeenth-century polymath Blaise Pascal had it right when he remarked, “Distraction is the only thing that consoles us for our miseries, and yet it is itself the greatest of our miseries.”

Here in the 21st century we have so many distractions that even our distractions get little attention. Author Pico Iyer shares his prognosis and shows that perhaps the much younger generation may be making some progress “in terms of sensing not what’s new, but what’s essential.”

[div class=attrib]From the New York Times:[end-div]

ABOUT a year ago, I flew to Singapore to join the writer Malcolm Gladwell, the fashion designer Marc Ecko and the graphic designer Stefan Sagmeister in addressing a group of advertising people on “Marketing to the Child of Tomorrow.” Soon after I arrived, the chief executive of the agency that had invited us took me aside. What he was most interested in, he began — I braced myself for mention of some next-generation stealth campaign — was stillness.

A few months later, I read an interview with the perennially cutting-edge designer Philippe Starck. What allowed him to remain so consistently ahead of the curve? “I never read any magazines or watch TV,” he said, perhaps a little hyperbolically. “Nor do I go to cocktail parties, dinners or anything like that.” He lived outside conventional ideas, he implied, because “I live alone mostly, in the middle of nowhere.”

Around the same time, I noticed that those who part with $2,285 a night to stay in a cliff-top room at the Post Ranch Inn in Big Sur pay partly for the privilege of not having a TV in their rooms; the future of travel, I’m reliably told, lies in “black-hole resorts,” which charge high prices precisely because you can’t get online in their rooms.

Has it really come to this?

In barely one generation we’ve moved from exulting in the time-saving devices that have so expanded our lives to trying to get away from them — often in order to make more time. The more ways we have to connect, the more many of us seem desperate to unplug. Like teenagers, we appear to have gone from knowing nothing about the world to knowing too much all but overnight.

Internet rescue camps in South Korea and China try to save kids addicted to the screen.

Writer friends of mine pay good money to get the Freedom software that enables them to disable (for up to eight hours) the very Internet connections that seemed so emancipating not long ago. Even Intel (of all companies) experimented in 2007 with conferring four uninterrupted hours of quiet time every Tuesday morning on 300 engineers and managers. (The average office worker today, researchers have found, enjoys no more than three minutes at a time at his or her desk without interruption.) During this period the workers were not allowed to use the phone or send e-mail, but simply had the chance to clear their heads and to hear themselves think. A majority of Intel’s trial group recommended that the policy be extended to others.

THE average American spends at least eight and a half hours a day in front of a screen, Nicholas Carr notes in his eye-opening book “The Shallows,” in part because the number of hours American adults spent online doubled between 2005 and 2009 (and the number of hours spent in front of a TV screen, often simultaneously, is also steadily increasing).

The average American teenager sends or receives 75 text messages a day, though one girl in Sacramento managed to handle an average of 10,000 every 24 hours for a month. Since luxury, as any economist will tell you, is a function of scarcity, the children of tomorrow, I heard myself tell the marketers in Singapore, will crave nothing more than freedom, if only for a short while, from all the blinking machines, streaming videos and scrolling headlines that leave them feeling empty and too full all at once.

The urgency of slowing down — to find the time and space to think — is nothing new, of course, and wiser souls have always reminded us that the more attention we pay to the moment, the less time and energy we have to place it in some larger context.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Processing large amounts of information may lead our brains to forget exactly where it all came from. Courtesy of NY Daily News / Chamoun/Getty.[end-div]

Levelling the Political Playing Field

Let’s face it, taking money out of politics in the United States, especially since the 2010 Supreme Court decision in Citizens United v. Federal Election Commission, is akin to asking a hardcore addict to give up his or her favorite substance: it’s unlikely to be easy, if it’s possible at all.

So another approach might be to redistribute the funds more equitably. Not a new idea: a number of European nations do this today. However, Max Frankel over at the NY Review of Books offers a thoughtful proposal with a new twist.

[div class=attrib]By Max Frankel:[end-div]

Every election year brings vivid reminders of how money distorts our politics, poisons our lawmaking, and inevitably widens the gulf between those who can afford to buy influence and the vast majority of Americans who cannot. In 2012, this gulf will become a chasm: one analysis predicts that campaign spending on presidential, congressional, and state elections may exceed $6 billion and all previous records. The Supreme Court has held that money is, in effect, speech; money talks, and those without big money have become progressively voiceless.

That it may cost as much as a billion dollars to run for President is scandal enough, but the multimillions it now takes to pursue or defend a seat in Congress are even more corrupting. Many of our legislators spend hours of every day begging for contributions from wealthy constituents and from the lobbyists for corporate interests. The access and influence that they routinely sell give the moneyed a seat at the tables where laws are written, to the benefit of those contributors and often to the disadvantage of the rest of us.

And why do the candidates need all that money? Because electoral success requires them to buy endless hours of expensive television time for commercials that advertise their virtues and, more often, roundly assail their opponents with often spurious claims. Of the more than a billion dollars spent on political commercials this year, probably more than half will go for attack ads.

It has long been obvious that television ads dominate electioneering in America. Most of those thirty-second ads are glib at best but much of the time they are unfair smears of the opposition. And we all know that those sordid slanders work—the more negative the better—unless they are instantly answered with equally facile and equally expensive rebuttals.

Other election expenses pale beside the ever larger TV budgets. Campaign staffs, phone and email solicitations, billboards and buttons and such could easily be financed with the small contributions of ordinary voters. But the decisive TV competitions leave politicians at the mercy of self-interested wealthy individuals, corporations, unions, and groups, now often disguised in “Super PACs” that can spend freely on any candidate so long as they are not overtly coordinating with that candidate’s campaign. Even incumbents who face no immediate threat feel a need to keep hoarding huge war chests with which to discourage potential challengers. Senator Charles Schumer of New York, for example, was easily reelected to a third term in 2010 but stands poised five years before his next run with a rapidly growing fund of $10 million.

A rational people looking for fairness in their politics would have long ago demanded that television time be made available at no cost and apportioned equally among rival candidates. But no one expects that any such arrangement is now possible. Political ads are jealously guarded as a major source of income by television stations. And what passes for news on most TV channels gives short shrift to most political campaigns except perhaps to “cover” the advertising combat.

As a political reporter and editor, I concluded long ago that efforts to limit campaign contributions and expenditures have been either disingenuous or futile. Most spending caps are too porous. In fact, they have further distorted campaigns by favoring wealthy candidates whose spending on their own behalf the Supreme Court has exempted from all limitations. And the public has overwhelmingly rejected the use of tax money to subsidize campaigning. In any case, private money that wants to buy political influence tends to behave like water running downhill: it will find a way around most obstacles. Since the court’s decision in the 2010 Citizens United case, big money is now able to find endless new paths, channeling even tax-exempt funds into political pools.

There are no easy ways to repair our entire election system. But I believe that a large degree of fairness could be restored to our campaigns if we level the TV playing field. Given the television industry’s huge stake in paid political advertising, it (and the Supreme Court) would surely resist restrictions on campaign ads of the kind many European countries impose. With so much campaign cash floating around, there is only one attractive remedy I know of: double the price of political commercials so that every candidate’s purchase of TV time automatically pays for a comparable slot awarded to an opponent. The more you spend, the more your rival benefits as well. The more you attack, the more you underwrite the opponent’s responses. The desirable result would likely be that rival candidates would negotiate an arms control agreement, setting their own limits on their TV budgets and maybe even on their rhetoric.
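On one literal reading of that doubled-price rule, the arithmetic is striking: every slot a campaign buys also funds a matching slot for its rival, so paid airtime ends up equal no matter who outspends whom. The sketch below is not from Frankel’s article; the function, rates and spending figures are invented purely to work through that reading.

```python
def airtime_under_matching(spend_a, spend_b, normal_rate):
    """Toy model of the doubled-price proposal: a campaign pays twice the
    normal rate for each slot it airs, and the surcharge buys a comparable
    slot for the opponent. Returns (slots aired by A, slots aired by B)."""
    bought_a = spend_a / (2 * normal_rate)   # slots A pays for directly
    bought_b = spend_b / (2 * normal_rate)   # slots B pays for directly
    # Each purchase triggers a matching slot for the other side.
    return bought_a + bought_b, bought_b + bought_a

# A outspends B three to one, yet both end up with the same airtime,
# which is the point: the more you attack, the more you underwrite replies.
print(airtime_under_matching(3_000_000, 1_000_000, 10_000))  # (200.0, 200.0)
```

Under these made-up numbers the heavier spender gains nothing in relative exposure, which is presumably what would push rival campaigns toward the negotiated limits Frankel envisions.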

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Alliance for a Just Society.[end-div]

How to (Not) Read a Tough Book

Ever picked up a copy of the Iliad, War and Peace, Foucault’s Pendulum, or Finnegans Wake, leafed through the first five pages, and given up? Well, you may be in good company. So here are some useful tips, for readers and non-readers alike, on how to get through some notable classics that demand our fullest attention and faculties.

[div class=attrib]From the Wall Street Journal:[end-div]

I’m determined to finish “The Iliad” before I start anything else, but I’ve been having trouble picking it up amid all the seasonal distractions and therefore I’m not reading anything at all: It’s blocking other books. Suggestions?

—E.S., New York

When I decided to read “War and Peace” a few years ago, I worried about exactly this problem: a challenging book slowing me down so much that I simply stopped reading anything at all. My solution, which worked, was to assign myself a certain number of pages—in this case, 100—each day, after which I was free to read anything else. One hundred pages a day may seem like a lot, but I had time on my hands, and (of course) “War and Peace” turned out to be anything but laborious. Still, there was a psychological comfort in knowing that if I wasn’t enjoying it, I wasn’t in a reading straitjacket.

With a book like “The Iliad,” which is far more demanding than “War and Peace,” I’d say one or two pages a day would be a perfectly respectable goal. You could see that time as a period of meditation or prayer—an excuse to be alone, quiet and contemplative.

You could also alternate reading “The Iliad” with listening to someone else read it. There’s no rule that says you can’t mix media on a single book, especially when it’s poetry, and the divine Alfred Molina reads Stephen Mitchell’s new translation of Homer’s classic.

Reading a work like “The Iliad” shouldn’t feel like punishment or homework. If it does, then read a sentence a day with the patience of Penelope.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Achilles tending Patroclus wounded by an arrow, identified by inscriptions on the upper part of the vase. Tondo of an Attic red-figure kylix, ca. 500 BC. From Vulci. Courtesy of Wikipedia.[end-div]

From Nine Dimensions to Three

Over the last 40 years or so, physicists and cosmologists have sought to construct a single grand theory that describes our entire universe, from the subatomic soup of particles and the forces acting among them to the vast structures of our galaxies, and everything in between and beyond. Yet a major stumbling block has been how to reconcile the quantum theories that have so successfully described, and predicted, the microscopic world with our current understanding of gravity. String theory is one attempt at such a unified theory of everything, but it remains tangled in a multitude of possible solutions and, for now, lies beyond experimental verification.

Recently, however, theorists in Japan announced a computer simulation that shows how our current 3-dimensional universe may have evolved from the 9-dimensional space hypothesized by string theory.

[div class=attrib]From Interactions:[end-div]

A group of three researchers from KEK, Shizuoka University and Osaka University has for the first time revealed the way our universe was born with 3 spatial dimensions from 10-dimensional superstring theory, in which spacetime has 9 spatial directions and 1 temporal direction. This result was obtained by numerical simulation on a supercomputer.

[Abstract]

According to Big Bang cosmology, the universe originated in an explosion from an invisibly tiny point. This theory is strongly supported by observation of the cosmic microwave background and the relative abundance of elements. However, a situation in which the whole universe is a tiny point exceeds the reach of Einstein’s general theory of relativity, and for that reason it has not been possible to clarify how the universe actually originated.

In superstring theory, which is considered to be the “theory of everything”, all the elementary particles are represented as various oscillation modes of very tiny strings. Among those oscillation modes, there is one that corresponds to a particle that mediates gravity, and thus the general theory of relativity can be naturally extended to the scale of elementary particles. Therefore, it is expected that superstring theory allows the investigation of the birth of the universe. However, actual calculation has been intractable because the interaction between strings is strong, so all investigation thus far has been restricted to discussing various models or scenarios.

Superstring theory predicts a space with 9 dimensions, which poses the big puzzle of how this can be consistent with the 3-dimensional space that we live in.

A group of 3 researchers, Jun Nishimura (associate professor at KEK), Asato Tsuchiya (associate professor at Shizuoka University) and Sang-Woo Kim (project researcher at Osaka University) has succeeded in simulating the birth of the universe, using a supercomputer for calculations based on superstring theory. This showed that the universe had 9 spatial dimensions at the beginning, but only 3 of these underwent expansion at some point in time.

This work will be published soon in Physical Review Letters.

[The content of the research]

In this study, the team established a method for calculating large matrices (in the IKKT matrix model), which represent the interactions of strings, and calculated how the 9-dimensional space changes with time. In the figure, the spatial extents in 9 directions are plotted against time.

If one goes far enough back in time, space is indeed extended in 9 directions, but then at some point only 3 of those directions start to expand rapidly. This result demonstrates, for the first time, that the 3-dimensional space that we are living in indeed emerges from the 9-dimensional space that superstring theory predicts.

This calculation was carried out on the supercomputer Hitachi SR16000 (theoretical performance: 90.3 TFLOPS) at the Yukawa Institute for Theoretical Physics of Kyoto University.
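The press release does not spell out the observable, but in matrix-model studies the extent of space in each direction is often measured through the eigenvalues of a 9×9 “moment of inertia” tensor T_ij = Tr(A_i A_j)/N built from the nine matrices A_i. The sketch below is not the team’s supercomputer Monte Carlo; it is only a hedged toy in Python (the matrix size, scales and normalization are illustrative assumptions) showing how three large and six small eigenvalues would register the kind of 3+6 split described above.

```python
import numpy as np

def random_hermitian(n, scale, rng):
    # An n x n Hermitian matrix whose entries have typical size `scale`.
    a = rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n))
    return scale * (a + a.conj().T) / 2

def spatial_extents(matrices):
    # Eigenvalues of T_ij = Tr(A_i A_j) / N, a 9 x 9 "moment of inertia"
    # tensor; each eigenvalue measures the squared extent in one direction.
    n = matrices[0].shape[0]
    d = len(matrices)
    t = np.empty((d, d))
    for i in range(d):
        for j in range(d):
            t[i, j] = np.trace(matrices[i] @ matrices[j]).real / n
    return np.sort(np.linalg.eigvalsh(t))[::-1]

rng = np.random.default_rng(seed=1)
n, d = 32, 9

# Isotropic configuration: all nine directions have comparable extent.
isotropic = [random_hermitian(n, 1.0, rng) for _ in range(d)]

# Configuration with three directions blown up by hand, mimicking the
# symmetry breaking the simulation is reported to exhibit dynamically.
scales = [3.0] * 3 + [0.3] * 6
broken = [random_hermitian(n, s, rng) for s in scales]

print("isotropic: ", np.round(spatial_extents(isotropic), 2))
print("3+6 split: ", np.round(spatial_extents(broken), 2))
```

In the real calculation the anisotropy is not inserted by hand, of course; the point of the simulation is that only three directions start to expand on their own.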

[The significance of the research]

It is almost 40 years since superstring theory was proposed as the theory of everything, extending the general theory of relativity to the scale of elementary particles. However, its validity and its usefulness remained unclear due to the difficulty of performing actual calculations. The newly obtained solution to the space-time dimensionality puzzle strongly supports the validity of the theory.

Furthermore, the establishment of a new method to analyze superstring theory using computers opens up the possibility of applying this theory to various problems. For instance, it should now be possible to provide a theoretical understanding of the inflation that is believed to have taken place in the early universe, and also the accelerating expansion of the universe, whose discovery earned the Nobel Prize in Physics this year. It is expected that superstring theory will develop further and play an important role in solving such puzzles in particle physics as the existence of the dark matter that is suggested by cosmological observations, and the Higgs particle, which is expected to be discovered by LHC experiments.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A visualization of strings. Courtesy of R. Dijkgraaf / Universe Today.[end-div]

Salad Bar Strategies

It turns out that human behavior at the ubiquitous, self-serve salad bar in your suburban restaurant or hotel is a rather complex affair. There is a method to optimizing the type and quantity of food on one’s plate.

[div class=attrib]From the New Scientist:[end-div]

Competition, greed and skulduggery are the name of the game if you want to eat your fill. Smorgasbord behaviour is surprisingly complex.

A mathematician, an engineer and a psychologist go up to a buffet… No, it’s not the start of a bad joke.

While most of us would dive into the sandwiches without thinking twice, these diners see a groaning table as a welcome opportunity to advance their research.

Look behind the salads, sausage rolls and bite-size pizzas and it turns out that buffets are a microcosm of greed, sexual politics and altruism – a place where our food choices are driven by factors we’re often unaware of. Understand the science and you’ll see buffets very differently next time you fill your plate.

The story starts with Lionel Levine of Cornell University in Ithaca, New York, and Katherine Stange of Stanford University, California. They were sharing food at a restaurant one day, and wondered: do certain choices lead to tastier platefuls when food must be divided up? You could wolf down everything in sight, of course, but these guys are mathematicians, so they turned to a more subtle approach: game theory.

Applying mathematics to a buffet is harder than it sounds, so they started by simplifying things. They modelled two people taking turns to pick items from a shared platter – hardly a buffet, more akin to a polite tapas-style meal. It was never going to generate a strategy for any occasion, but hopefully useful principles would nonetheless emerge. And for their bellies, the potential rewards were great.

First they assumed that each diner would have individual preferences. One might place pork pie at the top and beetroot at the bottom, for example, while others might salivate over sausage rolls. That ranking can be plugged into calculations by giving each food item a score, where higher-ranked foods are worth more points. The most enjoyable buffet meal would be the one that scores highest in total.

In some scenarios, the route to the most enjoyable plate was straightforward. If both people shared the same rankings, they should pick their favourites first. But Levine and Stange also uncovered a counter-intuitive effect: it doesn’t always pay to take the favourite item first. To devise an optimum strategy, they say, you should take into account what your food rival considers to be the worst food on the table.

If that makes your brow furrow, consider this: if you know your fellow diner hates chicken legs, you know that can be the last morsel you aim to eat – even if it’s one of your favourites. In principle, if you had full knowledge of your food rival’s preferences, it would be possible to work backwards from their least favourite and identify the optimum order in which to fill your plate, according to the pair’s calculations, which will appear in American Mathematical Monthly (arxiv.org/abs/1104.0961).

So how do you know what to select first? In reality, the buffet might be long gone before you had worked it out. Even if you did, the researchers’ strategy also assumes that you are at a rather polite buffet, taking turns, so it has its limitations. However, it does provide practical advice in some scenarios. For example, imagine Amanda is up against Brian, who she knows has the opposite ranking of tastes to her. Amanda loves sausages, hates pickled onions, and is middling about quiche. Brian loves pickled onions, hates sausages, and shares her view of quiche. Having identified that her favourites are safe, Amanda should prioritise morsels where their taste-rankings match – the quiche, in other words.
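For small platters, the turn-taking game the researchers analyse can be solved exactly by backward induction, working out what each diner would do with every possible set of leftovers. Here is a minimal sketch of that idea (my own, not Levine and Stange’s code; the item names and scores are invented) for two diners who each play to maximise their own enjoyment.

```python
from functools import lru_cache

def best_plates(items, first_scores, second_scores):
    """Optimal totals when two diners alternate picks from a shared platter,
    the first diner moving first and each maximising their own score."""
    @lru_cache(maxsize=None)
    def play(remaining, first_to_move):
        if not remaining:
            return (0, 0)
        best = None
        for item in remaining:
            rest = tuple(x for x in remaining if x != item)
            first_total, second_total = play(rest, not first_to_move)
            if first_to_move:
                outcome = (first_total + first_scores[item], second_total)
                better = best is None or outcome[0] > best[0]
            else:
                outcome = (first_total, second_total + second_scores[item])
                better = best is None or outcome[1] > best[1]
            if better:
                best = outcome
        return best
    return play(tuple(items), True)

# Amanda moves first; scores are made-up rankings (3 = favourite, 1 = worst).
amanda = {"sausage": 3, "quiche": 2, "pickled onion": 1}
brian = {"pickled onion": 3, "quiche": 2, "sausage": 1}
print(best_plates(("sausage", "quiche", "pickled onion"), amanda, brian))
# -> (5, 3): Amanda can secure her sausage and the quiche, leaving Brian the onions.
```

Swapping in different preference tables, and comparing the result against a naive grab-your-favourite-first order, is one way to explore the paper’s claim that the greedy approach is not always optimal.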

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Salad bars. Courtesy of Google search.[end-div]

Ronald Searle

Ronald Searle, your serious wit and your heroic pen will be missed. Searle died on December 30, aged 91.

The first “real” book purchased by theDiagonal’s editor with his own money was “How To Be Topp” by Geoffrey Willans and Ronald Searle. The book featured Searle’s unique and unmistakable illustrations of anti-hero Nigel Molesworth, a stoic, shrewd and droll English schoolboy.

Yet while Searle will be best remembered for his drawings of Molesworth and friends at St Custard’s high school and his invention of St Trinian’s (a school for rowdy schoolgirls), he leaves behind a critical body of work that graphically illustrates his brutal captivity at the hands of the Japanese during the Second World War.

Most of these drawings appear in his 1986 book, Ronald Searle: To the Kwai and Back, War Drawings 1939-1945. In the book, Searle also wrote of his experiences as a prisoner. Many of his original drawings are now in the permanent collection of the Imperial War Museum, London.

[div class=attrib]From the BBC:[end-div]

British cartoonist Ronald Searle, best known for creating the fictional girls’ school St Trinian’s, has died aged 91.

His daughter Kate Searle said in a statement that he “passed away peacefully in his sleep” in a hospital in France.

Searle’s spindly cartoons of the naughty schoolgirls first appeared in 1941, before the idea was adapted for film.

The first movie version, The Belles of St Trinian’s, was released in 1954.

Joyce Grenfell and George Cole starred in the film, along with Alastair Sim, who appeared in drag as headmistress Millicent Fritton.

Searle also provided illustrations for the Molesworth series, written by Geoffrey Willans.

The gothic, line-drawn cartoons breathed life into the gruesome pupils of St Custard’s school, in particular the outspoken, but functionally-illiterate Nigel Molesworth “the goriller of 3B”.

Searle’s work regularly appeared in magazines and newspapers, including Punch and The New Yorker.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Welcome back to the new term molesworth! From How to be Topp. Courtesy of Geoffrey Willans and Ronald Searle / Vanguard Press.[end-div]

Weight Loss and the Coordinated Defense Mechanism

New research into obesity and weight loss shows why it’s so hard to keep off the pounds shed through dieting. The good news is that regaining weight is not simply a matter of poor self-control or laziness. The bad news is that keeping one’s weight down may be much more difficult than we assumed, thanks to the body’s complex defense mechanism.

Tara Parker-Pope over at the Well blog reviews some of the new findings, which seem to point the finger at a group of hormones and specific genes that work together to help us regain those lost pounds.

[div class=attrib]From the New York Times:[end-div]

For 15 years, Joseph Proietto has been helping people lose weight. When these obese patients arrive at his weight-loss clinic in Australia, they are determined to slim down. And most of the time, he says, they do just that, sticking to the clinic’s program and dropping excess pounds. But then, almost without exception, the weight begins to creep back. In a matter of months or years, the entire effort has come undone, and the patient is fat again. “It has always seemed strange to me,” says Proietto, who is a physician at the University of Melbourne. “These are people who are very motivated to lose weight, who achieve weight loss most of the time without too much trouble and yet, inevitably, gradually, they regain the weight.”

Anyone who has ever dieted knows that lost pounds often return, and most of us assume the reason is a lack of discipline or a failure of willpower. But Proietto suspected that there was more to it, and he decided to take a closer look at the biological state of the body after weight loss.

Beginning in 2009, he and his team recruited 50 obese men and women. The men weighed an average of 233 pounds; the women weighed about 200 pounds. Although some people dropped out of the study, most of the patients stuck with the extreme low-calorie diet, which consisted of special shakes called Optifast and two cups of low-starch vegetables, totaling just 500 to 550 calories a day for eight weeks. Ten weeks in, the dieters lost an average of 30 pounds.

At that point, the 34 patients who remained stopped dieting and began working to maintain the new lower weight. Nutritionists counseled them in person and by phone, promoting regular exercise and urging them to eat more vegetables and less fat. But despite the effort, they slowly began to put on weight. After a year, the patients already had regained an average of 11 of the pounds they struggled so hard to lose. They also reported feeling far more hungry and preoccupied with food than before they lost the weight.

While researchers have known for decades that the body undergoes various metabolic and hormonal changes while it’s losing weight, the Australian team detected something new. A full year after significant weight loss, these men and women remained in what could be described as a biologically altered state. Their still-plump bodies were acting as if they were starving and were working overtime to regain the pounds they lost. For instance, a gastric hormone called ghrelin, often dubbed the “hunger hormone,” was about 20 percent higher than at the start of the study. Another hormone associated with suppressing hunger, peptide YY, was also abnormally low. Levels of leptin, a hormone that suppresses hunger and increases metabolism, also remained lower than expected. A cocktail of other hormones associated with hunger and metabolism all remained significantly changed compared to pre-dieting levels. It was almost as if weight loss had put their bodies into a unique metabolic state, a sort of post-dieting syndrome that set them apart from people who hadn’t tried to lose weight in the first place.

“What we see here is a coordinated defense mechanism with multiple components all directed toward making us put on weight,” Proietto says. “This, I think, explains the high failure rate in obesity treatment.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains, especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.
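Read strictly, the three laws describe a lexicographic priority: human safety first, obedience second, self-preservation last. The toy sketch below (mine, not Asimov’s or the article’s; the field names are invented) encodes that ordering just to make the structure, and the potential for conflict, concrete.

```python
def choose_action(candidates):
    """Pick among candidate actions using a strict, lexicographic reading of
    the three laws: least harm to humans, then obedience, then survival."""
    def priority(action):
        return (
            action["humans_harmed"],               # First Law: fewest humans harmed
            0 if action["obeys_order"] else 1,     # Second Law: prefer obedience
            0 if action["preserves_self"] else 1,  # Third Law: prefer survival
        )
    return min(candidates, key=priority)

# An order whose predicted outcome harms a human is overridden by the First Law.
actions = [
    {"name": "follow order", "humans_harmed": 1, "obeys_order": True, "preserves_self": True},
    {"name": "refuse order", "humans_harmed": 0, "obeys_order": False, "preserves_self": True},
]
print(choose_action(actions)["name"])  # -> refuse order
```

Even this crude encoding shows where the trouble lies: everything hinges on predicted consequences that a real machine may not be able to supply, which is precisely the ethical gap the article goes on to discuss.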

Colin Allen, writing for the Opinionator column, reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]