All posts by Mike

Oil, Art and the 0.0001 Percent

[div class=attrib]From Vanity Fair:[end-div]

The tiny, oil-rich nation of Qatar has purchased a Paul Cézanne painting, The Card Players, for more than $250 million. The deal, in a single stroke, sets the highest price ever paid for a work of art and upends the modern art market.

If the price seems insane, it may well be, since it more than doubles the current auction record for a work of art. And this is no epic van Gogh landscape or Vermeer portrait, but an angular, moody representation of two Aix-en-Provence peasants in a card game. But, for its $250 million, Qatar gets more than a post-Impressionist masterpiece; it wins entry into an exclusive club. There are four other Cézanne Card Players in the series, and they are in the collections of the Metropolitan Museum of Art, the Musée d’Orsay, the Courtauld, and the Barnes Foundation. For a nation in the midst of building a museum empire, it’s instant cred.

Is the painting, created at the cusp of the 20th century, worth it? Well, Cézanne inspired Cubism and presaged abstract art, and Picasso called him “the father of us all.” That said, “$250 million is a fortune,” notes Victor Wiener, the fine-art appraiser called in by Lloyd’s of London when Steve Wynn put his elbow through a Picasso, in 2006. “But you take any art-history course, and a Card Players is likely in it. It’s a major, major image.” For months, he said, “its sale has been rumored. Now, everyone will use this price as a point of departure: it changes the whole art-market structure.”

The Cézanne sale actually took place in 2011, and details of the secret deal are now coming out as a slew of V.I.P. collectors, curators, and dealers head to Qatar for the opening next week of a Takashi Murakami blockbuster that was recently on view in the Palace of Versailles. The nation, located on its own small jetty off the Arabian Peninsula, is a new destination on the art-world grand tour: current exhibitions include an 80-foot-high Richard Serra and a Louise Bourgeois retrospective (her bronze spider is crawling across the Doha Convention Center), and in March it hosts a Global Art Forum that attracts artists, curators, and patrons from museum groups worldwide.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Card Players by Paul Cézanne. Courtesy of Vanity Fair.[end-div]

Yawning and Empathy

[div class=attrib]From Scientific American:[end-div]

You can tell a lot about a person from their body. And I don’t just mean how many hours they spend at the gym, or how easy it is for them to sweet-talk their way out of speeding tickets. For the past several decades researchers have been studying the ways in which the body reveals properties of the mind. An important subset of this work has taken this idea a step further: do the ways our bodies relate to one another tell us about the ways in which our minds relate to one another? Consider behavioral mimicry. Many studies have found that we quite readily mimic the nonverbal behavior of those with whom we interact. Furthermore, the degree to which we mimic others is predicted both by our personality traits and by our relationships with those around us. In short, the more empathetic we are, the more we mimic, and the more we like the people we’re interacting with, the more we mimic. The relationship between our bodies reveals something about the relationship between our minds.

The bulk of this research has made use of clever experimental manipulations involving research assistant actors. The actor crosses his legs and then waits to see if the participant crosses his legs, too. If so, we’ve found mimicry, and can now compare the presence of mimicry with self-reports of, say, liking and interpersonal closeness to see if there is a relationship. More naturalistic evidence for this phenomenon has been much harder to come by. That is, to what extent do we see this kind of nonverbal back and forth in the real world and to what extent does it reveal the same properties of minds that seem to hold true in the lab?

A recent study conducted by Ivan Norscia and Elisabetta Palagi and published in the journal PLoS ONE has found such evidence in the unlikeliest of places: yawns. More specifically, yawn contagion, that annoyingly inevitable phenomenon that follows seeing, hearing, or even reading about another yawn. You’ve certainly experienced this, but perhaps you have not considered what it might reveal to others (beyond a lack of sleep or your interest level in their conversation). Past work has demonstrated that, similar to behavioral mimicry, contagious yawners tend to be higher in dispositional empathy. That is, they tend to be the type of people who are better at, and more interested in, understanding other people’s internal states. Not only that, but contagious yawning seems to emerge in children at the same time that they develop the cognitive capacities involved in empathizing with others. And children who lack this capacity, such as those with autism, also show deficits in their ability to catch others’ yawns. In short, the link between yawning and empathizing appears strong.

Given that regions of the brain involved in empathizing with others can be influenced by the degree of psychological closeness to those others, Norscia and Palagi wanted to know whether contagious yawning might also reveal information about how we relate to those around us. Specifically, are we more likely to catch the yawns of people to whom we are emotionally closer? Can we deduce something about the quality of the relationships between individuals based solely on their pattern of yawning?  Yawning might tell us the degree to which we empathize with, and by extension care about, the people around us.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Alex Gumerov/iStock / Scientific American.[end-div]

Women and Pain

New research suggests that women feel pain more intensely than men.

[div class=attrib]From Scientific American:[end-div]

When a woman falls ill, her pain may be more intense than a man’s, a new study suggests.

Across a number of different diseases, including diabetes, arthritis and certain respiratory infections, women in the study reported feeling more pain than men, the researchers said.

The study is one of the largest to examine sex differences in human pain perception. The results are in line with earlier findings, and reveal that sex differences in pain sensitivity may be present in many more diseases than previously thought.

Because pain is subjective, the researchers can’t know for sure whether women, in fact, experience more pain than men. A number of factors, including a person’s mood and whether they take pain medication, likely influence how much pain they say they’re in.

In all, the researchers assessed sex differences in reported pain for more than 250 diseases and conditions.

For almost every diagnosis, women reported higher average pain scores than men. Women’s scores were, on average, 20 percent higher than men’s scores, according to the study.

Women with lower back pain and with knee and leg strain consistently reported higher scores than men. Women also reported feeling more pain in the neck (for conditions such as torticollis, in which the neck muscles twist or spasm) and sinuses (during sinus infections) than did men, a result not found by previous research.

It could be that women assign different numbers to the level of pain they perceive compared with men, said Roger B. Fillingim, a pain researcher at the University of Florida College of Dentistry, who was not involved with the new study.

But the study was large, and the findings are backed up by previous work, Fillingim said.

“I think the most [simple] explanation is that women are indeed experiencing higher levels of pain than men,” Fillingim said.

The reason for this is not known, Fillingim said. Past research suggests a number of factors contribute to perceptions of pain level, including hormones, genetics and psychological factors, which may vary between men and women, Fillingim said. It’s also possible the pain systems work differently in men and women, or women experience more severe forms of disease than men, he said.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of CNN.[end-div]

L’Entente Cordiale: Parenting the French Way

French children, it seems, unlike their cousins in the United States, don’t suffer temper tantrums; they sit patiently at mealtimes, defer to their parents, eat all their vegetables, respect adults, and are generally happy. Why is this, and should American parents ditch the latest pop-psychology handbooks for parenting lessons from La Belle France?

[div class=attrib]From the Wall Street Journal:[end-div]

When my daughter was 18 months old, my husband and I decided to take her on a little summer holiday. We picked a coastal town that’s a few hours by train from Paris, where we were living (I’m American, he’s British), and booked a hotel room with a crib. Bean, as we call her, was our only child at this point, so forgive us for thinking: How hard could it be?

We ate breakfast at the hotel, but we had to eat lunch and dinner at the little seafood restaurants around the old port. We quickly discovered that having two restaurant meals a day with a toddler deserved to be its own circle of hell.

Bean would take a brief interest in the food, but within a few minutes she was spilling salt shakers and tearing apart sugar packets. Then she demanded to be sprung from her high chair so she could dash around the restaurant and bolt dangerously toward the docks.

Our strategy was to finish the meal quickly. We ordered as soon as we were seated, then begged the server to rush out some bread and bring us our appetizers and main courses at the same time. While my husband took a few bites of fish, I made sure that Bean didn’t get kicked by a waiter or lost at sea. Then we switched. We left enormous, apologetic tips to compensate for the arc of torn napkins and calamari around our table.

After a few more harrowing restaurant visits, I started noticing that the French families around us didn’t look like they were sharing our mealtime agony. Weirdly, they looked like they were on vacation. French toddlers were sitting contentedly in their high chairs, waiting for their food, or eating fish and even vegetables. There was no shrieking or whining. And there was no debris around their tables.

Though by that time I’d lived in France for a few years, I couldn’t explain this. And once I started thinking about French parenting, I realized it wasn’t just mealtime that was different. I suddenly had lots of questions. Why was it, for example, that in the hundreds of hours I’d clocked at French playgrounds, I’d never seen a child (except my own) throw a temper tantrum? Why didn’t my French friends ever need to rush off the phone because their kids were demanding something? Why hadn’t their living rooms been taken over by teepees and toy kitchens, the way ours had?

Soon it became clear to me that quietly and en masse, French parents were achieving outcomes that created a whole different atmosphere for family life. When American families visited our home, the parents usually spent much of the visit refereeing their kids’ spats, helping their toddlers do laps around the kitchen island, or getting down on the floor to build Lego villages. When French friends visited, by contrast, the grownups had coffee and the children played happily by themselves.

By the end of our ruined beach holiday, I decided to figure out what French parents were doing differently. Why didn’t French children throw food? And why weren’t their parents shouting? Could I change my wiring and get the same results with my own offspring?

Driven partly by maternal desperation, I have spent the last several years investigating French parenting. And now, with Bean 6 years old and twins who are 3, I can tell you this: The French aren’t perfect, but they have some parenting secrets that really do work.

I first realized I was on to something when I discovered a 2009 study, led by economists at Princeton, comparing the child-care experiences of similarly situated mothers in Columbus, Ohio, and Rennes, France. The researchers found that American moms considered it more than twice as unpleasant to deal with their kids. In a different study by the same economists, working mothers in Texas said that even housework was more pleasant than child care.

[div class=attrib]Read the entire article here. This is adapted from “Bringing Up Bébé: One American Mother Discovers the Wisdom of French Parenting,” to be published February 7, 2012 by the Penguin Press.[end-div]

[div class=attrib]Image: That’s the way to do it … a young boy at the Côte d’Or restaurant, Saulieu. Courtesy of Owen Franken/Corbis / Guardian [end-div]

The Other Mona Lisa

[div class=attrib]From the Guardian:[end-div]

A contemporaneous copy of the world’s most famous painting has been sensationally discovered by conservators at the Prado in Madrid, allowing us to see the Mona Lisa as she would probably have looked at the time.

In art historical terms, the discovery is nothing short of remarkable. The Prado painting had long been thought to be one of dozens of surviving replicas of Leonardo’s masterpiece, made in the 16th and 17th centuries.

But, The Art Newspaper reports, recent conservation reveals that the work was in fact painted by a pupil working alongside Leonardo.

The original painting hangs behind glass and under enormous security at the Louvre, a gallery it is unlikely ever to leave. There is also no prospect of it being cleaned in the foreseeable future, meaning crowds view a work that, although undeniably beautiful, has several layers of old, cracked varnish.

This newly discovered work – found under black overpaint – allows the viewer to see a much fresher version of the captivating young woman, generally acknowledged to be Lisa Gherardini, the wife of the Florentine cloth merchant Francesco del Giocondo.

The Prado said the restoration had been carried out over the past few months in preparation for an exhibition at the Louvre in March.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Detail of the newly conserved Leonardo da Vinci pupil’s take on the Mona Lisa. The Prado has yet to finish conservation work on the whole painting. Courtesy of Museo Nacional del Prado / Guardian.[end-div]

The More Things Stay the Same, the More They Change?

[div class=attrib]From Scientific American:[end-div]

Some things never change. Physicists call them the constants of nature. Such quantities as the velocity of light, c, Newton’s constant of gravitation, G, and the mass of the electron, mₑ, are assumed to be the same at all places and times in the universe. They form the scaffolding around which the theories of physics are erected, and they define the fabric of our universe. Physics has progressed by making ever more accurate measurements of their values.

And yet, remarkably, no one has ever successfully predicted or explained any of the constants. Physicists have no idea why constants take the special numerical values that they do (given the choice of units). In SI units, c is 299,792,458; G is 6.673 × 10⁻¹¹; and mₑ is 9.10938188 × 10⁻³¹—numbers that follow no discernible pattern. The only thread running through the values is that if many of them were even slightly different, complex atomic structures such as living beings would not be possible. The desire to explain the constants has been one of the driving forces behind efforts to develop a complete unified description of nature, or “theory of everything.” Physicists have hoped that such a theory would show that each of the constants of nature could have only one logically possible value. It would reveal an underlying order to the seeming arbitrariness of nature.
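
Written out in full with their standard SI units (which the excerpt omits), those three figures read:

$$c = 299{,}792{,}458\ \mathrm{m\,s^{-1}},\qquad G \approx 6.673\times10^{-11}\ \mathrm{m^{3}\,kg^{-1}\,s^{-2}},\qquad m_{e} \approx 9.10938188\times10^{-31}\ \mathrm{kg}.$$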

In recent years, however, the status of the constants has grown more muddied, not less. Researchers have found that the best candidate for a theory of everything, the variant of string theory called M-theory, is self-consistent only if the universe has more than four dimensions of space and time—as many as seven more. One implication is that the constants we observe may not, in fact, be the truly fundamental ones. Those live in the full higher-dimensional space, and we see only their three-dimensional “shadows.”

Meanwhile physicists have also come to appreciate that the values of many of the constants may be the result of mere happenstance, acquired during random events and elementary particle processes early in the history of the universe. In fact, string theory allows for a vast number—10⁵⁰⁰—of possible “worlds” with different self-consistent sets of laws and constants. So far researchers have no idea why our combination was selected. Continued study may reduce the number of logically possible worlds to one, but we have to remain open to the unnerving possibility that our known universe is but one of many—a part of a multiverse—and that different parts of the multiverse exhibit different solutions to the theory, our observed laws of nature being merely one edition of many systems of local bylaws.

No further explanation would then be possible for many of our numerical constants other than that they constitute a rare combination that permits consciousness to evolve. Our observable universe could be one of many isolated oases surrounded by an infinity of lifeless space—a surreal place where different forces of nature hold sway and particles such as electrons or structures such as carbon atoms and DNA molecules could be impossibilities. If you tried to venture into that outside world, you would cease to be.

Thus, string theory gives with the right hand and takes with the left. It was devised in part to explain the seemingly arbitrary values of the physical constants, and the basic equations of the theory contain few arbitrary parameters. Yet so far string theory offers no explanation for the observed values of the constants.

[div class=attrib]Read the entire article here.[end-div]

Time for An Over-The-Counter Morality Pill?

Stories of people who risk life and limb to help a stranger, and of those who turn a blind eye, are as current as they are ancient. Almost on a daily basis the 24-hour news cycle carries a heartwarming story of someone doing good to or for another; seemingly just as often comes a story of indifference. Social and psychological researchers have studied this behavior in humans and animals for decades. However, only recently has progress been made in identifying some underlying factors. Peter Singer, a professor of bioethics at Princeton University, and researcher Agata Sagan recap some current understanding.

All of this leads to a conundrum: would it be ethical to market a “morality” pill that would make us do more good more often?

[div class=attrib]From the New York Times:[end-div]

Last October, in Foshan, China, a 2-year-old girl was run over by a van. The driver did not stop. Over the next seven minutes, more than a dozen people walked or bicycled past the injured child. A second truck ran over her. Eventually, a woman pulled her to the side, and her mother arrived. The child died in a hospital. The entire scene was captured on video and caused an uproar when it was shown by a television station and posted online. A similar event occurred in London in 2004, as have others, far from the lens of a video camera.

Yet people can, and often do, behave in very different ways.

A news search for the words “hero saves” will routinely turn up stories of bystanders braving oncoming trains, swift currents and raging fires to save strangers from harm. Acts of extreme kindness, responsibility and compassion are, like their opposites, nearly universal.

Why are some people prepared to risk their lives to help a stranger when others won’t even stop to dial an emergency number?

Scientists have been exploring questions like this for decades. In the 1960s and early ’70s, famous experiments by Stanley Milgram and Philip Zimbardo suggested that most of us would, under specific circumstances, voluntarily do great harm to innocent people. During the same period, John Darley and C. Daniel Batson showed that even some seminary students on their way to give a lecture about the parable of the Good Samaritan would, if told that they were running late, walk past a stranger lying moaning beside the path. More recent research has told us a lot about what happens in the brain when people make moral decisions. But are we getting any closer to understanding what drives our moral behavior?

Here’s what much of the discussion of all these experiments missed: Some people did the right thing. A recent experiment (about which we have some ethical reservations) at the University of Chicago seems to shed new light on why.

Researchers there took two rats who shared a cage and trapped one of them in a tube that could be opened only from the outside. The free rat usually tried to open the door, eventually succeeding. Even when the free rats could eat up all of a quantity of chocolate before freeing the trapped rat, they mostly preferred to free their cage-mate. The experimenters interpret their findings as demonstrating empathy in rats. But if that is the case, they have also demonstrated that individual rats vary, for only 23 of 30 rats freed their trapped companions.

The causes of the difference in their behavior must lie in the rats themselves. It seems plausible that humans, like rats, are spread along a continuum of readiness to help others. There has been considerable research on abnormal people, like psychopaths, but we need to know more about relatively stable differences (perhaps rooted in our genes) in the great majority of people as well.

Undoubtedly, situational factors can make a huge difference, and perhaps moral beliefs do as well, but if humans are just different in their predispositions to act morally, we also need to know more about these differences. Only then will we gain a proper understanding of our moral behavior, including why it varies so much from person to person and whether there is anything we can do about it.

[div class=attrib]Read more here.[end-div]

A Theory of Everything? Nah

A peer-reviewed journal recently published a 100-page scientific paper describing a theory of everything that unifies quantum theory and relativity (a long-sought goal) with the origin of life, evolution and cosmology. And, best of all, the paper contains no mathematics.

The paper, written by a faculty member at Case Western Reserve University, raises interesting issues about the peer-review process and the viral spread of information, whether correct or not.

[div class=attrib]From Ars Technica:[end-div]

Physicists have been working for decades on a “theory of everything,” one that unites quantum mechanics and relativity. Apparently, they were being too modest. Yesterday saw publication of a press release claiming a biologist had just published a theory accounting for all of that—and handling the origin of life and the creation of the Moon in the bargain. Better yet, no math!

Where did such a crazy theory originate? In the mind of a biologist at a respected research institution, Case Western Reserve University Medical School. Amazingly, he managed to get his ideas published, then amplified by an official press release. At least two sites with poor editorial control then reposted the press release—verbatim—as a news story.

Gyres all the way down

The theory in question springs from the brain of one Erik Andrulis, a CWRU faculty member who has a number of earlier papers on fairly standard biochemistry. The new paper was accepted by an open access journal called Life, meaning that you can freely download a copy of its 105 pages if you’re so inclined. Apparently, the journal is peer-reviewed, which is a bit of a surprise; even accepting that the paper makes a purely theoretical proposal, it is nothing like science as I’ve ever seen it practiced.

The basic idea is that everything, from subatomic particles to living systems, is based on helical systems the author calls “gyres,” which transform matter, energy, and information. These transformations then determine the properties of various natural systems, living and otherwise. What are these gyres? It’s really hard to say; even Andrulis admits that they’re just “a straightforward and non-mathematical core model” (although he seems to think that’s a good thing). Just about everything can be derived from this core model; the author cites “major phenomena including, but not limited to, quantum gravity, phase transitions of water, why living systems are predominantly CHNOPS (carbon, hydrogen, nitrogen, oxygen, phosphorus, and sulfur), homochirality of sugars and amino acids, homeoviscous adaptation, triplet code, and DNA mutations.”

He’s serious about the “not limited to” part; one of the sections describes how gyres could cause the Moon to form.

Is this a viable theory of everything? The word “boson,” the particle that carries forces, isn’t in the text at all. “Quark” appears once—in the title of one of the 800 references. The only subatomic particle Andrulis describes is the electron; he skips from there straight up to oxygen. Enormous gaps exist everywhere one looks.

[div class=attrib]Read more here.[end-div]

Inside the Weird Teenage Brain

[div class=attrib]From the Wall Street Journal:[end-div]

“What was he thinking?” It’s the familiar cry of bewildered parents trying to understand why their teenagers act the way they do.

How does the boy who can thoughtfully explain the reasons never to drink and drive end up in a drunken crash? Why does the girl who knows all about birth control find herself pregnant by a boy she doesn’t even like? What happened to the gifted, imaginative child who excelled through high school but then dropped out of college, drifted from job to job and now lives in his parents’ basement?

Adolescence has always been troubled, but for reasons that are somewhat mysterious, puberty is now kicking in at an earlier and earlier age. A leading theory points to changes in energy balance as children eat more and move less.

At the same time, first with the industrial revolution and then even more dramatically with the information revolution, children have come to take on adult roles later and later. Five hundred years ago, Shakespeare knew that the emotionally intense combination of teenage sexuality and peer-induced risk could be tragic—witness “Romeo and Juliet.” But, on the other hand, if not for fate, 13-year-old Juliet would have become a wife and mother within a year or two.

Our Juliets (as parents longing for grandchildren will recognize with a sigh) may experience the tumult of love for 20 years before they settle down into motherhood. And our Romeos may be poetic lunatics under the influence of Queen Mab until they are well into graduate school.

What happens when children reach puberty earlier and adulthood later? The answer is: a good deal of teenage weirdness. Fortunately, developmental psychologists and neuroscientists are starting to explain the foundations of that weirdness.

The crucial new idea is that there are two different neural and psychological systems that interact to turn children into adults. Over the past two centuries, and even more over the past generation, the developmental timing of these two systems has changed. That, in turn, has profoundly changed adolescence and produced new kinds of adolescent woe. The big question for anyone who deals with young people today is how we can go about bringing these cogs of the teenage mind into sync once again.

The first of these systems has to do with emotion and motivation. It is very closely linked to the biological and chemical changes of puberty and involves the areas of the brain that respond to rewards. This is the system that turns placid 10-year-olds into restless, exuberant, emotionally intense teenagers, desperate to attain every goal, fulfill every desire and experience every sensation. Later, it turns them back into relatively placid adults.

Recent studies in the neuroscientist B.J. Casey’s lab at Cornell University suggest that adolescents aren’t reckless because they underestimate risks, but because they overestimate rewards—or, rather, find rewards more rewarding than adults do. The reward centers of the adolescent brain are much more active than those of either children or adults. Think about the incomparable intensity of first love, the never-to-be-recaptured glory of the high-school basketball championship.

What teenagers want most of all are social rewards, especially the respect of their peers. In a recent study by the developmental psychologist Laurence Steinberg at Temple University, teenagers did a simulated high-risk driving task while they were lying in an fMRI brain-imaging machine. The reward system of their brains lighted up much more when they thought another teenager was watching what they did—and they took more risks.

From an evolutionary point of view, this all makes perfect sense. One of the most distinctive evolutionary features of human beings is our unusually long, protected childhood. Human children depend on adults for much longer than those of any other primate. That long protected period also allows us to learn much more than any other animal. But eventually, we have to leave the safe bubble of family life, take what we learned as children and apply it to the real adult world.

Becoming an adult means leaving the world of your parents and starting to make your way toward the future that you will share with your peers. Puberty not only turns on the motivational and emotional system with new force, it also turns it away from the family and toward the world of equals.

[div class=attrib]Read more here.[end-div]

See the Aurora, then Die

One item that features prominently on so-called “things-to-do-before-you-die” lists is seeing the Aurora Borealis, or Northern Lights.

The recent surge in sunspot activity and solar flares has caused a corresponding uptick in geomagnetic storms here on Earth. The resulting aurorae have been nothing short of spectacular. More images here, courtesy of Smithsonian magazine.

Do We Need Philosophy Outside of the Ivory Tower?

In her song “What I Am”, Edie Brickell reminds us that philosophy is “the talk on a cereal box” and “a walk on the slippery rocks”.

Philosopher Gary Gutting makes the case that the discipline is more important than ever, and yes, it belongs in the mainstream consciousness, and not just within the confines of academia.

[div class=attrib]From the New York Times:[end-div]

Almost every article that appears in The Stone provokes some comments from readers challenging the very idea that philosophy has anything relevant to say to non-philosophers.  There are, in particular, complaints that philosophy is an irrelevant “ivory-tower” exercise, useless to any except those interested in logic-chopping for its own sake.

There is an important conception of philosophy that falls to this criticism.  Associated especially with earlier modern philosophers, particularly René Descartes, this conception sees philosophy as the essential foundation of the beliefs that guide our everyday life.  For example, I act as though there is a material world and other people who experience it as I do.   But how do I know that any of this is true?  Couldn’t I just be dreaming of a world outside my thoughts?  And, since (at best) I see only other human bodies, what reason do I have to think that there are any minds connected to those bodies?  To answer these questions, it would seem that I need rigorous philosophical arguments for my existence and the existence of other thinking humans.

Of course, I don’t actually need any such arguments, if only because I have no practical alternative to believing that I and other people exist.  As soon as we stop thinking weird philosophical thoughts, we immediately go back to believing what skeptical arguments seem to call into question.  And rightly so, since, as David Hume pointed out, we are human beings before we are philosophers.

But what Hume and, by our day, virtually all philosophers are rejecting is only what I’m calling the foundationalist conception of philosophy. Rejecting foundationalism means accepting that we have every right to hold basic beliefs that are not legitimated by philosophical reflection.  More recently, philosophers as different as Richard Rorty and Alvin Plantinga have cogently argued that such basic beliefs include not only the “Humean” beliefs that no one can do without, but also substantive beliefs on controversial questions of ethics, politics and religion.  Rorty, for example, maintained that the basic principles of liberal democracy require no philosophical grounding (“the priority of democracy over philosophy”).

If you think that the only possible “use” of philosophy would be to provide a foundation for beliefs that need no foundation, then the conclusion that philosophy is of little importance for everyday life follows immediately.  But there are other ways that philosophy can be of practical significance.

Even though basic beliefs on ethics, politics and religion do not require prior philosophical justification, they do need what we might call “intellectual maintenance,” which itself typically involves philosophical thinking.  Religious believers, for example, are frequently troubled by the existence of horrendous evils in a world they hold was created by an all-good God.  Some of their trouble may be emotional, requiring pastoral guidance.  But religious commitment need not exclude a commitment to coherent thought. For instance, often enough believers want to know if their belief in God makes sense given the reality of evil.  The philosophy of religion is full of discussions relevant to this question.  Similarly, you may be an atheist because you think all arguments for God’s existence are obviously fallacious. But if you encounter, say, a sophisticated version of the cosmological argument, or the design argument from fine-tuning, you may well need a clever philosopher to see if there’s anything wrong with it.

[div class=attrib]Read the entire article here.[end-div]

Forget the Groupthink: Rise of the Introvert

Author Susan Cain discusses her intriguing book, “Quiet: The Power of Introverts”, in an interview with Gareth Cook over at Mind Matters / Scientific American.

She shows us how social and business interactions and group-driven processes, often led and coordinated by extroverts, may not offer the best conditions for introverts to shine creatively.

[div class=attrib]From Mind Matters:[end-div]

Cook: This may be a stupid question, but how do you define an introvert? How can somebody tell whether they are truly introverted or extroverted?

Cain: Not a stupid question at all! Introverts prefer quiet, minimally stimulating environments, while extroverts need higher levels of stimulation to feel their best. Stimulation comes in all forms – social stimulation, but also lights, noise, and so on. Introverts even salivate more than extroverts do if you place a drop of lemon juice on their tongues! So an introvert is more likely to enjoy a quiet glass of wine with a close friend than a loud, raucous party full of strangers.

It’s also important to understand that introversion is different from shyness. Shyness is the fear of negative judgment, while introversion is simply the preference for less stimulation. Shyness is inherently uncomfortable; introversion is not. The traits do overlap, though psychologists debate to what degree.

Cook: You argue that our culture has an extroversion bias. Can you explain what you mean?

Cain: In our society, the ideal self is bold, gregarious, and comfortable in the spotlight. We like to think that we value individuality, but mostly we admire the type of individual who’s comfortable “putting himself out there.” Our schools, workplaces, and religious institutions are designed for extroverts. Introverts are to extroverts what American women were to men in the 1950s — second-class citizens with gigantic amounts of untapped talent.

In my book, I travel the country – from a Tony Robbins seminar to Harvard Business School to Rick Warren’s powerful Saddleback Church – shining a light on the bias against introversion. One of the most poignant moments was when an evangelical pastor I met at Saddleback confided his shame that “God is not pleased” with him because he likes spending time alone.

Cook: How does this cultural inclination affect introverts?

Cain: Many introverts feel there’s something wrong with them, and try to pass as extroverts. But whenever you try to pass as something you’re not, you lose a part of yourself along the way. You especially lose a sense of how to spend your time. Introverts are constantly going to parties and such when they’d really prefer to be home reading, studying, inventing, meditating, designing, thinking, cooking…or any number of other quiet and worthwhile activities.

According to the latest research, one third to one half of us are introverts – that’s one out of every two or three people you know. But you’d never guess that, right? That’s because introverts learn from an early age to act like pretend-extroverts.

[div class=attrib]Read the entire article here.[end-div]

Our Beautiful Home

A composite image of the beautiful blue planet, taken through NASA’s eyes on January 4, 2012. It’s so gorgeous that theDiagonal’s editor wishes he lived there.

[div class=attrib]Image of Earth from NASA’s Earth observing satellite Suomi NPP. Courtesy of NASA/NOAA/GSFC/Suomi NPP/VIIRS/Norman Kuring.[end-div]

Self-Esteem and Designer Goods

[div class=attrib]From Scientific American:[end-div]

Sellers have long charged a premium for objects that confer some kind of social status, even if they offer few, if any, functional benefits over cheaper products. Designer sunglasses, $200,000 Swiss watches, and many high-end cars often seem to fall into this category. If a marketer can make a mundane item seem like a status symbol—maybe by wrapping it in a fancy package or associating it with wealth, success or beauty—they can charge more for it.

Although this practice may seem like a way to trick consumers out of their hard-earned cash, studies show that people do reap real psychological benefits from the purchase of high status items. Still, some people may gain more than others do, and studies also suggest that buying fancy stuff for yourself is unlikely to be the best way to boost your happiness or self-esteem.

In 2008, two research teams demonstrated that people process social values in the brain’s reward center: the striatum, which also responds to monetary gains. That these two values share a cerebral home suggests we may weigh our reputation in cash terms. Whether we like it or not, attaching a monetary value to social status makes good scientific sense.

Much of what revs up this reward center—food and recreational drugs, for example—is associated with a temporary rush of pleasure or good feeling, rather than long-lasting satisfaction. But when we literally pay for that good feeling, by buying a high-status car or watch, say, the effect may last long enough to unleash profitable behaviors. In a study published last year, researchers at National Sun Yat-sen University in Taiwan found that the mere use of brand-name products seemed to make people feel, in one case, that they deserved higher salaries and, in the other, that they would be more attractive to a potential date, reports Roger Dooley in his Neuromarketing blog. Thus, even if the boost of good feeling—and self-worth—is short-lived, it might spawn actions that yield lasting benefits.

Other data suggest that owning fancy things might have more direct psychological benefits. In a study published in 2010, psychologist Ed Diener at the University of Illinois and his colleagues found that standard of living, as measured by household income and ownership of luxury goods, predicted a person’s overall satisfaction with life—although it did not seem to enhance positive emotions. That rush of pleasure you get from the purchase probably does fade, but a type of self-esteem effect seems to last.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image of luxury goods. Courtesy of Google search.[end-div]

Political and Social Stability and God

theDiagonal has carried several recent articles (here and here) on how atheists are placed in the same category as serial killers and child molesters, particularly in the United States. Why are atheists so reviled?

A study by Will Gervais and Ara Norenzayan at the University of British Columbia shows that it boils down to trust. Simply put, we are more likely to find someone trustworthy if we think they believe God is watching over them.

Interestingly, their research also showed that atheists are found in greater numbers in populations with stable governments and broad social safety nets. Political instability, it seems, drives more citizens to believe in God.

[div class=attrib]From Scientific American:[end-div]

Atheists are one of the most disliked groups in America. Only 45 percent of Americans say they would vote for a qualified atheist presidential candidate, and atheists are rated as the least desirable group for a potential son-in-law or daughter-in-law to belong to. Will Gervais at the University of British Columbia recently published a set of studies looking at why atheists are so disliked. His conclusion: It comes down to trust.

Gervais and his colleagues presented participants with a story about a person who accidentally hits a parked car and then fails to leave behind valid insurance information for the other driver. Participants were asked to choose the probability that the person in question was a Christian, a Muslim, a rapist, or an atheist. They thought it equally probable the culprit was an atheist or a rapist, and unlikely the person was a Muslim or Christian. In a different study, Gervais looked at how atheism influences people’s hiring decisions. People were asked to choose between an atheist or a religious candidate for a job requiring either a high or low degree of trust. For the high-trust job of daycare worker, people were more likely to prefer the religious candidate. For the job of waitress, which requires less trust, the atheists fared much better.

It wasn’t just the highly religious participants who expressed a distrust of atheists. People identifying themselves as having no religious affiliation held similar opinions. Gervais and his colleagues discovered that people distrust atheists because of the belief that people behave better when they think that God is watching over them. This belief may have some truth to it. Gervais and his colleague Ara Norenzayan have found that reminding people about God’s presence has the same effect as telling people they are being watched by others: it increases their feelings of self-consciousness and leads them to behave in more socially acceptable ways.

When we know that somebody believes in the possibility of divine punishment, we seem to assume they are less likely to do something unethical. Based on this logic, Gervais and Norenzayan hypothesized that reminding people about the existence of secular authority figures, such as policemen and judges, might alleviate people’s prejudice towards atheists. In one study, they had people watch either a travel video or a video of a police chief giving an end-of-the-year report. They then asked participants how much they agreed with certain statements about atheists (e.g., “I would be uncomfortable with an atheist teaching my child.”) In addition, they measured participants’ prejudice towards other groups, including Muslims and Jewish people. Their results showed that viewing the video of the police chief resulted in less distrust towards atheists. However, it had no effect on people’s prejudice towards other groups. From a psychological standpoint, God and secular authority figures may be somewhat interchangeable. The existence of either helps us feel more trusting of others.

Gervais and Norenzayan’s findings may shed light on an interesting puzzle: why acceptance towards atheism has grown rapidly in some countries but not others. In many Scandinavian countries, including Norway and Sweden, the number of people who report believing in God has reached an all-time low. This may have something to do with the way these countries have established governments that guarantee a high level of social security for all of their citizens.  Aaron Kay and his colleagues ran a study in Canada which found that political insecurity may push us towards believing in God. They gave participants two versions of a fictitious news story: one describing Canada’s current political situation as stable, the other describing it as potentially unstable. After reading one of the two articles, people’s beliefs in God were measured. People who read the article describing the government as potentially unstable were more likely to agree that God, or some other type of nonhuman entity, is in control of the universe. A common belief in the divine may help people feel more secure. Yet when security is achieved by more secular means, it may remove some of the draw of faith.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: In God We Trust. Courtesy of the Houston Chronicle.[end-div]

Do We Become More Conservative as We Age?

A popular stereotype suggests that we become increasingly conservative in our values as we age. Thus, one would expect that older voters would be more likely to vote for Republican candidates. However, recent sociological research debunks this view.

[div class=attrib]From Discovery:[end-div]

Amidst the bipartisan banter of election season, there persists an enduring belief that people get more conservative as they age — making older people more likely to vote for Republican candidates.

Ongoing research, however, fails to back up the stereotype. While there is some evidence that today’s seniors may be more conservative than today’s youth, that’s not because older folks are more conservative than they used to be. Instead, our modern elders likely came of age at a time when the political situation favored more conservative views.

In fact, studies show that people may actually get more liberal over time when it comes to certain kinds of beliefs. That suggests that we are not pre-determined to get stodgy, set in our ways or otherwise more inflexible in our retirement years.

Contrary to popular belief, old age can be an open-minded and enlightening time.

“Pigeonholing older people into these rigid attitude boxes or conservative boxes is not a good idea,” said Nick Danigelis, a sociologist and gerontologist at the University of Vermont in Burlington.

“Rather, when they were born, what experiences they had growing up, as well as political, social and economic events have a lot to do with how people behave,” he said. “Our results are showing that these have profound effects.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A Board of Elections volunteer watches people cast their ballots during early voting October 23, 2008 in Savannah, Georgia. Courtesy of MSNBC.[end-div]

Wikipedia Blackout and Intellectual Curiosity

Perhaps the recent dimming of Wikipedia (and other notable websites) for 24 hours on January 18, in protest of planned online anti-piracy legislation in the U.S. Congress, wasn’t all that bad.

Many would argue that Wikipedia has been a great boon in democratizing content authorship and disseminating information. So, when it temporarily shuttered its online doors, many shuddered from withdrawal. Yet, this “always on”, instantly available, crowdsourced resource is undermining an important human trait: intellectual curiosity.

When Wikipedia went off-air, many of us, including Jonathan Jones, were forced to search a little deeper and a little longer for facts and information. Doing so reawakened our need to discover, connect, and conceptualize for ourselves, rather than take at face value the musings of the anonymous masses, just one click away. Yes, we exercised our brains a little harder that day.

[div class=attrib]By Jonathan Jones over at the Guardian:[end-div]

I got really excited this morning. Looking up an artist online – Rembrandt, if you want to know – I noticed something different. As usual, the first item offered was his Wikipedia entry. But after a few seconds, the Rembrandt page dissolved into a darkened screen with a big W and an explanation I was too thrilled to read at that moment. Wikipedia offline? Wikipedia offline! A new dawn for humanity …

Only after a couple of glasses of champagne did I look again and realise that Wikipedia is offline only for 24 hours, in protest against what it sees as assaults on digital freedom.

OK, so I’m slightly hamming that up. Wikipedia is always the first site my search engine offers, for any artist, but I try to ignore it. I detest the way this site claims to offer the world’s knowledge when all it often contains is a half-baked distillation of third-hand information. To call this an encyclopedia is like saying an Airfix model is a real Spitfire. Actually, not even a kit model – more like one made out of matchsticks.

I have a modest proposal for Wikipedia: can it please stay offline for ever? It has already achieved something remarkable, replacing genuine intellectual curiosity and discovery with a world of lazy, instant factoids. Can it take a rest and let civilisation recover?

On its protest page today, the website asks us to “imagine a world without free knowledge”. These words betray a colossal arrogance. Do the creators of Wikipedia really believe they are the world’s only source of “free knowledge”?

Institutions that offer free knowledge have existed for thousands of years. They are called libraries. Public libraries flourished in ancient Greece and Rome, and were revived in the Renaissance. In the 19th century, libraries were built in cities and towns everywhere. What is the difference between a book and Wikipedia? It has a named author or authors, and they are made to work hard, by editors and teams of editors, to get their words into print. Those words, when they appear, vary vastly in value and importance, but the knowledge that can be gleaned – not just from one book but by comparing different books with one another, checking them against each other, reaching your own conclusions – is subtle, rich, beautiful. This knowledge cannot be packaged or fixed; if you keep an open mind, it is always changing.

[div class=attrib]Read the whole article here.[end-div]

Defying Gravity using Science

Gravity-defying feats have long been a favored pastime for magicians and illusionists. Well, science has now caught up to and surpassed our friends with sleight of hand. Check out this astonishing video (after the 10-second ad) of a “quantum-locked”, levitating superconducting disc, courtesy of New Scientist.

[div class=attrib]From the New Scientist:[end-div]

For centuries, con artists have convinced the masses that it is possible to defy gravity or walk through walls. Victorian audiences gasped at tricks of levitation involving crinolined ladies hovering over tables. Even before then, fraudsters and deluded inventors were proudly displaying perpetual-motion machines that could do impossible things, such as make liquids flow uphill without consuming energy. Today, magicians still make solid rings pass through each other and become interlinked – or so it appears. But these are all cheap tricks compared with what the real world has to offer.

Cool a piece of metal or a bucket of helium to near absolute zero and, in the right conditions, you will see the metal levitating above a magnet, liquid helium flowing up the walls of its container or solids passing through each other. “We love to observe these phenomena in the lab,” says Ed Hinds of Imperial College, London.

This weirdness is not mere entertainment, though. From these strange phenomena we can tease out all of chemistry and biology, find deliverance from our energy crisis and perhaps even unveil the ultimate nature of the universe. Welcome to the world of superstuff.

This world is a cold one. It only exists within a few degrees of absolute zero, the lowest temperature possible. Though you might think very little would happen in such a frozen place, nothing could be further from the truth. This is a wild, almost surreal world, worthy of Lewis Carroll.

One way to cross its threshold is to cool liquid helium to just above 2 kelvin. The first thing you might notice is that you can set the helium rotating, and it will just keep on spinning. That’s because it is now a “superfluid”, a liquid state with no viscosity.

Another interesting property of a superfluid is that it will flow up the walls of its container. Lift a bucketful of superfluid helium out of a vat of the stuff, and it will flow up the sides of the bucket, over the lip and down the outside, rejoining the fluid it was taken from.

[div class=attrib]Read more here.[end-div]

Handedness Shapes Perception and Morality

A group of new research studies shows that our left- or right-handedness shapes our perception of “goodness” and “badness”.

[div class=attrib]From Scientific American:[end-div]

A series of studies led by psychologist Daniel Casasanto suggests that one thing that may shape our choice is the side of the menu an item appears on. Specifically, Casasanto and his team have shown that for left-handers, the left side of any space connotes positive qualities such as goodness, niceness, and smartness. For right-handers, the right side of any space connotes these same virtues. He calls this idea that “people with different bodies think differently, in predictable ways” the body-specificity hypothesis.

In one of Casasanto’s experiments, adult participants were shown pictures of two aliens side by side and instructed to circle the alien that best exemplified an abstract characteristic. For example, participants may have been asked to circle the “more attractive” or “less honest” alien. Of the participants who showed a directional preference (most participants did), the majority of right-handers attributed positive characteristics more often to the aliens on the right whereas the majority of left-handers attributed positive characteristics more often to aliens on the left.

Handedness was found to predict choice in experiments mirroring real-life situations as well. When participants read near-identical product descriptions on either side of a page and were asked to indicate the products they wanted to buy, most righties chose the item described on the right side while most lefties chose the product on the left. Similarly, when subjects read side-by-side resumes from two job applicants presented in a random order, they were more likely to choose the candidate described on their dominant side.

Follow-up studies on children yielded similar results. In one experiment, children were shown a drawing of a bookshelf with a box to the left and a box to the right. They were then asked to think of a toy they liked and a toy they disliked and choose the boxes in which they would place the toys. Children tended to choose to place their preferred toy in the box to their dominant side and the toy they did not like to their non-dominant side.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Drawing Hands by M. C. Escher, 1948, Lithograph. Courtesy of Wikipedia.[end-div]

An Evolutionary Benefit to Self-deception

[div class=attrib]From Scientific American:[end-div]

We lie to ourselves all the time. We tell ourselves that we are better than average — that we are more moral, more capable, less likely to become sick or suffer an accident. It’s an odd phenomenon, and an especially puzzling one to those who think about our evolutionary origins. Self-deception is so pervasive that it must confer some advantage. But how could we be well served by a brain that deceives us? This is one of the topics tackled by Robert Trivers in his new book, “The Folly of Fools,” a colorful survey of deception that includes plane crashes, neuroscience and the transvestites of the animal world. He answered questions from Mind Matters editor Gareth Cook.

Cook: Do you have any favorite examples of deception in the natural world?
Trivers: Tough call. They are so numerous, intricate and bizarre. But you can hardly beat female mimics for general interest. These are males that mimic females in order to achieve closeness to a territory-holding male, who then attracts a real female ready to lay eggs. The territory-holding male imagines that he is in bed (so to speak) with two females, when really he is in bed with one female and another male, who, in turn, steals part of the paternity of the eggs being laid by the female. The internal dynamics of such transvestite threesomes are only just being analyzed. But for pure reproductive artistry one cannot beat the tiny blister beetles that assemble in arrays of hundreds to thousands, linking together to produce the larger illusion of a female solitary bee, which attracts a male bee who flies into the mirage in order to copulate and thereby carries the beetles to their next host.

Cook: At what age do we see the first signs of deception in humans?
Trivers: In the last trimester of pregnancy, that is, while the offspring is still inside its mother. The baby takes over control of the mother’s blood sugar level (raising it), pulse rate (raising it) and blood distribution (withdrawing it from the extremities and positioning it above the developing baby). It does so by putting into the maternal bloodstream the same chemicals—or close mimics—as those that the mother normally produces to control these variables. You could argue that this benefits mom: she says, my child knows better what it needs than I do, so let me give the child control. But it is not in the mother’s best interests to allow the offspring to get everything it wants; the mother must apportion her biological investment among other offspring, past, present and future. The proof is in the inefficiency of the new arrangement, the hallmark of conflict. The offspring produces these chemicals at 1,000 times the level that the mother does. This suggests a co-evolutionary struggle in which the mother’s body becomes deafer as the offspring becomes louder.

After birth, the first clear signs of deception come at about age six months, when the child fakes need where there appears to be none. The child will scream and bawl, roll on the floor in apparent agony, and yet stop within seconds after the audience leaves the room, only to resume within seconds when the audience returns. Later, the child will hide objects from the view of others and deny that it cares about a punishment when it clearly does. So-called ‘white lies’, of the sort “The meal you served was delicious,” appear after age five.

[div class=attrib]Read the entire article here.[end-div]

On the Need for Charisma

[div class=attrib]From Project Syndicate:[end-div]

A leadership transition is scheduled in two major autocracies in 2012. Neither is likely to be a surprise. Xi Jinping is set to replace Hu Jintao as President in China, and, in Russia, Vladimir Putin has announced that he will reclaim the presidency from Dmitri Medvedev. Among the world’s democracies, political outcomes this year are less predictable. Nicolas Sarkozy faces a difficult presidential re-election campaign in France, as does Barack Obama in the United States.

In the 2008 US presidential election, the press told us that Obama won because he had “charisma” – the special power to inspire fascination and loyalty. If so, how can his re-election be uncertain just four years later? Can a leader lose his or her charisma? Does charisma originate in the individual, in that person’s followers, or in the situation? Academic research points to all three.

Charisma proves surprisingly hard to identify in advance. A recent survey concluded that “relatively little” is known about who charismatic leaders are. Dick Morris, an American political consultant, reports that in his experience, “charisma is the most elusive of political traits, because it doesn’t exist in reality; only in our perception once a candidate has made it by hard work and good issues.” Similarly, the business press has described many a CEO as “charismatic” when things are going well, only to withdraw the label when profits fall.

Political scientists have tried to create charisma scales that would predict votes or presidential ratings, but these scales have not proven fruitful. Among US presidents, John F. Kennedy is often described as charismatic, but obviously not to everyone, given that he failed to capture a majority of the popular vote and his ratings varied during his presidency.

Kennedy’s successor, Lyndon Johnson, lamented that he lacked charisma. That was true of his relations with the public, but Johnson could be magnetic – even overwhelming – in personal contacts. One careful study of presidential rhetoric found that even such famous orators as Franklin Roosevelt and Ronald Reagan could not count on charisma to enact their programs.

Charisma is more easily identified after the fact. In that sense, the concept is circular. It is like the old Chinese concept of the “mandate of heaven”: emperors were said to rule because they had it, and when they were overthrown, it was because they had lost it.

But no one could predict when that would happen. Similarly, success is often used to prove – after the fact – that a modern political leader has charisma. It is much harder to use charisma to predict who will be a successful leader.

[div class=attrib]Read the entire article here.[end-div]

Barcode as Art

The ubiquitous and utilitarian barcode turns 60 years old. Now its upstart and more fashionable sibling, the QR, or quick response, code seems to be stealing the show, finding its way from the product on the grocery store shelf to the world of art and design.

[div class=attrib]From the New York Times:[end-div]

It’s usually cause for celebration when a product turns 60. How could it have survived for so long, unless it is genuinely wanted or needed, or maybe both?

One of the sexagenarians this year, the bar code, has more reasons than most to celebrate. A familiar part of daily life for decades, those black vertical lines have recently taken on a new role: telling ethically aware consumers whether their prospective purchases are ecologically and socially responsible. Not bad for a 60-year-old.

But a new rival has surfaced. A younger version of the bar code, the QR, or “Quick Response” code, threatens to become as ubiquitous as the original, and is usurping some of its functions. Both symbols are black and white, geometric in style and rectangular in shape, but there the similarities end, because each one has a dramatically different impact on the visual landscape, aesthetically and symbolically.

First, the bar code. The idea of embedding information about a product, including its price, in a visual code that could be decrypted quickly and accurately at supermarket checkouts was hatched in the late 1940s by Bernard Silver and Norman Joseph Woodland, graduate students at the Drexel Institute of Technology in Philadelphia. Their idea was that retailers would benefit from speeding up the checkout process, enabling them to employ fewer staff, and from reducing the expense and inconvenience caused when employees keyed in the wrong prices.
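
The article doesn’t spell out how that checkout “decryption” stays accurate, but the standard UPC-A symbology that eventually grew out of Silver and Woodland’s idea offers a concrete illustration: the twelfth digit is a check digit, and a little arithmetic lets the register reject a misread. Below is a minimal sketch in Python; the helper function and sample codes are illustrative, not taken from the article.

    # Minimal sketch of the standard UPC-A check-digit rule. The helper
    # name and sample inputs are illustrative, not from the article.
    def upc_a_is_valid(code: str) -> bool:
        """Return True if a 12-digit UPC-A code has a consistent check digit."""
        if len(code) != 12 or not code.isdigit():
            return False
        # Digits in odd positions (1st, 3rd, ...) are weighted 3, digits
        # in even positions are weighted 1; the weighted sum, check digit
        # included, must be a multiple of 10 for the code to scan as valid.
        total = sum(int(d) * (3 if i % 2 == 0 else 1) for i, d in enumerate(code))
        return total % 10 == 0

    print(upc_a_is_valid("123456789012"))  # True: weighted sum is 100
    print(upc_a_is_valid("123456789013"))  # False: one mis-scanned digit

Because both weights are coprime to 10, any single mis-read digit pushes the weighted sum off a multiple of 10, so the register can ask for a rescan rather than ring up the wrong price.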

At 8:01 a.m. on June 26, 1974, a packet of Wrigley’s Juicy Fruit chewing gum was sold for 67 cents at a Marsh Supermarket in Troy, Ohio — the first commercial transaction to use a bar code. More than five billion bar-coded products are now scanned at checkouts worldwide every day. Some of those codes will also have been vetted on the cellphones of shoppers who wanted to check the product’s impact on their health and the environment, and the ethical credentials of the manufacturer. They do so by photographing the bar code with their phones and using an application to access information about the product on ethical rating Web sites like GoodGuide.

As for the QR code, it was developed in the mid-1990s by the Japanese carmaker Toyota to track components during the manufacturing process. A mosaic of tiny black squares on a white background, the QR code has greater storage capacity than the original bar code. Soon, Japanese cellphone makers were adding QR readers to camera phones, and people were using them to download text, films and Web links from QR codes on magazines, newspapers, billboards and packaging. The mosaic codes then appeared in other countries and are now common all over the world. Anyone who has downloaded a QR reading application can decrypt them with a camera phone.
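
For a rough sense of what that extra storage capacity means in practice, here is a minimal sketch of generating a QR code in Python, assuming the third-party “qrcode” package and the Pillow imaging library are installed (pip install qrcode[pil]); the URL is a placeholder.

    # Minimal sketch: encode a placeholder URL as a QR code image,
    # assuming the third-party "qrcode" package with Pillow installed.
    import qrcode

    # The smallest QR version is a 21x21 mosaic of squares; larger
    # versions can hold a few kilobytes of data, versus the dozen or
    # so digits of a one-dimensional bar code.
    img = qrcode.make("https://example.com")
    img.save("example_qr.png")

Any camera phone with a QR reading application can then decrypt the saved image back into the original text, which is exactly the workflow the article describes.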

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Shrink-Wrapped Couples

Once in a while a photographer comes along with a simple yet thoroughly new perspective. Japanese artist Photographer Hal fits this description. His images of young Japanese subjects in a variety of contorted and enclosed situations are by turns funny and disturbing, but certainly different and provocative.

[div class=attrib]From flavorwire:[end-div]

Japanese artist Photographer Hal has stuffed club kids into bathtubs and other cramped spaces in his work before, but this time he has chosen to shrink-wrap them like living dolls squirming under plastic. With some models nude and some dressed in candy-colored attire, Hal covers his subjects with plastic sheeting from which he vacuums the air in order to distort their features and bond them together. It takes only a few seconds for him to snap several images before releasing them, and the results are humorous and somewhat grotesque.

[div class=attrib]See more of Photographer Hal’s work here.[end-div]

The Unconscious Mind Boosts Creativity

[div class=attrib]From Miller-McCune:[end-div]

New research finds we’re better able to identify genuinely creative ideas when they’ve emerged from the unconscious mind.

Truly creative ideas are both highly prized and, for most of us, maddeningly elusive. If our best efforts produce nothing brilliant, we’re often advised to put aside the issue at hand and give our unconscious minds a chance to work.

Newly published research suggests that is indeed a good idea — but not for the reason you might think.

A study from the Netherlands finds allowing ideas to incubate in the back of the mind is, in a narrow sense, overrated. People who let their unconscious minds take a crack at a problem were no more adept at coming up with innovative solutions than those who consciously deliberated over the dilemma.

But they did perform better on the vital second step of this process: determining which of their ideas was the most creative. That realization provides essential information; without it, how do you decide which solution you should actually try to implement?

Given the value of discerning truly fresh ideas, “we can conclude that the unconscious mind plays a vital role in creative performance,” a research team led by Simone Ritter of the Radboud University Behavioral Science Institute writes in the journal Thinking Skills and Creativity.

In the first of two experiments, 112 university students were given two minutes to come up with creative ideas to an everyday problem: how to make the time spent waiting in line at a cash register more bearable. Half the participants went at it immediately, while the others first spent two minutes performing a distracting task — clicking on circles that appeared on a computer screen. This allowed time for ideas to percolate outside their conscious awareness.

After writing down as many ideas as they could think of, they were asked to choose which of their notions was the most creative. Participants were scored by the number of ideas they came up with, the creativity level of those ideas (as measured by trained raters), and whether their perception of their most innovative idea coincided with that of the raters.

The two groups scored evenly on both the number of ideas generated and the average creativity of those ideas. But those who had been distracted, and thus had ideas spring from their unconscious minds, were better at selecting their most creative concept.

[div class=attrib]Read the entire article here.[end-div]

Stephen Colbert: Seriously Funny

A fascinating article about Stephen Colbert, a funny man with some serious jokes about our broken political process.

[div class=attrib]From the New York Times magazine:[end-div]

There used to be just two Stephen Colberts, and they were hard enough to distinguish. The main difference was that one thought the other was an idiot. The idiot Colbert was the one who made a nice paycheck by appearing four times a week on “The Colbert Report” (pronounced in the French fashion, with both t’s silent), the extremely popular fake news show on Comedy Central. The other Colbert, the non-idiot, was the 47-year-old South Carolinian, a practicing Catholic, who lives with his wife and three children in suburban Montclair, N.J., where, according to one of his neighbors, he is “extremely normal.” One of the pleasures of attending a live taping of “The Colbert Report” is watching this Colbert transform himself into a Republican superhero.

Suburban Colbert comes out dressed in the other Colbert’s guise — dark two-button suit, tasteful Brooks Brothersy tie, rimless Rumsfeldian glasses — and answers questions from the audience for a few minutes. (The questions are usually about things like Colbert’s favorite sport or favorite character from “The Lord of the Rings,” but on one memorable occasion a young black boy asked him, “Are you my father?” Colbert hesitated a moment and then said, “Kareem?”) Then he steps onstage, gets a last dab of makeup while someone sprays his hair into an unmussable Romney-like helmet, and turns himself into his alter ego. His body straightens, as if jolted by a shock. A self-satisfied smile creeps across his mouth, and a manically fatuous gleam steals into his eyes.

Lately, though, there has emerged a third Colbert. This one is a version of the TV-show Colbert, except he doesn’t exist just on screen anymore. He exists in the real world and has begun to meddle in it. In 2008, the old Colbert briefly ran for president, entering the Democratic primary in his native state of South Carolina. (He hadn’t really switched parties, but the filing fee for the Republican primary was too expensive.) In 2010, invited by Representative Zoe Lofgren, he testified before Congress about the problem of illegal-immigrant farmworkers and remarked that “the obvious answer is for all of us to stop eating fruits and vegetables.”

But those forays into public life were spoofs, more or less. The new Colbert has crossed the line that separates a TV stunt from reality and a parody from what is being parodied. In June, after petitioning the Federal Election Commission, he started his own super PAC — a real one, with real money. He has run TV ads, endorsed (sort of) the presidential candidacy of Buddy Roemer, the former governor of Louisiana, and almost succeeded in hijacking and renaming the Republican primary in South Carolina. “Basically, the F.E.C. gave me the license to create a killer robot,” Colbert said to me in October, and there are times now when the robot seems to be running the television show instead of the other way around.

“It’s bizarre,” remarked an admiring Jon Stewart, whose own program, “The Daily Show,” immediately precedes “The Colbert Report” on Comedy Central and is where the Colbert character got his start. “Here is this fictional character who is now suddenly interacting in the real world. It’s so far up its own rear end,” he said, or words to that effect, “that you don’t know what to do except get high and sit in a room with a black light and a poster.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Images courtesy of Google search.[end-div]