Turing Test 2.0 – Intelligent Behavior Free of Bigotry

One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German Naval ciphers encoded by their Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?

Actually, Turing was not imprisoned in 1952 — instead, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.

Now, a hundred years on from his birth, historians are reflecting on his short life and lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for many decades or were given short shrift, perhaps due to his confidential work for the government or, more likely, because of his persona non grata status.

In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test, a test of a machine’s ability to exhibit intelligent behavior. So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.

This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.

[div class=attrib]From Nature:[end-div]

Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.

Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?

Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.

He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:

“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”

That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.[end-div]

The New Middle Age

We have all heard it — 50 is the “new 30”, 60 is the “new 40”. Adolescence now seems to stretch on into the mid- to late-20s. And what on Earth is “middle age” anyway? As these previously well-defined life stages become more fluid, perhaps it’s time for yet another recalibration.

[div class=attrib]From the Independent:[end-div]

One thing that can be said of “Middle Age” is that it’s moving further from the middle. The annual British Social Attitudes Survey suggests just a third of people in their 40s regard themselves as middle-aged, while almost a third of those in their 70s are still clinging to the label, arthritic fingers notwithstanding. In A Shed of One’s Own, his very funny new memoir of male midlife crisis and its avoidance, Marcus Berkmann reaches for a number of definitions for his time of life: “Middle age is comedy, and also tragedy,” he says. “Other people’s middle age is self-evidently ridiculous, while our own represents the collapse of all our hopes and dreams.”

He cites Denis Norden, who said: “Middle age is when, wherever you go on holiday, you pack a sweater.” And the fictional Frasier Crane, who maintains that the middle-aged “go ‘oof’ when [they] sit down on a sofa”. Shakespeare’s famous Seven Ages of Man speech, delivered by the melancholy Jacques in As You Like It, delineated the phases of human development by occupation: the schoolboy, the adolescent lover, the soldier, and the – presumably, middle-aged – legal professional. We have long defined ourselves compulsively by our stages in life; we yearn for maturity, then mourn the passing of youth. But to what extent are these stages socio-cultural (holidays/sweaters) and to what extent are they biological (sofas/”oof”)?

Patricia Cohen, New York Times reporter and author of another new study of ageing, In Our Prime: The Invention of Middle Age, might not be overly sympathetic to Berkmann’s plight. The mid-life crisis, she suggests, is a marketing trick designed to sell cosmetics, cars and expensive foreign holidays; people in their 20s and 30s are far more vulnerable to such a crisis than their parents. Cohen finds little evidence for so-called “empty nest syndrome”, or for the widespread stereotype of the rich man with the young “trophy wife”.

She even claims that middle age itself is a “cultural fiction”, and that Americans only became neurotic about entering their 40s at the turn of the 20th century, when they started lying to census-takers about their age. Before then, “age was not an essential ingredient of one’s identity”. Rather, people were classified according to “marker events”: marriage, parenthood and so on. In 1800 the average American woman had seven children; by 1900 she had three. They were out of her hair by her early 40s and, thanks to modern medicine, she could look forward to a further 20 years or more of active life.

As Berkmann laments, “one of the most tangible symptoms of middle age is the sensation that you’re being cast adrift from mainstream culture.” Then again, the baby boomers, and the more mature members of “Generation X”, are the most powerful of economic blocs. The over-50s spend far more on consumer goods than their younger counterparts, making them particularly valuable to advertisers – and perpetuating the idea of the middle-aged as a discernible demographic.

David Bainbridge, a vet and evolutionary zoologist, also weighs in on the topic in his latest book, Middle Age: A Natural History. Middle age is an exclusively human phenomenon, Bainbridge explains, and doesn’t exist elsewhere in the animal kingdom, where infirmity often follows hot on the heels of parenthood. It is, he argues, “largely the product of millions of years of human evolution… not a 20th-century cultural invention.” He urges readers to embrace middle age as “flux, not crisis” – which is probably what he said to his wife, when he bought himself a blue vintage Lotus soon after turning 40.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Practical Financial.[end-div]

Doctors Die Too, But Differently

[div class=attrib]From the Wall Street Journal:[end-div]

Years ago, Charlie, a highly respected orthopedist and a mentor of mine, found a lump in his stomach. It was diagnosed as pancreatic cancer by one of the best surgeons in the country, who had developed a procedure that could triple a patient’s five-year-survival odds—from 5% to 15%—albeit with a poor quality of life.

Charlie, 68 years old, was uninterested. He went home the next day, closed his practice and never set foot in a hospital again. He focused on spending time with his family. Several months later, he died at home. He got no chemotherapy, radiation or surgical treatment. Medicare didn’t spend much on him.

It’s not something that we like to talk about, but doctors die, too. What’s unusual about them is not how much treatment they get compared with most Americans, but how little. They know exactly what is going to happen, they know the choices, and they generally have access to any sort of medical care that they could want. But they tend to go serenely and gently.

Doctors don’t want to die any more than anyone else does. But they usually have talked about the limits of modern medicine with their families. They want to make sure that, when the time comes, no heroic measures are taken. During their last moments, they know, for instance, that they don’t want someone breaking their ribs by performing cardiopulmonary resuscitation (which is what happens when CPR is done right).

In a 2003 article, Joseph J. Gallo and others looked at what physicians want when it comes to end-of-life decisions. In a survey of 765 doctors, they found that 64% had created an advance directive—specifying what steps should and should not be taken to save their lives should they become incapacitated. That compares to only about 20% for the general public. (As one might expect, older doctors are more likely than younger doctors to have made “arrangements,” as shown in a study by Paula Lester and others.)

Why such a large gap between the decisions of doctors and patients? The case of CPR is instructive. A study by Susan Diem and others of how CPR is portrayed on TV found that it was successful in 75% of the cases and that 67% of the TV patients went home. In reality, a 2010 study of more than 95,000 cases of CPR found that only 8% of patients survived for more than one month. Of these, only about 3% could lead a mostly normal life.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: The Triumph of Death, Pieter Bruegel the Elder, 1562. Museo del Prado in Madrid.[end-div]

Culture, Language and Genes

In the early 19th century Noah Webster set about re-defining written English. His aim was to standardize the spoken word in the fledgling nation and to distinguish American from British usage. In his own words, “as an independent nation, our honor requires us to have a system of our own, in language as well as government.”

He used his dictionary, which still bears his name today, as a tool to cleanse English of its stubborn reliance on aristocratic pedantry and over-reliance on Latin and Greek. He “simplified” the spelling of numerous words that he believed were constructed with rules that were all too complicated. Thus, “colour” became “color”, “honour” became “honor”, “centre” became “center”, “behaviour” became “behavior”, and “traveller” became “traveler”.

Webster offers a perfect example of why humanity seems so adept at fragmenting into diverse cultural groups that thrive through mutual incomprehension. In “Wired for Culture”, evolutionary biologist Mark Pagel offers a compelling explanation based on that small, yet very selfish biological building block, the gene.

[div class=attrib]From the Wall Street Journal:[end-div]

The island of Gaua, part of Vanuatu in the Pacific, is just 13 miles across, yet it has five distinct native languages. Papua New Guinea, an area only slightly bigger than Texas, has 800 languages, some spoken by just a few thousand people.

Evolutionary biologists have long gotten used to the idea that bodies are just genes’ ways of making more genes, survival machines that carry genes to the next generation. Think of a salmon struggling upstream just to expend its body (now expendable) in spawning. Dr. Pagel’s idea is that cultures are an extension of this: that the way we use culture is to promote the long-term interests of our genes.

It need not be this way. When human beings’ lives became dominated by culture, they could have adopted habits that did not lead to having more descendants. But on the whole we did not; we set about using culture to favor survival of those like us at the expense of other groups, using religion, warfare, cooperation and social allegiance. As Dr. Pagel comments: “Our genes’ gamble at handing over control to…ideas paid off handsomely” in the conquest of the world.

What this means, he argues, is that if our “cultures have promoted our genetic interests throughout our history,” then our “particular culture is not for us, but for our genes.”

We’re expendable. The allegiance we feel to one tribe—religious, sporting, political, linguistic, even racial—is a peculiar mixture of altruism toward the group and hostility to other groups. Throughout history, united groups have stood, while divided ones fell.

Language is the most striking exemplar of Dr. Pagel’s thesis. He calls language “one of the most powerful, dangerous and subversive traits that natural selection has ever devised.” He draws attention to the curious parallels between genetics and linguistics. Both are digital systems, in which words or base pairs are recombined to make an infinite possibility of messages. (Elsewhere I once noted the numerical similarity between Shakespeare’s vocabulary of about 20,000 distinct words and his genome of about 21,000 genes).

Dr. Pagel points out that language is a “technology for rewiring other people’s minds…without either of you having to perform surgery.” But natural selection was unlikely to favor such a technology if it helped just the speaker, or just the listener, at the expense of the other. Rather, he says that, just as the language of the genes promotes its own survival via a larger cooperative entity called the body, so language itself endures via the survival of the individual and the tribe.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of PA / Daily Mail.[end-div]

A Philosopher On Avoiding Death

Below we excerpt a brilliant essay by Alex Byrne, a professor of philosophy at MIT, summarizing his argument that our personal survival is grossly overvalued. But this should not give future teleportation engineers pause.

[div class=attrib]From the Boston Review:[end-div]

Star Trek–style teleportation may one day become a reality. You step into the transporter, which instantly scans your body and brain, vaporizing them in the process. The information is transmitted to Mars, where it is used by the receiving station to reconstitute your body and brain exactly as they were on Earth. You then step out of the receiving station, slightly dizzy, but pleased to arrive on Mars in a few minutes, as opposed to the year it takes by old-fashioned spacecraft.

But wait. Do you really step out of the receiving station on Mars? Someone just like you steps out, someone who apparently remembers stepping into the transporter on Earth a few minutes before. But perhaps this person is merely your replica—a kind of clone or copy. That would not make this person you: in Las Vegas there is a replica of the Eiffel Tower, but the Eiffel Tower is in Paris, not in Las Vegas. If the Eiffel Tower were vaporized and a replica instantly erected in Las Vegas, the Eiffel Tower would not have been transported to Las Vegas. It would have ceased to exist. And if teleportation were like that, stepping into the transporter would essentially be a covert way of committing suicide. Troubled by these thoughts, you now realize that “you” have been commuting back and forth to Mars for years . . .

So which is it? You are preoccupied with a question about your survival: Do you survive teleportation to Mars? A lot hangs on the question, and it is not obvious how to answer it. Teleportation is just science fiction, of course; does the urgent fictional question have a counterpart in reality? Indeed it does: Do you, or could you, survive death?

Teeming hordes of humanity adhere to religious doctrines that promise survival after death: perhaps bodily resurrection at the Day of Judgment, reincarnation, or immaterial immortality. For these people, death is not the end.

Some of a more secular persuasion do not disagree. The body of the baseball great Ted Williams lies in a container cooled by liquid nitrogen to -321 degrees Fahrenheit, awaiting the Great Thawing, when he will rise to sign sports memorabilia again. (Williams’s prospects are somewhat compromised because his head has apparently been preserved separately.) For the futurist Ray Kurzweil, hope lies in the possibility that he will be uploaded to new and shiny hardware—as pictures are transferred to Facebook’s servers—leaving his outmoded biological container behind.

Isn’t all this a pipe dream? Why isn’t “uploading” merely a way of producing a perfect Kurzweil-impersonator, rather than the real thing? Cryogenic storage might help if I am still alive when frozen, but what good is it after I am dead? And is the religious line any more plausible? “Earth to earth, ashes to ashes, dust to dust” hardly sounds like the dawn of a new day. Where is—as the Book of Common Prayer has it—the “sure and certain hope of the Resurrection to eternal life”? If a forest fire consumes a house and the luckless family hamster, that’s the end of them, presumably. Why are we any different?

Philosophers have had a good deal of interest to say about these issues, under the unexciting rubric of “personal identity.” Let us begin our tour of some highlights with a more general topic: the survival, or “persistence,” of objects over time.

Physical objects (including plants and animals) typically come into existence at some time, and cease to exist at a later time, or so we normally think. For example, a cottage might come into existence when enough beams and bricks are assembled, and cease to exist a century later, when it is demolished to make room for a McMansion. A mighty oak tree began life as a tiny green shoot, or perhaps an acorn, and will end its existence when it is sawn into planks.

The cottage and the oak survive a variety of vicissitudes throughout their careers. The house survived Hurricane Irene, say. That is, the house existed before Irene and also existed after Irene. We can put this in terms of “identity”: the house existed before Irene and something existed after Irene that was identical to the house.

[div class=attrib]Read the entire essay here.[end-div]

A Very, Like, Interestaaaaaaang Linguistic Study?

Uptalk? Vocal fry? Linguistic curiosities enter the mainstream courtesy of trendsetting young women aged 18-25 and Australians.

[div class=attrib]From the Daily Telegraph:[end-div]

From Valley Girls to the Kardashians, young women have long been mocked for the way they talk.

Whether it be uptalk (pronouncing statements as if they were questions? Like this?), creating slang words like “bitchin’ ” and “ridic,” or the incessant use of “like” as a conversation filler, vocal trends associated with young women are often seen as markers of immaturity or even stupidity.

Right?

But linguists — many of whom once promoted theories consistent with that attitude — now say such thinking is outmoded. Girls and women in their teens and 20s deserve credit for pioneering vocal trends and popular slang, they say, adding that young women use these embellishments in much more sophisticated ways than people tend to realize.

“A lot of these really flamboyant things you hear are cute, and girls are supposed to be cute,” said Penny Eckert, a professor of linguistics at Stanford University. “But they’re not just using them because they’re girls. They’re using them to achieve some kind of interactional and stylistic end.”

The latest linguistic curiosity to emerge from the petri dish of girl culture gained a burst of public recognition in December, when researchers from Long Island University published a paper about it in The Journal of Voice. Working with what they acknowledged was a very small sample — recorded speech from 34 women ages 18 to 25 — the professors said they had found evidence of a new trend among female college students: a guttural fluttering of the vocal cords they called “vocal fry.”

A classic example of vocal fry, best described as a raspy or croaking sound injected (usually) at the end of a sentence, can be heard when Mae West says, “Why don’t you come up sometime and see me,” or, more recently on television, when Maya Rudolph mimics Maya Angelou on “Saturday Night Live.”

Not surprisingly, gadflies in cyberspace were quick to pounce on the study — or, more specifically, on the girls and women who are frying their words. “Are they trying to sound like Kesha or Britney Spears?” teased The Huffington Post, naming two pop stars who employ vocal fry while singing, although the study made no mention of them. “Very interesteeeaaaaaaaaang,” said Gawker.com, mocking the lazy, drawn-out affect.

Do not scoff, says Nassima Abdelli-Beruh, a speech scientist at Long Island University and an author of the study. “They use this as a tool to convey something,” she said. “You quickly realize that for them, it is as a cue.”

Other linguists not involved in the research also cautioned against forming negative judgments.

“If women do something like uptalk or vocal fry, it’s immediately interpreted as insecure, emotional or even stupid,” said Carmen Fought, a professor of linguistics at Pitzer College in Claremont, Calif. “The truth is this: Young women take linguistic features and use them as power tools for building relationships.”

The idea that young women serve as incubators of vocal trends for the culture at large has longstanding roots in linguistics. As Paris is to fashion, the thinking goes, so are young women to linguistic innovation.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Paul Hoppe, Daily Telegraph.[end-div]

Most Expensive Books

OK, it’s World Book Day today, March 1. Regardless of whether this day was contrived by Hallmark (or more likely, Barnes and Noble or Amazon), it’s fascinating to look at some beautiful record holders.

[div class=attrib]From the Daily Telegraph:[end-div]

To mark World Book Day on March 1, we look at some of the world’s most valuable titles. In a list of the most expensive books sold at auction, The Economist put John James Audubon’s The Birds of America (1827-1838) at number one. It sold for $10.3m in 2010.

[div class=attrib]See the Top 10 most expensive books here.[end-div]

Daddy’s Girl, Yes; Mother’s Boy, No

Western social norms tolerate a strong bond between father and daughter; it’s OK to be a daddy’s girl. Yet, for a mother’s boy and mothers of mothers’ boys it’s a different story. In fact, a strong bond between mother and son is frequently looked upon with derision. Just check out the “mama’s boy” definition on Wikipedia; there’s no formal entry for “daddy’s girl”.

Why is this, and is it right?

Excerpts below from the forthcoming book “The Mama’s Boy Myth” by Kate Stone Lombardi.

[div class=attrib]From the Wall Street Journal:[end-div]

My daughter Jeanie and I use Google chat throughout the day to discuss work, what we had for lunch, how we’re avoiding the gym, and emotional issues big and small. We may also catch up by phone in the evening. I can open up to Jeanie about certain things that I wouldn’t share with another soul, and I believe she would say the same about me. We are very close, which you probably won’t find particularly surprising or alarming.

Now switch genders. Suppose I told you that I am very close to my son, Paul. That I love hanging out with him and that we have dozens of inside jokes and shared traditions. Even though we speak frequently, I get a little thrill each time I hear his signature ringtone on my cellphone. Next, I confess that Paul is so sensitive and intuitive that he “gets me” in a very special way.

Are you starting to speculate that something is a little off? Are you getting uncomfortable about the kind of guy my son is growing up to be?

For generations mothers have gotten one message: that keeping their sons close is wrong, possibly even dangerous. A mother who fosters a deep emotional bond with her son, we’ve been told, is setting him up to be weak and effeminate—an archetypal mama’s boy. He’ll never be independent or able to form healthy adult relationships. As the therapist and child-rearing guru Michael Gurian wrote in his 1994 book about mothers and sons, “a mother’s job…is very much to hold back the coming of manhood.” A well-adjusted, loving mother is one who gradually but surely pushes her son away, both emotionally and physically, in order to allow him to become a healthy man.

This was standard operating procedure for our mothers, our grandmothers and even our great-grandmothers. Amazingly, we’re still encouraged to buy this parenting advice today.

Somehow, when so many of our other beliefs about the roles of men and women have been revolutionized, our view of the mother-son relationship has remained frozen in time. We’ve dramatically changed the way we raise our daughters, encouraging them to be assertive, play competitive sports and aim high in their educational and professional ambitions. We don’t fret about “masculinizing” our girls.

As for daughters and their fathers, while a “mama’s boy” may be a reviled creature, people tend to look tolerantly on a “daddy’s girl.” A loving and supportive father is considered essential to a girl’s self-esteem. Fathers are encouraged to be involved in their daughters’ lives, whether it’s coaching their soccer teams or escorting their teenage girls to father-daughter dances. A father who flouts gender stereotypes and teaches his daughter a traditionally masculine task—say, rebuilding a car engine—is considered to be pretty cool. But a mother who does something comparable—like teaching her son to knit or even encouraging him to talk more openly about his feelings—is looked at with contempt. What is she trying to do to that boy?

Many mothers are confused and anxious when it comes to raising boys. Should they defer to their husband when he insists that she stop kissing their first-grade son at school drop-off? If she cuddles her 10-year-old boy when he is hurt, will she turn him into a wimp? If she keeps him too close, will she make him gay? If her teenage boy is crying in his room, should she go in and comfort him, or will this embarrass and shame him? Anthony E. Wolf, a child psychologist and best-selling author, warns us that “strong emotional contact with his mother is especially upsetting to any teenage boy.”

None of these fears, however, is based on any actual science. In fact, research shows that boys suffer when they separate prematurely from their mothers and benefit from closeness in myriad ways throughout their lives.

A study published in Child Development involving almost 6,000 children, age 12 and younger, found that boys who were insecurely attached to their mothers acted more aggressive and hostile later in childhood—kicking and hitting others, yelling, disobeying adults and being generally destructive.

A study of more than 400 middle school boys revealed that sons who were close to their mothers were less likely to define masculinity as being physically tough, stoic and self-reliant. They not only remained more emotionally open, forming stronger friendships, but they also were less depressed and anxious than their more macho classmates. And they were getting better grades.

There is evidence that a strong mother-son bond prevents delinquency in adolescence. And though it has been long established that teenagers who have good communication with their parents are more likely to resist negative peer pressure, new research shows that it is a boy’s mother who is the most influential when it comes to risky behavior, not only with alcohol and drugs but also in preventing both early and unprotected sex.

Finally, there are no reputable scientific studies suggesting that a boy’s sexual orientation can be altered by his mother, no matter how much she loves him.

[div class=attrib]Read the entire article here.[end-div]

Social Skin

[div class=attrib]From Anthropology in Practice:[end-div]

Are you inked?

I’m not, though I’ve thought about it seriously and have a pretty good idea of what I would get and where I would put it—if I could work up the nerve to get in the chair. I’ll tell you one thing: It most certainly is not a QR code like Fred Bosch’s; he designed his tattoo to link to something new every time it’s scanned. While the idea is intriguing and presents an interesting re-imagining of tattoos in the digital age, it seems to run counter to the nature of tattoos.

Tattoo As Talisman and Symbol

The word “tattoo” derives from the Tahitian word “tatau” (wound) and the Polynesian root “ta” (drawing), which neatly summarizes the history of the practice (1). Humans have been inscribing their bodies (and the bodies of others) for thousands of years for self decoration, to display affiliation, and for punitive reasons. The oldest example of a tattooed individual is 5,200-year-old Ötzi the Iceman, who was found in 1991 in the area of the Italian-Austrian border. He had several tattoos on his back, right knee, and around his ankles, which researchers believe may have served medicinal purposes—possibly a form of acupuncture before acupuncture existed (2). Tattoos have also been found on Egyptian mummies dating to 2000 B.C. And sculpted artifacts and figurines marked by body art and piercings provide clues that tattooing was widely practiced from 500 B.C. to 500 A.D. (3).

Tattoos have been used to signify occupation, patriotism, loyalty, and religious affiliation. For example, there is a rich maritime tradition of tattoos, including initials (both seamen’s own and those of significant others), anchors, mermaids, fish, ships, and religious symbols (4). It seems that most seafarers in the 18th and 19th centuries entered the ranks of the tattooed with initials—possibly for identification purposes—before adding different imagery (5), reflecting what was popular at the time: seafarers born after the American Declaration of Independence displayed more patriotic symbols (e.g., flags, eagles, stars, the words “Independence” and “Liberty,” and the year 1776) than those born prior. And there are also some interesting superstitions tied to them suggesting that tattooing has been an important means of exerting control over one’s situation (6):

H-O-L-D-F-A-S-T, one letter on the back of each finger, next to the hand knuckle, will save a sailor whose life depends on holding a rope.

A crucifix on the back will save the seaman from flogging because no boatswain’s mate would whip a cross, and if he did, the cross would alleviate the pain.

A seaman who could stand to have a full rigged ship tattooed on his chest would automatically make a good topman.

Crucifixes tattooed on each arm and leg would save a man who had fallen in the water and found himself among 775,000 hungry white sharks, who would not even bother smelling him.

That last point might be a bit of a fisherman’s tale (what if it’s 774,000 white sharks?), but it serves nicely to show how deeply enmeshed tattooing has been with certain occupations.

Early Christians got tattoos of religious symbols. Tattoos were purchased by pilgrims and Crusaders as proof that they had made it to Jerusalem, serving as a symbol of witness and identification. The Church largely did not approve even though there was biblical authorization for the practice: While there is evidence that “God’s word and work were passed on through generations through tattoos inscribed on the bodies of Saints, like the stigmata on St. Francis of Assisi,” the idea that the unmarked body is representative of God’s image and should not be altered was persistent (7).

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Tattoo Galleries.[end-div]

Need Creative Inspiration? Take a New Route to Work

[div class=attrib]From Miller-McCune:[end-div]

Want to boost your creativity? Tomorrow morning, pour some milk into an empty bowl, and then add the cereal.

That may sound, well, flaky. But according to a newly published study, preparing a common meal in reverse order may stimulate innovative thinking.

Avoiding conventional behavior at the breakfast table “can help people break their cognitive patterns, and thus lead them to think more flexibly and creatively,” according to a research team led by psychologist Simone Ritter of Radboud University Nijmegen in the Netherlands.

She and her colleagues, including Rodica Ioana Damian of the University of California, Davis, argue that “active involvement in an unusual event” can trigger higher levels of creativity. They note this activity can take many forms, from studying abroad for a semester to coping with the unexpected death of a loved one.

But, writing in the Journal of Experimental Social Psychology, they provide evidence that something simpler will suffice.

The researchers describe an experiment in which Dutch university students were asked to prepare a breakfast sandwich popular in the Netherlands.

Half of them did so in the conventional manner: They put a slice of bread on a plate, buttered the bread and then placed chocolate chips on top. The others — prompted by a script on a computer screen — first put chocolate chips on a plate, then buttered a slice of bread and finally “placed the bread butter-side-down on the dish with the chocolate chips.”

After completing their culinary assignment, they turned their attention to the “Unusual Uses Task,” a widely used measure of creativity. They were given two minutes to generate uses for a brick and another two minutes to come up with as many answers as they could to the question: “What makes sound?”

“Cognitive flexibility” was scored not by counting how many answers they came up with, but rather by the number of categories those answers fell into. For the “What makes sound?” test, a participant whose answers were all animals or machines received a score of one, while someone whose list included “dog,” “car” and “ocean” received a three.
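To make that scoring rule concrete, here is a minimal illustrative sketch in Python. It is not the researchers’ code; the answer-to-category mapping is invented for the example, and in the actual study the categorization of answers was done by human raters.

```python
# Illustrative "cognitive flexibility" scoring: the score is the number of
# distinct categories a participant's answers fall into, not the number of
# answers. The category labels below are invented for this example.
ANSWER_CATEGORIES = {
    "dog": "animal",
    "cat": "animal",
    "car": "machine",
    "ocean": "nature",
}

def flexibility_score(answers):
    """Return the number of distinct categories covered by the answers."""
    return len({ANSWER_CATEGORIES[a] for a in answers})

print(flexibility_score(["dog", "cat"]))           # 1: both answers are animals
print(flexibility_score(["dog", "car", "ocean"]))  # 3: three different categories
```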

“A high cognitive flexibility score indicates an ability to switch between categories, overcome fixedness, and thus think more creatively,” Ritter and her colleagues write.

On both tests, those who made their breakfast treat backwards had higher scores. Breaking their normal sandwich-making pattern apparently opened them up; their minds wandered more freely, allowing for more innovative thought.

[div class=attrib]Read the entire article here.[end-div]

What’s in a Name?

Are you a Leszczynska or a Bob? And, do you wish to be liked? Well, sorry Leszczynska. It turns out that having an easily pronounceable name makes you more likable.

[div class=attrib]From Wired:[end-div]

Though it might seem impossible, and certainly inadvisable, to judge a person by their name, a new study suggests our brains try anyway.

The more pronounceable a person’s name is, the more likely people are to favor them.

“When we can process a piece of information more easily, when it’s easier to comprehend, we come to like it more,” said psychologist Adam Alter of New York University and co-author of a Journal of Experimental Social Psychology study published in December.

Fluency, the idea that the brain favors information that’s easy to use, dates back to the 1960s, when researchers found that people most liked images of Chinese characters if they’d seen them many times before.

Researchers since then have explored other roles that names play, how they affect our judgment and to what degree.

Studies have shown, for example, that people can partly predict a person’s income and education using only their first name. Childhood is perhaps the richest area for name research: Boys with girls’ names are more likely to be suspended from school. And the less popular a name is, the more likely a child is to be delinquent.

In 2005, Alter and his colleagues explored how pronounceability of company names affects their performance in the stock market. Stripped of all obvious influences, they found companies with simpler names and ticker symbols traded better than the stocks of more difficult-to-pronounce companies.

“The effect is often very, very hard to quantify because so much depends on context, but it’s there and measurable,” Alter said. “You can’t avoid it.”

But how much does pronunciation guide our perceptions of people? To find out, Alter and colleagues Simon Laham and Peter Koval of the University of Melbourne carried out five studies.

In the first, they asked 19 female and 16 male college students to rank 50 surnames according to their ease or difficulty of pronunciation, and according to how much they liked or disliked them. In the second, they had 17 female and 7 male students vote for hypothetical political candidates solely on the basis of their names. In the third, they asked 55 female and 19 male students to vote on candidates about whom they knew both names and some political positions.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Dave Mosher/Wired.[end-div]

The United States of Henry

It is a distinct possibility that, had it not been for the graphic tales of violence and sex coming from the New World around 500 years ago, the continent now known as America would have taken another name.

Amerigo Vespucci’s westward expeditions between 1497 and 1502 made landfall in what is now Guyana, Venezuela and Brazil. Accounts of these expeditions, with stories of lustful natives having cannibalistic tendencies and life spans of 150 years, caught the public imagination. Present-day scholars dispute the authenticity of many of Vespucci’s first-hand accounts and letters; in fact the dates and number of Vespucci’s expeditions remain unsettled to this day.

However, 500 years ago Vespucci was held in relatively high esteem, particularly by a German geographer named Martin Waldseemüller. It was Waldseemüller who in 1507, enamored by Vespucci’s colorful observations, published a new survey of world geography and named the newly discovered southern continent “America”, after Vespucci. In the survey Waldseemüller wrote: “I do not see what right any one would have to object to calling this part, after Americus who discovered it and who is a man of intelligence, Amerige, that is, the Land of Americus, or America: since both Europa and Asia got their names from women”.

For those interested in the etymology of the name “America”, read on. Amerigo Vespucci is the modern Italianate form of the medieval latinized name Emericus (or Americus) Vespucius. In assigning the name “America”, Waldseemüller took the feminine form of Americus. The German equivalent of Emericus is Heinrich, which, in English, is, of course, Henry.

[div class=attrib]From the Independent:[end-div]

[Amerigo Vespucci] was not a natural sailor. Writing to Lorenzo de’Medici, he moaned about “the risks of shipwreck, the innumerable physical deprivations, the permanent anguish that afflicted our spirits… we were prey to such terrible fear that we gave up every hope of surviving.” But when everything was as bad as it could get, “In the midst of this terrible tempest… it pleased the Almighty to show us the continent, new earth and an unknown world.”

These were the words that, once set in type, galvanised Europe. Vespucci knew the geographical works of Ptolemy and had spent years steeped in maps and geographical speculation. For him the coast of modern Venezuela and Brazil where his expedition landed had nothing in common with the zones described by explorers of the Orient. Instead this was something far more fascinating – an unimagined world.

“Surely,” he wrote, “if the terrestrial paradise be in any part of this earth, I esteem that it is not far from these parts.” In his description, this New World is made up of extremes. On the one hand, the people he encounters are living in a dream-like state of bliss: with no metals except gold, no clothes, no signs of age, few diseases, no government, no religion, no trade. In a land rich in animals and plants, colours and fragrances, free from the stain of civilisation, “they live 150 years and rarely fall ill”.

But turn the coin and he was in a world of devils. “They eat one another, the victor [eats] the vanquished,” he wrote. “I know a man… who was reputed to have eaten more than 300 human bodies…” The women are intensely desirable: “none… among them who had a flabby breast,” but they are also monsters and witches: “… Being very lustful, [they] cause the private parts of their husbands to swell up to such a huge size that they appear deformed and disgusting… in consequence of this many lose their organs which break through lack of attention, and they remain eunuchs… When [the women] had the opportunity of copulating with Christians, urged by excessive lust, they defiled… themselves.”

Vespucci’s sensational description inspired an early etching of the Florentine’s first encounter with an American: the explorer and the naked, voluptuous and very pale woman lock eyes; the woman is in the act of clambering off a hammock and moving in his direction. Meanwhile, on a nearby hillock, a woman is roasting the lower half of a human body over a fire.

The wild and fantastic nature of Vespucci’s descriptions raises the question of how reliable any of his observations are – but then vast doubt surrounds almost everything about his adventures. We don’t know how many voyages he undertook; his authorship of some of the accounts is questionable; and it is not even universally accepted that he identified South America for what it was, a new continent.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of biography.com.[end-div]

Your Guide to Online Morality

By most estimates Facebook has around 800 million registered users. This means that its policies governing what is or is not appropriate user content merit detailed scrutiny. A look at Facebook’s recently publicized guidelines for sexual and violent content shows a somewhat peculiar view of morality. It’s a view that some characterize as typically American prudishness, but with a blind eye towards violence.

[div class=attrib]From the Guardian:[end-div]

Facebook bans images of breastfeeding if nipples are exposed – but allows “graphic images” of animals if shown “in the context of food processing or hunting as it occurs in nature”. Equally, pictures of bodily fluids – except semen – are allowed as long as no human is included in the picture; but “deep flesh wounds” and “crushed heads, limbs” are OK (“as long as no insides are showing”), as are images of people using marijuana but not those of “drunk or unconscious” people.

The strange world of Facebook’s image and post approval system has been laid bare by a document leaked from the outsourcing company oDesk to the Gawker website, which indicates that the sometimes arbitrary nature of picture and post approval actually has a meticulous – if faintly gore-friendly and nipple-unfriendly – approach.

For the giant social network, which has 800 million users worldwide and recently set out plans for a stock market flotation which could value it at up to $100bn (£63bn), it is a glimpse of its inner workings – and odd prejudices about sex – that emphasise its American origins.

Facebook has previously faced an outcry from breastfeeding mothers over its treatment of images showing them with their babies. The issue has rumbled on, and now seems to have been embedded in its “Abuse Standards Violations”, which states that banned items include “breastfeeding photos showing other nudity, or nipple clearly exposed”. It also bans “naked private parts” including “female nipple bulges and naked butt cracks” – though “male nipples are OK”.

The guidelines, which have been set out in full, depict a world where sex is banned but gore is acceptable. Obvious sexual activity, even if “naked parts” are hidden, people “using the bathroom”, and “sexual fetishes in any form” are all also banned. The company also bans slurs or racial comments “of any kind” and “support for organisations and people primarily known for violence”. Also banned is anyone who shows “approval, delight, involvement etc in animal or human torture”.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Guardian / Photograph: Dominic Lipinski/PA.[end-div]

A Hidden World Revealed Through Nine Eyes

Since mid-2007 the restless nine-eyed cameras of Google Street View have been snapping millions, if not billions, of images of the world’s streets.

The mobile cameras with 360-degree views, perched atop Google’s fleet of specially adapted vehicles, have already covered most of North America, Brazil, South Africa, Australia and large swathes of Europe. In roaming many of the world’s roadways Google’s cameras have also snapped numerous accidental images: people caught unaware, car accidents, odd views into nearby buildings, eerie landscapes.

Regardless of the privacy issues here, the photographs make for some fascinating in-the-moment art. A number of enterprising artists and photographers have included some of these esoteric Google Street View “out-takes” into their work. A selection from Jon Rafman below. See more of his and Google’s work here.

 

 

Engineering the Ultimate Solar Power Collector: The Leaf

[div class=attrib]From Cosmic Log:[end-div]

Researchers have been trying for decades to improve upon Mother Nature’s favorite solar-power trick — photosynthesis — but now they finally think they see the sunlight at the end of the tunnel.

“We now understand photosynthesis much better than we did 20 years ago,” said Richard Cogdell, a botanist at the University of Glasgow who has been doing research on bacterial photosynthesis for more than 30 years. He and three colleagues discussed their efforts to tweak the process that powers the world’s plant life today in Vancouver, Canada, during the annual meeting of the American Association for the Advancement of Science.

The researchers are taking different approaches to the challenge, but what they have in common is their search for ways to get something extra out of the biochemical process that uses sunlight to turn carbon dioxide and water into sugar and oxygen. “You can really view photosynthesis as an assembly line with about 168 steps,” said Steve Long, head of the University of Illinois’ Photosynthesis and Atmospheric Change Laboratory.

Revving up Rubisco
Howard Griffiths, a plant physiologist at the University of Cambridge, just wants to make improvements in one section of that assembly line. His research focuses on ways to get more power out of the part of the process driven by an enzyme called Rubisco. He said he’s trying to do what many auto mechanics have done to make their engines run more efficiently: “You turbocharge it.”

Some plants, such as sugar cane and corn, already have a turbocharged Rubisco engine, thanks to a molecular pathway known as C4. Geneticists believe the C4 pathway started playing a significant role in plant physiology in just the past 10 million years or so. Now Griffiths is looking into strategies to add the C4 turbocharger to rice, which ranks among the world’s most widely planted staple crops.

The new cellular machinery might be packaged in a micro-compartment that operates within the plant cell. That’s the way biochemical turbochargers work in algae and cyanobacteria. Griffiths and his colleagues are looking at ways to create similar micro-compartments for higher plants. The payoff would come in the form of more efficient carbon dioxide conversion, with higher crop productivity as a result. “For a given amount of carbon gain, the plant uses less water,” Griffiths said.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Kumaravel via Flickr, Creative Commons.[end-div]

Synaesthesia: Smell the Music

[div class=attrib]From the Economist:[end-div]

THAT some people make weird associations between the senses has been acknowledged for over a century. The condition has even been given a name: synaesthesia. Odd as it may seem to those not so gifted, synaesthetes insist that spoken sounds and the symbols which represent them give rise to specific colours or that individual musical notes have their own hues.

Yet there may be a little of this cross-modal association in everyone. Most people agree that loud sounds are “brighter” than soft ones. Likewise, low-pitched sounds are reminiscent of large objects and high-pitched ones evoke smallness. Anne-Sylvie Crisinel and Charles Spence of Oxford University think something similar is true between sound and smell.

Ms Crisinel and Dr Spence wanted to know whether an odour sniffed from a bottle could be linked to a specific pitch, and even a specific instrument. To find out, they asked 30 people to inhale 20 smells—ranging from apple to violet and wood smoke—which came from a teaching kit for wine-tasting. After giving each sample a good sniff, volunteers had to click their way through 52 sounds of varying pitches, played by piano, woodwind, string or brass, and identify which best matched the smell. The results of this study, to be published later this month in Chemical Senses, are intriguing.

The researchers’ first finding was that the volunteers did not think their request utterly ridiculous. It rather made sense, they told them afterwards. The second was that there was significant agreement between volunteers. Sweet and sour smells were rated as higher-pitched, smoky and woody ones as lower-pitched. Blackberry and raspberry were very piano. Vanilla had elements of both piano and woodwind. Musk was strongly brass.

It is not immediately clear why people employ their musical senses in this way to help their assessment of a smell. But gone are the days when science assumed each sense worked in isolation. People live, say Dr Spence and Ms Crisinel, in a multisensory world and their brains tirelessly combine information from all sources to make sense, as it were, of what is going on around them. Nor is this response restricted to humans. Studies of the brains of mice show that regions involved in olfaction also react to sound.

Taste, too, seems linked to hearing. Ms Crisinel and Dr Spence have previously established that sweet and sour tastes, like smells, are linked to high pitch, while bitter tastes bring lower pitches to mind. Now they have gone further. In a study that will be published later this year they and their colleagues show how altering the pitch and instruments used in background music can alter the way food tastes.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of cerebromente.org.br.[end-div]

Religion for Atheists and the Agape Restaurant

Alain de Botton is a writer of book-length essays on love, travel, architecture and literature. In his latest book, Religion for Atheists, de Botton argues that while the supernatural claims of all religions are entirely false, religions still have important things to teach the secular world. An excerpt from the book below.

[div class=attrib]From the Wall Street Journal:[end-div]

One of the losses that modern society feels most keenly is the loss of a sense of community. We tend to imagine that there once existed a degree of neighborliness that has been replaced by ruthless anonymity, by the pursuit of contact with one another primarily for individualistic ends: for financial gain, social advancement or romantic love.

In attempting to understand what has eroded our sense of community, historians have assigned an important role to the privatization of religious belief that occurred in Europe and the U.S. in the 19th century. They have suggested that we began to disregard our neighbors at around the same time that we ceased to honor our gods as a community.

This raises two questions: How did religion once enhance the spirit of community? More practically, can secular society ever recover that spirit without returning to the theological principles that were entwined with it? I, for one, believe that it is possible to reclaim our sense of community—and that we can do so, moreover, without having to build upon a religious foundation.

Insofar as modern society ever promises us access to a community, it is one centered on the worship of professional success. We sense that we are brushing up against its gates when the first question we are asked at a party is “What do you do?,” our answer to which will determine whether we are warmly welcomed or conclusively abandoned.

In these competitive, pseudo-communal gatherings, only a few sides of us count as currency with which to buy the goodwill of strangers. What matters above all is what is on our business cards. Those who have opted to spend their lives looking after children, writing poetry or nurturing orchards will be left in no doubt that they have run contrary to the dominant mores of the powerful, who will marginalize them accordingly.

Given this level of discrimination, it is no surprise that many of us choose to throw ourselves with a vengeance into our careers. Focusing on work to the exclusion of almost everything else is a plausible strategy in a world that accepts workplace achievements as the main tokens for securing not just the financial means to survive physically but also the attention that we require to thrive psychologically.

Religions seem to know a great deal about our loneliness. Even if we believe very little of what they tell us about the afterlife or the supernatural origins of their doctrines, we can nevertheless admire their understanding of what separates us from strangers and their attempts to melt away one or two of the prejudices that normally prevent us from building connections with others.

Consider Catholicism, which starts to create a sense of community with a setting. It marks off a piece of the earth, puts walls up around it and declares that within their confines there will reign values utterly unlike the ones that hold sway in the world beyond. A church gives us rare permission to lean over and say hello to a stranger without any danger of being thought predatory or insane.

The composition of the congregation also feels significant. Those in attendance tend not to be uniformly of the same age, race, profession or educational or income level; they are a random sampling of souls united only by their shared commitment to certain values. We are urged to overcome our provincialism and our tendency to be judgmental—and to make a sign of peace to whomever chance has placed on either side of us. The Church asks us to leave behind all references to earthly status. Here no one asks what anyone else “does.” It no longer matters who is the bond dealer and who the cleaner.

The Church does more, however, than merely declare that worldly success doesn’t matter. In a variety of ways, it enables us to imagine that we could be happy without it. Appreciating the reasons why we try to acquire status in the first place, it establishes conditions under which we can willingly surrender our attachment to it.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alain de Botton. Courtesy of BBC.[end-div]

 

Travel Photo Clean-up

[tube]flNomXIIWr4[/tube]

We’ve all experienced this phenomenon on vacation: you’re at a beautiful location with a significant other, friends or kids; the backdrop is idyllic, the subjects are exquisitely posed, you need to preserve and share this perfect moment with a photograph, and you get ready to snap the shutter. Then, at that very moment, an oblivious tourist, unperturbed locals or a stray goat wanders into the picture. Too late: the picture is ruined, and it’s getting dark, so there’s no time to recreate that perfect scene! Oh well, you’ll still be able to talk about the scene’s unspoiled perfection when you get home.

But now, there’s an app for that.

[div class=attrib]From New Scientist:[end-div]

 

It’s the same scene played out at tourist sites the world over: You’re trying to take a picture of a partner or friend in front of some monument, statue or building and other tourists keep striding unwittingly – or so they say – into the frame.

Now a new smartphone app promises to let you edit out these unwelcome intruders, leaving just your loved one and a beautiful view intact.

Remove, developed by Swedish photography firm Scalado, takes a burst of shots of your scene. It then identifies the objects that are moving – based on their relative position in each frame. These objects are highlighted so you can delete the ones you don’t want and keep the ones you do, leaving you with a nice, clean composite shot.
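
Scalado hasn’t published how Remove works, but the general idea (spot what changes across a burst of aligned frames and keep only what stays put) can be sketched in a few lines of Python. The snippet below is a hypothetical illustration using a per-pixel median over the burst; the function name and file names are invented, the frames are assumed to be already aligned, and this is not Scalado’s actual implementation.

```python
# A toy illustration of background-only compositing from a burst of
# aligned frames. A hypothetical sketch of the general idea behind
# apps like Remove, not Scalado's actual algorithm.
import numpy as np
from PIL import Image

def remove_transients(frame_paths):
    """Return a composite that keeps only the static background.

    Anything that moves between frames (a tourist, a goat) occupies a
    given pixel in only a minority of the burst, so the per-pixel median
    across the frames drops it and keeps whatever stays still.
    """
    frames = [np.asarray(Image.open(p), dtype=np.uint8) for p in frame_paths]
    stack = np.stack(frames, axis=0)        # shape: (n_frames, height, width, 3)
    clean = np.median(stack, axis=0)        # per-pixel, per-channel median
    return Image.fromarray(clean.astype(np.uint8))

# Hypothetical usage:
#   composite = remove_transients(["burst_1.jpg", "burst_2.jpg", "burst_3.jpg"])
#   composite.save("clean_shot.jpg")
```

The median is the simplest choice here; a production app would also need to align the frames and let the user pick which detected objects to keep, as the excerpt describes.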

Loud party of schoolchildren stepping in front of the Trevi Fountain? Select and delete. Unwanted, drunken stag party making the Charles Bridge in Prague look untidy? See you later.

Remove uses similar technology to the firm’s Rewind app, launched last year, which merges multiple group shots into a single composite showing everyone at their best.

The app is just a prototype at the moment – as is the video above – but Scalado will demonstrate a full version at Mobile World Congress 2012 in Barcelona later this month.

Beautiful Explanations

Each year for the past 15 years, Edge has posed a weighty question to a group of scientists, researchers, philosophers, mathematicians and thinkers. For 2012, Edge put the question “What Is Your Favorite Deep, Elegant, or Beautiful Explanation?” to 192 of our best and brightest. Back came 192 different, and no less wonderful, answers. We can post only a few snippets here, so please visit Edge, and then make a note to buy the book when it becomes available.

[div class=attrib]Read the entire article here.[end-div]

The Mysterious Coherence Between Fundamental Physics and Mathematics
Peter Woit, Mathematical Physicist, Columbia University; Author, Not Even Wrong

Any first course in physics teaches students that the basic quantities one uses to describe a physical system include energy, momentum, angular momentum and charge. What isn’t explained in such a course is the deep, elegant and beautiful reason why these are important quantities to consider, and why they satisfy conservation laws. It turns out that there’s a general principle at work: for any symmetry of a physical system, you can define an associated observable quantity that comes with a conservation law:

1. The symmetry of time translation gives energy
2. The symmetries of spatial translation give momentum
3. Rotational symmetry gives angular momentum
4. Phase transformation symmetry gives charge
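
That one-to-one pattern is Noether’s theorem at work. As a rough sketch in Lagrangian language (my gloss, not part of Woit’s answer):

```latex
\[
  q \to q + \epsilon\,\delta q \ \text{leaving}\ S[q] = \int L(q,\dot q,t)\,dt \ \text{invariant}
  \;\Longrightarrow\;
  Q = \frac{\partial L}{\partial \dot q}\,\delta q \ \text{is conserved:}\ \frac{dQ}{dt} = 0 .
\]
% Example: if L has no explicit time dependence (time-translation symmetry),
% the conserved quantity is the energy  E = \dot q\,\frac{\partial L}{\partial \dot q} - L .
```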

 

Einstein Explains Why Gravity Is Universal
Sean Carroll, Theoretical Physicist, Caltech; Author, From Eternity to Here: The Quest for the Ultimate Theory of Time

The ancient Greeks believed that heavier objects fall faster than lighter ones. They had good reason to do so; a heavy stone falls quickly, while a light piece of paper flutters gently to the ground. But a thought experiment by Galileo pointed out a flaw. Imagine taking the piece of paper and tying it to the stone. Together, the new system is heavier than either of its components, and should fall faster. But in reality, the piece of paper slows down the descent of the stone.

Galileo argued that the rate at which objects fall would actually be a universal quantity, independent of their mass or their composition, if it weren’t for the interference of air resistance. Apollo 15 astronaut Dave Scott once illustrated this point by dropping a feather and a hammer while standing in vacuum on the surface of the Moon; as Galileo predicted, they fell at the same rate.

Subsequently, many scientists wondered why this should be the case. In contrast to gravity, particles in an electric field can respond very differently; positive charges are pushed one way, negative charges the other, and neutral particles not at all. But gravity is universal; everything responds to it in the same way.
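
To make the contrast concrete (a back-of-the-envelope aside, not part of Carroll’s answer): the mass that measures how hard gravity pulls on a body is the same mass that measures its resistance to acceleration, so it cancels; charge does not.

```latex
\[
  \text{Gravity:}\quad m\,a = \frac{G M m}{r^{2}} \;\Rightarrow\; a = \frac{G M}{r^{2}}
  \quad(\text{independent of } m),
  \qquad
  \text{Electric field:}\quad m\,a = q E \;\Rightarrow\; a = \frac{q}{m}\,E
  \quad(\text{depends on } q/m).
\]
```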

Thinking about this problem led Albert Einstein to what he called “the happiest thought of my life.” Imagine an astronaut in a spaceship with no windows, and no other way to peer at the outside world. If the ship were far away from any stars or planets, everything inside would be in free fall; there would be no gravitational field to push anything around. But put the ship in orbit around a massive object, where gravity is considerable. Everything inside will still be in free fall: because all objects are affected by gravity in the same way, no one object is pushed toward or away from any other. Sticking just to what is observed inside the spaceship, there’s no way we could detect the existence of gravity.

 

True or False: Beauty Is Truth
Judith Rich Harris, Independent Investigator and Theoretician; Author, The Nurture Assumption; No Two Alike: Human Nature and Human Individuality

“Beauty is truth, truth beauty,” said John Keats. But what did he know? Keats was a poet, not a scientist.

In the world that scientists inhabit, truth is not always beautiful or elegant, though it may be deep. In fact, it’s my impression that the deeper an explanation goes, the less likely it is to be beautiful or elegant.

Some years ago, the psychologist B. F. Skinner proposed an elegant explanation of “the behavior of organisms,” based on the idea that rewarding a response—he called it reinforcement—increases the probability that the same response will occur again in the future. The theory failed, not because it was false (reinforcement generally does increase the probability of a response) but because it was too simple. It ignored innate components of behavior. It couldn’t even handle all learned behavior. Much behavior is acquired or shaped through experience, but not necessarily by means of reinforcement. Organisms learn different things in different ways.

 

The Power Of One, Two, Three
Charles Seife, Professor of Journalism, New York University; formerly journalist, Science Magazine; Author, Proofiness: The Dark Arts of Mathematical Deception

Sometimes even the simple act of counting can tell you something profound.

One day, back in the late 1990s, when I was a correspondent for New Scientist magazine, I got an e-mail from a flack waxing rhapsodic about an extraordinary piece of software. It was a revolutionary data-compression program so efficient that it would squash every digital file by 95% or more without losing a single bit of data. Wouldn’t my magazine jump at the chance to tell the world about the computer program that would make hard drives hold 20 times more information than ever before?

No, my magazine wouldn’t.

No such compression algorithm could possibly exist; it was the algorithmic equivalent of a perpetual motion machine. The software was a fraud.

The reason: the pigeonhole principle.
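
Seife leaves the counting to the reader, so here is the gist (my sketch, not part of his answer): there are vastly more possible files than possible compressed outputs.

```latex
\[
  \#\{\text{distinct files of } n \text{ bits}\} = 2^{n},
  \qquad
  \#\{\text{distinct files of at most } \tfrac{n}{20} \text{ bits}\}
    = \sum_{k=0}^{n/20} 2^{k} \;<\; 2^{\,n/20 + 1} \;\ll\; 2^{n}.
\]
```

By the pigeonhole principle, any scheme that shrank every n-bit file to a twentieth of its size would have to send two different files to the same output, and no decompressor could recover both, so the advertised program could not be lossless.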

 

Watson and Crick Explain How DNA Carries Genetic Information
Gary Klein, Cognitive Psychologist; Author, Sources of Power; Streetlights and Shadows: Searching for Keys to Adaptive Decision Making

In 1953, when James Watson pushed around some two-dimensional cut-outs and was startled to find that an adenine-thymine pair was isomorphic in shape to a guanine-cytosine pair, he solved eight mysteries simultaneously. In that instant he knew the structure of DNA: a helix. He knew how many strands: two. It was a double helix. He knew what carried the information: the nucleic acids in the gene, not the protein. He knew what maintained the attraction: hydrogen bonds. He knew the arrangement: the sugar-phosphate backbone was on the outside and the nucleic acids were on the inside. He knew how the strands matched: through the base pairs. He knew the orientation: the two chains ran in opposite directions. And he knew how genes replicated: through a zipper-like process.

The discovery that Watson and Crick made is truly impressive, but I am also interested in what we can learn from the process by which they arrived at their discovery. On the surface, the Watson-Crick story fits in with five popular claims about innovation, as presented below. However, the actual story of their collaboration is more nuanced than these popular claims suggest.

It is important to have clear research goals. Watson and Crick had a clear goal, to describe the structure of DNA, and they succeeded.

But only the first two of their eight discoveries had to do with this goal. The others, arguably the most significant, were unexpected byproducts.

Great Architecture

Jonathan Glancey, architecture critic at the Guardian in the UK for the last fifteen years, is moving on to greener pastures, and presumably new buildings. In his final article for the newspaper he reflects on some buildings that have engendered shock and/or awe.

[div class=attrib]From the Guardian:[end-div]

Fifteen years is not a long time in architecture. It is the slowest as well as the most political of the arts. This much was clear when I joined the Guardian as its architecture and design correspondent, from the Independent, in 1997. I thought the Millennium Experience (the talk of the day) decidedly dimwitted and said so in no uncertain terms; it lacked a big idea and anything like the imagination of, say, the Great Exhibition of 1851, or the Festival of Britain in 1951.

For the macho New Labour government, newly in office and all football and testosterone, criticism of this cherished project was tantamount to sedition. They lashed out like angry cats; there were complaints from 10 Downing Street’s press office about negative coverage of the Dome. Hard to believe then, much harder now. That year’s London Model Engineer Exhibition was far more exciting; here was an enthusiastic celebration of the making of things, at a time when manufacturing was increasingly looked down on.

New Labour, meanwhile, promised it would do things for architecture and urban design that Roman emperors and Renaissance princes could only have dreamed of. The north Greenwich peninsula was to become a new Florence, with trams and affordable housing. As would the Thames Gateway, that Siberia stretching – marshy, mysterious, semi-industrial – to Southend Pier and the sea. To a new, fast-breeding generation of quangocrats this land looked like a blank space on the London A-Z, ready to fill with “environmentally friendly” development. Precious little has happened there since, save for some below-standard housing, Boris Johnson’s proposal for an estuary airport and – a very good thing – an RSPB visitors’ centre designed by Van Heyningen and Haward near Purfleet on the Rainham marshes.

Labour’s promises turned out to be largely tosh, of course. Architecture and urban planning are usually best when neither hyped nor hurried. Grand plans grow best over time, as serendipity and common sense soften hard edges. In 2002, Tony Blair decided to invade Iraq – not a decision that, on the face of it, has a lot to do with architecture; but one of the articles I am most proud to have written for this paper was the story of a journey I made from one end of Iraq to the other, with Stuart Freedman, an unflappable press photographer. At the time, the Blair government was denying there would be a war, yet every Iraqi we spoke to knew the bombs were about to fall. It was my credentials as a critic and architectural historian that got me my Iraqi visa. Foreign correspondents, including several I met in Baghdad’s al-Rashid hotel, were understandably finding the terrain hard-going. But handwritten in my passport was an instruction saying: “Give this man every assistance.”

We travelled to Babylon to see Saddam’s reconstruction of the fabled walled city, and to Ur, Abraham’s home, and its daunting ziggurat and then – wonder of wonders – into the forbidden southern deserts to Eridu. Here I walked on the sand-covered remains of one of the world’s first cities. This, if anywhere, is where architecture was born. At Samarra, in northern Iraq, I climbed to the top of the wondrous spiral minaret of what was once the town’s Great Mosque. How the sun shone that day. When I got to the top, there was nothing to hang on to. I was confronted by the blazing blue sky and its gods, or God; the architecture itself was all but invisible. Saddam’s soldiers, charming recruits in starched and frayed uniforms drilled by a tough and paternal sergeant, led me through the country, through miles of unexploded war material piled high along sandy tracks, and across the paths of Shia militia.

Ten years on, Zaha Hadid, a Baghdad-born architect who has risen to stellar prominence since 2002, has won her first Iraqi commission, a new headquarters for the Iraqi National Bank in Baghdad. With luck, other inspired architects will get to work in Iraq, too, reconnecting the country with its former role as a crucible of great buildings and memorable cities.

Architecture is also the stuff of construction, engineering, maths and science. Of philosophy, sociology, Le Corbusier and who knows what else. It is also, I can’t help feeling, harder to create great buildings now than it was in the past. When Eridu or the palaces and piazzas of Renaissance Italy were shaped, architecture was the most expensive and prestigious of all cultural endeavours. Today we spread our wealth more thinly, spending ever more on disposable consumer junk, building more roads to serve ever more grim private housing estates, unsustainable supermarkets and distribution depots (and container ports and their giant ships), and the landfill sites we appear to need to shore up our insatiable, throwaway culture. Architecture has been in danger, like our indefensibly mean and horrid modern housing, of becoming little more than a commodity. Government talk of building a rash of “eco-towns” proved not just unpopular but more hot air. A policy initiative too far, the idea has effectively been dropped.

And, yet, despite all these challenges, the art form survives and even thrives. I have been moved in different ways by the magnificent Neues Museum, Berlin, a 10-year project led by David Chipperfield; by the elemental European Southern Observatory Hotel by Auer + Weber, for scientists in Chile’s Atacama Desert; and by Charles Barclay’s timber Kielder Observatory, where I spent a night in 2008 watching stars hanging above the Northumbrian forest.

I have been enchanted by the 2002 Serpentine Pavilion, a glimpse into a possible future by Toyo Ito and Cecil Balmond; by the inspiring reinvention of St Pancras station by Alastair Lansley and fellow architects; and by Blur, a truly sensational pavilion by Diller + Scofidio set on a steel jetty overlooking Lake Neuchatel at Yverdon-les-Bains. A part of Switzerland’s Expo 2002, this cat’s cradle of tensile steel was a machine for making clouds. You walked through the clouds as they appeared and, when conditions were right, watched them float away over the lake.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: The spiral minaret of the Great Mosque of Samarra, Iraq. Courtesy Reuters / Guardian.[end-div]

Suburbia as Mass Murderer

Jane Brody over at the Well blog makes a compelling case for the dismantling of suburbia. After all, these so-called “built environments” where we live, work, eat, play and raise our children, are an increasingly serious health hazard.

[div class=attrib]From the New York Times:[end-div]

Developers in the last half-century called it progress when they built homes and shopping malls far from city centers throughout the country, sounding the death knell for many downtowns. But now an alarmed cadre of public health experts say these expanded metropolitan areas have had a far more serious impact on the people who live there by creating vehicle-dependent environments that foster obesity, poor health, social isolation, excessive stress and depression.

As a result, these experts say, our “built environment” — where we live, work, play and shop — has become a leading cause of disability and death in the 21st century. Physical activity has been disappearing from the lives of young and old, and many communities are virtual “food deserts,” serviced only by convenience stores that stock nutrient-poor prepared foods and drinks.

According to Dr. Richard J. Jackson, professor and chairman of environmental health sciences at the University of California, Los Angeles, unless changes are made soon in the way many of our neighborhoods are constructed, people in the current generation (born since 1980) will be the first in America to live shorter lives than their parents do.

Although a decade ago urban planning was all but missing from public health concerns, a sea change has occurred. At a meeting of the American Public Health Association in October, Dr. Jackson said, there were about 300 presentations on how the built environment inhibits or fosters the ability to be physically active and get healthy food.

In a healthy environment, he said, “people who are young, elderly, sick or poor can meet their life needs without getting in a car,” which means creating places where it is safe and enjoyable to walk, bike, take in nature and socialize.

“People who walk more weigh less and live longer,” Dr. Jackson said. “People who are fit live longer. People who have friends and remain socially active live longer. We don’t need to prove all of this,” despite the plethora of research reports demonstrating the ill effects of current community structures.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Duke University.[end-div]

Tear-Jerker Dissected

Set aside the fact that, having heard Adele’s “Someone Like You” so often, you may want to cry simply to escape it; science has now found an answer to why the tear-jerker makes you sob.

[div class=attrib]From the Wall Street Journal:[end-div]

On Sunday night [February 12, 2012], the British singer-songwriter Adele is expected to sweep the Grammys. Three of her six nominations are for her rollicking hit “Rolling in the Deep.” But it’s her ballad “Someone Like You” that has risen to near-iconic status recently, due in large part to its uncanny power to elicit tears and chills from listeners. The song is so famously sob-inducing that “Saturday Night Live” recently ran a skit in which a group of co-workers play the tune so they can all have a good cry together.

What explains the magic of Adele’s song? Though personal experience and culture play into individual reactions, researchers have found that certain features of music are consistently associated with producing strong emotions in listeners. Combined with heartfelt lyrics and a powerhouse voice, these structures can send reward signals to our brains that rival any other pleasure.

Twenty years ago, the British psychologist John Sloboda conducted a simple experiment. He asked music lovers to identify passages of songs that reliably set off a physical reaction, such as tears or goose bumps. Participants identified 20 tear-triggering passages, and when Dr. Sloboda analyzed their properties, a trend emerged: 18 contained a musical device called an “appoggiatura.”

An appoggiatura is a type of ornamental note that clashes with the melody just enough to create a dissonant sound. “This generates tension in the listener,” said Martin Guhn, a psychologist at the University of British Columbia who co-wrote a 2007 study on the subject. “When the notes return to the anticipated melody, the tension resolves, and it feels good.”

Chills often descend on listeners at these moments of resolution. When several appoggiaturas occur next to each other in a melody, it generates a cycle of tension and release. This provokes an even stronger reaction, and that is when the tears start to flow.

[div class=attrib]Read the entire sob story here.[end-div]

[div class=attrib]Image of Adele. Courtesy of The Wall Street Journal (illustration) Associated Press (photo); Universal Music Publishing (score).[end-div]

Pop art + Money = Mind Candy

[div class=attrib]From the Guardian:[end-div]

The first pop artists were serious people. The late Richard Hamilton was being double-edged and sceptical when he called a painting Hommage à Chrysler Corp. Far from emptily celebrating what Andy Warhol called “all the great modern things”, pop art in the 1950s and early 1960s took a quizzical, sideways look at what was still a very new world of consumer goods. Claes Oldenburg made floppy, saggy sculptures of stuff, which rendered the new look worn out. Warhol painted car crashes. These artists saw modern life in the same surreal and eerie way as the science fiction writer JG Ballard does in his stories and novels.

When, then, did pop art become mind candy, bubblegum, an uncritical adoration of bright lights and synthetic colours? Probably when money got involved, and Warhol was shot, never again to be as brave as he was in the 60s, or when Jeff Koons gave Reaganomics its art, or when Damien Hirst made his tenth million. Who knows? The moment when pop art sank from radical criticism to bland adulation is impossible to pinpoint.

So here we are in Qatar, where today’s pop art guru Takashi Murakami has a new show. We’re not really there, of course, but do we need to be? Murakami is pop for the digital age, a designer of images that make more sense as screensavers than as any kind of high art. In Doha, the artist who celebrated a recent British show with a giveaway cardboard sculpture exhibits a six-metre balloon self-portrait and a 100-metre work inspired by the earthquake in Japan. This follows on from a 2010 exhibition in Versailles, no less. All over the world, in settings old and new, the bright and spectacular art of Murakami is as victorious as Twitter. It is art for computers: all stimuli, no soul.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Takashi Murakami’s six-metre balloon self-portrait, part of the artist’s latest exhibition in Qatar. Courtesy of Chika Okazumi / Guardian.[end-div]

Spooky Action at a Distance Explained

[div class=attrib]From Scientific American:[end-div]

Quantum entanglement is such a mainstay of modern physics that it is worth reflecting on how long it took to emerge. What began as a perceptive but vague insight by Albert Einstein languished for decades before becoming a branch of experimental physics and, increasingly, modern technology.

Einstein’s two most memorable phrases perfectly capture the weirdness of quantum mechanics. “I cannot believe that God plays dice with the universe” expressed his disbelief that randomness in quantum physics was genuine and impervious to any causal explanation. “Spooky action at a distance” referred to the fact that quantum physics seems to allow influences to travel faster than the speed of light. This was, of course, disturbing to Einstein, whose theory of relativity prohibited any such superluminal propagation.

These arguments were qualitative. They were targeted at the worldview offered by quantum theory rather than its predictive power. Niels Bohr is commonly seen as the patron saint of quantum physics, defending it against Einstein’s repeated onslaughts. He is usually said to be the ultimate winner in this battle of wits. However, Bohr’s writing was terribly obscure. He was known for saying “never express yourself more clearly than you are able to think,” a motto which he adhered to very closely. His arguments, like Einstein’s, were qualitative, verging on highly philosophical. The Einstein-Bohr dispute, although historically important, could not be settled experimentally—and the experiment is the ultimate judge of validity of any theoretical ideas in physics. For decades, the phenomenon was all but ignored.

All that changed with John Bell. In 1964 he understood how to convert the complaints about “dice-playing” and “spooky action at a distance” into a simple inequality involving measurements on two particles. The inequality is satisfied in a world where God does not play dice and there is no spooky action. The inequality is violated if the fates of the two particles are intertwined, so that if we measure a property of one of them, we immediately know the same property of the other one—no matter how far apart the particles are from each other. This state where particles behave like twin brothers is said to be entangled, a term introduced by Erwin Schrödinger.
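
The excerpt does not reproduce the inequality itself. The best-known form, the CHSH version of Bell’s inequality, is standard textbook material (added here for reference, not part of the Scientific American piece):

```latex
\[
  S = E(a,b) - E(a,b') + E(a',b) + E(a',b'),
  \qquad
  |S| \le 2 \ \text{(any local hidden-variable theory)},
  \qquad
  |S| \le 2\sqrt{2} \ \text{(quantum mechanics)}.
\]
```

Here E(a,b) is the correlation between the outcomes measured on the two particles with detector settings a and b; entangled pairs can push S beyond the classical bound of 2, which is precisely what experiments have since confirmed.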

[div class=attrib]Read the whole article here.[end-div]