All posts by Mike

Media Multi-Tasking, School Work and Poor Memory

It’s official — teens can’t stay off social media for more than 15 minutes. It’s no secret that many kids aged between 8 and 18 spend most of their time texting, tweeting and checking their real-time social status. The profound psychological and sociological consequences of this behavior will only start to become apparent ten to fifteen years from now. In the meantime, researchers are finding a general degradation in kids’ memory skills from using social media and multi-tasking while studying.

From Slate:

Living rooms, dens, kitchens, even bedrooms: Investigators followed students into the spaces where homework gets done. Pens poised over their “study observation forms,” the observers watched intently as the students—in middle school, high school, and college, 263 in all—opened their books and turned on their computers.

For a quarter of an hour, the investigators from the lab of Larry Rosen, a psychology professor at California State University–Dominguez Hills, marked down once a minute what the students were doing as they studied. A checklist on the form included: reading a book, writing on paper, typing on the computer—and also using email, looking at Facebook, engaging in instant messaging, texting, talking on the phone, watching television, listening to music, surfing the Web. Sitting unobtrusively at the back of the room, the observers counted the number of windows open on the students’ screens and noted whether the students were wearing earbuds.

Although the students had been told at the outset that they should “study something important, including homework, an upcoming examination or project, or reading a book for a course,” it wasn’t long before their attention drifted: Students’ “on-task behavior” started declining around the two-minute mark as they began responding to arriving texts or checking their Facebook feeds. By the time the 15 minutes were up, they had spent only about 65 percent of the observation period actually doing their schoolwork.

“We were amazed at how frequently they multitasked, even though they knew someone was watching,” Rosen says. “It really seems that they could not go for 15 minutes without engaging their devices,” he adds. “It was kind of scary, actually.”

Concern about young people’s use of technology is nothing new, of course. But Rosen’s study, published in the May issue of Computers in Human Behavior, is part of a growing body of research focused on a very particular use of technology: media multitasking while learning. Attending to multiple streams of information and entertainment while studying, doing homework, or even sitting in class has become common behavior among young people—so common that many of them rarely write a paper or complete a problem set any other way.

But evidence from psychology, cognitive science, and neuroscience suggests that when students multitask while doing schoolwork, their learning is far spottier and shallower than if the work had their full attention. They understand and remember less, and they have greater difficulty transferring their learning to new contexts. So detrimental is this practice that some researchers are proposing that a new prerequisite for academic and even professional success—the new marshmallow test of self-discipline—is the ability to resist a blinking inbox or a buzzing phone.

The media multitasking habit starts early. In “Generation M2: Media in the Lives of 8- to 18-Year-Olds,” a survey conducted by the Kaiser Family Foundation and published in 2010, almost a third of those surveyed said that when they were doing homework, “most of the time” they were also watching TV, texting, listening to music, or using some other medium. The lead author of the study was Victoria Rideout, then a vice president at Kaiser and now an independent research and policy consultant. Although the study looked at all aspects of kids’ media use, Rideout told me she was particularly troubled by its findings regarding media multitasking while doing schoolwork.

“This is a concern we should have distinct from worrying about how much kids are online or how much kids are media multitasking overall. It’s multitasking while learning that has the biggest potential downside,” she says. “I don’t care if a kid wants to tweet while she’s watching American Idol, or have music on while he plays a video game. But when students are doing serious work with their minds, they have to have focus.”

For older students, the media multitasking habit extends into the classroom. While most middle and high school students don’t have the opportunity to text, email, and surf the Internet during class, studies show the practice is nearly universal among students in college and professional school. One large survey found that 80 percent of college students admit to texting during class; 15 percent say they send 11 or more texts in a single class period.

During the first meeting of his courses, Rosen makes a practice of calling on a student who is busy with his phone. “I ask him, ‘What was on the slide I just showed to the class?’ The student always pulls a blank,” Rosen reports. “Young people have a wildly inflated idea of how many things they can attend to at once, and this demonstration helps drive the point home: If you’re paying attention to your phone, you’re not paying attention to what’s going on in class.” Other professors have taken a more surreptitious approach, installing electronic spyware or planting human observers to record whether students are taking notes on their laptops or using them for other, unauthorized purposes.

Read the entire article here.

Image courtesy of Examiner.

The Academic Con Artist

Strangely, we don’t normally associate the hushed halls and ivory towers of academia with lies and frauds. We are more inclined to see con artists on street corners hawking dodgy wares, or doing much the same from corner offices on Wall Street, for much princelier sums, of course, and with much more catastrophic consequences.

Humans being humans, cheating does go on in academic circles as well. We know that some students cheat — they plagiarize and fabricate work, they have others write their papers. More notably, some academics do this as well, but on a grander scale. And, while much cheating is probably minor and inconsequential, some fraud is intricate and grandiose, spanning many years of work, affecting subsequent work, diverting grants and research funds, altering policy and widely held public opinion. Meet one of its principal actors — Diederik Stapel, social psychologist and academic con artist.

From the New York Times:

One summer night in 2011, a tall, 40-something professor named Diederik Stapel stepped out of his elegant brick house in the Dutch city of Tilburg to visit a friend around the corner. It was close to midnight, but his colleague Marcel Zeelenberg had called and texted Stapel that evening to say that he wanted to see him about an urgent matter. The two had known each other since the early ’90s, when they were Ph.D. students at the University of Amsterdam; now both were psychologists at Tilburg University. In 2010, Stapel became dean of the university’s School of Social and Behavioral Sciences and Zeelenberg head of the social psychology department. Stapel and his wife, Marcelle, had supported Zeelenberg through a difficult divorce a few years earlier. As he approached Zeelenberg’s door, Stapel wondered if his colleague was having problems with his new girlfriend.

Zeelenberg, a stocky man with a shaved head, led Stapel into his living room. “What’s up?” Stapel asked, settling onto a couch. Two graduate students had made an accusation, Zeelenberg explained. His eyes began to fill with tears. “They suspect you have been committing research fraud.”

Stapel was an academic star in the Netherlands and abroad, the author of several well-regarded studies on human attitudes and behavior. That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. And just days earlier, he received more media attention for a study indicating that eating meat made people selfish and less social.

His enemies were targeting him because of changes he initiated as dean, Stapel replied, quoting a Dutch proverb about high trees catching a lot of wind. When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future. As Zeelenberg pressed him, Stapel grew increasingly agitated.

Finally, Zeelenberg said: “I have to ask you if you’re faking data.”

“No, that’s ridiculous,” Stapel replied. “Of course not.”

That weekend, Zeelenberg relayed the allegations to the university rector, a law professor named Philip Eijlander, who often played tennis with Stapel. After a brief meeting on Sunday, Eijlander invited Stapel to come by his house on Tuesday morning. Sitting in Eijlander’s living room, Stapel mounted what Eijlander described to me as a spirited defense, highlighting his work as dean and characterizing his research methods as unusual. The conversation lasted about five hours. Then Eijlander politely escorted Stapel to the door but made it plain that he was not convinced of Stapel’s innocence.

That same day, Stapel drove to the University of Groningen, nearly three hours away, where he was a professor from 2000 to 2006. The campus there was one of the places where he claimed to have collected experimental data for several of his studies; to defend himself, he would need details from the place. But when he arrived that afternoon, the school looked very different from the way he remembered it being five years earlier. Stapel started to despair when he realized that he didn’t know what buildings had been around at the time of his study. Then he saw a structure that he recognized, a computer center. “That’s where it happened,” he said to himself; that’s where he did his experiments with undergraduate volunteers. “This is going to work.”

On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.

“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.”

After he got home that night, he confessed to his wife. A week later, the university suspended him from his job and held a news conference to announce his fraud. It became the lead story in the Netherlands and would dominate headlines for months. Overnight, Stapel went from being a respected professor to perhaps the biggest con man in academic science.

Read the entire article after the jump.

Image courtesy of FBI.

Lesson: Fail Often, Fail Fast

One of our favorite thinkers, Nassim Nicholas Taleb, calls this tinkering — the iterative process by which ideas and actions can take root and become successful. Evolution is a wonderful example of this tinkering — repetitive failure and incremental progress. Many entrepreneurs in Silicon Valley take this to heart.

Tech entrepreneur Michele Serro describes some key elements of successful tinkering below.

From the Wall Street Journal:

If there was ever a cliche about entrepreneurialism, it’s this: Joe or Jane McEntrepreneur were trying to book a flight/find flattering support garments/rent a car and were profoundly dissatisfied with the experience. Incensed, they set out to design a better way — and did, earning millions in the process.

It seems that, for entrepreneurs, it’s dissatisfaction rather than necessity that is the mother of invention. And while this cliche certainly has its foundation in truth, it’s woefully incomplete. The full truth is, the average startup iterates multiple times before they find the right product, often drawing on one or many approaches along the way before finding traction. Here are five of the most common I’ve come across within the startup community.

Algebra. There’s an old yarn you learn in film school about the power of the pithy pitch (say that five times fast). The story goes that when screenwriters were shopping the original Alien movie, they allegedly got the green light when they summed it up to studio execs by saying “It’s Jaws. In space.”

In many ways, the same thing is happening in the startup world. “It’s Facebook. But for pets,” or “It’s Artsy meets Dropbox meets Fab.” Our tendency to do this speaks to the fact that there are very few — if any — truly new ideas. Most entrepreneurs are applying old ideas to new industries, or combining two seemingly unrelated ideas (or existing businesses) together – whether they’re doing it consciously, or not.

Subtraction. Many great ideas begin with a seemingly straightforward question: “How could I make this easier?” Half the genius of some of the greatest entrepreneurs — Steve Jobs springs immediately to mind — is the ability to remove the superfluous, unnecessary or unwieldy from an existing system, product or experience. A good exercise when you are in search of an idea is simply to ask yourself “What is it about an existing product, service, or experience that could — and therefore should — be less of a hassle?”

Singularity. There’s an old saying that goes: “Figure out what you love to do and you’ll never work a day in your life.” Entrepreneurs are born out of the desire to spend one’s life pursuing a passion — assuming that they’re fortunate enough to have identified it early. The fact is that any kind of startup is really, really hard work. No matter how fast a vesting schedule or how convivial an office culture, the only thing that can truly sustain you through the bad days is having a deep, personal interest in your area of focus. The most successful entrepreneurs genuinely love what they do, and not simply because of the potential payoff. I once met a pair of British entrepreneurs living in France who loved nothing more than spending all day in a pub — meeting up with friends, watching a soccer game, and giving each other the requisite hard time about just about everything.

For their entrepreneurial class as part of their MBA coursework at Insead, they decided to draft the business plan for an English-style microbrewery in Paris — mainly because the research phase would involve a lot of sitting around in bars. But during the process of launching their fictitious company, they realized there really was an opportunity to make a living doing exactly what they loved, and went on to successfully launch seven such pubs, sprinkled all over the city.

When hiring at Doorsteps, I start by asking people what they would do with their lives if every career paid the same. If the gap between their truest desires and the job on offer is simply too wide, I encourage them to keep looking. Not because they can’t be successful with us, too, but because they’ll likely be even more successful elsewhere — when they are driven by passion as much as profit.

Optimization. Sometimes entrepreneurs benefit by letting someone else lay the groundwork for their ideas. Indeed, a great many startups are born by simply building a better mousetrap; that is to say, observing a compelling business already in existence that is struggling to find traction. These entrepreneurs recognize that the idea itself is sound but the execution is flawed. In this case, they simply address the oversight of the previous version. Instagram quite famously beat Hipstamatic to the jaw-dropping $1 billion prize by understanding the role social needed to play in the app’s experience. By the time Hipstamatic realized its error, Instagram had almost four times as many users, largely muscling it out of a competitive niche market.

Read the entire article following the jump.

First Came Phishing, Now We Have Catfishing

The internet has revolutionized retailing, the music business, and the media landscape. It has anointed countless entrepreneurial millionaires and billionaires and helped launch arrays of new businesses in all spheres of life.

Of course, due to the peculiarities of human nature the internet has also become an enabler and/or a new home to less upstanding ventures such as online pornography, spamming, identity theft and phishing.

Now comes “catfishing”: posting false information online with the intent of reeling someone in (usually found on online dating sites). While this behavior is nothing new in the vast catalog of human deviousness, the internet has enabled an explosion in “catfishers”. The fascinating infographic below gives a neat summary.

Infographic courtesy of Checkmate.

What’s In a Name?

Recently we posted a fascinating story about a legal ruling in Iceland that allowed parents to set aside centuries of Icelandic history by naming their daughter “Blaer” — a traditionally male name. You see, Iceland has an official organization — the Icelandic Naming Committee — that regulates and decides whether a given name is acceptable (by Icelandic standards).

Well, this got us thinking about naming rules and conventions in other nations. For instance, New Zealand will not allow parents to name a child “Pluto”, though “Number 16 Bus Shelter” and “Violence” recently got the thumbs up. Some misguided or innovative (depending upon your perspective) New Zealanders have unsuccessfully tried to name their offspring: “*” (yes, asterisk), “.” (period or full-stop), “V”, and “Emperor”.

Not to be outdone, a U.S. citizen recently legally changed his name to “In God” (first name) “We Trust” (last name). Humans are indeed a strange species.

From CNN:

Lucifer cannot be born in New Zealand.

And there’s no place for Christ or a Messiah either.

In New Zealand, parents have to run by the government any name they want to bestow on their baby.

And each year, there’s a bevy of unusual ones too bizarre to pass the taste test.

The country’s Registrar of Births, Deaths and Marriages shared that growing list with CNN on Wednesday.

Four words:

What were they thinking?

In the past 12 years, the agency had to turn down not one, not two, but six sets of parents who wanted to name their child “Lucifer.”

Also shot down were parents who wanted to grace their child with the name “Messiah.” That happened twice.

“Christ,” too, was rejected.

Specific rules

As the agency put it, acceptable names must not cause offense to a reasonable person, not be unreasonably long and should not resemble an official title and rank.

It’s no surprise then that the names nixed most often since 2001 are “Justice” (62 times) and “King” (31 times).

Some of the other entries scored points in the creativity department — but clearly didn’t take into account the lifetime of pain they’d bring.

“Mafia No Fear.” “4Real.” “Anal.”

Oh, come on!

Then there were the parents who preferred brevity through punctuation. The ones who picked “*” (the asterisk) or “.” (the period).

Slipping through

Still, some quirky names do make it through.

In 2008, the country made international news when the naming agency allowed a set of twins to be named “Benson” and “Hedges” — a popular cigarette brand — and OK’d the names “Violence” and “Number 16 Bus Shelter.”

Asked about those examples, Michael Mead of the Internal Affairs Department (under which the agency falls) said, “All names registered with the Department since 1995 have conformed to these rules.”

And what happens when parents don’t conform?

Four years ago, a 9-year-old girl was taken away from her parents by the state so that her name could be changed from “Talula Does the Hula From Hawaii.”

Not alone

To be sure, New Zealand is not the only country to act as editor for some parents’ wacky ideas.

Sweden also has a naming law and has nixed attempts to name children “Superman,” “Metallica,” and the oh-so-easy-to-pronounce “Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116.”

In 2009, the Dominican Republic contemplated banning unusual names after a host of parents began naming their children after cars or fruit.

In the United States, however, naming fights have centered on adults.

In 2008, a judge allowed an Illinois school bus driver to legally change his first name to “In God” and his last name to “We Trust.”

But the same year, an appeals court in New Mexico ruled against a man — named Variable — who wanted to change his name to “F— Censorship!”

Here is a list of some of the names banned in New Zealand since 2001 — and how many times they came up:

Justice: 62
King: 31
Princess: 28
Prince: 27
Royal: 25
Duke: 10
Major: 9
Bishop: 9
Majesty: 7
J: 6
Lucifer: 6
using brackets around middle names: 4
Knight: 4
Lady: 3
using back slash between names: 8
Judge: 3
Royale: 2
Messiah: 2
T: 2
I: 2
Queen: 2
II: 2
Sir: 2
III: 2
Jr: 2
E: 2
V: 2
Justus: 2
Master: 2
Constable: 1
Queen Victoria: 1
Regal: 1
Emperor: 1
Christ: 1
Juztice: 1
3rd: 1
C J: 1
G: 1
Roman numerals III: 1
General: 1
Saint: 1
Lord: 1
. (full stop): 1
89: 1
Eminence: 1
M: 1
VI: 1
Mafia No Fear: 1
2nd: 1
Majesti: 1
Rogue: 1
4real: 1
* (star symbol): 1
5th: 1
S P: 1
C: 1
Sargent: 1
Honour: 1
D: 1
Minister: 1
MJ: 1
Chief: 1
Mr: 1
V8: 1
President: 1
MC: 1
Anal: 1
A.J: 1
Baron: 1
L B: 1
H-Q: 1
Queen V: 1

Read the entire article following the jump.

Anti-Eco-Friendly Consumption

It should come as no surprise that those who deny the science of climate change and humanity’s impact on the environment would also balk at purchasing products and services that are friendly to the environment.

A recent study shows how political persuasion sways the purchase of light bulbs: conservatives are more likely to buy incandescent bulbs, while moderates and liberals lean towards more eco-friendly bulbs.

Joe Barton, U.S. Representative from Texas, sums up the issue of light bulb choice quite neatly: “… it is about personal freedom.” All the while, our children shake their heads in disbelief.

Presumably many climate change skeptics prefer to purchase items that are harmful to the environment, and also to humans, just to make a political statement. This might include continuing to purchase products containing dangerous levels of unpronounceable acronyms and questionable chemicals: rBGH (recombinant bovine growth hormone) in milk, BPA (bisphenol A) in plastic utensils and bottles, KBrO3 (potassium bromate) in highly processed flour, the food preservative BHA (butylated hydroxyanisole), and azodicarbonamide in dough.

Freedom truly does come at a cost.

From the Guardian:

Eco-friendly labels on energy-saving bulbs are a turn-off for conservative shoppers, a new study has found.

The findings, published this week in the Proceedings of the National Academy of Sciences, suggest that it could be counterproductive to advertise the environmental benefits of efficient bulbs in the US. This could make it even more difficult for America to adopt energy-saving technologies as a solution to climate change.

Consumers took their ideological beliefs with them when they went shopping, and conservatives switched off when they saw labels reading “protect the environment”, the researchers said.

The study looked at the choices of 210 consumers, about two-thirds of them women. All were briefed on the benefits of compact fluorescent (CFL) bulbs over old-fashioned incandescents.

When both bulbs were priced the same, shoppers across the political spectrum were uniformly inclined to choose CFL bulbs over incandescents, even those with environmental labels, the study found.

But when the fluorescent bulb cost more – $1.50 instead of $0.50 for an incandescent – the conservatives who reached for the CFL bulb chose the one without the eco-friendly label.

“The more moderate and conservative participants preferred to bear a long-term financial cost to avoid purchasing an item associated with valuing environmental protections,” the study said.

The findings suggest the extreme political polarisation over environment and climate change had now expanded to energy-saving devices – which were once supported by right and left because of their money-saving potential.

“The research demonstrates how promoting the environment can negatively affect adoption of energy efficiency in the United States because of the political polarisation surrounding environmental issues,” the researchers said.

Earlier this year Harvard academic Theda Skocpol produced a paper tracking how climate change and the environment became a defining issue for conservatives, and for Republican-elected officials.

Conservative activists elevated opposition to the science behind climate change, and to action on climate change, to core beliefs, Skocpol wrote.

There was even a special place for incandescent bulbs. Republicans in Congress two years ago fought hard to repeal a law phasing out incandescent bulbs – even over the objections of manufacturers who had already switched their product lines to the new energy-saving technology.

Republicans at the time cast the battle of the bulb as an issue of liberty. “This is about more than just energy consumption. It is about personal freedom,” said Joe Barton, the Texas Republican behind the effort to keep the outdated bulbs burning.

Read the entire article following the jump.

Image courtesy of Housecraft.

YBAs Twenty-Five Years On

That a small group of Young British Artists (YBA) made an impact on the art scene in the UK and across the globe over the last 25 years is without question. Though whether the public at large will, 10, 25 or 50 years from now (and beyond), recognize a Damien Hirst spin painting or Tracey Emin’s “My Bed” or a Sarah Lucas self-portrait — “The Artist Eating a Banana” springs to mind — remains an open question.

The group first came to prominence in the late 1980s, mostly through works and events designed to shock the sensibilities of the then dreadfully boring and insular British art scene. With that aim in mind they certainly succeeded, and some, notably Hirst, have since become art superstars. So, while the majority of artists never experience fame within their own lifetimes, many YBAs have managed to buck convention. Though whether their art will live long and prosper is debatable.

Jonathan Jones, over at the On Art blog, chimes in with a different and altogether kinder opinion.

From the Guardian:

It’s 25 years since an ambitious unknown called Damien Hirst curated an exhibition of his friends and contemporaries called Freeze. This is generally taken as the foundation of the art movement that by the 1990s got the label “YBA”. Promoted by exhibitions such as Brilliant!, launched into public debate by the Turner prize and eventually set in stone at the Royal Academy with Sensation, Young British Art still shapes our cultural scene. A Damien Hirst spin painting closed the Olympics.

Even where artists are obviously resisting the showmanship and saleability of the Hirst generation (and such resistance has been the key to fashionable esteem for at least a decade), that generation’s ideas – that art should be young and part of popular culture – remain dominant. Artists on this year’s Turner shortlist may hate the thought that they are YBAs but they really are, in their high valuation of youth and pop. If we are all Thatcherites now, our artists are definitely all YBAs. Except for David Hockney.

From “classic” YBAs like Sarah Lucas and Marc Quinn to this year’s art school graduates, the drive to be new, modern, young and brave that Freeze announced in 1988 still shapes British art. And where has that left us? Where is British art, after 25 years of being young?

Let’s start with the best – and the worst. None of the artists who exploded on to the scene back then were as exciting and promising as Damien Hirst. He orchestrated the whole idea of a movement, and really it was a backdrop for his own daring imagination. Hirst’s animals in formaldehyde were provocations and surrealist dreams. He spun pop art in a new, visceral direction.

Today he is a national shame – our most famous artist has become a hack painter and kitsch sculptor who goes to inordinate lengths to demonstrate his lack of talent. Never has promise been more spectacularly misleading.

And what of the mood he created? Some of the artists who appeared in Freeze, such as Mat Collishaw, still make excellent work. But as for enduring masterpieces that will stand the test of time – how many of those has British art produced since 1988?

Well – the art of Sarah Lucas is acridly memorable. That of Rachel Whiteread is profound. The works of Jake and Dinos Chapman will keep scholars chortling in the library a century or two from now.

What is an artistic masterpiece anyway? Britain has never been good at creating sublime works in marble. But consider the collection of Georgian satirical prints in the Prints and Drawings room at the British Museum. Artists such as Gillray and Rowlandson are our heritage: rude, crude and subversive. Think about Hogarth too – an edgy artist critics snootily dismiss as a so-so painter.

Face it, all ye who rail at modern British art: YBA art and its living aftermath, from pickled fish to David Shrigley, fits beautifully into the Great British tradition of Hogarthian hilarity.

The difference is that while Hogarth had a chip on his shoulder about European art lording it over local talent, the YBA revolution made London world-famous as an art city, with Glasgow coming up in the side lane.

Warts and all, this has been the best 25 years in the history of British art. It never mattered more.

Read the entire article after the jump.

Image: My Bed by Tracey Emin. Courtesy of Tracey Emin / The Saatchi Gallery.

Criminology and Brain Science

Pathological criminals and the non-criminals who seek to understand them have no doubt co-existed since humans first learned to steal from and murder one another.

So, while we may be no clearer in fully understanding the underlying causes of anti-social, destructive and violent behavior, many researchers continue their quests. In one camp are those who maintain that such behavior is learned, or comes as a consequence of poor choices, traumatic life events, or exposure to an acute psychological or physiological stressor. In the other camp are those who argue that genes and their subsequent expression, especially those controlling brain function, are a principal cause.

Some recent neurological studies of criminals and psychopaths show fascinating, though not unequivocal, results.

From the Wall Street Journal:

The scientific study of crime got its start on a cold, gray November morning in 1871, on the east coast of Italy. Cesare Lombroso, a psychiatrist and prison doctor at an asylum for the criminally insane, was performing a routine autopsy on an infamous Calabrian brigand named Giuseppe Villella. Lombroso found an unusual indentation at the base of Villella’s skull. From this singular observation, he would go on to become the founding father of modern criminology.

Lombroso’s controversial theory had two key points: that crime originated in large measure from deformities of the brain and that criminals were an evolutionary throwback to more primitive species. Criminals, he believed, could be identified on the basis of physical characteristics, such as a large jaw and a sloping forehead. Based on his measurements of such traits, Lombroso created an evolutionary hierarchy, with Northern Italians and Jews at the top and Southern Italians (like Villella), along with Bolivians and Peruvians, at the bottom.

These beliefs, based partly on pseudoscientific phrenological theories about the shape and size of the human head, flourished throughout Europe in the late 19th and early 20th centuries. Lombroso was Jewish and a celebrated intellectual in his day, but the theory he spawned turned out to be socially and scientifically disastrous, not least by encouraging early-20th-century ideas about which human beings were and were not fit to reproduce—or to live at all.

The racial side of Lombroso’s theory fell into justifiable disrepute after the horrors of World War II, but his emphasis on physiology and brain traits has proved to be prescient. Modern-day scientists have now developed a far more compelling argument for the genetic and neurological components of criminal behavior. They have uncovered, quite literally, the anatomy of violence, at a time when many of us are preoccupied by the persistence of violent outrages in our midst.

The field of neurocriminology—using neuroscience to understand and prevent crime—is revolutionizing our understanding of what drives “bad” behavior. More than 100 studies of twins and adopted children have confirmed that about half of the variance in aggressive and antisocial behavior can be attributed to genetics. Other research has begun to pinpoint which specific genes promote such behavior.

Brain-imaging techniques are identifying physical deformations and functional abnormalities that predispose some individuals to violence. In one recent study, brain scans correctly predicted which inmates in a New Mexico prison were most likely to commit another crime after release. Nor is the story exclusively genetic: A poor environment can change the early brain and make for antisocial behavior later in life.

Most people are still deeply uncomfortable with the implications of neurocriminology. Conservatives worry that acknowledging biological risk factors for violence will result in a society that takes a soft approach to crime, holding no one accountable for his or her actions. Liberals abhor the potential use of biology to stigmatize ostensibly innocent individuals. Both sides fear any seeming effort to erode the idea of human agency and free will.

It is growing harder and harder, however, to avoid the mounting evidence. With each passing year, neurocriminology is winning new adherents, researchers and practitioners who understand its potential to transform our approach to both crime prevention and criminal justice.

The genetic basis of criminal behavior is now well established. Numerous studies have found that identical twins, who have all of their genes in common, are much more similar to each other in terms of crime and aggression than are fraternal twins, who share only 50% of their genes.

In a landmark 1984 study, my colleague Sarnoff Mednick found that children in Denmark who had been adopted from parents with a criminal record were more likely to become criminals in adulthood than were other adopted kids. The more offenses the biological parents had, the more likely it was that their offspring would be convicted of a crime. For biological parents who had no offenses, 13% of their sons had been convicted; for biological parents with three or more offenses, 25% of their sons had been convicted.

As for environmental factors that affect the young brain, lead is neurotoxic and particularly damages the prefrontal region, which regulates behavior. Measured lead levels in our bodies tend to peak at 21 months—an age when toddlers are apt to put their fingers into their mouths. Children generally pick up lead in soil that has been contaminated by air pollution and dumping.

Rising lead levels in the U.S. from 1950 through the 1970s neatly track increases in violence 20 years later, from the ’70s through the ’90s. (Violence peaks when individuals are in their late teens and early 20s.) As lead in the environment fell in the ’70s and ’80s—thanks in large part to the regulation of gasoline—violence fell correspondingly. No other single factor can account for both the inexplicable rise in violence in the U.S. until 1993 and the precipitous drop since then.

Lead isn’t the only culprit. Other factors linked to higher aggression and violence in adulthood include smoking and drinking by the mother before birth, complications during birth and poor nutrition early in life.

Genetics and environment may work together to encourage violent behavior. One pioneering study in 2002 by Avshalom Caspi and Terrie Moffitt of Duke University genotyped over 1,000 individuals in a community in New Zealand and assessed their levels of antisocial behavior in adulthood. They found that a genotype conferring low levels of the enzyme monoamine oxidase A (MAOA), when combined with early child abuse, predisposed the individual to later antisocial behavior. Low MAOA has been linked to reduced volume in the amygdala—the emotional center of the brain—while physical child abuse can damage the frontal part of the brain, resulting in a double hit.

Brain-imaging studies have also documented impairments in offenders. Murderers, for instance, tend to have poorer functioning in the prefrontal cortex—the “guardian angel” that keeps the brakes on impulsive, disinhibited behavior and volatile emotions.

Read the entire article following the jump.

Image: The Psychopath Test by Jon Ronson, book cover. Courtesy of Goodreads.

Retire at 30

No tricks. No Ponzi scheme. No lottery win. No grand inheritance. It’s rather straightforward; it’s about simple lifestyle choices made at an early age. We excerpt part of Mr. Money Mustache’s fascinating story below.

From the Washington Post:

To hundreds of thousands of devotees, he is Mister Money Mustache. And he is here to tell you that early retirement doesn’t only happen to Powerball winners and those who luck into a big inheritance. He and his wife retired from middle-income jobs before they had their son. Exasperated, as he puts it, by “a barrage of skeptical questions from high-income peers who were still in debt years after we were free from work,” he created a no-nonsense personal finance blog and started spilling his secrets. I was eager to know more. He is Pete (just Pete, for the sake of his family’s privacy). He lives in Longmont, Colo. He is ridiculously happy. And he’s sure his life could be yours. Our conversation was edited for length and clarity.

 

So you retired at 30. How did that happen?

I was probably born with a desire for efficiency — the desire to get the most fun out of any possible situation, with no resources being wasted. This applied to money too, and by age 10, I was ironing my 20 dollar bills and keeping them in a photo album, just because they seemed like such powerful and intriguing little rectangles.

But I didn’t start saving and investing particularly early, I just maintained this desire not to waste anything. So I got through my engineering degree debt-free — by working a lot and not owning a car — and worked pretty hard early on to move up a bit in the career, relocating from Canada to the United States, attracted by the higher salaries and lower cost of living.

Then my future wife and I moved in together and DIY-renovated a junky house into a nice one, kept old cars while our friends drove fancy ones, biked to work instead of driving, cooked at home and went out to restaurants less, and it all just added up to saving more than half of what we earned. We invested this surplus as we went, never inflating our already-luxurious lives, and eventually the passive income from stock dividends and a rental house was more than enough to pay for our needs (about $25,000 per year for our family of three, with a paid-off house and no other debt).

What sort of retirement income do you have?

Our bread-and-butter living expenses are paid for by a single rental house we own, which generates about $25,000 per year after expenses. We also have stock index funds and 401(k) plans, which could boost that by about 50 percent without depleting principal if we ever needed it, but, so far, we can’t seem to spend more than $25,000 no matter how much we let loose. So the dividends just keep reinvesting.

You describe the typical middle-class life as an “exploding volcano of wastefulness.” Seems like lots of personal finance folks obsess about lattes. Are you just talking about the lattes here?

The latte is just the foamy figurehead of an entire spectrum of sloppy “I deserve it” luxury spending that consumes most of our gross domestic product these days. Among my favorite targets: commuting to an office job in an F-150 pickup truck, anything involving a drive-through, paying $100 per month for the privilege of wasting four hours a night watching cable TV and the whole yoga industry. There are better, and free, ways to meet these needs, but everyone always chooses the expensive ones and then complains that life is hard these days.
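
A rough back-of-the-envelope calculation (ours, not from the article) shows why Pete’s “save more than half” habit, rather than a huge income, is what makes retirement at 30 arithmetically possible. The sketch below assumes a 5% real investment return and a 4% withdrawal rate, which are common rules of thumb rather than figures Pete gives:

import math
def years_to_retire(savings_rate, real_return=0.05, withdrawal_rate=0.04):
    # Years of work until invested savings can cover annual spending indefinitely.
    # Assumes constant real income, investment returns of `real_return`, and a
    # portfolio that sustains spending at `withdrawal_rate`. All assumptions, not Pete's.
    spending = 1.0 - savings_rate                # fraction of income spent each year
    target = spending / withdrawal_rate          # portfolio needed, in years of income
    # Solve: savings_rate * ((1 + r)^n - 1) / r = target  for n
    return math.log(1 + target * real_return / savings_rate) / math.log(1 + real_return)
print(f"50% savings rate: about {years_to_retire(0.50):.0f} years of work")  # roughly 17
print(f"10% savings rate: about {years_to_retire(0.10):.0f} years of work")  # roughly 50

Under those assumptions a 50 percent savings rate funds retirement in well under two decades, while a more typical 10 percent rate stretches the same arithmetic to roughly half a century of work.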

Read the entire article following the jump or visit Mr. Money Mustache’s blog.

Image courtesy of Google Search.

General Relativity Lives on For Now

Since Einstein first published his elegant theory of General Relativity almost 100 years ago, it has proved to be one of the most powerful and enduring cornerstones of modern science. Yet theorists and researchers the world over know that it cannot possibly remain the sole answer to our cosmological questions. It answers questions about the very, very large — galaxies, stars and planets, and the gravitational relationships between them. But it fails to tackle the science of the very, very small — atoms, their constituents, and the forces that unite and repel them. That realm is addressed by quantum theory, itself elegant and complex but incompatible with General Relativity.

So, scientists continue to push their measurements to ever greater levels of precision across both greater and smaller distances with one aim in mind — to test the limits of each theory and to see which one breaks down first.

A recent, highly precise and very long-distance experiment confirmed that Einstein’s theory still rules the heavens.

From ars technica:

The general theory of relativity is a remarkably successful model for gravity. However, many of the best tests for it don’t push its limits: they measure phenomena where gravity is relatively weak. Some alternative theories predict different behavior in areas subject to very strong gravity, like near the surface of a pulsar—the compact, rapidly rotating remnant of a massive star (also called a neutron star). For that reason, astronomers are very interested in finding a pulsar paired with another high-mass object. One such system has now provided an especially sensitive test of strong gravity.

The system is a binary consisting of a high-mass pulsar and a bright white dwarf locked in mutual orbit with a period of about 2.5 hours. Using optical and radio observations, John Antoniadis and colleagues measured its properties as it spirals toward merger by emitting gravitational radiation. After monitoring the system for a number of orbits, the researchers determined its behavior is in complete agreement with general relativity to a high level of precision.

The binary system was first detected in a survey of pulsars by the Green Bank Telescope (GBT). The pulsar in the system, memorably labeled PSR J0348+0432, emits radio pulses about once every 39 milliseconds (0.039 seconds). Fluctuations in the pulsar’s output indicated that it is in a binary system, though its companion lacked radio emissions. However, the GBT’s measurements were precise enough to pinpoint its location in the sky, which enabled the researchers to find the system in the archives of the Sloan Digital Sky Survey (SDSS). They determined the companion object was a particularly bright white dwarf, the remnant of the core of a star similar to our Sun. It and the pulsar are locked in a mutual orbit about 2.46 hours in length.

Following up with the Very Large Telescope (VLT) in Chile, the astronomers built up enough data to model the system. Pulsars are extremely dense, packing a star’s worth of mass into a sphere roughly 10 kilometers in radius—far too small to see directly. White dwarfs are less extreme, but they still involve stellar masses in a volume roughly equivalent to Earth’s. That means the objects in the PSR J0348+0432 system can orbit much closer to each other than stars could—as little as 0.5 percent of the average Earth-Sun separation, or 1.2 times the Sun’s radius.

The pulsar itself was interesting because of its relatively high mass: about 2.0 times that of the Sun (most observed pulsars are about 1.4 solar masses). Unlike more mundane objects, pulsar size doesn’t grow with mass; according to some models, a higher mass pulsar may actually be smaller than one with lower mass. As a result, the gravity at the surface of PSR J0348+0432 is far more intense than at a lower-mass counterpart, providing a laboratory for testing general relativity (GR). The gravitational intensity near PSR J0348+0432 is about twice that of other pulsars in binary systems, creating a more extreme environment than previously measured.

According to GR, a binary emits gravitational waves that carry energy away from the system, causing the size of the orbit to shrink. For most binaries, the effect is small, but for compact systems like the one containing PSR J0348+0432, it is measurable. The first such system was found by Russell Hulse and Joseph Taylor; its discovery won the two astronomers the Nobel Prize.

The shrinking of the orbit results in a decrease in the orbital period as the two objects revolve around each other more quickly. In this case, the researchers measured the effect by studying the change in the spectrum of light emitted by the white dwarf, as well as fluctuations in the emissions from the pulsar. (This study also helped demonstrate the two objects were in mutual orbit, rather than being coincidentally in the same part of the sky.)

To test agreement with GR, physicists established a set of observable quantities. These include the rate of orbit decrease (which is a reflection of the energy loss to gravitational radiation) and something called the Shapiro delay. The latter phenomenon occurs because the pulsar’s radio pulses must travel through the intense gravitational field of its white dwarf companion on their way out of the system. This effect depends on the relative orientation of the pulsar to us, but alternative models also predict different observable results.
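
For reference, the article doesn’t spell out the prediction being tested, but the leading-order general-relativistic result for the orbital decay of a binary is the standard quadrupole (Peters) formula. Writing the pulsar and companion masses as m_p and m_c, the orbital period as P_b and the eccentricity as e (symbols introduced here, not in the excerpt):

\dot{P}_b = -\frac{192\,\pi\,G^{5/3}}{5\,c^{5}} \left(\frac{P_b}{2\pi}\right)^{-5/3} \frac{m_p\,m_c}{(m_p + m_c)^{1/3}} \; \frac{1 + \tfrac{73}{24}e^{2} + \tfrac{37}{96}e^{4}}{(1 - e^{2})^{7/2}}

Plugging the measured masses and the 2.46-hour period into this expression gives the predicted shrinkage of the orbital period; an observed decrease larger than this would point to extra energy loss not accounted for by GR.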

In the case of the PSR J0348+0432 system, the change in orbital period and the Shapiro delay agreed with the predictions of GR, placing strong constraints on alternative theories. The researchers were also able to rule out energy loss from other, non-gravitational sources (rotation or electromagnetic phenomena). If the system continues as models predict, the white dwarf and pulsar will merge in about 400 million years—we don’t know what the product of that merger will be, so astronomers are undoubtedly marking their calendars now.

The results are of potential use for the Laser Interferometer Gravitational-wave Observatory (LIGO) and other ground-based gravitational-wave detectors. These instruments are sensitive to the final death spiral of binaries like the one containing PSR J0348+0432. The current detection and observation strategies involve “templates,” or theoretical models of the gravitational wave signal from binaries. All information about the behavior of close pulsar binaries helps gravitational-wave astronomers refine those templates, which should improve the chances of detection.

Of course, no theory can be “proven right” by experiment or observation—data provides evidence in support of or against the predictions of a particular model. However, the PSR J0348+0432 binary results placed stringent constraints on any alternative model to GR in the strong-gravity regime. (Certain other alternative models focus on altering gravity on large scales to explain dark energy and the accelerating expansion of the Universe.) Based on this new data, only theories that agree with GR to high precision are still standing—leaving general relativity the continuing champion theory of gravity.

Read the entire article after the jump.

Image: Artist’s impression of the PSR J0348+0432 system. The compact pulsar (with beams of radio emission) produces a strong distortion of spacetime (illustrated by the green mesh). Courtesy of Science Mag.

Google’s AI

The collective IQ of Google, the company, inched up a few notches in January of 2013 when it hired Ray Kurzweil. Over the coming years, if the work of Kurzweil and his many colleagues pays off, the company’s intelligence may surge significantly. This time, though, it will be thanks to work on artificial intelligence (AI), machine learning and (very) big data.

From Technology Review:

When Ray Kurzweil met with Google CEO Larry Page last July, he wasn’t looking for a job. A respected inventor who’s become a machine-intelligence futurist, Kurzweil wanted to discuss his upcoming book How to Create a Mind. He told Page, who had read an early draft, that he wanted to start a company to develop his ideas about how to build a truly intelligent computer: one that could understand language and then make inferences and decisions on its own.

It quickly became obvious that such an effort would require nothing less than Google-scale data and computing power. “I could try to give you some access to it,” Page told Kurzweil. “But it’s going to be very difficult to do that for an independent company.” So Page suggested that Kurzweil, who had never held a job anywhere but his own companies, join Google instead. It didn’t take Kurzweil long to make up his mind: in January he started working for Google as a director of engineering. “This is the culmination of literally 50 years of my focus on artificial intelligence,” he says.

Kurzweil was attracted not just by Google’s computing resources but also by the startling progress the company has made in a branch of AI called deep learning. Deep-learning software attempts to mimic the activity in layers of neurons in the neocortex, the wrinkly 80 percent of the brain where thinking occurs. The software learns, in a very real sense, to recognize patterns in digital representations of sounds, images, and other data.

The basic idea—that software can simulate the neocortex’s large array of neurons in an artificial “neural network”—is decades old, and it has led to as many disappointments as breakthroughs. But because of improvements in mathematical formulas and increasingly powerful computers, computer scientists can now model many more layers of virtual neurons than ever before.

With this greater depth, they are producing remarkable advances in speech and image recognition. Last June, a Google deep-learning system that had been shown 10 million images from YouTube videos proved almost twice as good as any previous image recognition effort at identifying objects such as cats. Google also used the technology to cut the error rate on speech recognition in its latest Android mobile software. In October, Microsoft chief research officer Rick Rashid wowed attendees at a lecture in China with a demonstration of speech software that transcribed his spoken words into English text with an error rate of 7 percent, translated them into Chinese-language text, and then simulated his own voice uttering them in Mandarin. That same month, a team of three graduate students and two professors won a contest held by Merck to identify molecules that could lead to new drugs. The group used deep learning to zero in on the molecules most likely to bind to their targets.

Google in particular has become a magnet for deep learning and related AI talent. In March the company bought a startup cofounded by Geoffrey Hinton, a University of Toronto computer science professor who was part of the team that won the Merck contest. Hinton, who will split his time between the university and Google, says he plans to “take ideas out of this field and apply them to real problems” such as image recognition, search, and natural-language understanding.

All this has normally cautious AI researchers hopeful that intelligent machines may finally escape the pages of science fiction. Indeed, machine intelligence is starting to transform everything from communications and computing to medicine, manufacturing, and transportation. The possibilities are apparent in IBM’s Jeopardy!-winning Watson computer, which uses some deep-learning techniques and is now being trained to help doctors make better decisions. Microsoft has deployed deep learning in its Windows Phone and Bing voice search.

Extending deep learning into applications beyond speech and image recognition will require more conceptual and software breakthroughs, not to mention many more advances in processing power. And we probably won’t see machines we all agree can think for themselves for years, perhaps decades—if ever. But for now, says Peter Lee, head of Microsoft Research USA, “deep learning has reignited some of the grand challenges in artificial intelligence.”

Building a Brain

There have been many competing approaches to those challenges. One has been to feed computers with information and rules about the world, which required programmers to laboriously write software that is familiar with the attributes of, say, an edge or a sound. That took lots of time and still left the systems unable to deal with ambiguous data; they were limited to narrow, controlled applications such as phone menu systems that ask you to make queries by saying specific words.

Neural networks, developed in the 1950s not long after the dawn of AI research, looked promising because they attempted to simulate the way the brain worked, though in greatly simplified form. A program maps out a set of virtual neurons and then assigns random numerical values, or “weights,” to connections between them. These weights determine how each simulated neuron responds—with a mathematical output between 0 and 1—to a digitized feature such as an edge or a shade of blue in an image, or a particular energy level at one frequency in a phoneme, the individual unit of sound in spoken syllables.

Programmers would train a neural network to detect an object or phoneme by blitzing the network with digitized versions of images containing those objects or sound waves containing those phonemes. If the network didn’t accurately recognize a particular pattern, an algorithm would adjust the weights. The eventual goal of this training was to get the network to consistently recognize the patterns in speech or sets of images that we humans know as, say, the phoneme “d” or the image of a dog. This is much the same way a child learns what a dog is by noticing the details of head shape, behavior, and the like in furry, barking animals that other people call dogs.
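
As a concrete, heavily simplified illustration of that training loop, here is a minimal Python sketch: a single layer of simulated neurons with random initial weights and sigmoid outputs between 0 and 1, whose weights are nudged whenever its response to a pattern is wrong. The data, sizes and learning rate are invented for illustration; this is a toy, not a description of Google’s systems.

import numpy as np
rng = np.random.default_rng(0)
# 200 toy "patterns" of 4 binary features; the target label is simply feature 0.
X = rng.integers(0, 2, size=(200, 4)).astype(float)
y = X[:, 0]
weights = rng.normal(scale=0.1, size=4)      # random initial connection weights
bias = 0.0
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))          # squashes each response into the range 0..1
learning_rate = 1.0
for _ in range(2000):
    output = sigmoid(X @ weights + bias)     # the network's response to every pattern
    error = output - y                       # how wrong each response is
    grad = error * output * (1.0 - output)   # gradient of the squared error
    weights -= learning_rate * (X.T @ grad) / len(X)   # adjust the weights...
    bias -= learning_rate * grad.mean()                # ...to reduce the error
accuracy = np.mean((sigmoid(X @ weights + bias) > 0.5) == y)
print(f"fraction of patterns recognized correctly: {accuracy:.2f}")

Real speech and image networks differ mainly in scale: many layers, millions of weights and far richer input features, but the adjust-the-weights idea is the same.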

But early neural networks could simulate only a very limited number of neurons at once, so they could not recognize patterns of great complexity. They languished through the 1970s.

In the mid-1980s, Hinton and others helped spark a revival of interest in neural networks with so-called “deep” models that made better use of many layers of software neurons. But the technique still required heavy human involvement: programmers had to label data before feeding it to the network. And complex speech or image recognition required more computer power than was then available.

Finally, however, in the last decade Hinton and other researchers made some fundamental conceptual breakthroughs. In 2006, Hinton developed a more efficient way to teach individual layers of neurons. The first layer learns primitive features, like an edge in an image or the tiniest unit of speech sound. It does this by finding combinations of digitized pixels or sound waves that occur more often than they should by chance. Once that layer accurately recognizes those features, they’re fed to the next layer, which trains itself to recognize more complex features, like a corner or a combination of speech sounds. The process is repeated in successive layers until the system can reliably recognize phonemes or objects.
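
To make the layer-by-layer idea concrete: Hinton’s 2006 work used restricted Boltzmann machines, but the toy sketch below substitutes plain autoencoders, which are easier to write down, to show the same greedy recipe. Each layer is trained to reconstruct its own input, and its hidden activations become the features handed to the next layer. Every name, size and number here is an invented illustration, not Hinton’s or Google’s code.

import numpy as np
rng = np.random.default_rng(1)
def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))
def train_layer(data, n_hidden, epochs=500, lr=0.5):
    # Learn one layer's features by minimizing how badly it reconstructs its input.
    n_in = data.shape[1]
    W_enc = rng.normal(scale=0.1, size=(n_in, n_hidden))
    W_dec = rng.normal(scale=0.1, size=(n_hidden, n_in))
    for _ in range(epochs):
        hidden = sigmoid(data @ W_enc)        # this layer's features
        recon = sigmoid(hidden @ W_dec)       # attempt to rebuild the input from them
        err = recon - data
        d_recon = err * recon * (1.0 - recon)
        d_hidden = (d_recon @ W_dec.T) * hidden * (1.0 - hidden)
        W_dec -= lr * (hidden.T @ d_recon) / len(data)
        W_enc -= lr * (data.T @ d_hidden) / len(data)
    return W_enc
X = rng.integers(0, 2, size=(500, 16)).astype(float)   # toy stand-in for digitized pixels
W1 = train_layer(X, n_hidden=8)     # first layer: primitive features of the raw input
H1 = sigmoid(X @ W1)
W2 = train_layer(H1, n_hidden=4)    # second layer: features of the first layer's features
H2 = sigmoid(H1 @ W2)
print("feature shapes, layer by layer:", H1.shape, H2.shape)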

Read the entire fascinating article following the jump.

Image courtesy of Wired.

Corporate-Speak 101

We believe that corporate-speak is a dangerous starting point that may eventually lead us to Orwellian doublethink. After all, what could possibly be the purpose of using the words “going forward” in place of “in the future”, if not to convince employees that the past never happened? Some of our favorite management buzzwords and euphemisms are below.

From the Guardian:

Among the most spirit-sapping indignities of office life is the relentless battering of workers’ ears by the strangled vocabulary of management-speak. It might even seem to some innocent souls as though all you need to do to acquire a high-level job is to learn its stultifying jargon. Bureaucratese is a maddeningly viral kind of Unspeak engineered to deflect blame, complicate simple ideas, obscure problems, and perpetuate power relations. Here are some of its most dismaying manifestations.

1 Going forward

Top of many people’s hate list is this now-venerable way of saying “from now on” or “in future”. It has the rhetorical virtue of wiping clean the slate of the past (perhaps because “mistakes were made”), and implying a kind of thrustingly strategic progress, even though none is likely to be made as long as the working day is made up of funereal meetings where people say things like “going forward”.

2 Drill down

Far be it from me to suggest that managers prefer metaphors that evoke huge pieces of phallic machinery, but why else say “drill down” when you just mean “look at in detail”?

3 Action

Some people despise verbings (where a noun begins to be used as a verb) on principle, though who knows what they say instead of “texting”. In his Dictionary of Weasel Words, the doyen of management-jargon mockery Don Watson defines “to action” simply as “do”. This is not quite right, but “action” can probably always be replaced with a more specific verb, such as “reply” or “fulfil”, even if they sound less excitingly action-y. The less said of the mouth-full-of-pebbles construction “actionables”, the better.

4 End of play

The curious strain of kiddy-talk in bureaucratese perhaps stems from a hope that infantilised workers are more docile. A manager who tells you to do something “by end of play” – in other words, today – is trying to hypnotise you into thinking you are having fun. This is not a game of cricket.

5 Deliver

What you do when you’ve actioned something. “Delivering” (eg “results”) borrows the dynamic, space-traversing connotations of a postal service — perhaps a post-apocalyptic one such as that started by Kevin Costner in The Postman. Inevitably, as with “actionables”, we also have “deliverables” (“key deliverables,” Don Watson notes thoughtfully, “are the most important ones”), though by this point more sensitive subordinates might be wishing instead for deliverance.

6 Issues

Calling something a “problem” is bound to scare the horses and focus responsibility on the bosses, so let’s deploy the counselling-speak of “issues”. The critic (and managing editor of the TLS) Robert Potts translates “there are some issues around X” as “there is a problem so big that we are scared to even talk about it directly”. Though it sounds therapeutically nonjudgmental, “issues” can also be a subtly vicious way to imply personal deficiency. If you have “issues” with a certain proposal, maybe you just need to go away and work on your issues.

Read the entire article following the jump.

The Advantages of Shyness

Behavioral scientists have confirmed what shy people of the world have known for quite some time — that timidity and introversion can be beneficial traits. Yes, shyness is not a disorder!

Several studies of humans and animals show that shyness and assertiveness can each be beneficial, depending on the situational context. Researchers have shown that evolution favors both types of personality and, in fact, often rewards adaptability over pathological extremes at either end of the behavioral spectrum.

From the New Scientist:

“Don’t be shy!” It’s an oft-heard phrase in modern western cultures where go-getters and extroverts appear to have an edge and where raising confident, assertive children sits high on the priority list for many parents. Such attitudes are understandable. Timidity really does hold individuals back. “Shy people start dating later, have sex later, get married later, have children later and get promoted later,” says Bernardo Carducci, director of the Shyness Research Institute at Indiana University Southeast in New Albany. In extreme cases shyness can even be pathological, resulting in anxiety attacks and social phobia.

In recent years it has emerged that we are not the only creatures to experience shyness. In fact, it is one of the most obvious character traits in the animal world, found in a wide variety of species from sea anemones and spiders to birds and sheep. But it is also becoming clear that in the natural world fortune doesn’t always favour the bold. Sometimes the shy, cautious individuals are luckier in love and lifespan. The inescapable conclusion is that there is no one “best” personality – each has benefits in different situations – so evolution favours both.

Should we take a lesson from these findings and re-evaluate what it means to be a shy human? Does shyness have survival value for us too? Some researchers think so and are starting to find that people who are shy, sensitive and even anxious have some surprising advantages over more go-getting types. Perhaps it is time to ditch our negative attitude to shyness and accept that it is as valuable as extroversion. Carducci certainly thinks so. “Think about what it would be like if everybody was very bold,” he says. “What would your daily life be like if everybody you encountered was like Lady Gaga?”

One of the first steps in the rehabilitation of shyness came in the 1990s, from work on salamanders. An interest in optimality – the idea that animals are as efficient as possible in their quest for food, mates and resources – led Andrew Sih at the University of California, Davis, to study the behaviour of sunfish and their prey, larval salamanders. In his experiments, he couldn’t help noticing differences between individual salamanders. Some were bolder and more active than others. They ate more and grew faster than their shyer counterparts, but there was a downside. When sunfish were around, the bold salamanders were just “blundering out there and not actually doing the sort of smart anti-predator behaviour that simple optimality theory predicted they would do”, says Sih. As a result, they were more likely to be gobbled up than their shy counterparts.

Until then, the idea that animals have personalities – consistent differences in behaviour between individuals – was considered controversial. Sih’s research forced a rethink. It also spurred further studies, to the extent that today the so-called “shy-bold continuum” has been identified in more than 100 species. In each of these, individuals range from highly “reactive” to highly “proactive”: reactive types being shy, timid, risk-averse and slow to explore novel environments, whereas proactive types are bold, aggressive, exploratory and risk-prone.

Why would these two personality types exist in nature? Sih’s study holds the key. Bold salamander larvae may risk being eaten, but their fast growth is a distinct advantage in the small streams they normally inhabit, which may dry up before more cautious individuals can reach maturity. In other words, each personality has advantages and disadvantages depending on the circumstances. Since natural environments are complex and constantly changing, natural selection may favour first one and then the other or even both simultaneously.

The idea is illustrated even more convincingly by studies of a small European bird, the great tit. The research, led by John Quinn at University College Cork in Ireland, involved capturing wild birds and putting each separately into a novel environment to assess how proactive or reactive it was. Some hunkered down in the fake tree provided and stayed there for the entire 8-minute trial; others immediately began exploring every nook and cranny of the experimental room. The birds were then released back into the wild, to carry on with the business of surviving and breeding. “If you catch those same individuals a year later, they tend to do more or less the same thing,” says Quinn. In other words, exploration is a consistent personality trait. What’s more, by continuously monitoring the birds, a team led by Niels Dingemanse at the Max Planck Institute for Ornithology in Seewiesen, Germany, observed that in certain years the environment favours bold individuals – more survive and they produce more chicks than other birds – whereas in other years the shy types do best.

A great tit’s propensity to explore is usually similar to that of its parents and a genetic component of risk-taking behaviour has been found in this and other species. Even so, nurture seems to play a part in forming animal personalities too (see “Nurturing Temperament”). Quinn’s team has also identified correlations between exploring and key survival behaviours: the more a bird likes to explore, the more willing it is to disperse, take risks and act aggressively. In contrast, less exploratory individuals were better at solving problems to find food.

Read the entire article following the jump.

Image courtesy of Psychology Today.

Totalitarianism in the Age of the Internet

Google executive chairman Eric Schmidt is in a very elite group. Not only does he run a major and very profitable U.S. corporation (and is thus, by extension, a “googillionaire”), he has also been to North Korea.

We excerpt below Schmidt’s recent essay, with co-author Jared Cohen, about freedom in both the real and digital worlds.

From the Wall Street Journal:

How do you explain to people that they are a YouTube sensation, when they have never heard of YouTube or the Internet? That’s a question we faced during our January visit to North Korea, when we attempted to engage with the Pyongyang traffic police. You may have seen videos on the Web of the capital city’s “traffic cops,” whose ballerina-like street rituals, featured in government propaganda videos, have made them famous online. The men and women themselves, however—like most North Koreans—have never seen a Web page, used a desktop computer, or held a tablet or smartphone. They have never even heard of Google (or Bing, for that matter).

Even the idea of the Internet has not yet permeated the public’s consciousness in North Korea. When foreigners visit, the government stages Internet browsing sessions by having “students” look at pre-downloaded and preapproved content, spending hours (as they did when we were there) scrolling up and down their screens in totalitarian unison. We ended up trying to describe the Internet to North Koreans we met in terms of its values: free expression, freedom of assembly, critical thinking, meritocracy. These are uncomfortable ideas in a society where the “Respected Leader” is supposedly the source of all information and where the penalty for defying him is the persecution of you and your family for three generations.

North Korea is at the beginning of a cat-and-mouse game that’s playing out all around the world between repressive regimes and their people. In most of the world, the spread of connectivity has transformed people’s expectations of their governments. North Korea is one of the last holdouts. Until only a few years ago, the price for being caught there with an unauthorized cellphone was the death penalty. Cellphones are now more common in North Korea since the government decided to allow one million citizens to have them; and in parts of the country near the border, the Internet is sometimes within reach as citizens can sometimes catch a signal from China. None of this will transform the country overnight, but one thing is certain: Though it is possible to curb and monitor technology, once it is available, even the most repressive regimes are unable to put it back in the box.

What does this mean for governments and would-be revolutionaries? While technology has great potential to bring about change, there is a dark side to the digital revolution that is too often ignored. There is a turbulent transition ahead for autocratic regimes as more of their citizens come online, but technology doesn’t just help the good guys pushing for democratic reform—it can also provide powerful new tools for dictators to suppress dissent.

Fifty-seven percent of the world’s population still lives under some sort of autocratic regime. In the span of a decade, the world’s autocracies will go from having a minority of their citizens online to a majority. From Tehran to Beijing, autocrats are building the technology and training the personnel to suppress democratic dissent, often with the help of Western companies.

Of course, this is no easy task—and it isn’t cheap. The world’s autocrats will have to spend a great deal of money to build systems capable of monitoring and containing dissident energy. They will need cell towers and servers, large data centers, specialized software, legions of trained personnel and reliable supplies of basic resources like electricity and Internet connectivity. Once such an infrastructure is in place, repressive regimes then will need supercomputers to manage the glut of information.

Despite the expense, everything a regime would need to build an incredibly intimidating digital police state—including software that facilitates data mining and real-time monitoring of citizens—is commercially available right now. What’s more, once one regime builds its surveillance state, it will share what it has learned with others. We know that autocratic governments share information, governance strategies and military hardware, and it’s only logical that the configuration that one state designs (if it works) will proliferate among its allies and assorted others. Companies that sell data-mining software, surveillance cameras and other products will flaunt their work with one government to attract new business. It’s the digital analog to arms sales, and like arms sales, it will not be cheap. Autocracies rich in natural resources—oil, gas, minerals—will be able to afford it. Poorer dictatorships might be unable to sustain the state of the art and find themselves reliant on ideologically sympathetic patrons.

And don’t think that the data being collected by autocracies is limited to Facebook posts or Twitter comments. The most important data they will collect in the future is biometric information, which can be used to identify individuals through their unique physical and biological attributes. Fingerprints, photographs and DNA testing are all familiar biometric data types today. Indeed, future visitors to repressive countries might be surprised to find that airport security requires not just a customs form and passport check, but also a voice scan. In the future, software for voice and facial recognition will surpass all the current biometric tests in terms of accuracy and ease of use.

Today’s facial-recognition systems use a camera to zoom in on an individual’s eyes, mouth and nose, and extract a “feature vector,” a set of numbers that describes key aspects of the image, such as the precise distance between the eyes. (Remember, in the end, digital images are just numbers.) Those numbers can be fed back into a large database of faces in search of a match. The accuracy of this software is limited today (by, among other things, pictures shot in profile), but the progress in this field is remarkable. A team at Carnegie Mellon demonstrated in a 2011 study that the combination of “off-the-shelf” facial recognition software and publicly available online data (such as social-network profiles) can match a large number of faces very quickly. With cloud computing, it takes just seconds to compare millions of faces. The accuracy improves with people who have many pictures of themselves available online—which, in the age of Facebook, is practically everyone.
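
As an illustration of that matching step: a feature vector really is just a list of numbers, and finding a match amounts to finding the stored vector that lies closest to it. The names and three-number vectors below are invented; real systems use far longer vectors and far larger databases.

    import math

    def distance(a, b):
        """Euclidean distance between two feature vectors (lists of numbers)."""
        return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

    def best_match(probe, database):
        """Return the name whose stored feature vector is closest to the probe."""
        return min(database, key=lambda name: distance(probe, database[name]))

    # Invented feature vectors, e.g. normalised eye spacing, nose width, mouth width.
    database = {
        "person_a": [0.42, 0.18, 0.33],
        "person_b": [0.51, 0.22, 0.29],
        "person_c": [0.38, 0.25, 0.35],
    }
    probe = [0.50, 0.21, 0.30]          # vector extracted from a new photograph
    print(best_match(probe, database))  # person_b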

Dictators, of course, are not the only beneficiaries from advances in technology. In recent years, we have seen how large numbers of young people in countries such as Egypt and Tunisia, armed with little more than mobile phones, can fuel revolutions. Their connectivity has helped them to challenge decades of authority and control, hastening a process that, historically, has often taken decades. Still, given the range of possible outcomes in these situations—brutal crackdown, regime change, civil war, transition to democracy—it is also clear that technology is not the whole story.

Observers and participants alike have described the recent Arab Spring as “leaderless”—but this obviously has a downside to match its upside. In the day-to-day process of demonstrating, it was possible to retain a decentralized command structure (safer too, since the regimes could not kill the movement simply by capturing the leaders). But, over time, some sort of centralized authority must emerge if a democratic movement is to have any direction. Popular uprisings can overthrow dictators, but they’re only successful afterward if opposition forces have a plan and can execute it. Building a Facebook page does not constitute a plan.

History suggests that opposition movements need time to develop. Consider the African National Congress in South Africa. During its decades of exile from the apartheid state, the organization went through multiple iterations, and the men who would go on to become South African presidents (Nelson Mandela, Thabo Mbeki and Jacob Zuma) all had time to build their reputations, credentials and networks while honing their operational skills. Likewise with Lech Walesa and his Solidarity trade union in Eastern Europe. A decade passed before Solidarity leaders could contest seats in the Polish parliament, and their victory paved the way for the fall of communism.

Read the entire essay after the jump.

Image: North Korean students work in a computer lab. Courtesy of AP Photo/David Guttenfelder / Washington Post.

Your Genes. But Are They Your Intellectual Property?

The genetic code buried deep within your cells, described in a unique sequence encoded in your DNA, defines who you are at the most fundamental level. The 20,000 or so genes in your genome establish how you are constructed and how you function (and malfunction). These genes are common to many, but their expression belongs to only you.

Yet companies are out to patent strings of this genetic code. While many would argue that patent ownership is a sound business strategy in most industries, it is morally indefensible in this case. Rafts of bio-ethicists have argued the pros and cons of patenting animal and human genetic information for decades, and as we speak a case has made it to the U.S. Supreme Court. Can a company claim ownership of your genetic code? While the claims of business over an individual’s genetic code are dubious at best, it is clear that public consensus, a clear ethical framework and, consequently, a sound legal doctrine lag far behind the actual science.

From the Guardian:

Tracey Barraclough made a grim discovery in 1998. She found she possessed a gene that predisposed her to cancer. “I was told I had up to an 85% chance of developing breast cancer and an up to 60% chance of developing ovarian cancer,” she recalls. The piece of DNA responsible for her grim predisposition is known as the BRCA1 gene.

Tracey was devastated, but not surprised. She had sought the gene test because her mother, grandmother and great-grandmother had all died of ovarian cancer in their 50s. Four months later Tracey had her womb and ovaries removed to reduce her cancer risk. A year later she had a double mastectomy.

“Deciding to embark on that was the loneliest and most agonising journey of my life,” Tracey says. “My son, Josh, was five at the time and I wanted to live for him. I didn’t want him to grow up without a mum.” Thirteen years later, Tracey describes herself as “100% happy” with her actions. “It was the right thing for me. I feel that losing my mother, grandmother and great-grandmother hasn’t been in vain.”

The BRCA1 gene that Tracey inherited is expressed in breast tissue where it helps repair damaged DNA. In its mutated form, found in a small percentage of women, damaged DNA cannot be repaired and carriers become highly susceptible to cancers of the breast and ovaries.

The discovery of BRCA1 in 1994, and a second version, BRCA2, discovered a year later, remains one of the greatest triumphs of modern genetics. It allows doctors to pinpoint women at high risk of breast or ovarian cancer in later life. Stars such as Sharon Osbourne and Christina Applegate have been among those who have had BRCA1 diagnoses and subsequent mastectomies. BRCA technology has saved many lives over the years. However, it has also triggered a major division in the medical community, a split that last week ended up before the nine justices of the US supreme court. At issue is the simple but fundamental question: should the law allow companies to patent human genes? It is a battle that has profound implications for genetic research and has embroiled scientists on both sides of the Atlantic in a major argument about the nature of scientific inquiry.

On one side, US biotechnology giant Myriad Genetics is demanding that the US supreme court back the patents it has taken out on the BRCA genes. The company believes it should be the only producer of tests to detect mutations in these genes, a business it has carried out in the United States for more than a decade.

On the other side, a group of activists, represented by lawyers from the American Civil Liberties Union, argues that it is fundamentally absurd and immoral to claim ownership of humanity’s shared genetic heritage and demands that the court ban patents. How can anyone think that any individual or company should enjoy exclusive use of naturally occurring DNA sequences pertinent to human diseases, they ask?

It is a point stressed by Gilda Witte, head of Ovarian Cancer Action in the UK. “The idea that you can hold a patent to a piece of human DNA is just wrong. More and more genes that predispose individuals to cancers and other conditions are being discovered by scientists all the time. If companies like Myriad are allowed to hold more and more patents like the ones they claim for BRCA1 and BRCA2, the cost of diagnosing disease is going to soar.”

For its part, Myriad denies it has tried to patent human DNA on its own. Instead, the company argues that its patents cover the techniques it has developed to isolate the BRCA1 and BRCA2 genes and the chemical methods it has developed to make it possible to analyse the genes in the laboratory. Mark Capone, the president of Myriad, says his company has invested $500m in developing its BRCA tests.

“It is certainly true that people will not invest in medicine unless there is some return on that investment,” said Justin Hitchcock, a UK expert on patent law and medicine. “That is why Myriad has sought these patents.”

In Britain, women such as Tracey Barraclough have been given BRCA tests for free on the NHS. In the US, where Myriad holds patents, those seeking such tests have to pay the company $4,000. It might therefore seem to be a peculiarly American debate based on the nation’s insistence on having a completely privatised health service. Professor Alan Ashworth, director of the Institute for Cancer Research, disagreed, however.

“I think that, if Myriad win this case, the impact will be retrograde for the whole of genetic research across the globe,” he said. “The idea that you can take a piece of DNA and claim that only you are allowed to test for its existence is wrong. It stinks, morally and intellectually. People are becoming easier about using and exchanging genetic information at present. Any move to back Myriad would take us back decades.”

Issuing patents is a complicated business, of course, a point demonstrated by the story of monoclonal antibodies. Developed in British university labs in the 1970s, these artificial versions of natural antibodies won a Nobel prize in 1984 for their inventors, a team led by César Milstein at Cambridge University. Monoclonal antibodies target disease sites in the human body and can be fitted with toxins to be sent like tiny Exocet missiles to carry their lethal payloads straight to a tumour.

When Milstein and his team finished their research, they decided to publish their results straight away. Once in the public domain, the work could no longer claim patent protection, a development that enraged the newly elected prime minister, Margaret Thatcher, a former patent lawyer. She, and many others, viewed the monoclonal story as a disaster that could have cost Britain billions.

But over the years this view has become less certain. “If you look at medicines based on monoclonal antibodies today, it is clear these are some of the most valuable on the market,” said Hitchcock. “But that value is based on the layers of inventiveness that have since been added to the basic concept of the monoclonal antibody and has nothing to do with the actual technique itself.”

Read the entire article following the jump.

Image: A museum visitor views a digital representation of the human genome in New York City in 2001. Courtesy of Mario Tama, Getty Images / National Geographic.

One Way Ticket to Mars

You could be forgiven for thinking this might be a lonesome bus trip to Mars, Pennsylvania, or to the New Jersey-based North American headquarters of Mars, purveyor of many things chocolaty, including M&Ms, Mars Bars and Snickers. This one-way ticket is further afield, to the Red Planet, and comes from a company known as Mars One. Estimated time of departure: 2023.

From the Guardian:

A few months before he died, Carl Sagan recorded a message of hope to would-be Mars explorers, telling them: “Whatever the reason you’re on Mars is, I’m glad you’re there. And I wish I was with you.”

On Monday, 17 years after the pioneering astronomer set out his hopeful vision of the future in 1996, a company from the Netherlands is proposing to turn Sagan’s dreams of reaching Mars into reality. The company, Mars One, plans to send four astronauts on a trip to the Red Planet to set up a human colony in 2023. But there are a couple of serious snags.

Firstly, when on Mars their bodies will have to adapt to surface gravity that is 38% of that on Earth. It is thought that this would cause such a total physiological change in their bone density, muscle strength and circulation that voyagers would no longer be able to survive in Earth’s conditions. Secondly, and directly related to the first, they will have to say goodbye to all their family and friends, as the deal doesn’t include a return ticket.

The Mars One website states that a return “cannot be anticipated nor expected”. To return, they would need a fully assembled and fuelled rocket capable of escaping the gravitational field of Mars, on-board life support systems capable of up to a seven-month voyage and the capacity either to dock with a space station orbiting Earth or perform a safe re-entry and landing.

“Not one of these is a small endeavour”, the site notes, requiring “substantial technical capacity, weight and cost”.

Nevertheless, the project has already had 10,000 applicants, according to the company’s medical director, Norbert Kraft. When the official search is launched on Monday at the Hotel Pennsylvania in New York, they expect tens of thousands more hopefuls to put their names forward.

Kraft told the Guardian that the applicants so far ranged in age from 18 to at least 62 and, though they include women, they tended to be men.

The reasons they gave for wanting to go were varied, he said. One of three examples Kraft forwarded by email to the Guardian cited Sagan.

An American woman called Cynthia, who gave her age as 32, told the company that it was a “childhood imagining” of hers to go to Mars. She described a trip her mother had taken her on in the early 1990s to a lecture at the University of Wisconsin.

In a communication to Mars One, she said the lecturer had been Sagan and she had asked him if he thought humans would land on Mars in her lifetime. Cynthia said: “He in turn asked me if I wanted to be trapped in a ‘tin can spacecraft’ for the two years it would take to get there. I told him yes, he smiled, and told me in all seriousness, that yes, he absolutely believed that humans would reach Mars in my lifetime.”

She told the project: “When I first heard about the Mars One project I thought, this is my chance – that childhood dream could become a reality. I could be one of the pioneers, building the first settlement on Mars and teaching people back home that there are still uncharted territories that humans can reach for.”

The prime attributes Mars One is looking for in astronaut-settlers are resilience, adaptability, curiosity, ability to trust and resourcefulness, according to Kraft. They must also be over 18.

Professor Gerard ‘t Hooft, winner of the Nobel prize for theoretical physics in 1999 and lecturer of theoretical physics at the University of Utrecht, Holland, is an ambassador for the project. ‘T Hooft admits there are unknown health risks. The radiation is “of quite a different nature” than anything that has been tested on Earth, he told the BBC.

Founded in 2010 by Bas Lansdorp, an engineer, Mars One says it has developed a realistic road map and financing plan for the project based on existing technologies and that the mission is perfectly feasible. The website states that the basic elements required for life are already present on the planet. For instance, water can be extracted from ice in the soil and Mars has sources of nitrogen, the primary element in the air we breathe. The colony will be powered by specially adapted solar panels, it says.

In March, Mars One said it had signed a contract with the American firm Paragon Space Development Corporation to take the first steps in developing the life support system and spacesuits fit for the mission.

The project will cost a reported $6bn (£4bn), a sum Lansdorp has said he hopes will be met partly by selling broadcasting rights. “The revenue garnered by the London Olympics was almost enough to finance a mission to Mars,” Lansdorp said, in an interview with ABC News in March.

Another ambassador to the project is Paul Römer, the co-creator of Big Brother, one of the first reality TV shows and one of the most successful.

On the website, Römer gave an indication of how the broadcasting of the project might proceed: “This mission to Mars can be the biggest media event in the world,” said Römer. “Reality meets talent show with no ending and the whole world watching. Now there’s a good pitch!”

The aim is to establish a permanent human colony, according to the company’s website. The first team would land on the surface of Mars in 2023 to begin constructing the colony, with a team of four astronauts every two years after that.

The project is not without its sceptics, however, and concerns have been raised about how astronauts might get to the surface and establish a colony with all the life support and other requirements needed. There were also concerns over the health implications for the applicants.

Dr Veronica Bray, from the University of Arizona’s lunar and planetary laboratory, told BBC News that Earth was protected from solar winds by a strong magnetic field, without which it would be difficult to survive. The Martian surface is very hostile to life. There is no liquid water, the atmospheric pressure is “practically a vacuum”, radiation levels are higher and temperatures vary wildly. High radiation levels can lead to increased cancer risk, a lowered immune system and possibly infertility, she said.

To minimise radiation, the project team will cover the domes they plan to build with several metres of soil, which the colonists will have to dig up.

The mission hopes to inspire generations to “believe that all things are possible, that anything can be achieved” much like the Apollo moon landings.

“Mars One believes it is not only possible, but imperative that we establish a permanent settlement on Mars in order to accelerate our understanding of the formation of the solar system, the origins of life, and of equal importance, our place in the universe”, it says.

Read the entire article following the jump.

Image: Panoramic View From ‘Rocknest’ Position of Curiosity Mars Rover. Courtesy of JPL / NASA.

Moist and Other Words We Hate

Some words give us the creeps: they raise the hair on the back of our heads, make us squirm and give us an internal shudder. “Moist” is such a word.

From Slate:

The George Saunders story “Escape From Spiderhead,” included in his much praised new book Tenth of December, is not for the squeamish or the faint of heart. The sprawling, futuristic tale delves into several potentially unnerving topics: suicide, sex, psychotropic drugs. It includes graphic scenes of self-mutilation. It employs the phrases “butt-squirm,” “placental blood,” and “thrusting penis.” At one point, Saunders relates a conversation between two characters about the application of medicinal cream to raw, chafed genitals.

Early in the story, there is a brief passage in which the narrator, describing a moment of postcoital amorousness, says, “Everything seemed moist, permeable, sayable.” This sentence doesn’t really stand out from the rest—in fact, it’s one of the less conspicuous sentences in the story. But during a recent reading of “Escape From Spiderhead” in Austin, Texas, Saunders says he encountered something unexpected. “I’d texted a cousin of mine who was coming with her kids (one of whom is in high school) just to let her know there was some rough language,” he recalls. “Afterwards she said she didn’t mind fu*k, but hated—wait for it—moist. Said it made her a little physically ill. Then I went on to Jackson, read there, and my sister Jane was in the audience—and had the same reaction. To moist.”

Mr. Saunders, say hello to word aversion.

It’s about to get really moist in here. But first, some background is in order. The phenomenon of word aversion—seemingly pedestrian, inoffensive words driving some people up the wall—has garnered increasing attention over the past decade or so. In a recent post on Language Log, University of Pennsylvania linguistics professor Mark Liberman defined the concept as “a feeling of intense, irrational distaste for the sound or sight of a particular word or phrase, not because its use is regarded as etymologically or logically or grammatically wrong, nor because it’s felt to be over-used or redundant or trendy or non-standard, but simply because the word itself somehow feels unpleasant or even disgusting.”

So we’re not talking about hating how some people say laxadaisical instead of lackadaisical or wanting to vigorously shake teenagers who can’t avoid using the word like between every other word of a sentence. If you can’t stand the word tax because you dislike paying taxes, that’s something else, too. (When recently asked about whether he harbored any word aversions, Harvard University cognition and education professor Howard Gardner offered up webinar, noting that these events take too much time to set up, often lack the requisite organization, and usually result in “a singularly unpleasant experience.” All true, of course, but that sort of antipathy is not what word aversion is all about.)

Word aversion is marked by strong reactions triggered by the sound, sight, and sometimes even the thought of certain words, according to Liberman. “Not to the things that they refer to, but to the word itself,” he adds. “The feelings involved seem to be something like disgust.”

Participants on various message boards and online forums have noted serious aversions to, for instance, squab, cornucopia, panties, navel, brainchild, crud, slacks, crevice, and fudge, among numerous others. Ointment, one Language Log reader noted in 2007, “has the same mouth-feel as moist, yet it’s somehow worse.” In response to a 2009 post on the subject by Ben Zimmer, one commenter confided: “The word meal makes me wince. Doubly so when paired with hot.” (Nineteen comments later, someone agreed, declaring: “Meal is a repulsive word.”) In many cases, real-life word aversions seem no less bizarre than when the words mattress and tin induce freak-outs on Monty Python’s Flying Circus. (The Monty Python crew knew a thing or two about annoying sounds.)

Jason Riggle, a professor in the department of linguistics at the University of Chicago, says word aversions are similar to phobias. “If there is a single central hallmark to this, it’s probably that it’s a more visceral response,” he says. “The [words] evoke nausea and disgust rather than, say, annoyance or moral outrage. And the disgust response is triggered because the word evokes a highly specific and somewhat unusual association with imagery or a scenario that people would typically find disgusting—but don’t typically associate with the word.” These aversions, Riggle adds, don’t seem to be elicited solely by specific letter combinations or word characteristics. “If we collected enough of [these words], it might be the case that the words that fall in this category have some properties in common,” he says. “But it’s not the case that words with those properties in common always fall in the category.”

So back to moist. If pop cultural references, Internet blog posts, and social media are any indication, moist reigns supreme in its capacity to disgust a great many of us. Aversion to the word has popped up on How I Met Your Mother and Dead Like Me. VH1 declared that using the word moist is enough to make a man “undateable.” In December, Huffington Post’s food section published a piece suggesting five alternatives to the word moist so the site could avoid its usage when writing about various cakes. Readers of The New Yorker flocked to Facebook and Twitter to choose moist as the one word they would most like to be eliminated from the English language. In a survey of 75 Mississippi State University students from 2009, moist placed second only to vomit as the ugliest word in the English language. In a 2011 follow-up survey of 125 students, moist pulled into the ugly-word lead—vanquishing a greatest hits of gross that included phlegm, ooze, mucus, puke, scab, and pus. Meanwhile, there are 7,903 people on Facebook who like the “interest” known as “I Hate the Word Moist.” (More than 5,000 other Facebook users give the thumbs up to three different moist-hatred Facebook pages.)

Being grossed out by the word moist is not beyond comprehension. It’s squishy-seeming, and, to some, specifically evocative of genital regions and undergarments. These qualities are not unusual when it comes to word aversion. Many hated words refer to “slimy things, or gross things, or names for garments worn in potentially sexual areas, or anything to do with food, or suckling, or sexual overtones,” says Riggle. But other averted words are more confounding, notes Liberman. “There is a list of words that seem to have sexual connotations that are among the words that elicit this kind of reaction—moist being an obvious one,” he says. “But there are other words like luggage, and pugilist, and hardscrabble, and goose pimple, and squab, and so on, which I guess you could imagine phonic associations between those words and something sexual, but it certainly doesn’t seem obvious.”

So then the question becomes: What is it about certain words that makes certain people want to hurl?

Riggle thinks the phenomenon may be dependent on social interactions and media coverage. “Given that, as far back as the aughts, there were comedians making jokes about hating [moist], people who were maybe prone to have that kind of reaction to one of these words, surely have had it pointed out to them that it’s an icky word,” he says. “So, to what extent is it really some sort of innate expression that is independently arrived at, and to what extent is it sort of socially transmitted? Disgust is really a very social emotion.”

And in an era of YouTube, Twitter, Vine, BuzzFeed top-20 gross-out lists, and so on, trends, even the most icky ones, spread fast. “There could very well be a viral aspect to this, where either through the media or just through real-world personal connections, the reaction to some particular word—for example, moist—spreads,” says Liberman. “But that’s the sheerest speculation.”

Words do have the power to disgust and repulse, though—that, at least, has been demonstrated in scholarly investigations. Natasha Fedotova, a Ph.D. student studying psychology at the University of Pennsylvania, recently conducted research examining the extent to which individuals connect the properties of an especially repellent thing to the word that represents it. “For instance,” she says, “the word rat, which stands for a disgusting animal, can contaminate an edible object [such as water] if the two touch. This result cannot be explained solely in terms of the tendency of the word to act as a reminder of the disgusting entity because the effect depends on direct physical contact with the word.” Put another way, if you serve people who are grossed out by rats Big Macs on plates that have the word rat written on them, some people will be less likely to want to eat the portion of the burger that touched the word. Humans, in these instances, go so far as to treat gross-out words “as though they can transfer negative properties through physical contact,” says Fedotova.

Product marketers and advertisers are, not surprisingly, well aware of these tendencies, even if they haven’t read about word aversion (and even though they’ve been known to slip up on the word usage front from time to time, to disastrous effect). George Tannenbaum, an executive creative director at the advertising agency R/GA, says those responsible for creating corporate branding strategies know that consumers are an easily skeeved-out bunch. “Our job as communicators and agents is to protect brands from their own linguistic foibles,” he says. “Obviously there are some words that are just ugly sounding.”

Sometimes, because the stakes are so high, Tannenbaum says clients can be risk averse to an extreme. He recalled working on an ad for a health club that included the word pectoral, which the client deemed to be dangerously close to the word pecker. In the end, after much consideration, they didn’t want to risk any pervy connotations. “We took it out,” he says.

Read the entire article following the jump.

Image courtesy of keep-calm-o-matic.

Idyllic Undeveloped Land: Only 1,200 Light Years Away

Humans may soon make their only home irreversibly uninhabitable. Fortunately, astronomers have recently discovered a couple of exoplanets capable of sustaining life. Unfortunately, they are a little too distant: using current technology it would take around 26 million years to get there. But we can still dream.
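
For the curious, the back-of-the-envelope arithmetic behind a figure like that is just distance divided by speed. The cruise speed below is an assumption, roughly comparable to today's fastest outbound probes, chosen only to show how the answer lands in the tens of millions of years.

    LIGHT_YEAR_KM = 9.461e12       # kilometres in one light year
    SECONDS_PER_YEAR = 3.156e7

    distance_km = 1200 * LIGHT_YEAR_KM
    cruise_speed_km_s = 14.0       # assumed cruise speed, in km/s

    years = distance_km / cruise_speed_km_s / SECONDS_PER_YEAR
    print(f"{years / 1e6:.0f} million years")  # roughly 26 million years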

From the New York Times:

Astronomers said Thursday that they had found the most Earth-like worlds yet known in the outer cosmos, a pair of planets that appear capable of supporting life and that orbit a star 1,200 light-years from here, in the northern constellation Lyra.

They are the two outermost of five worlds circling a yellowish star slightly smaller and dimmer than our Sun, heretofore anonymous and now destined to be known in the cosmic history books as Kepler 62, after NASA’s Kepler spacecraft, which discovered them. These planets are roughly half again as large as Earth and are presumably balls of rock, perhaps covered by oceans with humid, cloudy skies, although that is at best a highly educated guess.

Nobody will probably ever know if anything lives on these planets, and the odds are that humans will travel there only in their faster-than-light dreams, but the news has sent astronomers into heavenly raptures. William Borucki of NASA’s Ames Research Center, head of the Kepler project, described one of the new worlds as the best site for Life Out There yet found in Kepler’s four-years-and-counting search for other Earths. He treated his team to pizza and beer on his own dime to celebrate the find (this being the age of sequestration). “It’s a big deal,” he said.

Looming brightly in each other’s skies, the two planets circle their star at distances of 37 million and 65 million miles, about as far apart as Mercury and Venus in our solar system. Most significantly, their orbits place them both in the “Goldilocks” zone of lukewarm temperatures suitable for liquid water, the crucial ingredient for Life as We Know It.

Goldilocks would be so jealous.

Previous claims of Goldilocks planets with “just so” orbits snuggled up to red dwarf stars much dimmer and cooler than the Sun have had uncertainties in the size and mass and even the existence of these worlds, said David Charbonneau of the Harvard-Smithsonian Center for Astrophysics, an exoplanet hunter and member of the Kepler team.

“This is the first planet that ticks both boxes,” Dr. Charbonneau said, speaking of the outermost planet, Kepler 62f. “It’s the right size and the right temperature.” Kepler 62f is 40 percent bigger than Earth and smack in the middle of the habitable zone, with a 267-day year. In an interview, Mr. Borucki called it the best planet Kepler has found.

Its mate, known as Kepler 62e, is slightly larger — 60 percent bigger than Earth — and has a 122-day orbit, placing it on the inner edge of the Goldilocks zone. It is warmer but also probably habitable, astronomers said.

The Kepler 62 system resembles our own solar system, which also has two planets in the habitable zone: Earth — and Mars, which once had water and would still be habitable today if it were more massive and had been able to hang onto its primordial atmosphere.

The Kepler 62 planets continue a string of breakthroughs in the last two decades in which astronomers have gone from detecting the first known planets belonging to other stars, or exoplanets, broiling globs of gas bigger than Jupiter, to being able to discern smaller and smaller more moderate orbs — iceballs like Neptune and, now, bodies only a few times the mass of Earth, known technically as super-Earths. Size matters in planetary affairs because we can’t live under the crushing pressure of gas clouds on a world like Jupiter. Life as We Know It requires solid ground and liquid water — a gentle terrestrial environment, in other words.

Kepler 62’s newfound worlds are not quite small enough to be considered strict replicas of Earth, but the results have strengthened the already strong conviction among astronomers that the galaxy is littered with billions of Earth-size planets, perhaps as many as one per star, and that astronomers will soon find Earth 2.0, as they call it — our lost twin bathing in the rays of an alien sun.

“Kepler and other experiments are finding planets that remind us more and more of home,” said Geoffrey Marcy, a longtime exoplanet hunter at the University of California, Berkeley, and Kepler team member. “It’s an amazing moment in science. We haven’t found Earth 2.0 yet, but we can taste it, smell it, right there on our technological fingertips.”

Read the entire article following the jump.

Image: The Kepler 62 system: homes away from home. Courtesy of JPL-Caltech/Ames/NASA.

Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.

Financial Apocalypse and Economic Collapse via Excel

It’s long been known that Microsoft PowerPoint fuels corporate mediocrity and causes brain atrophy if used by creative individuals. Now we discover that another flagship product from the Redmond software maker, this time Excel, is to blame for some significant stresses on the global financial system.

From ars technica:

An economics paper claiming that high levels of national debt led to low or negative economic growth could turn out to be deeply flawed as a result of, among other things, an incorrect formula in an Excel spreadsheet. Microsoft’s PowerPoint has been considered evil thanks to the proliferation of poorly presented data and dull slides that are created with it. Might Excel also deserve such hyperbolic censure?

The paper, Growth in a Time of Debt, was written by economists Carmen Reinhart and Kenneth Rogoff and published in 2010. Since publication, it has been cited abundantly by the world’s press and by politicians, including one-time vice-presidential nominee Paul Ryan (R-WI). The link it draws between high levels of debt and negative average economic growth has been used by right-leaning politicians to justify austerity budgets: slashing government expenditure and reducing budget deficits in a bid to curtail the growth of debt.

This link was always controversial, with many economists proposing that the correlation between high debt and low growth was just as likely to have a causal link in the other direction to that proposed by Reinhart and Rogoff: it’s not that high debt causes low growth, but rather that low growth leads to high debt.

However, the underlying numbers and the existence of the correlation were broadly accepted, due in part to Reinhart and Rogoff’s paper not including the source data they used to draw their inferences.

A new paper, however, suggests that the data itself is in error. Thomas Herndon, Michael Ash, and Robert Pollin of the University of Massachusetts, Amherst, tried to reproduce the Reinhart and Rogoff result with their own data, but they couldn’t. So they asked for the original spreadsheets that Reinhart and Rogoff used to better understand what they were doing. Their results, published as “Does High Public Debt Consistently Stifle Economic Growth? A Critique of Reinhart and Rogoff,” suggest that the pro-austerity paper was flawed. A comprehensive assessment of the new paper can be found at the Rortybomb economics blog.

It turns out that the Reinhart and Rogoff spreadsheet contained a simple coding error. The spreadsheet was supposed to calculate average values across twenty countries in rows 30 to 49, but in fact it only calculated values in 15 countries in rows 30 to 44. Instead of the correct formula AVERAGE(L30:L49), the incorrect AVERAGE(L30:L44) was used.
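
The nature of the mistake is easy to reproduce outside Excel: average twenty values but accidentally stop at the fifteenth. The growth figures below are made up purely to show how a truncated range shifts the result; they are not the Reinhart and Rogoff data.

    # Twenty made-up per-country growth figures (percent), one per spreadsheet row.
    growth = [3.1, 2.8, 4.0, 1.5, 2.2, 3.3, 2.9, 1.1, 2.6, 3.8,
              2.4, 1.9, 3.5, 2.1, 2.7,             # "rows 30-44": the 15 that were averaged
              -0.5, 0.8, 1.2, 0.3, 1.6]            # "rows 45-49": the 5 that were dropped

    correct = sum(growth) / len(growth)            # what AVERAGE(L30:L49) would compute
    truncated = sum(growth[:15]) / 15              # what AVERAGE(L30:L44) computes
    print(round(correct, 2), round(truncated, 2))  # the truncated average comes out higher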

There was also a pair of important, but arguably more subjective, errors in the way the data was processed. Reinhart and Rogoff excluded data for some countries in the years immediately after World War II. There might be a reason for this; there might not. The original paper doesn’t justify the exclusion.

The original paper also used an unusual scheme for weighting data. The UK’s 19-year stretch of high debt and moderate growth (during the period between 1946 and 1964, the debt-to-GDP ratio was above 90 percent, and growth averaged 2.4 percent) is conflated into a single data point and treated as equivalent to New Zealand’s single year of debt above 90 percent, during which it experienced growth of -7.6 percent. Some kind of weighting system might be justified, with Herndon, Ash, and Pollin speculating that there is a serial correlation between years.
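
The effect of that weighting choice is easy to see with just the two data points the critique highlights, using the figures quoted above: treating the UK's 19 high-debt years as one observation and New Zealand's single year as another gives a very different answer from weighting every country-year equally.

    uk_years, uk_growth = 19, 2.4     # UK: 19 years above 90% debt, 2.4% average growth
    nz_years, nz_growth = 1, -7.6     # New Zealand: 1 year above 90% debt, -7.6% growth

    # One data point per country, equally weighted (the approach the paper took).
    per_country = (uk_growth + nz_growth) / 2

    # Alternative: weight every country-year equally.
    per_year = (uk_years * uk_growth + nz_years * nz_growth) / (uk_years + nz_years)

    print(round(per_country, 2), round(per_year, 2))  # -2.6 versus 1.9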

Recalculating the data to remove these three issues turns out to provide much weaker evidence for austerity. Although growth is higher in countries with a debt ratio of less than 30 percent (averaging 4.2 percent), there’s no point at which it falls off a cliff and inevitably turns negative. For countries with a debt of between 30 and 60 percent, average growth was 3.1 percent, between 60 and 90 it was 3.2 percent, and above 90 percent it was 2.2 percent. Lower than the low debt growth, but far from the -0.1 percent growth the original paper claimed.

As such, much of the argument that high levels of debt must be avoided, and with it the justification for austerity budgets, evaporates. Whether politicians actually used this paper to shape their beliefs or merely used its findings to give cover for their own pre-existing beliefs is hard to judge.

Excel, of course, isn’t the only thing to blame here. But it played a role. Excel is used extensively in fields such as economics and finance, because it’s an extremely useful tool that can be deceptively simple to use, making it apparently perfect for ad hoc calculations. However, spreadsheet formulae are notoriously fiddly to work with and debug, and Excel has long-standing deficiencies when it comes to certain kinds of statistical analysis.

It’s unlikely that this is the only occasion on which improper use of Excel has produced a bad result with far-reaching consequences. Bruno Iksil, better known as the “London Whale,” racked up billions of dollars of losses for bank JPMorgan. The post mortem of his trades revealed extensive use of Excel, including manual copying and pasting between workbooks and a number of formula errors that resulted in underestimation of risk.

Read the entire article following the jump.

Image: Default Screen of Microsoft Excel 2013, component of Microsoft Office 2013. Courtesy of Microsoft / Wikipedia.

Off World Living

Will humanity ever transcend gravity to become a space-faring race? A simple napkin-based calculation will give you the answer.

From Scientific American:

Optimistic visions of a human future in space seem to have given way to a confusing mix of possibilities, maybes, ifs, and buts. It’s not just the fault of governments and space agencies; basic physics is in part the culprit. Hoisting mass away from Earth is tremendously difficult, and thus far in fifty years we’ve barely managed a total equivalent to a large oil tanker. But there’s hope.

Back in the 1970s the physicist Gerard O’Neill and his students investigated concepts of vast orbital structures capable of sustaining entire human populations. It was the tail end of the Apollo era, and despite the looming specter of budget restrictions and terrestrial pessimism there was still a sense of what might be, what could be, and what was truly within reach.

The result was a series of blueprints for habitats that solved all manner of problems for space life, from artificial gravity (spin up giant cylinders), to atmospheres, and radiation (let the atmosphere shield you). They’re pretty amazing, and they’ve remained perhaps one of the most optimistic visions of a future where we expand beyond the Earth.

But there’s a lurking problem, and it comes down to basic physics. It is awfully hard to move stuff from the surface of our planet into orbit or beyond. O’Neill knew this, as does anyone else who’s thought of grand space schemes. The solution is to ‘live off the land’, extracting raw materials from either the Moon with its shallower gravity well, or by processing asteroids. To get to that point though we’d still have to loft an awful lot of stuff into space – the basic tools and infrastructure have to start somewhere.

And there’s the rub. To put it into perspective I took a look at the amount of ‘stuff’ we’ve managed to get off Earth in the past 50-60 years. It’s actually pretty hard to evaluate; lots of the mass we send up comes back down in short order – either as spent rocket stages or as short-lived low-altitude satellites. But we can still get a feel for it.

To start with, a lower limit on the mass hoisted to space is the present-day artificial satellite population. Altogether there are more than 3,000 satellites up there, plus vast amounts of small debris. Current estimates suggest this amounts to a total of around 6,000 metric tons. The biggest single structure is the International Space Station, currently coming in at about 450 metric tons (about 992,000 lb for reference).

These numbers don’t reflect launch mass – the total of a rocket + payload + fuel. To put that into context, a fully loaded Saturn V was about 2,000 metric tons, but most of that was fuel.

When the Space Shuttle flew, about 115 metric tons (Shuttle plus payload) made it into low-Earth orbit each time. Since there were 135 launches of the Shuttle, that amounts to a total hoisted mass of about 15,000 metric tons over a 30-year period.
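The arithmetic is easy to check. A quick sketch using only the figures quoted above, all in metric tons:

# Back-of-the-envelope check using only the figures quoted in the article,
# all values in metric tons.

satellite_population = 6_000    # estimated mass of everything currently in orbit
iss_mass = 450                  # International Space Station
shuttle_per_flight = 115        # orbiter plus payload reaching low-Earth orbit
shuttle_flights = 135

shuttle_total = shuttle_per_flight * shuttle_flights
print(f"Shuttle program total: about {shuttle_total:,} metric tons")    # ~15,525
print(f"ISS share of the orbiting mass: {iss_mass / satellite_population:.0%}")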

Read the entire article after the jump.

Image: A pair of O’Neill cylinders. NASA ID number AC75-1085. Courtesy of NASA / Wikipedia.

Getting to the Bottom of It: Crimes of Fashion

Living in the West, we are generally at liberty to wear what we wish, certainly in private, and usually in public, subject to public norms of course. That said, one can make a good case for punishing offenders of all genders who commit “crimes of fashion”.

From the Telegraph:

One of the lesser-known effects of the double-dip recession is that young men have been unable to afford belts. All over the Western world we have had to witness exposed bottoms, thanks to lack of funds to pop out and buy a belt or a pair of braces, although many people have tried to convince me that this is actually a conscious “fashion” choice.

A town in Louisiana has fought back against this practice and is now imposing fines for those who choose to fly their trousers at half-mast.  What a shame this new law is, as these poor chaps are exactly that – poor.  They can’t afford a belt!  Fining them isn’t going to help their finances, is it?

These weird people who try to tell me boys actually choose to wear their trousers in this style have said that it harks back to the American prisons, when fashion accessories such as belts were whipped off the inmates in case they did anything foolish with them.  Like wearing a brown one with black shoes.

There is also a school of thought that showing the posterior was a sign to others that you were open to “advances”.  I cited this to a group of boys at a leading school recently and the look of horror that came over their faces was interesting to note.

It’s not just the chaps and belt-makers that are suffering from this recession. Women seem to be unable to afford tops that cover their bra straps. You only have to walk down any high street: you may as well be in a lingerie department.  Showing your underwear is clearly a sign that you are poor – in need of charity, sympathy and probably state-funded assistance.

To play devil’s advocate for one second, say these economic sufferers are actually making a conscious choice to show the rest of us their pants, then maybe Louisiana has the right idea. Fines are perhaps the best way to go. Here is a suggested menu of fines, which you’ll be pleased to know I have submitted to local councils the length and breadth of the nation.

For him

Trousers around bottom – £25 [$37.50]

Brown shoes with a suit – £35 [$52.50]

Tie length too short – £15 [$22.50]

Top button undone when wearing a tie – £20 [$30]

For her

Open toed shoes at formal evening events – £15 [$22.50]

Bra straps on show – £25 [$37.50]

Skirts that are shorter than the eyelashes – £20 [$30]

Too much cleavage as well as too much leg on display – £25 [$37.50]

Wearing heels that you haven’t learned to walk in yet – £12 [$18]

Read the entire article after the jump.

Ray Kurzweil and Living a Googol Years

By all accounts, serial entrepreneur, inventor and futurist Ray Kurzweil is Google’s most famous employee, eclipsing even co-founders Larry Page and Sergey Brin. As an inventor he can lay claim to some impressive firsts, such as the flatbed scanner, optical character recognition and the music synthesizer. As a futurist, for which he is now more recognized in the public consciousness, he ponders longevity, immortality and the human brain.

From the Wall Street Journal:

Ray Kurzweil must encounter his share of interviewers whose first question is: What do you hope your obituary will say?

This is a trick question. Mr. Kurzweil famously hopes an obituary won’t be necessary. And in the event of his unexpected demise, he is widely reported to have signed a deal to have himself frozen so his intelligence can be revived when technology is equipped for the job.

Mr. Kurzweil is the closest thing to a Thomas Edison of our time, an inventor known for inventing. He first came to public attention in 1965, at age 17, appearing on Steve Allen’s TV show “I’ve Got a Secret” to demonstrate a homemade computer he built to compose original music in the style of the great masters.

In the five decades since, he has invented technologies that permeate our world. To give one example, the Web would hardly be the store of human intelligence it has become without the flatbed scanner and optical character recognition, allowing printed materials from the pre-digital age to be scanned and made searchable.

If you are a musician, Mr. Kurzweil’s fame is synonymous with his line of music synthesizers (now owned by Hyundai). As in: “We’re late for the gig. Don’t forget the Kurzweil.”

If you are blind, his Kurzweil Reader relieved one of your major disabilities—the inability to read printed information, especially sensitive private information, without having to rely on somebody else.

In January, he became an employee at Google. “It’s my first job,” he deadpans, adding after a pause, “for a company I didn’t start myself.”

There is another Kurzweil, though—the one who makes seemingly unbelievable, implausible predictions about a human transformation just around the corner. This is the Kurzweil who tells me, as we’re sitting in the unostentatious offices of Kurzweil Technologies in Wellesley Hills, Mass., that he thinks his chances are pretty good of living long enough to enjoy immortality. This is the Kurzweil who, with a bit of DNA and personal papers and photos, has made clear he intends to bring back in some fashion his dead father.

Mr. Kurzweil’s frank efforts to outwit death have earned him an exaggerated reputation for solemnity, even caused some to portray him as a humorless obsessive. This is wrong. Like the best comedians, especially the best Jewish comedians, he doesn’t tell you when to laugh. Of the pushback he receives from certain theologians who insist death is necessary and ennobling, he snarks, “Oh, death, that tragic thing? That’s really a good thing.”

“People say, ‘Oh, only the rich are going to have these technologies you speak of.’ And I say, ‘Yeah, like cellphones.’ “

To listen to Mr. Kurzweil or read his several books (the latest: “How to Create a Mind”) is to be flummoxed by a series of forecasts that hardly seem realizable in the next 40 years. But this is merely a flaw in my brain, he assures me. Humans are wired to expect “linear” change from their world. They have a hard time grasping the “accelerating, exponential” change that is the nature of information technology.

“A kid in Africa with a smartphone is walking around with a trillion dollars of computation circa 1970,” he says. Project that rate forward, and everything will change dramatically in the next few decades.
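A toy illustration of the linear-versus-exponential point. The starting value, increment and doubling time below are arbitrary assumptions, not Kurzweil’s figures; they only show how quickly the two projections diverge over a few decades.

# Arbitrary toy numbers; the contrast between the two growth modes is the point.

start = 1.0
linear_increment = 1.0     # assumed: add one unit of capability per year
doubling_time = 2.0        # assumed: capability doubles every two years

for years in (10, 20, 40):
    linear = start + linear_increment * years
    exponential = start * 2 ** (years / doubling_time)
    print(f"{years:>2} years: linear {linear:,.0f}x, exponential {exponential:,.0f}x")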

“I’m right on the cusp,” he adds. “I think some of us will make it through”—he means baby boomers, who can hope to experience practical immortality if they hang on for another 15 years.

By then, Mr. Kurzweil expects medical technology to be adding a year of life expectancy every year. We will start to outrun our own deaths. And then the wonders really begin. The little computers in our hands that now give us access to all the world’s information via the Web will become little computers in our brains giving us access to all the world’s information. Our world will become a world of near-infinite, virtual possibilities.

How will this work? Right now, says Mr. Kurzweil, our human brains consist of 300 million “pattern recognition” modules. “That’s a large number from one perspective, large enough for humans to invent language and art and science and technology. But it’s also very limiting. Maybe I’d like a billion for three seconds, or 10 billion, just the way I might need a million computers in the cloud for two seconds and can access them through Google.”

We will have vast new brainpower at our disposal; we’ll also have a vast new field in which to operate—virtual reality. “As you go out to the 2040s, now the bulk of our thinking is out in the cloud. The biological portion of our brain didn’t go away but the nonbiological portion will be much more powerful. And it will be uploaded automatically the way we back up everything now that’s digital.”

“When the hardware crashes,” he says of humanity’s current condition, “the software dies with it. We take that for granted as human beings.” But when most of our intelligence, experience and identity live in cyberspace, in some sense (vital words when thinking about Kurzweil predictions) we will become software and the hardware will be replaceable.

Read the entire article after the jump.

Cheap Hydrogen

Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with methane from natural gas. A small share of production, less than five percent annually, is also made through electrolysis: passing an electric current through water.

This new method of production appears to be less costly, less dangerous and also more environmentally sound.

From the Independent:

Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.

The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.

Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element in the universe.

Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.

In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.

Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.

Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”

Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.

But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.
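In outline, and with the caveat that the buffer’s actual chemistry isn’t specified here, the decoupled scheme can be sketched with the standard acidic electrolysis half-reactions plus a generic mediator B (a placeholder label, not the real compound):

\[ \text{Step 1 (oxygen released immediately):}\quad 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \]
\[ \text{Buffer charged (protons and electrons stored):}\quad \mathrm{B} + 4\,\mathrm{H^+} + 4\,e^- \rightarrow \mathrm{BH_4} \]
\[ \text{Step 2 (hydrogen released later, on demand):}\quad \mathrm{BH_4} \rightarrow \mathrm{B} + 2\,\mathrm{H_2} \]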

Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.

Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”

Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”

Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”

Read the entire article following the jump.

The Digital Afterlife and i-Death

Leave it to Google to help you auto-euthanize and die digitally. The presence of our online selves after death was of limited concern until recently. However, with the explosion of online media and social networks our digital tracks remain preserved and scattered across drives and backups in distributed, anonymous data centers. Physical death does not change this.

[A case in point: your friendly editor at theDiagonal was recently asked to befriend a colleague via LinkedIn. All well and good, except that the colleague had passed away two years earlier.]

So, armed with Google’s new Inactive Account Manager, death, at least online, may be just a couple of clicks away. As a corollary, it would be a small leap indeed to imagine an enterprising company charging a dearly departed member an annual fee to maintain a digital afterlife ad infinitum.

From the Independent:

The search engine giant Google has announced a new feature designed to allow users to decide what happens to their data after they die.

The feature, which applies to the Google-run email system Gmail as well as Google Plus, YouTube, Picasa and other tools, represents an attempt by the company to be the first to deal with the sensitive issue of data after death.

In a post on the company’s Public Policy Blog Andreas Tuerk, Product Manager, writes: “We hope that this new feature will enable you to plan your digital afterlife – in a way that protects your privacy and security – and make life easier for your loved ones after you’re gone.”

Google says that the new account management tool will allow users to opt to have their data deleted after three, six, nine or 12 months of inactivity. Alternatively users can arrange for certain contacts to be sent data from some or all of their services.
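Google has not published a programming interface for this feature, but the underlying logic is a simple dead-man’s switch. A purely hypothetical sketch in Python, with every name invented for illustration:

# Hypothetical sketch of a "dead-man's switch" like the feature described
# above. None of these names correspond to a real Google API; the logic is
# the point: after a chosen period of inactivity, warn trusted contacts,
# then either share the data with them or delete it.

from datetime import datetime, timedelta

INACTIVITY_LIMIT = timedelta(days=270)   # user-selectable: 3, 6, 9 or 12 months

def check_account(last_activity, trusted_contacts, delete_when_inactive):
    if datetime.now() - last_activity < INACTIVITY_LIMIT:
        return "account active, nothing to do"
    for contact in trusted_contacts:                 # warn before acting
        print(f"notifying {contact} of pending action")
    return "delete data" if delete_when_inactive else "share data with contacts"

print(check_account(datetime(2012, 1, 1), ["next.of.kin@example.com"], False))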

The California-based company did, however, stress that individuals listed to receive data in the event of ‘inactivity’ would be warned by text or email before the information was sent.

Social networking site Facebook already has a function that allows friends and family to “memorialize” an account once its owner has died.

Read the entire article following the jump.

Tracking and Monetizing Your Every Move

Your movements are valuable — but not in the way you may think. Mobile technology companies are moving rapidly to exploit the vast amount of data collected from the billions of mobile devices. This data is extremely valuable to an array of organizations, including urban planners, retailers, and travel and transportation marketers. And, of course, this raises significant privacy concerns. Many believe that when the data is used collectively it preserves user anonymity. However, if correlated with other data sources it could be used to discover a range of unintended and previously private information, relating both to individuals and to groups.

From MIT Technology Review:

Wireless operators have access to an unprecedented volume of information about users’ real-world activities, but for years these massive data troves were put to little use other than for internal planning and marketing.

This data is under lock and key no more. Under pressure to seek new revenue streams (see “AT&T Looks to Outside Developers for Innovation”), a growing number of mobile carriers are now carefully mining, packaging, and repurposing their subscriber data to create powerful statistics about how people are moving about in the real world.

More comprehensive than the data collected by any app, this is the kind of information that, experts believe, could help cities plan smarter road networks, businesses reach more potential customers, and health officials track diseases. But even if shared with the utmost of care to protect anonymity, it could also present new privacy risks for customers.

Verizon Wireless, the largest U.S. carrier with more than 98 million retail customers, shows how such a program could come together. In late 2011, the company changed its privacy policy so that it could share anonymous and aggregated subscriber data with outside parties. That made possible the launch of its Precision Market Insights division last October.

The program, still in its early days, is creating a natural extension of what already happens online, with websites tracking clicks and getting a detailed breakdown of where visitors come from and what they are interested in.

Similarly, Verizon is working to sell demographic information about the people who, for example, attend an event: how they got there, and the kinds of apps they use once they arrive. In a recent case study, says program spokeswoman Debra Lewis, Verizon showed that fans from Baltimore outnumbered fans from San Francisco by three to one inside the Super Bowl stadium. That information might have been expensive or difficult to obtain in other ways, such as through surveys, because not all the people in the stadium purchased their own tickets and had credit card information on file, nor had they all downloaded the Super Bowl’s app.

Other telecommunications companies are exploring similar ideas. In Europe, for example, Telefonica launched a similar program last October, and the head of this new business unit gave the keynote address at a new industry conference on “big data monetization in telecoms” in January.

“It doesn’t look to me like it’s a big part of their [telcos’] business yet, though at the same time it could be,” says Vincent Blondel, an applied mathematician who is now working on a research challenge from the operator Orange to analyze two billion anonymous records of communications between five million customers in Africa.

The concerns about making such data available, Blondel says, are not that individual data points will leak out or contain compromising information but that they might be cross-referenced with other data sources to reveal unintended details about individuals or specific groups (see “How Access to Location Data Could Trample Your Privacy”).

Already, some startups are building businesses by aggregating this kind of data in useful ways, beyond what individual companies may offer. For example, AirSage, an Atlanta, Georgia, company founded in 2000, has spent much of the last decade negotiating what it says are exclusive rights to put its hardware inside the firewalls of two of the top three U.S. wireless carriers and collect, anonymize, encrypt, and analyze cellular tower signaling data in real time. Since AirSage solidified the second of these major partnerships about a year ago (it won’t specify which specific carriers it works with), it has been processing 15 billion locations a day and can account for movement of about a third of the U.S. population in some places to within less than 100 meters, says marketing vice president Andrea Moe.

As users’ mobile devices ping cellular towers in different locations, AirSage’s algorithms look for patterns in that location data—mostly to help transportation planners and traffic reports, so far. For example, the software might infer that the owners of devices that spend time in a business park from nine to five are likely at work, so a highway engineer might be able to estimate how much traffic on the local freeway exit is due to commuters.
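AirSage has not published its algorithms, so what follows is only a hypothetical sketch of the kind of inference the article describes: label the tower a device pings most often during working hours as its likely “work” location. The pings below are invented.

# Hypothetical illustration, not AirSage's actual method. Each ping is
# (hour of day, cell tower id) for one anonymous device; the most common
# tower during 9-to-5 hours is treated as the likely work location.

from collections import Counter

pings = [(9, "tower_A"), (11, "tower_A"), (14, "tower_A"), (16, "tower_A"),
         (20, "tower_B"), (23, "tower_B"), (7, "tower_B")]

daytime_towers = Counter(tower for hour, tower in pings if 9 <= hour <= 17)
likely_work, _ = daytime_towers.most_common(1)[0]
print(f"likely work location: {likely_work}")    # tower_A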

Other companies are starting to add additional layers of information beyond cellular network data. One customer of AirSage is a relatively small San Francisco startup, Streetlight Data, which recently raised $3 million in financing backed partly by the venture capital arm of Deutsche Telekom.

Streetlight buys both cellular network and GPS navigation data that can be mined for useful market research. (The cellular data covers a larger number of people, but the GPS data, collected by mapping software providers, can improve accuracy.) Today, many companies already build massive demographic and behavioral databases on top of U.S. Census information about households to help retailers choose where to build new stores and plan marketing budgets. But Streetlight’s software, with interactive, color-coded maps of neighborhoods and roads, offers more practical information. It can be tied to the demographics of people who work nearby, commute through on a particular highway, or are just there for a visit, rather than just supplying information about who lives in the area.

Read the entire article following the jump.

Image: mobile devices. Courtesy of W3.org

Dark Lightning

It’s fascinating how a seemingly well-understood phenomenon, such as lightning, can still yield enormous surprises. Researchers have found that visible flashes of lightning can also be accompanied by non-visible, and more harmful, radiation such as X-rays and gamma rays.

From the Washington Post:

A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.

But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.

Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.

Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.

The only way to determine whether an airplane had been struck by dark lightning, Dwyer says, “would be to use a radiation detector. Right in the middle of [a flash], a very brief bluish-purple glow around the plane might be perceptible. Inside an aircraft, a passenger would probably not be able to feel or hear much of anything, but the radiation dose could be significant.”

However, because there’s only about one dark lightning occurrence for every thousand visible flashes and because pilots take great pains to avoid thunderstorms, Dwyer says, the risk of injury is quite limited. No one knows for sure if anyone has ever been hit by dark lightning.

About 25 million visible thunderbolts hit the United States every year, killing about 30 people and many farm animals, says John Jensenius, a lightning safety specialist with the National Weather Service in Gray, Maine. Worldwide, thunderstorms produce about a billion or so lightning bolts annually.

Read the entire article after the jump.

Image: Lightning in Foshan, China. Courtesy of Telegraph.

The Dangerous World of Pseudo-Academia

Pseudoscience can be fun — for comedic purposes only of course. But when it is taken seriously and dogmatically, as it often is by a significant number of people, it imperils rational dialogue and threatens real scientific and cultural progress. There is no end to the lengthy list of fake scientific claims and theories — some of our favorites include: the moon “landing” conspiracy, hollow Earth, the Bermuda Triangle, crop circles, psychic surgery, body earthing, room-temperature fusion, and perpetual motion machines.

Fun aside, pseudoscience can also be harmful and dangerous particularly when those duped by the dubious practice are harmed physically, medically or financially. Which brings us to a recent, related development aimed at duping academics. Welcome to the world of pseudo-academia.

From the New York Times:

The scientists who were recruited to appear at a conference called Entomology-2013 thought they had been selected to make a presentation to the leading professional association of scientists who study insects.

But they found out the hard way that they were wrong. The prestigious, academically sanctioned conference they had in mind has a slightly different name: Entomology 2013 (without the hyphen). The one they had signed up for featured speakers who were recruited by e-mail, not vetted by leading academics. Those who agreed to appear were later charged a hefty fee for the privilege, and pretty much anyone who paid got a spot on the podium that could be used to pad a résumé.

“I think we were duped,” one of the scientists wrote in an e-mail to the Entomological Society.

Those scientists had stumbled into a parallel world of pseudo-academia, complete with prestigiously titled conferences and journals that sponsor them. Many of the journals and meetings have names that are nearly identical to those of established, well-known publications and events.

Steven Goodman, a dean and professor of medicine at Stanford and the editor of the journal Clinical Trials, which has its own imitators, called this phenomenon “the dark side of open access,” the movement to make scholarly publications freely available.

The number of these journals and conferences has exploded in recent years as scientific publishing has shifted from a traditional business model for professional societies and organizations built almost entirely on subscription revenues to open access, which relies on authors or their backers to pay for the publication of papers online, where anyone can read them.

Open access got its start about a decade ago and quickly won widespread acclaim with the advent of well-regarded, peer-reviewed journals like those published by the Public Library of Science, known as PLoS. Such articles were listed in databases like PubMed, which is maintained by the National Library of Medicine, and selected for their quality.

But some researchers are now raising the alarm about what they see as the proliferation of online journals that will print seemingly anything for a fee. They warn that nonexperts doing online research will have trouble distinguishing credible research from junk. “Most people don’t know the journal universe,” Dr. Goodman said. “They will not know from a journal’s title if it is for real or not.”

Researchers also say that universities are facing new challenges in assessing the résumés of academics. Are the publications they list in highly competitive journals or ones masquerading as such? And some academics themselves say they have found it difficult to disentangle themselves from these journals once they mistakenly agree to serve on their editorial boards.

The phenomenon has caught the attention of Nature, one of the most competitive and well-regarded scientific journals. In a news report published recently, the journal noted “the rise of questionable operators” and explored whether it was better to blacklist them or to create a “white list” of those open-access journals that meet certain standards. Nature included a checklist on “how to perform due diligence before submitting to a journal or a publisher.”

Jeffrey Beall, a research librarian at the University of Colorado in Denver, has developed his own blacklist of what he calls “predatory open-access journals.” There were 20 publishers on his list in 2010, and now there are more than 300. He estimates that there are as many as 4,000 predatory journals today, at least 25 percent of the total number of open-access journals.

“It’s almost like the word is out,” he said. “This is easy money, very little work, a low barrier start-up.”

Journals on what has become known as “Beall’s list” generally do not post the fees they charge on their Web sites and may not even inform authors of them until after an article is submitted. They barrage academics with e-mail invitations to submit articles and to be on editorial boards.

One publisher on Beall’s list, Avens Publishing Group, even sweetened the pot for those who agreed to be on the editorial board of The Journal of Clinical Trails & Patenting, offering 20 percent of its revenues to each editor.

One of the most prolific publishers on Beall’s list, Srinubabu Gedela, the director of the Omics Group, has about 250 journals and charges authors as much as $2,700 per paper. Dr. Gedela, who lists a Ph.D. from Andhra University in India, says on his Web site that he “learnt to devise wonders in biotechnology.”

Read the entire article following the jump.

Image courtesy of University of Texas.

Looking for Alien Engineering Work

We haven’t yet found any aliens inhabiting exoplanets orbiting distant stars. We haven’t received any intelligently manufactured radio signals from deep space. And, unless you subscribe to the conspiracy theories surrounding Roswell and Area 51, it’s unlikely that we’ve been visited by an extra-terrestrial intelligence.

Most reasonable calculations suggest that the universe should be teeming with life beyond our small, blue planet. So, where are all the aliens and why haven’t we been contacted yet? Not content to wait, some astronomers believe we should be looking for evidence of distant alien engineering projects.

From the New Scientist:

ALIENS: where are you? Our hopes of finding intelligent companionship seem to be constantly receding. Mars and Venus are not the richly populated realms we once guessed at. The icy seas of the outer solar system may hold life, but almost certainly no more than microbes. And the search for radio signals from more distant extraterrestrials has so frustrated some astronomers that they are suggesting we shout out an interstellar “Hello”, in the hope of prodding the dozy creatures into a response.

So maybe we need to think along different lines. Rather than trying to intercept alien communications, perhaps we should go looking for alien artefacts.

There have already been a handful of small-scale searches, but now three teams of astronomers are setting out to scan a much greater volume of space (see diagram). Two groups hope to see the shadows of alien industry in fluctuating starlight. The third, like archaeologists sifting through a midden heap on Earth, is hunting for alien waste.

What they’re after is something rather grander than flint arrowheads or shards of pottery. Something big. Planet-sized power stations. Star-girdling rings or spheres. Computers the size of a solar system. Perhaps even an assembly of hardware so vast it can darken an entire galaxy.

It might seem crazy to even entertain the notion of such stupendous celestial edifices, let alone go and look for them. Yet there is a simple rationale. Unless tool-users are always doomed to destroy themselves, any civilisation out there is likely to be far older and far more advanced than ours.

Humanity has already covered vast areas of Earth’s surface with roads and cities, and begun sending probes to other planets. If we can do all this in a matter of centuries, what could more advanced civilisations do over many thousands or even millions of years?

In 1960, the physicist Freeman Dyson pointed out that if alien civilisations keep growing and expanding, they will inevitably consume ever more energy – and the biggest source of energy in any star system is the star itself. Our total power consumption today is equivalent to about 0.01 per cent of the sunlight falling on Earth, so solar power could easily supply all our needs. If energy demand keeps growing at 1 per cent a year, however, then in 1000 years we’d need more energy than strikes the surface of the planet. Other energy sources, such as nuclear fusion, cannot solve the problem because the waste heat would fry the planet.
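That 1,000-year figure is easy to verify with the numbers Dyson’s argument uses: start at 0.01 per cent of the sunlight falling on Earth and compound at 1 per cent a year.

# Quick check of the claim above: demand starts at 0.01 percent of the
# sunlight reaching Earth and grows 1 percent a year; count the years until
# it exceeds all the sunlight falling on the planet.

fraction_of_sunlight = 0.0001
years = 0
while fraction_of_sunlight < 1.0:
    fraction_of_sunlight *= 1.01
    years += 1
print(f"demand exceeds total sunlight after about {years} years")   # roughly 930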

In a similar position, alien civilisations could start building solar power plants, factories and even habitats in space. With material mined from asteroids, then planets, and perhaps even the star itself, they could really spread out. Dyson’s conclusion was that after thousands or millions of years, the star might be entirely surrounded by a vast artificial sphere of solar panels.

The scale of a Dyson sphere is almost unimaginable. A sphere with a radius similar to that of Earth’s orbit would have more than a hundred million times the surface area of Earth. Nobody thinks building it would be easy. A single shell is almost certainly out, as it would be under extraordinary stresses and gravitationally unstable. A more plausible option is a swarm: many huge power stations on orbits that do not intersect, effectively surrounding the star. Dyson himself does not like to speculate on the details, or on the likelihood of a sphere being built. “We have no way of judging,” he says. The crucial point is that if any aliens have built Dyson spheres, there is a chance we could spot them.
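The “hundred million times” figure follows from geometry alone: compare the area of a sphere with the radius of Earth’s orbit (about 1 AU) to Earth’s own surface area.

\[ \frac{A_{\text{sphere}}}{A_{\oplus}} = \left(\frac{1\,\mathrm{AU}}{R_{\oplus}}\right)^{2} \approx \left(\frac{1.5\times10^{11}\,\mathrm{m}}{6.4\times10^{6}\,\mathrm{m}}\right)^{2} \approx 5\times10^{8} \]

Roughly half a billion Earth-surfaces, comfortably more than a hundred million.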

A sphere would block the sun’s light, making it invisible to our eyes, but the sphere would still emit waste heat in the form of infrared radiation. So, as Carl Sagan pointed out in 1966, if infrared telescopes spot a warm object but nothing shows up at visible wavelengths, it could be a Dyson sphere.

Some natural objects can produce the same effect. Very young and very old stars are often surrounded by dust and gas, which blocks their light and radiates infrared. But the infrared spectrum of these objects should be a giveaway. Silicate minerals in dust produce a distinctive broad peak in the spectrum, and molecules in a warm gas would produce bright or dark spectral lines at specific wavelengths. By contrast, waste heat from a sphere should have a smooth, featureless thermal spectrum. “We would be hoping that the spectrum looks boring,” says Matt Povich at the California State Polytechnic University in Pomona. “The more boring the better.”
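As a rough guide to where that waste heat would show up, assume (our assumption, not the article’s) that a sphere at about 1 AU sits near Earth-like temperatures of roughly 300 K; Wien’s displacement law then puts the thermal peak in the mid-infrared:

\[ \lambda_{\max} = \frac{b}{T} \approx \frac{2.9\times10^{-3}\,\mathrm{m\,K}}{300\,\mathrm{K}} \approx 10\,\mu\mathrm{m} \]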

Our first good view of the sky at the appropriate wavelengths came when the Infrared Astronomical Satellite surveyed the skies for 10 months in 1983, and a few astronomers have sifted through its data. Vyacheslav Slysh at the Space Research Institute in Moscow made the first attempt in 1985, and Richard Carrigan at Fermilab in Illinois published the latest search in 2009. “I wanted to get into the mode of the British Museum, to go and look for artefacts,” he says.

Carrigan found no persuasive sources, but the range of his search was limited. It would have detected spheres around sunlike stars only within 1000 light years of Earth. This is a very small part of the Milky Way, which is 100,000 light years across.

One reason few have joined Carrigan in the hunt for artefacts is the difficulty of getting funding for such projects. Then last year, the Templeton Foundation – an organisation set up by a billionaire to fund research into the “big questions” – invited proposals for its New Frontiers programme, specifically requesting research that would not normally be funded because of its speculative nature. A few astronomers jumped at the chance to look for alien contraptions and, in October, the programme approved three separate searches. The grants are just a couple of hundred thousand dollars each, but they do not have to fund new telescopes, only new analysis.

One team, led by Jason Wright at Pennsylvania State University in University Park, will look for the waste heat of Dyson spheres by analysing data from two space-based infrared observatories, the Wide-field Infrared Survey Explorer (WISE) and the Spitzer space telescope, launched in 2009 and 2003. Povich, a member of this team, is looking specifically within the Milky Way. Thanks to the data from Spitzer and WISE, Povich should be able to scan a volume of space thousands of times larger than previous searches like Carrigan’s. “For example, if you had a sun-equivalent star, fully enclosed in a Dyson sphere, we should be able to detect it almost anywhere in the galaxy.”

Even such a wide-ranging hunt may not be ambitious enough, according to Wright. He suspects that interstellar travel will prove no harder than constructing a sphere. An alien civilisation with such a high level of technology would spread out and colonise the galaxy in a few million years, building spheres as they go. “I would argue that it’s very hard for a spacefaring civilisation to die out. There are too many lifeboats,” says Wright. “Once you have self-sufficient colonies, you will take over the galaxy – you can’t even try to stop it because you can’t coordinate the actions of the colonies.”

If this had happened in the Milky Way, there should be spheres everywhere. “To find one or a few Dyson spheres in our galaxy would be very strange,” says Wright.

Read the entire article after the jump.

Image: 2001: A Space Odyssey, The Monolith. Courtesy of Daily Galaxy.

The Cycle of Dispossession and Persecution

In 2010, novelist Iain Banks delivered his well-crafted and heartfelt view of a very human problem: our inability to learn from past mistakes. Courageously for someone in the public eye, he also did something non-trivial, however small, about it. We excerpt his essay below.

From the Guardian:

I support the Boycott, Divestment and Sanctions (BDS) campaign because, especially in our instantly connected world, an injustice committed against one, or against one group of people, is an injustice against all, against every one of us; a collective injury.

My particular reason for participating in the cultural boycott of Israel is that, first of all, I can; I’m a writer, a novelist, and I produce works that are, as a rule, presented to the international market. This gives me a small extra degree of power over that which I possess as a (UK) citizen and a consumer. Secondly, where possible when trying to make a point, one ought to be precise, and hit where it hurts. The sports boycott of South Africa when it was still run by the racist apartheid regime helped to bring the country to its senses because the ruling Afrikaner minority put so much store in their sporting prowess. Rugby and cricket in particular mattered to them profoundly, and their teams’ generally elevated position in the international league tables was a matter of considerable pride. When they were eventually isolated by the sporting boycott – as part of the wider cultural and trade boycott – they were forced that much more persuasively to confront their own outlaw status in the world.

A sporting boycott of Israel would make relatively little difference to the self-esteem of Israelis in comparison to South Africa; an intellectual and cultural one might help make all the difference, especially now that the events of the Arab spring and the continuing repercussions of the attack on the Gaza-bound flotilla peace convoy have threatened both Israel’s ability to rely on Egypt’s collusion in the containment of Gaza, and Turkey’s willingness to engage sympathetically with the Israeli regime at all. Feeling increasingly isolated, Israel is all the more vulnerable to further evidence that it, in turn, like the racist South African regime it once supported and collaborated with, is increasingly regarded as an outlaw state.

I was able to play a tiny part in South Africa’s cultural boycott, ensuring that – once it thundered through to me that I could do so – my novels weren’t sold there (while subject to an earlier contract, under whose terms the books were sold in South Africa, I did a rough calculation of royalties earned each year and sent that amount to the ANC). Since the 2010 attack on the Turkish-led convoy to Gaza in international waters, I’ve instructed my agent not to sell the rights to my novels to Israeli publishers. I don’t buy Israeli-sourced products or food, and my partner and I try to support Palestinian-sourced products wherever possible.

It doesn’t feel like much, and I’m not completely happy doing even this; it can sometimes feel like taking part in collective punishment (although BDS is, by definition, aimed directly at the state and not the people), and that’s one of the most damning charges that can be levelled at Israel itself: that it engages in the collective punishment of the Palestinian people within Israel, and the occupied territories, that is, the West Bank and – especially – the vast prison camp that is Gaza. The problem is that constructive engagement and reasoned argument demonstrably have not worked, and the relatively crude weapon of boycott is pretty much all that’s left. (To the question, “What about boycotting Saudi Arabia?” – all I can claim is that cutting back on my consumption of its most lucrative export was a peripheral reason for giving up the powerful cars I used to drive, and for stopping flying, some years ago. I certainly wouldn’t let a book of mine be published there either, although – unsurprisingly, given some of the things I’ve said about that barbaric excuse for a country, not to mention the contents of the books themselves – the issue has never arisen, and never will with anything remotely resembling the current regime in power.)

As someone who has always respected and admired the achievements of the Jewish people – they’ve probably contributed even more to world civilisation than the Scots, and we Caledonians are hardly shy about promoting our own wee-but-influential record and status – and has felt sympathy for the suffering they experienced, especially in the years leading up to and then during the second world war and the Holocaust, I’ll always feel uncomfortable taking part in any action that – even if only thanks to the efforts of the Israeli propaganda machine – may be claimed by some to target them, despite the fact that the state of Israel and the Jewish people are not synonymous. Israel and its apologists can’t have it both ways, though: if they’re going to make the rather hysterical claim that any and every criticism of Israeli domestic or foreign policy amounts to antisemitism, they have to accept that this claimed, if specious, indivisibility provides an opportunity for what they claim to be the censure of one to function as the condemnation of the other.

Read the entire essay after the jump.