Seeking Clues to Suicide

Suicide remains one of the most common causes of death in many cultures. The statistics are sobering: in 2012, more active-duty U.S. soldiers died by suicide than in combat. Despite advances in the treatment of mental illness, little has made a dent in the number of people who take their own lives each year. Psychologist Matthew Nock hopes to change this through some innovative research.

From the New York Times:

For reasons that have eluded people forever, many of us seem bent on our own destruction. Recently more human beings have been dying by suicide annually than by murder and warfare combined. Despite the progress made by science, medicine and mental-health care in the 20th century — the sequencing of our genome, the advent of antidepressants, the reconsidering of asylums and lobotomies — nothing has been able to drive down the suicide rate in the general population. In the United States, it has held relatively steady since 1942. Worldwide, roughly one million people kill themselves every year. Last year, more active-duty U.S. soldiers killed themselves than died in combat; their suicide rate has been rising since 2004. Last month, the Centers for Disease Control and Prevention announced that the suicide rate among middle-aged Americans has climbed nearly 30 percent since 1999. In response to that widely reported increase, Thomas Frieden, the director of the C.D.C., appeared on PBS NewsHour and advised viewers to cultivate a social life, get treatment for mental-health problems, exercise and consume alcohol in moderation. In essence, he was saying, keep out of those demographic groups with high suicide rates, which include people with a mental illness like a mood disorder, social isolates and substance abusers, as well as elderly white males, young American Indians, residents of the Southwest, adults who suffered abuse as children and people who have guns handy.

But most individuals in every one of those groups never have suicidal thoughts — even fewer act on them — and no data exist to explain the difference between those who will and those who won’t. We also have no way of guessing when — in the next hour? in the next decade? — known risk factors might lead to an attempt. Our understanding of how suicidal thinking progresses, or how to spot and halt it, is little better now than it was two and a half centuries ago, when we first began to consider suicide a medical rather than philosophical problem and physicians prescribed, to ward it off, buckets of cold water thrown at the head.

“We’ve never gone out and observed, as an ecologist would or a biologist would go out and observe the thing you’re interested in for hours and hours and hours and then understand its basic properties and then work from that,” Matthew K. Nock, the director of Harvard University’s Laboratory for Clinical and Developmental Research, told me. “We’ve never done it.”

It was a bright December morning, and we were in his office on the 12th floor of the building that houses the school’s psychology department, a white concrete slab jutting above its neighbors like a watchtower. Below, Cambridge looked like a toy city — gabled roofs and steeples, a ribbon of road, windshields winking in the sun. Nock had just held a meeting with four members of his research team — he in his swivel chair, they on his sofa — about several of the studies they were running. His blue eyes matched his diamond-plaid sweater, and he was neatly shorn and upbeat. He seemed more like a youth soccer coach, which he is on Saturday mornings for his son’s first-grade team, than an expert in self-destruction.

At the meeting, I listened to Nock and his researchers discuss a study they were collaborating on with the Army. They were calling soldiers who had recently attempted suicide and asking them to explain what they had done and why. Nock hoped that sifting through the interview transcripts for repeated phrasings or themes might suggest predictive patterns that he could design tests to catch. A clinical psychologist, he had trained each of his researchers how to ask specific questions over the telephone. Adam Jaroszewski, an earnest 29-year-old in tortoiseshell glasses, told me that he had been nervous about calling subjects in the hospital, where they were still recovering, and probing them about why they tried to end their lives: Why that moment? Why that method? Could anything have happened to make them change their minds? Though the soldiers had volunteered to talk, Jaroszewski worried about the inflections of his voice: how could he put them at ease and sound caring and grateful for their participation without ceding his neutral scientific tone? Nock, he said, told him that what helped him find a balance between empathy and objectivity was picturing Columbo, the frumpy, polite, persistently quizzical TV detective played by Peter Falk. “Just try to be really, really curious,” Nock said.

That curiosity has made Nock, 39, one of the most original and influential suicide researchers in the world. In 2011, he received a MacArthur genius award for inventing new ways to investigate the hidden workings of a behavior that seems as impossible to untangle, empirically, as love or dreams.

Trying to study what people are thinking before they try to kill themselves is like trying to examine a shadow with a flashlight: the minute you spotlight it, it disappears. Researchers can’t ethically induce suicidal thinking in the lab and watch it develop. Uniquely human, it can’t be observed in other species. And it is impossible to interview anyone who has died by suicide. To understand it, psychologists have most often employed two frustratingly imprecise methods: they have investigated the lives of people who have killed themselves, and any notes that may have been left behind, looking for clues to what their thinking might have been, or they have asked people who have attempted suicide to describe their thought processes — though their mental states may differ from those of people whose attempts were lethal and their recollections may be incomplete or inaccurate. Such investigative methods can generate useful statistics and hypotheses about how a suicidal impulse might start and how it travels from thought to action, but that’s not the same as objective evidence about how it unfolds in real time.

Read the entire article here.

Image: 2007 suicide statistics for 15-24 year-olds. Courtesy of Crimson White, UA.

Circadian Rhythm in Vegetables

The vegetables you eat may be better for you depending on how and when they are exposed to light. Just as animals adhere to circadian rhythms, research shows that some plants may generate different levels of healthy nutritional metabolites based on the light cycle as well.

From ars technica:

When you buy vegetables at the grocery store, they are usually still alive. When you lock your cabbage and carrots in the dark recess of the refrigerator vegetable drawer, they are still alive. They continue to metabolize while we wait to cook them.

Why should we care? Well, plants that are alive adjust to the conditions surrounding them. Researchers at Rice University have shown that some plants have circadian rhythms, adjusting their production of certain chemicals based on their exposure to light and dark cycles. Understanding and exploiting these rhythms could help us maximize the nutritional value of the vegetables we eat.

According to Janet Braam, a professor of biochemistry at Rice, her team’s initial research looked at how Arabidopsis, a common plant model for scientists, responded to light cycles. “It adjusts its defense hormones before the time of day when insects attack,” Braam said. Arabidopsis is in the same plant family as the cruciferous vegetables—broccoli, cabbage, and kale—so Braam and her colleagues decided to look for a similar light response in our foods.

They bought some grocery store cabbage and brought it back to the lab so they could subject the cabbage to the same tests they gave their model plant, which involved offering up living, leafy vegetables to a horde of hungry caterpillars. First, half the cabbages were exposed to a normal light and dark cycle, the same schedule as the caterpillars, while the other half were exposed to the opposite light cycle.

The caterpillars tend to feed in the late afternoon, according to Braam, so the light signals the plants to increase production of glucosinolates, chemicals that the insects don’t like. The study found that cabbages that adjusted to the normal light cycle had far less insect damage than the jet-lagged cabbages.
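The entrainment effect Braam describes can be caricatured with a toy model: defense-compound levels as a simple daily cycle that peaks around the caterpillars’ late-afternoon feeding window, with a 12-hour phase shift standing in for the “jet-lagged” cabbages. Everything here (the sinusoid, the peak hour, the numbers) is invented for illustration and is not taken from the study.

```python
import math

def defense_level(hour, phase_shift=0.0):
    """Toy daily cycle of a defense compound, scaled to [0, 1].
    Peaks at hour 16 (late afternoon, when the caterpillars tend
    to feed). A phase shift models a plant entrained to the
    opposite light cycle."""
    peak_hour = 16.0
    return 0.5 + 0.5 * math.cos(2 * math.pi * (hour - peak_hour - phase_shift) / 24)

feeding_hour = 16.0
entrained = defense_level(feeding_hour)                      # peaks at feeding time
jet_lagged = defense_level(feeding_hour, phase_shift=12.0)   # 12 h out of phase

print(f"entrained defense at feeding time:  {entrained:.2f}")   # 1.00
print(f"jet-lagged defense at feeding time: {jet_lagged:.2f}")  # 0.00
```

The point of the caricature is only that the same plant, shifted half a day out of phase, presents its chemical defenses at the wrong time, which is consistent with the heavier insect damage on the jet-lagged cabbages.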

While it’s cool to know that cabbages are still metabolizing away and responding to light stimulus days after harvest, Braam said that this process could affect the nutritional value of the cabbage. “We eat cabbage, in part, because these glucosinolates are anti-cancer compounds,” Braam said.

Glucosinolates are found only in the cruciferous vegetable family, but the Rice team wanted to see if other vegetables demonstrated similar circadian rhythms. They tested spinach, lettuce, zucchini, blueberries, carrots, and sweet potatoes. “Luckily, our caterpillar isn’t picky,” Braam said. “It’ll eat just about anything.”

Just like with the cabbage, the caterpillars ate far less of the vegetables trained on the normal light schedule. Even the fruits and roots increased production of some kind of anti-insect compound in response to light stimulus.

Metabolites affected by circadian rhythms could include vitamins and antioxidants. The Rice team is planning follow-up research to begin exploring how the cycling phenomenon affects known nutrients and whether the magnitude of the shifts is large enough to have an impact on our diets. “We’ve uncovered some very basic stimuli, but we haven’t yet figured out how to amplify that for human nutrition,” Braam said.

Read the entire article here.

UnGoogleable: The Height of Cool

So, it is no longer a surprise: our digital lives are tracked, correlated, stored and examined. The NSA (National Security Agency) does it to determine whether you are an unsavory type; Google does it to serve you better information and ads; and a whole host of other companies do it to sell you more things that you probably don’t need, at prices you can’t afford. This, of course, raises deep and troubling questions about privacy. With this in mind, some are taking ownership of the issue and seeking to erase themselves from the vast digital Orwellian eye. To others, however, being untraceable online is a fashion statement rather than a victory for privacy.

From the Guardian:

“The chicest thing,” said fashion designer Phoebe Philo recently, “is when you don’t exist on Google. God, I would love to be that person!”

Philo, creative director of Céline, is not that person. As the London Evening Standard put it: “Unfortunately for the famously publicity-shy London designer – Paris born, Harrow-on-the-Hill raised – who has reinvented the way modern women dress, privacy may well continue to be a luxury.” Nobody who is oxymoronically described as “famously publicity-shy” will ever be unGoogleable. And if you’re not unGoogleable then, if Philo is right, you can never be truly chic, even if you were born in Paris. And if you’re not truly chic, then you might as well die – at least if you’re in fashion.

If she truly wanted to disappear herself from Google, Philo could start by changing her superb name to something less diverting. Prize-winning novelist AM Homes is an outlier in this respect. Google “am homes” and you’re in a world of blah US real estate rather than cutting-edge literature. But then Homes has thought a lot about privacy, having written a play about the most famously private person in recent history, JD Salinger, and had him threaten to sue her as a result.

And Homes isn’t the only one to make herself difficult to detect online. UnGoogleable bands are 10 a penny. The New York-based band !!! (known verbally as “chick chick chick” or “bang bang bang” – apparently “Exclamation point, exclamation point, exclamation point” proved too verbose for their meagre fanbase) must drive their business manager nuts. As must the band Merchandise, whose name – one might think – is a nominalist satire of commodification by the music industry. Nice work, Brad, Con, John and Rick.


If Philo renamed herself online as Google Maps or @, she might make herself more chic.

Welcome to anonymity chic – the antidote to an online world of exhibitionism. But let’s not go crazy: anonymity may be chic, but it is no business model. For years XXX Porn Site, my confusingly named alt-folk combo, has remained undiscovered. There are several bands called Girls (at least one of them including, confusingly, dudes) and each one has worried – after a period of chic iconoclasm – that such a putatively cool name means no one can find them online.

But still, maybe we should all embrace anonymity, given this week’s revelations that technology giants cooperated in Prism, a top-secret system at the US National Security Agency that collects emails, documents, photos and other material for secret service agents to review. It has also been a week in which Lindsay Mills, girlfriend of NSA whistleblower Edward Snowden, has posted on her blog (entitled: “Adventures of a world-traveling, pole-dancing super hero” with many photos showing her performing with the Waikiki Acrobatic Troupe) her misery that her fugitive boyfriend has fled to Hong Kong. Only a cynic would suggest that this blog post might help the Waikiki Acrobatic Troupe veteran’s career at this – serious face – difficult time. Better the dignity of silent anonymity than using the internet for that.

Furthermore, as social media diminishes us with not just information overload but the 24/7 servitude of liking, friending and status updating, this going under the radar reminds us that we might benefit from withdrawing the labour on which the founders of Facebook, Twitter and Instagram have built their billions. “Today our intense cultivation of a singular self is tied up in the drive to constantly produce and update,” argues Geert Lovink, research professor of interactive media at the Hogeschool van Amsterdam and author of Networks Without a Cause: A Critique of Social Media. “You have to tweet, be on Facebook, answer emails,” says Lovink. “So the time pressure on people to remain present and keep up their presence is a very heavy load that leads to what some call the psychopathology of online.”

Internet evangelists such as Clay Shirky and Charles Leadbeater hoped for something very different from this pathologised reality. In Shirky’s Here Comes Everybody and Leadbeater’s We-Think, both published in 2008, the nascent social media were to echo the anti-authoritarian, democratising tendencies of the 60s counterculture. Both men revelled in the fact that new web-based social tools helped single mothers looking online for social networks and pro-democracy campaigners in Belarus. Neither sufficiently realised that these tools could just as readily be co-opted by The Man. Or, if you prefer, Mark Zuckerberg.

Not that Zuckerberg is the devil in this story. Social media have changed the way we interact with other people in line with what the sociologist Zygmunt Bauman wrote in Liquid Love. For us “liquid moderns”, who have lost faith in the future, cannot commit to relationships and have few kinship ties, Zuckerberg created a new way of belonging, one in which we use our wits to create provisional bonds loose enough to stop suffocation, but tight enough to give a needed sense of security now that the traditional sources of solace (family, career, loving relationships) are less reliable than ever.

Read the entire article here.

The Mother of All Storms

Some regions of our planet are home to violent and destructive storms. However, one look at a recent mega-storm on Saturn may put it all in perspective — it could be much, much worse.

From ars technica:

Jupiter’s Great Red Spot may get most of the attention, but it’s hardly the only big weather event in the Solar System. Saturn, for example, has an odd hexagonal pattern in the clouds at its north pole, and when the planet tilted enough to illuminate it, the light revealed a giant hurricane embedded in the center of the hexagon. Scientists think the immense storm may have been there for years.

But Saturn is also home to transient storms that show up sporadically. The most notable of these are the Great White Spots, which can persist for months and alter the weather on a planetary scale. Great White Spots are rare, with only six having been observed since 1876. When one formed in 2010, we were lucky enough to have the Cassini orbiter in place to watch it from close up. Even though the head of the storm was roughly 7,000 km across, Cassini’s cameras were able to image it at resolutions where each pixel was only 14 km across, allowing an unprecedented view into the storm’s dynamics.
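As a quick back-of-the-envelope check on the resolution figures quoted above: at 14 km per pixel, the roughly 7,000 km storm head spans on the order of 500 pixels in a Cassini image. A one-liner makes the arithmetic explicit (the two input values come from the article; the calculation is just division).

```python
# Back-of-the-envelope: how many Cassini pixels span the storm head,
# using the figures quoted above.
storm_head_km = 7_000   # approximate width of the storm head
km_per_pixel = 14       # Cassini image resolution quoted above

pixels_across = storm_head_km / km_per_pixel
print(f"Storm head spans ~{pixels_across:.0f} pixels")  # ~500 pixels
```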

The storm turned out to be very violent, with convective features as big as 3,000 km across that could form and dissipate in as little as 10 hours. Winds of over 400 km/hour were detected, and the pressure gradient between the storm and the unaffected areas nearby was twice that of the one observed in the Great Red Spot of Jupiter. By carefully mapping the direction of the winds, the authors were able to conclude that the head of the White Spot was an anti-cyclone, with winds orbiting around a central feature.

Convection that brings warm material up from the depths of Saturn’s atmosphere appears to be key to driving these storms. The authors built an atmospheric model that could reproduce the White Spot and found that shutting down the energy injection from the lower atmosphere was enough to kill the storm. In addition, observations suggest that many areas of the storm contain freshly condensed particles, which may represent material that was brought up from the lower atmosphere and then condensed when it reached the cooler upper layers.

The Great White Spot was an anticyclone, and the authors’ model suggests that there’s only a very narrow band of winds on Saturn that enables the formation of a Great White Spot. The convective activity won’t trigger a White Spot anywhere outside the band between 31.5° and 32.4° N, which probably goes a long way toward explaining why the storms are so rare.

Read the entire article here.

Image: The huge storm churning through the atmosphere in Saturn’s northern hemisphere overtakes itself as it encircles the planet in this true-color view from NASA’s Cassini spacecraft. Courtesy of NASA/JPL.

Technology and Kids

There is no doubt that technology’s grasp reaches us at increasingly younger ages. No longer is it just our teens who are constantly mesmerized by status updates on their mobiles, or just our “in-betweeners” addicted to “facetiming” with their BFFs. Our technologies are fast becoming the tools of choice for our kindergarteners and pre-K kids. Some parents lament.

From the New York Times:

A few months ago, I attended my daughter Josie’s kindergarten open house, the highlight of which was a video slide show featuring our moppets using iPads to practice their penmanship. Parental cooing ensued.

I happened to be sitting next to the teacher, and I asked her about the rumor I’d heard: that next year, every elementary-school kid in town would be provided his or her own iPad. She said this pilot program was being introduced only at the newly constructed school three blocks from our house, which Josie will attend next year. “You’re lucky,” she observed wistfully.

This seemed to be the consensus around the school-bus stop. The iPads are coming! Not only were our kids going to love learning, they were also going to do so on the cutting edge of innovation. Why, in the face of this giddy chatter, was I filled with dread?

It’s not because I’m a cranky Luddite. I swear. I recognize that iPads, if introduced with a clear plan, and properly supervised, can improve learning and allow students to work at their own pace. Those are big ifs in an era of overcrowded classrooms. But my hunch is that our school will do a fine job. We live in a town filled with talented educators and concerned parents.

Frankly, I find it more disturbing that a brand-name product is being elevated to the status of mandatory school supply. I also worry that iPads might transform the classroom from a social environment into an educational subway car, each student fixated on his or her personalized educational gadget.

But beneath this fretting is a more fundamental beef: the school system, without meaning to, is subverting my parenting, in particular my fitful efforts to regulate my children’s exposure to screens. These efforts arise directly from my own tortured history as a digital pioneer, and the war still raging within me between harnessing the dazzling gifts of technology versus fighting to preserve the slower, less convenient pleasures of the analog world.

What I’m experiencing is, in essence, a generational reckoning, that queasy moment when those of us whose impatient desires drove the tech revolution must face the inheritors of this enthusiasm: our children.

It will probably come as no surprise that I’m one of those annoying people fond of boasting that I don’t own a TV. It makes me feel noble to mention this — I am feeling noble right now! — as if I’m taking a brave stand against the vulgar superficiality of the age. What I mention less frequently is the reason I don’t own a TV: because I would watch it constantly.

My brothers and I were so devoted to television as kids that we created an entire lexicon around it. The brother who turned on the TV, and thus controlled the channel being watched, was said to “emanate.” I didn’t even know what “emanate” meant. It just sounded like the right verb.

This was back in the ’70s. We were latchkey kids living on the brink of a brave new world. In a few short years, we’d hurtled from the miraculous calculator (turn it over to spell out “boobs”!) to arcades filled with strobing amusements. I was one of those guys who spent every spare quarter mastering Asteroids and Defender, who found in video games a reliable short-term cure for the loneliness and competitive anxiety that plagued me. By the time I graduated from college, the era of personal computers had dawned. I used mine to become a closet Freecell Solitaire addict.

Midway through my 20s I underwent a reformation. I began reading, then writing, literary fiction. It quickly became apparent that the quality of my work rose in direct proportion to my ability to filter out distractions. I’ve spent the past two decades struggling to resist the endless pixelated enticements intended to capture and monetize every spare second of human attention.

Has this campaign succeeded? Not really. I’ve just been a bit slower on the uptake than my contemporaries. But even without a TV or smartphones, our household can feel dominated by computers, especially because my wife (also a writer) and I work at home. We stare into our screens for hours at a stretch, working and just as often distracting ourselves from work.

Read the entire article here.

Image courtesy of Wired.

Technology and Employment

Technology is altering the lives of us all. Often it is a positive influence, offering its users tremendous benefits, from saving time to extending life. However, the relationship between technology and employment is more complex, and usually detrimental.

Many traditional forms of employment have already disappeared thanks to our technological tools; many other jobs have changed beyond recognition, requiring new skills and knowledge. And this may be just the beginning.

From Technology Review:

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.
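The definition of labor productivity in the paragraph above (economic value created per unit of input, such as an hour of labor) is easy to make concrete. The sketch below computes a productivity index relative to a base year; all of the output and hours figures are made up purely to show how output rising faster than hours worked registers as a climbing index, and are not the real data behind Brynjolfsson’s chart.

```python
# Hypothetical illustration of labor productivity as defined above:
# value created per hour of labor, indexed to a base year = 100.
# All numbers are invented for demonstration; they are not real data.

def productivity_index(output, labor_hours, base_output, base_hours):
    """Labor productivity (output per hour) relative to a base year,
    indexed so the base year equals 100."""
    return 100 * (output / labor_hours) / (base_output / base_hours)

# Made-up output (value created) and labor hours for three "years".
years = {
    "2000": (100.0, 100.0),  # base year: index = 100
    "2005": (125.0, 105.0),  # output grows faster than hours worked
    "2011": (150.0, 104.0),  # output keeps rising, hours stagnate
}

base_output, base_hours = years["2000"]
for year, (output, hours) in years.items():
    idx = productivity_index(output, hours, base_output, base_hours)
    print(f"{year}: productivity index = {idx:.1f}")
```

In a decoupling scenario like the one described, the productivity index keeps climbing even while the labor-hours inputs (a stand-in for employment) stay flat, which is exactly the divergence of the two lines on the chart.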

It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

Read the entire article here.

Image: Industrial robots. Courtesy of Techjournal.

What Makes Us Human

Psychologist Jerome Kagan leaves no stone unturned in his quest to determine what makes us distinctly human. His latest book, The Human Spark: The Science of Human Development, comes up with some fresh conclusions.

From the New Scientist:

What is it that makes humans special, that sets our species apart from all others? It must be something connected with intelligence – but what exactly? People have asked these questions for as long as we can remember. Yet the more we understand the minds of other animals, the more elusive the answers to these questions have become.

The latest person to take up the challenge is Jerome Kagan, a former professor at Harvard University. And not content with pinning down the “human spark” in the title of his new book, he then tries to explain what makes each of us unique.

As a pioneer in the science of developmental psychology, Kagan has an interesting angle. A life spent investigating how a fertilised egg develops into an adult human being provides him with a rich understanding of the mind and how it differs from that of our closest animal cousins.

Human and chimpanzee infants behave in remarkably similar ways for the first four to six months, Kagan notes. It is only during the second year of life that we begin to diverge profoundly. As the toddler’s frontal lobes expand and the connections between the brain sites increase, the human starts to develop the talents that set our species apart. These include “the ability to speak a symbolic language, infer the thoughts and feelings of others, understand the meaning of a prohibited action, and become conscious of their own feelings, intentions and actions”.

Becoming human, as Kagan describes it, is a complex dance of neurobiological changes and psychological advances. All newborns possess the potential to develop the universal human properties “inherent in their genomes”. What makes each of us individual is the unique backdrop of genetics, epigenetics, and the environment against which this development plays out.

Kagan’s research highlighted the role of temperament, which he notes is underpinned by at least 1500 genes, affording huge individual variation. This variation, in turn, influences the way we respond to environmental factors including family, social class, culture and historical era.

But what of that human spark? Kagan seems to locate it in a quartet of qualities: language, consciousness, inference and, especially, morality. This is where things start to get weird. He would like you to believe that morality is uniquely human, which, of course, bolsters his argument. Unfortunately, it also means he has to deny that a rudimentary morality has evolved in other social animals whose survival also depends on cooperation.

Instead, Kagan argues that morality is a distinctive property of our species, just as “fish do not have lungs”. No mention of evolution. So why are we moral, then? “The unique biology of the human brain motivates children and adults to act in ways that will allow them to arrive at the judgement that they are a good person.” That’s it?

Warming to his theme, Kagan argues that in today’s world, where traditional moral standards have been eroded and replaced by a belief in the value of wealth and celebrity, it is increasingly difficult to see oneself as a good person. He thinks this mismatch between our moral imperative and Western culture helps explain the “modern epidemic” of mental illness. Unwittingly, we have created an environment in which the human spark is fading.

Some of Kagan’s ideas are even more outlandish, surely none more so than the assertion that a declining interest in natural sciences may be a consequence of mothers becoming less sexually mysterious than they once were. More worryingly, he doesn’t seem to believe that humans are subject to the same forces of evolution as other animals.

Read the entire article here.

Sci-Fi Begets Cli-Fi

The world of fiction is populated with hundreds of different genres — most of which were invented by clever marketeers anxious to ensure vampire novels (teen / horror) don’t live next to classic works (literary) on real or imagined (think Amazon) bookshelves. So, it should come as no surprise to see a new category recently emerge: cli-fi.

Short for climate fiction, cli-fi novels explore the dangers of environmental degradation and apocalyptic climate change. Not light reading for your summer break at the beach. But, then again, more books in this category may get us to think often and carefully about preserving our beaches — and the rest of the planet — for our kids.

From the Guardian:

A couple of days ago Dan Bloom, a freelance news reporter based in Taiwan, wrote on the Teleread blog that his word had been stolen from him. In 2012 Bloom had “produced and packaged” a novella called Polar City Red, about climate refugees in a post-apocalyptic Alaska in the year 2075. Bloom labelled the book “cli-fi” in the press release and says he coined that term in 2007, cli-fi being short for “climate fiction”, described as a sub-genre of sci-fi. Polar City Red bombed, selling precisely 271 copies, until National Public Radio (NPR) and the Christian Science Monitor picked up on the term cli-fi last month, writing Bloom out of the story. So Bloom has blogged his reply on Teleread, saying he’s simply pleased the term is now out there – it has gone viral since the NPR piece by Scott Simon. It’s not quite as neat as that – in recent months the term has been used increasingly in literary and environmental circles – but there’s no doubt it has broken out more widely. You can search for cli-fi on Amazon, instantly bringing up a plethora of books with titles such as 2042: The Great Cataclysm, or Welcome to the Greenhouse. Twitter has been abuzz.

Whereas 10 or 20 years ago it would have been difficult to identify even a handful of books that fell under this banner, there is now a growing corpus of novels setting out to warn readers of possible environmental nightmares to come. Barbara Kingsolver’s Flight Behaviour, the story of a forest valley filled with an apparent lake of fire, is shortlisted for the 2013 Women’s prize for fiction. Meanwhile, there’s Nathaniel Rich’s Odds Against Tomorrow, set in a future New York, about a mathematician who deals in worst-case scenarios. In Liz Jensen’s 2009 eco-thriller The Rapture, summer temperatures are asphyxiating and Armageddon is near; her most recent book, The Uninvited, features uncanny warnings from a desperate future. Perhaps the most high-profile cli-fi author is Margaret Atwood, whose 2009 The Year of the Flood features survivors of a biological catastrophe also central to her 2003 novel Oryx and Crake, a book Atwood sometimes preferred to call “speculative fiction”.

Engaging with this subject in fiction increases debate about the issue; finely constructed, intricate narratives help us broaden our understanding and explore imagined futures, encouraging us to think about the kind of world we want to live in. This can often seem difficult in our 24-hour news-on-loop society where the consequences of climate change may appear to be everywhere, but intelligent discussion of it often seems to be nowhere. Also, as the crime genre can provide the dirty thrill of, say, reading about a gruesome fictional murder set on a street the reader recognises, the best cli-fi novels allow us to be briefly but intensely frightened: climate chaos is closer, more immediate, hovering over our shoulder like that murderer wielding his knife. Outside of the narrative of a novel the issue can seem fractured, incoherent, even distant. As Gregory Norminton puts it in his introduction to an anthology on the subject, Beacons: Stories for Our Not-So-Distant Future: “Global warming is a predicament, not a story. Narrative only comes in our response to that predicament.” Which is as good an argument as any for engaging with those stories.

All terms are reductive, all labels simplistic – clearly, the likes of Kingsolver, Jensen and Atwood have a much broader canvas than this one issue. And there’s an argument for saying this is simply rebranding: sci-fi writers have been engaging with the climate-change debate for longer than literary novelists – Snow by Adam Roberts comes to mind – and I do wonder whether this is a term designed for squeamish writers and critics who dislike the box labelled “science fiction”. So the term is certainly imperfect, but it’s also valuable. Unlike sci-fi, cli-fi writing comes primarily from a place of warning rather than discovery. There are no spaceships hovering in the sky; no clocks striking 13. On the contrary, many of the horrors described seem oddly familiar.

Read the entire article after the jump.

Image: Aftermath of Superstorm Sandy. Courtesy of the Independent.

Us and Them: Group Affinity Begins Early

Research shows that children as young as four empathize with some people but not others. It’s all about the group: the peer group you belong to versus the rest. Thus, the uphill struggle to instill tolerance in the next generation needs to begin very early in life.

From the WSJ:

Here’s a question. There are two groups, Zazes and Flurps. A Zaz hits somebody. Who do you think it was, another Zaz or a Flurp?

It’s depressing, but you have to admit that it’s more likely that the Zaz hit the Flurp. That’s an understandable reaction for an experienced, world-weary reader of The Wall Street Journal. But here’s something even more depressing—4-year-olds give the same answer.

In my last column, I talked about some disturbing new research showing that preschoolers are already unconsciously biased against other racial groups. Where does this bias come from?

Marjorie Rhodes at New York University argues that children are “intuitive sociologists” trying to make sense of the social world. We already know that very young children make up theories about everyday physics, psychology and biology. Dr. Rhodes thinks that they have theories about social groups, too.

In 2012 she asked young children about the Zazes and Flurps. Even 4-year-olds predicted that people would be more likely to harm someone from another group than from their own group. So children aren’t just biased against other racial groups: They also assume that everybody else will be biased against other groups. And this extends beyond race, gender and religion to the arbitrary realm of Zazes and Flurps.

In fact, a new study in Psychological Science by Dr. Rhodes and Lisa Chalik suggests that this intuitive social theory may even influence how children develop moral distinctions.

Back in the 1980s, Judith Smetana and colleagues discovered that very young kids could discriminate between genuinely moral principles and mere social conventions. First, the researchers asked about everyday rules—a rule that you can’t be mean to other children, for instance, or that you have to hang up your clothes. The children said that, of course, breaking the rules was wrong. But then the researchers asked another question: What would you think if teachers and parents changed the rules to say that being mean and dropping clothes were OK?

Children as young as 2 said that, in that case, it would be OK to drop your clothes, but not to be mean. No matter what the authorities decreed, hurting others, even just hurting their feelings, was always wrong. It’s a strikingly robust result—true for children from Brazil to Korea. Poignantly, even abused children thought that hurting other people was intrinsically wrong.

This might leave you feeling more cheerful about human nature. But in the new study, Dr. Rhodes asked similar moral questions about the Zazes and Flurps. The 4-year-olds said it would always be wrong for Zazes to hurt the feelings of others in their group. But if teachers decided that Zazes could hurt Flurps’ feelings, then it would be OK to do so. Intrinsic moral obligations only extended to members of their own group.

The 4-year-olds demonstrate the deep roots of an ethical tension that has divided philosophers for centuries. We feel that our moral principles should be universal, but we simultaneously feel that there is something special about our obligations to our own group, whether it’s a family, clan or country.

Read the entire article after the jump.

Image: Us and Them, Pink Floyd. Courtesy of Pink Floyd / flickr.

Kolmanskop, Namibian Ghost Town

Ghost towns have a peculiar fascination. They hold the story of a once glorious past and show us how the future took a very different and unexpected turn. Many ghost towns were abandoned by their residents as the economic fortunes of the local area took a turn for the worse — some from exhausted natural resources such as over-exploited mines, others from re-routed transportation, natural disaster or changes in demographics. One such town to have suffered from the inevitable boom and bust cycle of mining — in this case diamonds — is Kolmanskop in Namibia. The town is now being swallowed whole by the ever-shifting sands of the nearby Namib desert, which makes the eerie landscape a photographer’s paradise.

From Atlas Obscura:

People flocked to what became known as Kolmanskop, Namibia, after the discovery of diamonds in the area in 1908. As people arrived with high hopes, houses and other key buildings were built. The new town, which was German-influenced, saw the construction of ballrooms, casinos, theaters, ice factories, and hospitals, as well as the first X-ray station in the southern hemisphere.

Prior to World War I, over 2000 pounds of diamonds were sifted from the sands of the Namib desert. During the war, however, the price of diamonds dropped considerably. On top of this, larger diamonds were later found south of Kolmanskop, in Oranjemund. People picked up and chased after the precious stones. By 1956, the town was completely abandoned.

Today, the eerie ghost town is a popular tourist destination. Guided tours take visitors around the town and through the houses which, today, are filled only with sand.

Read the entire article here.

Image: Kolmanskop. Courtesy of Damien du Toit (coda). See more images from the flickr-stream here.

The College Application Essay

Most U.S. high school seniors have now finished their final days on the production line that is the educational system. Most will also have a college, and courses, selected from one of the thousands of U.S. institutions that offer higher education. Competition to enter many of these colleges is stiff, and admissions offices use a variety of techniques and measurements to filter applicants and to gauge a prospective student’s suitability. One such measure is the college entrance essay, which still features quite prominently alongside GPA, SAT, and ACT scores and, of course, the parental bank balance.

The New York Times recently featured several student essays that diverged from the norm — these were honest and risky, open and worldly. We excerpt below one such essay for Antioch College by Julian Cranberg:

Ever since I took my first PSAT as a first-semester junior, I have received a constant flow of magazines, brochures, booklets, postcards, etc. touting the virtues of various colleges. Simultaneously, my email account has been force-fed a five-per-week diet of newsletters, college “quizzes,” virtual campus tour links, application calendars, and invitations to “exclusive” over-the-phone question-and-answer sessions. I am a one-year veteran of college advertising.

They started out by sending me friendly yet impersonal compliments, such as “We’re impressed by your academic record,” or “You’ve impressed us, Julian.” One of the funniest yet most disturbing letters I received was printed on a single sheet of paper inside a priority DHL envelope, telling me I received it in this fashion because I was a “priority” to that college. Now, as application time is rolling around, they’ve become a bit more aggressive, hence “REMINDER – University of X Application Due” or “Important Deadline Notice”.

How is it that while I can only send one application to any school to which I am applying, it is okay for any school to send unbridled truckloads of mail my way, applying for my attention? If I have not already made it clear, it’s an annoyance, and, in fact, turns me and undoubtedly others off to applying to these certain schools. However, this annoyance is easy to ignore, and, if I wanted to, I could easily forget all about these mailings after recycling them or deleting them from my email. But beneath the simple annoyance of these mailings lies a pressing and unchallenged issue.

What do these colleges want to get out of these advertisements? For one reason or another, they want my application. This doesn’t mean that their only objective is to craft a better and more diverse incoming class. The more applications a college receives, the more selective they are considered, and the higher they are ranked. This outcome is no doubt figured into their calculations, if it is not, in some cases, the primary driving force behind their mailings.

And these mailings are expensive. Imagine what it would cost to mail a school magazine, with $2.39 postage, to thousands of students across the country every week. The combined postage charge of everything I have received from various colleges must be above $200. Small postcards and envelopes add up fast, especially considering the colossal pool of potential applicants to which they are being sent. Although vastly aiding the United States Postal Service in its time of need, it is nauseating to imagine the volume of money spent on this endeavor. Why, in an era of record-high student loan debt and unemployment, are colleges not reallocating these ludicrous funds to aid their own students instead of extending their arms far and wide to students they have never met? I understand where the colleges are coming from. The precedent that schools should send mailings to students to “inform” them of what they have to offer has been set, and in this competitive world of colleges vying for the most applications, I only see more mailings to come in the future. It’s strange that the college process is always presented as a competition between students to get into the same colleges. It seems that another battle is also happening, where colleges are competing for the applications of the students.

High school seniors aren’t stupid. Neither are admissions offices. Don’t seniors want to go to school somewhere where they will fit and thrive and not just somewhere that is selective and will look good? Don’t applications offices want a pool of people who truly believe they would thrive in that college’s environment, and not have to deal with the many who thought those guys tossing the frisbee in the picture on the postcard they sent them looked pretty cool? I think it’s time to rethink what applying to college really means, for the folks on both sides, before we hit the impending boom in competition that I see coming. And let’s start by eliminating these silly mailings. Maybe we as seniors would then follow suit and choose intelligently where to apply.

More from the New York Times:

“I wonder if Princeton should be poorer.”

If you’re a high school senior trying to seduce the admissions officer reading your application essay, this may not strike you as the ideal opening line. But Shanti Kumar, a senior at the Bronx High School of Science, went ahead anyway when the university prompted her to react in writing to the idea of “Princeton in the nation’s service and in the service of all nations.”

Back in January, when I asked high school seniors to send in college application essays about money, class, working and the economy, I wasn’t sure what, if anything, would come in over the transom.

But 66 students submitted essays, and with the help of Harry Bauld, the author of “On Writing the College Application Essay,” we’ve selected four to publish in full online and in part in this column. That allowed us to be slightly more selective than Princeton itself was last year.

What these four writers have in common is an appetite for risk. Not only did they talk openly about issues that are emotionally complex and often outright taboo, but they took brave and counterintuitive positions on class, national identity and the application process itself. For anyone looking to inspire their own children or grandchildren who are seeking to go to college in the fall of 2014, these four essays would be a good place to start.

Perhaps the most daring essay of all came from Julian Cranberg, a 17-year-old from Brookline, Mass. One of the first rules of the college admissions process is that you don’t write about the college admissions process.

But Mr. Cranberg thumbed his nose at that convention, taking on the tremendous cost of the piles of mail schools send to potential students, and the waste that results from the effort. He figured that he received at least $200 worth of pitches in the past year or so.

“Why, in an era of record-high student loan debt and unemployment, are colleges not reallocating these ludicrous funds to aid their own students instead of extending their arms far and wide to students they have never met?” he asked in the essay.

Antioch College seemed to think that was a perfectly reasonable question and accepted him, though he will attend Oberlin College instead, to which he did not submit the essay.

“It’s a bold move to critique the very institution he was applying to,” said Mr. Bauld, who also teaches English at Horace Mann School in New York City. “But here’s somebody who knows he can make it work with intelligence and humor.”

Read the entire article here.

Amazon All the Time and Google Toilet Paper

Soon, courtesy of Amazon, Google and other retail giants, and of course lubricated by the likes of the ubiquitous UPS and FedEx trucks, you may be able to dispense with the weekly or even daily trip to the grocery store. Amazon is expanding a trial of its same-day grocery delivery service, and others are following suit in select local and regional tests.

You may recall the spectacular implosion of the online grocery delivery service Webvan — a dot-com darling — that came and went in the blink of an internet eye, finally going bankrupt in 2001. Well, times have changed and now avaricious Amazon and its peers have their eyes trained on your groceries.

So now all you need to do is find a service to deliver your kids to and from school, an employer who will let you work from home, convince your spouse that “staycations” are cool, use Google Street View to become a virtual tourist, and you will never, ever, ever, EVER need to leave your house again!

From Slate:

The other day I ran out of toilet paper. You know how that goes. The last roll in the house sets off a ticking clock; depending on how many people you live with and their TP profligacy, you’re going to need to run to the store within a few hours, a day at the max, or you’re SOL. (Unless you’re a man who lives alone, in which case you can wait till the next equinox.) But it gets worse. My last roll of toilet paper happened to coincide with a shortage of paper towels, a severe run on diapers (you know, for kids!), and the last load of dishwashing soap. It was a perfect storm of household need. And, as usual, I was busy and in no mood to go to the store.

This quotidian catastrophe has a happy ending. In April, I got into the “pilot test” for Google Shopping Express, the search company’s effort to create an e-commerce service that delivers goods within a few hours of your order. The service, which is currently being offered in the San Francisco Bay Area, allows you to shop online at Target, Walgreens, Toys R Us, Office Depot, and several smaller, local stores, like Blue Bottle Coffee. Shopping Express combines most of those stores’ goods into a single interface, which means you can include all sorts of disparate items in the same purchase. Shopping Express also offers the same prices you’d find at the store. After you choose your items, you select a delivery window—something like “Anytime Today” or “Between 2 p.m. and 6 p.m.”—and you’re done. On the fateful day that I’d run out of toilet paper, I placed my order at around noon. Shortly after 4, a green-shirted Google delivery guy strode up to my door with my goods. I was back in business, and I never left the house.

Google is reportedly thinking about charging $60 to $70 a year for the service, making it a competitor to Amazon’s Prime subscription plan. But at this point the company hasn’t finalized pricing, and during the trial period, the whole thing is free. I’ve found it easy to use, cheap, and reliable. Similar to my experience when I first got Amazon Prime, it has transformed how I think about shopping. In fact, in the short time I’ve been using it, Shopping Express has replaced Amazon as my go-to source for many household items. I used to buy toilet paper, paper towels, and diapers through Amazon’s Subscribe & Save plan, which offers deep discounts on bulk goods if you choose a regular delivery schedule. I like that plan when it works, but subscribing to items whose use is unpredictable—like diapers for a newborn—is tricky. I often either run out of my Subscribe & Save items before my next delivery, or I get a new delivery while I still have a big load of the old stuff. Shopping Express is far simpler. You get access to low-priced big-box-store goods without all the hassle of big-box stores—driving, parking, waiting in line. And you get all the items you want immediately.

After using it for a few weeks, it’s hard to escape the notion that a service like Shopping Express represents the future of shopping. (Also the past of shopping — the return of profitless late-1990s services like Kozmo and Webvan, though presumably with some way of making money this time.) It’s not just Google: Yesterday, Reuters reported that Amazon is expanding AmazonFresh, its grocery delivery service, to big cities beyond Seattle, where it has been running for several years. Amazon’s move confirms the theory I floated a year ago, that the e-commerce giant’s long-term goal is to make same-day shipping the norm for most of its customers.

Amazon’s main competitive disadvantage, today, is shipping delays. While shopping online makes sense for many purchases, the vast majority of the world’s retail commerce involves stuff like toilet paper and dishwashing soap—items that people need (or think they need) immediately. That explains why Wal-Mart sells half a trillion dollars worth of goods every year, and Amazon sells only $61 billion. Wal-Mart’s customers return several times a week to buy what they need for dinner, and while they’re there, they sometimes pick up higher-margin stuff, too. By offering same-day delivery on groceries and household items, Amazon and Google are trying to edge in on that market.

As I learned while using Shopping Express, the plan could be a hit. If done well, same-day shipping erases the distinctions between the kinds of goods we buy online and those we buy offline. Today, when you think of something you need, you have to go through a mental checklist: Do I need it now? Can it wait two days? Is it worth driving for? With same-day shipping, you don’t have to do that. All shopping becomes online shopping.

Read the entire article here.

Image: Webvan truck. Courtesy of Wikipedia.

Stale Acronym Soup

If you have ever typed (sorry, tweeted) the acronyms LOL or YOLO then you are guilty as charged of language pollution. Below, the most irritating examples of thumbspeak.

From the Guardian:

Thanks to the on-the-hoof style of chat-rooms and the curtailed nature of the text message and tweet, online abbreviations are now an established part of written English. The question of which is the most irritating, however, is a matter of scholarly debate. Here, by way of opening the discussion, are 10 contenders.

Linguists like to make a distinction between the denotative function of a sign – what it literally means – and the connotative, which is (roughly) what it tells you by implication. The denotative meanings of these abbreviations vary over a wide range. But pretty much all of them connote one thing, which is: “I am a douchebag.”

1) LOL

This is the daddy of them all. In the last decade it has effortlessly overtaken “The cheque’s in the post” and “I love you” as the most-often-told lie in human history. Out loud? Really? And, to complicate things, people are now saying LOL out loud, which is especially banjaxing since you can’t simultaneously say “LOL” and laugh aloud unless you can laugh through your arse. Or say “LOL” through your arse, I suppose, which makes a sort of pun because, linguistically speaking, LOL is now a form of phatic communication. See what I did there? Mega-LOL!

2) YOLO

You Only Live Once. But not for very much longer if you use this abbreviation anywhere near me when I’m holding a claw-hammer. This, as the distinguished internet scholar Matt Muir puts it, is “carpe diem for people with an IQ in double figures”. A friend of mine reports her children using this out loud. This has to end.

3) TBH

To Be Honest. We expect you to be honest, not to make some weary three-fingered gesture of reluctance at having to pony up an uncomfortable truth for an audience who probably can’t really take it. It’s out of the same drawer as “frankly” and “with respect”, and it should be returned to that drawer forthwith.

4) IMHO

In My Humble Opinion. The H in this acronym is always redundant, and the M is usually redundant too: it’s generally an opinion taken off-the-peg from people you follow on Twitter and by whom you hope to be retweeted.

5) JFGI

Just Fucking Google It. Well, charming. Glad I came to you for help. A wittier and more passive-aggressive version of this rude put-down is the website www.lmgtfy.com, which allows you to send your interlocutor a custom-made link saying “Let Me Google That For You” and doing so. My friend Stefan Magdalinski once sent me there, and I can say from first-hand experience that he’s a complete asshole.

6) tl;dr

It stands for “too long; didn’t read”. This abbreviation’s only redeeming feature is that it contains that murmuring under-butler of punctuation marks, the semicolon. On the other hand, it announces that the user is taking time out of his or her life to tell the world not that he disagrees with something, but that he’s ignorant of it. In your face, people who know stuff! In an ideal world there would be a one-character riposte that would convey that you’d stopped reading halfway through your interlocutor’s tedious five-character put-down.

Read the entire article here.

Great Literature and Human Progress

Professor of Philosophy Gregory Currie tackles a thorny issue in his latest article. The question he seeks to answer is, “does great literature make us better?” It’s highly likely that a poll of most nations would show that the majority of people believe literature does in fact propel us in a forward direction, intellectually, morally, emotionally and culturally. It seems like a no-brainer. But where is the hard evidence?

From the New York Times:

You agree with me, I expect, that exposure to challenging works of literary fiction is good for us. That’s one reason we deplore the dumbing-down of the school curriculum and the rise of the Internet and its hyperlink culture. Perhaps we don’t all read very much that we would count as great literature, but we’re apt to feel guilty about not doing so, seeing it as one of the ways we fall short of excellence. Wouldn’t reading about Anna Karenina, the good folk of Middlemarch and Marcel and his friends expand our imaginations and refine our moral and social sensibilities?

If someone now asks you for evidence for this view, I expect you will have one or both of the following reactions. First, why would anyone need evidence for something so obviously right? Second, what kind of evidence would he want? Answering the first question is easy: if there’s no evidence – even indirect evidence – for the civilizing value of literary fiction, we ought not to assume that it does civilize. Perhaps you think there are questions we can sensibly settle in ways other than by appeal to evidence: by faith, for instance. But even if there are such questions, surely no one thinks this is one of them.

What sort of evidence could we present? Well, we can point to specific examples of our fellows who have become more caring, wiser people through encounters with literature. Indeed, we are such people ourselves, aren’t we?

I hope no one is going to push this line very hard. Everything we know about our understanding of ourselves suggests that we are not very good at knowing how we got to be the kind of people we are. In fact we don’t really know, very often, what sorts of people we are. We regularly attribute our own failures to circumstance and the failures of others to bad character. But we can’t all be exceptions to the rule (supposing it is a rule) that people do bad things because they are bad people.

We are poor at knowing why we make the choices we do, and we fail to recognize the tiny changes in circumstances that can shift us from one choice to another. When it comes to other people, can you be confident that your intelligent, socially attuned and generous friend who reads Proust got that way partly because of the reading? Might it not be the other way around: that bright, socially competent and empathic people are more likely than others to find pleasure in the complex representations of human interaction we find in literature?

There’s an argument we often hear on the other side, illustrated earlier this year by a piece on The New Yorker’s Web site. Reminding us of all those cultured Nazis, Teju Cole notes the willingness of a president who reads novels and poetry to sign weekly drone strike permissions. What, he asks, became of “literature’s vaunted power to inspire empathy?” I find this a hard argument to like, and not merely because I am not yet persuaded by the moral case against drones. No one should be claiming that exposure to literature protects one against moral temptation absolutely, or that it can reform the truly evil among us. We measure the effectiveness of drugs and other medical interventions by thin margins of success that would not be visible without sophisticated statistical techniques; why assume literature’s effectiveness should be any different?

We need to go beyond the appeal to common experience and into the territory of psychological research, which is sophisticated enough these days to make a start in testing our proposition.

Psychologists have started to do some work in this area, and we have learned a few things so far. We know that if you get people to read a short, lowering story about a child murder they will afterward report feeling worse about the world than they otherwise would. Such changes, which are likely to be very short-term, show that fictions press our buttons; they don’t show that they refine us emotionally or in any other way.

We have learned that people are apt to pick up (purportedly) factual information stated or implied as part of a fictional story’s background. Oddly, people are more prone to do that when the story is set away from home: in a study conducted by Deborah Prentice and colleagues and published in 1997, Princeton undergraduates retained more from a story when it was set at Yale than when it was set on their own campus (don’t worry, Princetonians: Yalies are just as bad when you run the test the other way around). Television, with its serial programming, is good for certain kinds of learning; according to a study from 2001 undertaken for the Kaiser Foundation, people who regularly watched the show “E.R.” picked up a good bit of medical information on which they sometimes acted. What we don’t have is compelling evidence that suggests that people are morally or socially better for reading Tolstoy.

Not nearly enough research has been conducted; nor, I think, is the relevant psychological evidence just around the corner. Most of the studies undertaken so far don’t draw on serious literature but on short snatches of fiction devised especially for experimental purposes. Very few of them address questions about the effects of literature on moral and social development, far too few for us to conclude that literature either does or doesn’t have positive moral effects.

There is a puzzling mismatch between the strength of opinion on this topic and the state of the evidence. In fact I suspect it is worse than that; advocates of the view that literature educates and civilizes don’t overrate the evidence — they don’t even think that evidence comes into it. While the value of literature ought not to be a matter of faith, it looks as if, for many of us, that is exactly what it is.

Read the entire article here.

Image: The Odyssey, Homer. Book cover. Courtesy of Goodreads.com

Worst Job in the World

Would you rather be a human automaton inside a Chinese factory making products for your peers, or a banquet attendant in ancient Rome? Thanks to Lapham’s Quarterly for this disturbing infographic, which shows that, for the average worker, times may not have changed as much over the last 2,000 years as we would like to believe.

Visit the original infographic here.

Infographic courtesy of Lapham’s Quarterly.

Self-Assured Destruction (SAD)

The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.

Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation and unceasing human stupidity, we have reached an era that we have dubbed SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, as a sign of our so-called progress — in the era of SAD — it takes only one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.

From TomDispatch:


What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.

For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.

And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.

The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.

There have been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and first nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.

In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”

Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.

Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.

So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.

Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.

And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.

So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.

The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.

We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.

What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.

So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.

Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.

And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.

Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.

Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.

Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.

At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.

Read the entire article here.

Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.

Law, Common Sense and Your DNA

Paradoxically, the law and common sense often seem to be at odds. Justice may still be blind, at least in most open democracies, but there is no question as to the stupidity of much of our law.

Some examples: in Missouri it’s illegal to drive with an uncaged bear in the car; in Maine, it’s illegal to keep Christmas decorations up after January 14th; in New Jersey, it’s illegal to wear a bulletproof vest while committing murder; in Connecticut, a pickle is not an official, legal pickle unless it can bounce; in Louisiana, you can be fined $500 for having a pizza delivered to a friend without their knowledge.

So, today we celebrate a victory for common sense and justice over thoroughly ill-conceived and badly written law — the U.S. Supreme Court unanimously ruled that corporations cannot patent human genes.

Unfortunately, though, given the extremely high financial stakes, this is not likely to be the last we hear of big business seeking to patent or control the building blocks of life.

From the WSJ:

The Supreme Court unanimously ruled Thursday that human genes isolated from the body can’t be patented, a victory for doctors and patients who argued that such patents interfere with scientific research and the practice of medicine.

The court was handing down one of its most significant rulings in the age of molecular medicine, deciding who may own the fundamental building blocks of life.

The case involved Myriad Genetics Inc., which holds patents related to two genes, known as BRCA1 and BRCA2, that can indicate whether a woman has a heightened risk of developing breast cancer or ovarian cancer.

Justice Clarence Thomas, writing for the court, said the genes Myriad isolated are products of nature, which aren’t eligible for patents.

“Myriad did not create anything,” Justice Thomas wrote in an 18-page opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”

Even if a discovery is brilliant or groundbreaking, that doesn’t necessarily mean it’s patentable, the court said.

However, the ruling wasn’t a complete loss for Myriad. The court said that DNA molecules synthesized in a laboratory were eligible for patent protection. Myriad’s shares soared after the court’s ruling.

The court adopted the position advanced by the Obama administration, which argued that isolated forms of naturally occurring DNA weren’t patentable, but artificial DNA molecules were.

Myriad also has patent claims on artificial genes, known as cDNA.

The high court’s ruling was a win for a coalition of cancer patients, medical groups and geneticists who filed a lawsuit in 2009 challenging Myriad’s patents. Thanks to those patents, the Salt Lake City company has been the exclusive U.S. commercial provider of genetic tests for breast cancer and ovarian cancer.

“Today, the court struck down a major barrier to patient care and medical innovation,” said Sandra Park of the American Civil Liberties Union, which represented the groups challenging the patents. “Because of this ruling, patients will have greater access to genetic testing and scientists can engage in research on these genes without fear of being sued.”

Myriad didn’t immediately respond to a request for comment.

The challengers argued the patents have allowed Myriad to dictate the type and terms of genetic screening available for the diseases, while also dissuading research by other laboratories.

Read the entire article here.

Image: Gene showing the coding region in a segment of eukaryotic DNA. Courtesy of Wikipedia.

Innocent Until Proven Guilty, But Always Under Suspicion

It is strange to see the reaction to a remarkable disclosure such as that by the leaker/whistleblower Edward Snowden about the National Security Agency (NSA) peering into all our daily, digital lives. One strange reaction comes from the political left: the left desires a broad and activist government, ready to protect us all, yet decries the NSA’s snooping. Another odd reaction comes from the political right: the right wants government out of people’s lives, yet embraces the idea that the NSA should be looking for virtual skeletons inside people’s digital closets.

But let’s humanize this for a second. Somewhere inside the bowels of the NSA there is (or was) a person, or a small group of people, who actively determines what to look for in your digital communications trail. This person sets some parameters in a computer program and the technology does the rest, sifting through vast mountains of data looking for matches and patterns. Today that filter may be set to match certain permutations of data: zone of originating call, region of the recipient, keywords or code words embedded in the data traffic. Tomorrow, however, a rather zealous NSA employee may well set the filter to look for different items: keywords highlighting a particular political affiliation, preference for certain TV shows or bars, likes and dislikes of certain foods or celebrities.

We have begun the slide down a very dangerous, slippery slope that imperils our core civil liberties. The First Amendment protects our speech and assembly, but now we know that someone or some group may be evaluating the quality of that speech and determining a course of action if they disagree, or if they find us assembling with others with whom they disagree. The Fourth Amendment prohibits unreasonable search — well, it looks like this one is falling by the wayside in light of the NSA program. We presume the secret FISA court, overseeing the secret program, determines in secret what may or may not be deemed “reasonable”.

Regardless of Edward Snowden’s motivations (and his girlfriend’s reaction), this event raises extremely serious issues that citizens must contemplate and openly discuss. It raises questions about the exercise of power, about government overreach and about the appropriate balance between security and privacy. It also raises questions about due process and about the long-held right that presumes us innocent first and above all else. It raises a fundamental question about U.S. law and the Constitution and to whom it does and does not apply.

The day before the PRISM program exploded in the national consciousness, only a handful of people — in secret — were determining answers to these constitutional and societal questions. Now, thanks to Mr. Snowden, we can all participate in that debate, and rightly so — while being watched, of course.

From Slate:

Every April, I try to wade through mounds of paperwork to file my taxes. Like most Americans, I’m trying to follow the law and pay all of the taxes that I owe without getting screwed in the process. I try to make sure that every donation I made is backed by proof, every deduction is backed by logic and documentation that I’ll be able to make sense of seven years later. Because, like many Americans, I completely and utterly dread the idea of being audited. Not because I’ve done anything wrong, but the exact opposite. I know that I’m filing my taxes to the best of my ability and yet, I also know that if I became a target of interest from the IRS, they’d inevitably find some checkbox I forgot to check or some subtle miscalculation that I didn’t see. And so what makes an audit intimidating and scary is not because I have something to hide but because proving oneself to be innocent takes time, money, effort, and emotional grit.

Sadly, I’m getting to experience this right now as Massachusetts refuses to believe that I moved to New York mid-last-year. It’s mind-blowing how hard it is to summon up the paperwork that “proves” to them that I’m telling the truth. When it was discovered that Verizon (and presumably other carriers) was giving metadata to government officials, my first thought was: Wouldn’t it be nice if the government would use that metadata to actually confirm that I was in NYC, not Massachusetts? But that’s the funny thing about how data is used by our current government. It’s used to create suspicion, not to confirm innocence.

The frameworks of “innocent until proven guilty” and “guilty beyond a reasonable doubt” are really, really important to civil liberties, even if they mean that some criminals get away. These frameworks put the burden on the powerful entity to prove that someone has done something wrong. Because it’s actually pretty easy to generate suspicion, even when someone is wholly innocent. And still, even with this protection, innocent people are sentenced to jail and even given the death penalty. Because if someone has a vested interest in you being guilty, it’s not impossible to paint that portrait, especially if you have enough data.

It’s disturbing to me how often I watch as someone’s likeness is constructed in ways that contort the image of who they are. This doesn’t require a high-stakes political issue. This is playground stuff. In the world of bullying, I’m astonished at how often schools misinterpret situations and activities to construct narratives of perpetrators and victims. Teens get really frustrated when they’re positioned as perpetrators, especially when they feel as though they’ve done nothing wrong. Once the stakes get higher, all hell breaks loose. In Sticks and Stones, Slate senior editor Emily Bazelon details how media and legal involvement in bullying cases means that they often spin out of control, such as they did in South Hadley. I’m still bothered by the conviction of Dharun Ravi in the highly publicized death of Tyler Clementi. What happens when people are tarred and feathered as symbols for being imperfect?

Of course, it’s not just one’s own actions that can be used against one’s likeness. Guilt-through-association is a popular American pastime. Remember how the media used Billy Carter to embarrass Jimmy Carter? Of course, it doesn’t take the media or require an election cycle for these connections to be made. Throughout school, my little brother had to bear the brunt of teachers who despised me because I was a rather rebellious student. So when the Boston Marathon bombing occurred, it didn’t surprise me that the media went hog wild looking for any connection to the suspects. Over and over again, I watched as the media took friendships and song lyrics out of context to try to cast the suspects as devils. By all accounts, it looks as though the brothers are guilty of what they are accused of, but that doesn’t make their friends and other siblings evil or justify the media’s decision to portray the whole lot in such a negative light.

So where does this get us? People often feel immune from state surveillance because they’ve done nothing wrong. This rhetoric is perpetuated on American TV. And yet the same media who tells them they have nothing to fear will turn on them if they happen to be in close contact with someone who is of interest to—or if they themselves are the subject of—state interest. And it’s not just about now, but it’s about always.

And here’s where the implications are particularly devastating when we think about how inequality, racism, and religious intolerance play out. As a society, we generate suspicion of others who aren’t like us, particularly when we believe that we’re always under threat from some outside force. And so the more that we live in doubt of other people’s innocence, the more that we will self-segregate. And if we’re likely to believe that people who aren’t like us are inherently suspect, we won’t try to bridge those gaps. This creates societal ruptures and undermines any ability to create a meaningful republic. And it reinforces any desire to spy on the “other” in the hopes of finding something that justifies such an approach. But, like I said, it doesn’t take much to make someone appear suspect.

Read the entire article here.

Image: U.S. Constitution. Courtesy of Wikipedia.

Living Long and Prospering on Ikaria

It’s safe to suggest that most of us above a certain age — let’s say 30 — wish to stay young. It is also safe to suggest, in the absence of a way to grant that first wish, that many of us wish to age gracefully and happily. Yet most of us, especially in the West, age in a less dignified manner, surrounded by colorful medicines, lengthy tubes and unpronounceable procedures. We are collectively living longer, but the quality of those extra years leaves much to be desired.

In a quest to understand the process of aging more thoroughly, researchers regularly descend on areas around the world known to have higher-than-average populations of healthy older people. These have become known as “Blue Zones”. One such place is a small, idyllic (there’s a clue right there) Greek island called Ikaria.

From the Guardian:

Gregoris Tsahas has smoked a packet of cigarettes every day for 70 years. High up in the hills of Ikaria, in his favourite cafe, he draws on what must be around his half-millionth fag. I tell him smoking is bad for the health and he gives me an indulgent smile, which suggests he’s heard the line before. He’s 100 years old and, aside from appendicitis, has never known a day of illness in his life.

Tsahas has short-cropped white hair, a robustly handsome face and a bone-crushing handshake. He says he drinks two glasses of red wine a day, but on closer interrogation he concedes that, like many other drinkers, he has underestimated his consumption by a couple of glasses.

The secret of a good marriage, he says, is never to return drunk to your wife. He’s been married for 60 years. “I’d like another wife,” he says. “Ideally one about 55.”

Tsahas is known at the cafe as a bit of a gossip and a joker. He goes there twice a day. It’s a 1km walk from his house over uneven, sloping terrain. That’s four hilly kilometres a day. Not many people half his age manage that far in Britain.

In Ikaria, a Greek island in the far east of the Mediterranean, about 30 miles from the Turkish coast, characters such as Gregoris Tsahas are not exceptional. With its beautiful coves, rocky cliffs, steep valleys and broken canopy of scrub and olive groves, Ikaria looks similar to any number of other Greek islands. But there is one vital difference: people here live much longer than the population on other islands and on the mainland. In fact, people here live on average 10 years longer than those in the rest of Europe and America – around one in three Ikarians lives into their 90s. Not only that, but they also have much lower rates of cancer and heart disease, suffer significantly less depression and dementia, maintain a sex life into old age and remain physically active deep into their 90s. What is the secret of Ikaria? What do its inhabitants know that the rest of us don’t?

The island is named after Icarus, the young man in Greek mythology who flew too close to the sun and plunged into the sea, according to legend, close to Ikaria. Thoughts of plunging into the sea are very much in my mind as the propeller plane from Athens comes in to land. There is a fierce wind blowing – the island is renowned for its wind – and the aircraft appears to stall as it turns to make its final descent, tipping this way and that until, at the last moment, the pilot takes off upwards and returns to Athens. Nor are there any ferries, owing to a strike. “They’re always on strike,” an Athenian back at the airport tells me.

Stranded in Athens for the night, I discover that a fellow thwarted passenger is Dan Buettner, author of a book called The Blue Zones, which details the five small areas in the world where the population outlive the American and western European average by around a decade: Okinawa in Japan, Sardinia, the Nicoya peninsula in Costa Rica, Loma Linda in California and Ikaria.

Tall and athletic, 52-year-old Buettner, who used to be a long-distance cyclist, looks a picture of well-preserved youth. He is a fellow with National Geographic magazine and became interested in longevity while researching Okinawa’s aged population. He tells me there are several other passengers on the plane who are interested in Ikaria’s exceptional demographics. “It would have been ironic, don’t you think,” he notes drily, “if a group of people looking for the secret of longevity crashed into the sea and died.”

Chatting to locals on the plane the following day, I learn that several have relations who are centenarians. One woman says her aunt is 111. The problem for demographers with such claims is that they are often very difficult to stand up. Going back to Methuselah, history is studded with exaggerations of age. In the last century, longevity became yet another battleground in the cold war. The Soviet authorities let it be known that people in the Caucasus were living deep into their hundreds. But subsequent studies have shown these claims lacked evidential foundation.

Since then, various societies and populations have reported advanced ageing, but few are able to supply convincing proof. “I don’t believe Korea or China,” Buettner says. “I don’t believe the Hunza Valley in Pakistan. None of those places has good birth certificates.”

However, Ikaria does. It has also been the subject of a number of scientific studies. Aside from the demographic surveys that Buettner helped organise, there was also the University of Athens’ Ikaria Study. One of its members, Dr Christina Chrysohoou, a cardiologist at the university’s medical school, found that the Ikarian diet featured a lot of beans and not much meat or refined sugar. The locals also feast on locally grown and wild greens, some of which contain 10 times more antioxidants than are found in red wine, as well as potatoes and goat’s milk.

Chrysohoou thinks the food is distinct from that eaten on other Greek islands with lower life expectancy. “Ikarians’ diet may have some differences from other islands’ diets,” she says. “The Ikarians drink a lot of herb tea and small quantities of coffee; daily calorie consumption is not high. Ikaria is still an isolated island, without tourists, which means that, especially in the villages in the north, where the highest longevity rates have been recorded, life is largely unaffected by the westernised way of living.”

But she also refers to research that suggests the Ikarian habit of taking afternoon naps may help extend life. One extensive study of Greek adults showed that regular napping reduced the risk of heart disease by almost 40%. What’s more, Chrysohoou’s preliminary studies revealed that 80% of Ikarian males between the ages of 65 and 100 were still having sex. And, of those, a quarter did so with “good duration” and “achievement”. “We found that most males between 65 and 88 reported sexual activity, but after the age of 90, very few continued to have sex.”

Read the entire article here.

Image: Agios Giorgis Beach, Ikaria. Courtesy of Island-Ikaria travel guide.

Iain (M.) Banks

On June 9, 2013 we lost Iain Banks to cancer. He was a passionate human(ist) and a literary great.

Luckily he left us with a startling collection of resonant and complex works, most notably his series of Culture novels, which imagined a distant future that may one day claim him as one of its founders. Mr. Banks, you will be greatly missed.

From the Guardian:

The writer Iain Banks, who has died aged 59, had already prepared his many admirers for his death. On 3 April he announced on his website that he had inoperable gall bladder cancer, giving him, at most, a year to live. The announcement was typically candid and rueful. It was also characteristic in another way: Banks had a large web-attentive readership who liked to follow his latest reflections as well as his writings. Particularly in his later years, he frequently projected his thoughts via the internet. There can have been few novelists of recent years who were more aware of what their readers thought of their books; there is a frequent sense in his novels of an author teasing, testing and replying to a readership with which he was pretty familiar.

His first published novel, The Wasp Factory, appeared in 1984, when he was 30 years old, though it had been rejected by six publishers before being accepted by Macmillan. It was an immediate succès de scandale. The narrator is the 16-year-old Frank Cauldhame, who lives with his taciturn father in an isolated house on the north-east coast of Scotland. Frank lives in a world of private rituals, some of which involve torturing animals, and has committed several murders. The explanation of his isolation and his obsessiveness is shockingly revealed in one of the culminating plot twists for which Banks was to become renowned.

It was followed by Walking on Glass (1985), composed of three separate narratives whose connections are deliberately made obscure until near the end of the novel. One of these seems to be a science fiction narrative and points the way to Banks’s strong interest in this genre. Equally, multiple narration would continue to feature in his work.

The next year’s novel, The Bridge, featured three separate stories told in different styles: one a realist narrative about Alex, a manager in an engineering company, who crashes his car on the Forth road bridge; another the story of John Orr, an amnesiac living on a city-sized version of the bridge; and a third, the first-person narrative of the Barbarian, retelling myths and legends in colloquial Scots. In combining fantasy and allegory with minutely located naturalistic narrative, it was clearly influenced by Alasdair Gray’s Lanark (1981). It remained the author’s own avowed favourite.

His first science fiction novel, Consider Phlebas, was published in 1987, though he had drafted it soon after completing The Wasp Factory. In it he created The Culture, a galaxy-hopping society run by powerful but benevolent machines and possessed of what its inventor called “well-armed liberal niceness”. It would feature in most of his subsequent sci-fi novels. Its enemies are the Idirans, a religious, humanoid race who resent the benign powers of the Culture. In this conflict, good and ill are not simply apportioned. Banks provided a heady mix of, on the one hand, action and intrigue on a cosmic scale (his books were often called “space operas”), and, on the other, ruminations on the clash of ideas and ideologies.

For the rest of his career literary novels would alternate with works of science fiction, the latter appearing under the name “Iain M Banks” (the “M” standing for Menzies). Banks sometimes spoke of his science fiction books as a writerly vacation from the demands of literary fiction, where he could “pull out the stops”, as he himself put it. Player of Games (1988) was followed by Use of Weapons (1990). The science fiction employed some of the narrative trickery that characterised his literary fiction: Use of Weapons, for instance, featured two interleaved narratives, one of which moved forward in time and the other backwards. Their connectedness only became clear with a final, somewhat outrageous, twist of the narrative. His many fans came to relish these tricks.

Read the entire article here.

Image: Iain Banks. Courtesy of BBC.

MondayMap: The Double Edge of Climate Change

So the changing global climate will imperil our coasts, flood low-lying lands, fuel more droughts, increase weather extremes, and generally make the planet more toasty. But, a new study — for the first time — links increasing levels of CO2 to an increase in global vegetation. Perhaps this portends our eventual fate — ceding the Earth back to the plants — unless humans make some drastic behavioral changes.

From the New Scientist:

The planet is getting lusher, and we are responsible. Carbon dioxide generated by human activity is stimulating photosynthesis and causing a beneficial greening of the Earth’s surface.

For the first time, researchers claim to have shown that the increase in plant cover is due to this “CO2 fertilisation effect” rather than other causes. However, it remains unclear whether the effect can counter any negative consequences of global warming, such as the spread of deserts.

Recent satellite studies have shown that the planet is harbouring more vegetation overall, but pinning down the cause has been difficult. Factors such as higher temperatures, extra rainfall, and an increase in atmospheric CO2 – which helps plants use water more efficiently – could all be boosting vegetation.

To home in on the effect of CO2, Randall Donohue of Australia’s national research institute, the CSIRO in Canberra, monitored vegetation at the edges of deserts in Australia, southern Africa, the US Southwest, North Africa, the Middle East and central Asia. These are regions where there is ample warmth and sunlight, but only just enough rainfall for vegetation to grow, so any change in plant cover must be the result of a change in rainfall patterns or CO2 levels, or both.

If CO2 levels were constant, then the amount of vegetation per unit of rainfall ought to be constant, too. However, the team found that this figure rose by 11 per cent in these areas between 1982 and 2010, mirroring the rise in CO2 (Geophysical Research Letters, doi.org/mqx). Donohue says this lends “strong support” to the idea that CO2 fertilisation drove the greening.

Climate change studies have predicted that many dry areas will get drier and that some deserts will expand. Donohue’s findings make this less certain.

However, the greening effect may not apply to the world’s driest regions. Beth Newingham of the University of Idaho, Moscow, recently published the result of a 10-year experiment involving a greenhouse set up in the Mojave desert of Nevada. She found “no sustained increase in biomass” when extra CO2 was pumped into the greenhouse. “You cannot assume that all these deserts respond the same,” she says. “Enough water needs to be present for the plants to respond at all.”

The extra plant growth could have knock-on effects on climate, Donohue says, by increasing rainfall, affecting river flows and changing the likelihood of wildfires. It will also absorb more CO2 from the air, potentially damping down global warming but also limiting the CO2 fertilisation effect itself.

Read the entire article here.

Image: Global vegetation mapped: Normalized Difference Vegetation Index (NDVI) from Nov. 1, 2007, to Dec. 1, 2007, during autumn in the Northern Hemisphere. This monthly average is based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The greenness values depict vegetation density; higher values (dark greens) show land areas with plenty of leafy green vegetation, such as the Amazon Rainforest. Lower values (beige to white) show areas with little or no vegetation, including sand seas and Arctic areas. Areas with moderate amounts of vegetation are pale green. Land areas with no data appear gray, and water appears blue. Courtesy of NASA.

Cos Things Break, Don’t They

Most things, natural or manufactured, break after a while. And, most photographers spend an inordinate amount of time ensuring that their subject — usually an object — is shown whole and in the best possible light, literally and metaphorically. However, for one enterprising photographer it’s all about things in their broken form, albeit displayed exquisitely in a collage of their constituent pieces.

From the Guardian:

Canadian photographer Todd McLellan makes visible the inner workings of everyday products by dismantling, carefully arranging the components and photographing them. His book, Things Come Apart, presents a unique view of items such as chainsaws and iPods, transforming ordinary objects into works of art.

See the entire gallery here.

Image: Raleigh bicycle from the 80s. Number of parts: 893. Courtesy of Todd McLellan/Thames & Hudson / Guardian.

PRISM

From the news reports first aired a couple of days ago and posted here, we now know the U.S. National Security Agency (NSA) has collected and is collecting vast amounts of data related to our phone calls. But, it seems that this is only the very tip of a very large, nasty iceberg. Our government is also sifting through our online communications — email, online chat, photos, videos, social networking data.

From the Washington Post:

Through a top-secret program authorized by federal judges working under the Foreign Intelligence Surveillance Act (FISA), the U.S. intelligence community can gain access to the servers of nine Internet companies for a wide range of digital data. Documents describing the previously undisclosed program, obtained by The Washington Post, show the breadth of U.S. electronic surveillance capabilities in the wake of a widely publicized controversy over warrantless wiretapping of U.S. domestic telephone communications in 2005.

Read the entire article here.

Image: From the PRISM Powerpoint presentation – The PRISM program collects a wide range of data from the nine companies, although the details vary by provider. Courtesy of Washington Post.

The Death of Photojournalism

Really, it was only a matter of time. First, digital cameras killed off their film-dependent predecessors and sounded the death knell for Kodak. Now social media and the #hashtag are doing the same to the professional photographer.

Camera-enabled smartphones are ubiquitous, making everyone a photographer. And, with almost everyone jacked into at least one social network or photo-sharing site it takes only one point and a couple of clicks to get a fresh image posted to the internet. Ironically, the print media, despite being in the business of news, have failed to recognize this news until recently.

So, now with an eye to cutting costs, and making images more immediate and compelling — via citizens — news organizations are re-tooling their staffs in four ways: first, fire the photographers; second, re-train reporters to take photographs with their smartphones; third, video, video, video; fourth, rely on the ever willing public to snap images, post, tweet, #hashtag and like — for free of course.

From Cult of Mac:

The Chicago Sun-Times, one of the remnants of traditional paper journalism, has let go its entire photography staff of 28 people. Now its reporters will start receiving “iPhone photography basics” training to start producing their own photos and videos.

The move is part of a growing trend towards publications using the iPhone as a replacement for fancy, expensive DSLRs. It’s also a sign of how traditional journalism is being changed by technology like the iPhone and the advent of digital publishing.

When Hurricane Sandy hit New York City, reporters for Time used the iPhone to take photos in the field and upload them to the publication’s Instagram account. Even the cover photo used on the corresponding issue of Time was taken on an iPhone.

Sun-Times photographer Alex Garcia argues that the “idea that freelancers and reporters could replace a photo staff with iPhones is idiotic at worst, and hopelessly uninformed at best.” Garcia believes that reporters are incapable of writing articles and also producing quality media, but he’s fighting an uphill battle.

Big newspaper companies aren’t making anywhere near the amount of money they used to due to the popularity of online publications and blogs. Free news is a click away nowadays. Getting rid of professional photographers and equipping reporters with iPhones is another way to cut costs.

The iPhone has a better camera than most digital point-and-shoots, and more importantly, it is in everyone’s pocket. It’s a great camera that’s always with you, and that makes it an invaluable tool for any journalist. There will always be a need for videographers and pro photographers that can make studio-level work, but the iPhone is proving to be an invaluable tool for reporters in the modern world.

Read the entire article here.

Image: Kodak 1949-56 Retina IIa 35mm Camera. Courtesy of Wikipedia / Kodak.

Surveillance of the People for the People

The U.S. government is spying on your phone calls with the hushed assistance of companies like Verizon. While the National Security Agency (NSA) may not be listening to your actual conversations (yet), its agents are actively gathering data about your calls: who you call, from where you call, when you call, how long the call lasts.

Here’s the top secret court order delineating the government’s unfettered powers of domestic surveillance.

The price of freedom keeps rising, and with broad clandestine activities like this underway — with no specific target — our precious freedoms continue to erode. Surely this must delight our foes, who will relish the self-inflicted curtailment of civil liberties — its societal consequences reach far beyond those of any improvised explosive device (IED), however heinous and destructive.

From the Guardian:

The National Security Agency is currently collecting the telephone records of millions of US customers of Verizon, one of America’s largest telecoms providers, under a top secret court order issued in April.

The order, a copy of which has been obtained by the Guardian, requires Verizon on an “ongoing, daily basis” to give the NSA information on all telephone calls in its systems, both within the US and between the US and other countries.

The document shows for the first time that under the Obama administration the communication records of millions of US citizens are being collected indiscriminately and in bulk – regardless of whether they are suspected of any wrongdoing.

The secret Foreign Intelligence Surveillance Court (Fisa) granted the order to the FBI on April 25, giving the government unlimited authority to obtain the data for a specified three-month period ending on July 19.

Under the terms of the blanket order, the numbers of both parties on a call are handed over, as is location data, call duration, unique identifiers, and the time and duration of all calls. The contents of the conversation itself are not covered.

The disclosure is likely to reignite longstanding debates in the US over the proper extent of the government’s domestic spying powers.

Under the Bush administration, officials in security agencies had disclosed to reporters the large-scale collection of call records data by the NSA, but this is the first time significant and top-secret documents have revealed the continuation of the practice on a massive scale under President Obama.

The unlimited nature of the records being handed over to the NSA is extremely unusual. Fisa court orders typically direct the production of records pertaining to a specific named target who is suspected of being an agent of a terrorist group or foreign state, or a finite set of individually named targets.

The Guardian approached the National Security Agency, the White House and the Department of Justice for comment in advance of publication on Wednesday. All declined. The agencies were also offered the opportunity to raise specific security concerns regarding the publication of the court order.

The court order expressly bars Verizon from disclosing to the public either the existence of the FBI’s request for its customers’ records, or the court order itself.

“We decline comment,” said Ed McFadden, a Washington-based Verizon spokesman.

The order, signed by Judge Roger Vinson, compels Verizon to produce to the NSA electronic copies of “all call detail records or ‘telephony metadata’ created by Verizon for communications between the United States and abroad” or “wholly within the United States, including local telephone calls”.

The order directs Verizon to “continue production on an ongoing daily basis thereafter for the duration of this order”. It specifies that the records to be produced include “session identifying information”, such as “originating and terminating number”, the duration of each call, telephone calling card numbers, trunk identifiers, International Mobile Subscriber Identity (IMSI) number, and “comprehensive communication routing information”.

The information is classed as “metadata”, or transactional information, rather than communications, and so does not require individual warrants to access. The document also specifies that such “metadata” is not limited to the aforementioned items. A 2005 court ruling judged that cell site location data – the nearest cell tower a phone was connected to – was also transactional data, and so could potentially fall under the scope of the order.

While the order itself does not include either the contents of messages or the personal information of the subscriber of any particular cell number, its collection would allow the NSA to build easily a comprehensive picture of who any individual contacted, how and when, and possibly from where, retrospectively.

It is not known whether Verizon is the only cell-phone provider to be targeted with such an order, although previous reporting has suggested the NSA has collected cell records from all major mobile networks. It is also unclear from the leaked document whether the three-month order was a one-off, or the latest in a series of similar orders.

Read the entire article here.

Beware! RoboBee May Be Watching You

History will probably show that humans caused the mass disappearance and death of honey bees around the world.

So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.

From Scientific American:

We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.

The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.

When Wood and his colleagues first set out to create a robotic fly, there were no off-the-shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high power density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.

“Manufacturing took us quite awhile. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: “If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”

The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.

Read the entire article here.

Image courtesy of Micro Air Vehicles Project, Harvard.

Leadership and the Tyranny of Big Data

“There are three kinds of lies: lies, damned lies, and statistics”, goes the adage popularized by author Mark Twain.

Most people take for granted that numbers can be persuasive — just take a look at your bank balance. Also, most accept the notion that data can be used, misused, misinterpreted, re-interpreted and distorted to support or counter almost any argument. Just listen to a politician quote polling numbers and then hear an opposing politician make a contrary argument using the very same statistics. Or, better still, familiarize yourself with the pseudo-science of economics.

Authors Kenneth Cukier (data editor for The Economist) and Viktor Mayer-Schönberger (professor of Internet governance) examine this phenomenon in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think. They eloquently present the example of Robert McNamara, U.S. defense secretary during the Vietnam war, who (in)famously used his detailed spreadsheets — including the daily body count — to manage and measure progress. After the war, many U.S. generals described this over-reliance on numbers as a misguided tyranny that led them to make ill-informed decisions — based solely on the figures — and to fudge those figures.

This classic example leads them to a timely and important caution: as the range and scale of big data becomes ever greater, and while it may offer us great benefits, it can and will be used to mislead.

From Technology Review:

Big data is poised to transform society, from how we diagnose illness to how we educate children, even making it possible for a car to drive itself. Information is emerging as a new economic input, a vital resource. Companies, governments, and even individuals will be measuring and optimizing everything possible.

But there is a dark side. Big data erodes privacy. And when it is used to make predictions about what we are likely to do but haven’t yet done, it threatens freedom as well. Yet big data also exacerbates a very old problem: relying on the numbers when they are far more fallible than we think. Nothing underscores the consequences of data analysis gone awry more than the story of Robert McNamara.

McNamara was a numbers guy. Appointed the U.S. secretary of defense when tensions in Vietnam rose in the early 1960s, he insisted on getting data on everything he could. Only by applying statistical rigor, he believed, could decision makers understand a complex situation and make the right choices. The world in his view was a mass of unruly information that—if delineated, denoted, demarcated, and quantified—could be tamed by human hand and fall under human will. McNamara sought Truth, and that Truth could be found in data. Among the numbers that came back to him was the “body count.”

McNamara developed his love of numbers as a student at Harvard Business School and then as its youngest assistant professor at age 24. He applied this rigor during the Second World War as part of an elite Pentagon team called Statistical Control, which brought data-driven decision making to one of the world’s largest bureaucracies. Before this, the military was blind. It didn’t know, for instance, the type, quantity, or location of spare airplane parts. Data came to the rescue. Just making armament procurement more efficient saved $3.6 billion in 1943. Modern war demanded the efficient allocation of resources; the team’s work was a stunning success.

At war’s end, the members of this group offered their skills to corporate America. The Ford Motor Company was floundering, and a desperate Henry Ford II handed them the reins. Just as they knew nothing about the military when they helped win the war, so too were they clueless about making cars. Still, the so-called “Whiz Kids” turned the company around.

McNamara rose swiftly up the ranks, trotting out a data point for every situation. Harried factory managers produced the figures he demanded—whether they were correct or not. When an edict came down that all inventory from one car model must be used before a new model could begin production, exasperated line managers simply dumped excess parts into a nearby river. The joke at the factory was that a fellow could walk on water—atop rusted pieces of 1950 and 1951 cars.

McNamara epitomized the hyper-rational executive who relied on numbers rather than sentiments, and who could apply his quantitative skills to any industry he turned them to. In 1960 he was named president of Ford, a position he held for only a few weeks before being tapped to join President Kennedy’s cabinet as secretary of defense.

As the Vietnam conflict escalated and the United States sent more troops, it became clear that this was a war of wills, not of territory. America’s strategy was to pound the Viet Cong to the negotiation table. The way to measure progress, therefore, was by the number of enemy killed. The body count was published daily in the newspapers. To the war’s supporters it was proof of progress; to critics, evidence of its immorality. The body count was the data point that defined an era.

McNamara relied on the figures, fetishized them. With his perfectly combed-back hair and his flawlessly knotted tie, McNamara felt he could comprehend what was happening on the ground only by staring at a spreadsheet—at all those orderly rows and columns, calculations and charts, whose mastery seemed to bring him one standard deviation closer to God.

In 1977, two years after the last helicopter lifted off the rooftop of the U.S. embassy in Saigon, a retired Army general, Douglas Kinnard, published a landmark survey called The War Managers that revealed the quagmire of quantification. A mere 2 percent of America’s generals considered the body count a valid way to measure progress. “A fake—totally worthless,” wrote one general in his comments. “Often blatant lies,” wrote another. “They were grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara,” said a third.

Read the entire article after the jump.

Image: Robert McNamara at a cabinet meeting, 22 Nov 1967. Courtesy of Wikipedia / Public domain.

MondayMap: Your Taxes and Google Street View

The fear of an annual tax audit brings many people to their knees. It’s one of many techniques that government authorities use to milk their citizens of every last penny of taxes. Well, authorities now have an even more powerful weapon to add to their tax collecting arsenal — Google Street View. And, if you are reading this from Lithuania you will know what we are talking about.

From the Wall Street Journal:

One day last summer, a woman was about to climb into a hammock in the front yard of a suburban house here when a photographer for the Google Inc. Street View service snapped her picture.

The apparently innocuous photograph is now being used as evidence in a tax-evasion case brought by Lithuanian authorities against the undisclosed owners of the home.

Some European countries have been going after Google, complaining that the search giant is invading the privacy of their citizens. But tax inspectors here have turned to the prying eyes of Street View for their own purposes.

After Google’s car-borne cameras were driven through the Vilnius area last year, the tax men in this small Baltic nation got busy. They have spent months combing through footage looking for unreported taxable wealth.

“We were very impressed,” said Modestas Kaseliauskas, head of the State Tax Authority. “We realized that we could do more with less and in shorter time.”

More than 100 people have been identified so far after investigators compared Street View images of about 500 properties with state property registries looking for undeclared construction.

Two recent cases netted $130,000 in taxes and penalties after investigators found houses photographed by Google that weren’t on official maps.

From aerial surveillance to dedicated iPhone apps, cash-strapped governments across Europe are employing increasingly unconventional measures against tax cheats to raise revenue. In some countries, authorities have tried to enlist citizens to help keep watch. Customers in Greece, for instance, are insisting on getting receipts for what they buy.

For Lithuania, which only two decades ago began its transition away from communist central planning and remains one of the poorest countries in the European Union, Street View has been a big help. After the global financial crisis struck in 2008, belt tightening cut the tax authority’s budget by a third. A quarter of its employees were let go, leaving it with fewer resources just as it was being asked to do more.

“We were pressured to increase tax revenue,” said the authority’s Mr. Kaseliauskas.

Street View has let Mr. Kaseliauskas’s team see things it would have otherwise missed. Its images are better—and cheaper—than aerial photos, which authorities complain often aren’t clear enough to be useful.

Sitting in their city office 10 miles away, they were able to detect that, contrary to official records, the house with the hammock existed and that, in one photograph, three cars were parked in the driveway.

An undeclared semidetached house owned by the former board chairman of Bank Snoras, Raimundas Baranauskas, was recently identified using Street View and is estimated by the government to be worth about $260,000. Authorities knew Mr. Baranauskas owned land there, but not buildings. A quick look online led to the discovery of several houses on his land, in a quiet residential street of Vilnius.

Read the entire article here.

Image courtesy of (who else?), Google Maps.

Ai Weiwei – China’s Warhol

Artist Ai Weiwei has suffered far more at the hands of the Chinese authorities than Andy Warhol ever did from his brushes with FBI surveillance. Yet the two are remarkably similar: brash and polarizing views, distinctive art and creative processes, masterful self-promotion, savvy media manipulation and global ubiquity. This is all the more astounding given Ai Weiwei’s arrest, detentions and prohibition on travel outside Beijing. He has even made it to the Venice Biennale this year; only his art, of course.

From the Guardian:

To some, he is verging on a saint and martyr, singlehandedly standing against the forces of Chinese political repression. For others he is a canny manipulator, utterly in control of his reputation and place in the art world and market. For others still, he is all these things: an artist who outdoes even Andy Warhol in his ubiquity, his nimbleness at self-promotion and his use of every medium at his disposal to promulgate his work and his activism.

Whatever your views on the Chinese artist Ai Weiwei, one thing is clear: he is everywhere, from the Hampstead theatre in London, where Howard Brenton’s play about the 81 days Ai spent in detention in 2011 is underway, to the web, where the video for his heavy metal song Dumbass is circulating, to the Venice Biennale, where not one but three of his large-scale works are on display – perhaps the most exposure for any single artist at the international festival.

One of the works, Bang, a forest of hundreds of tangled wooden stools, is the most prominent piece in the German national pavilion. Then, in the Zuecca Project Space on the island of Giudecca, is his installation Straight: 150 tons of crushed rebar from schools flattened in the Sichuan earthquake of 2008, recovered by the artist and his team, who bought the crumpled steel rods as scrap before painstakingly straightening them and piling them up in a wave-like sculptural arrangement.

By far the most revealing about Ai’s own experience, though, is the third piece, SACRED. Situated in the church of Sant’Antonin, it consists of six large iron boxes, into which visitors can peek to see sculptures recreating scenes from the artist’s detention. Here is a miniature Ai being interrogated; here a miniature Ai showers or sits on the lavatory while two uniformed guards stand over him. Other scenes show him sleeping and eating – always in the same tiny space, always under double guard. (The music video refers to some of these scenes with a lightly satirical tone that is absent from the sculpture.)

According to Greg Hilty of London’s Lisson Gallery, under whose auspices SACRED is being shown, and who saw Ai in China a week ago, the work is a form of “therapy or exorcism – it was something he had to get out. It is an experience that we might see as newsworthy, but for him, he was the one in it.”

Read the entire article here.

Image: Waking nightmare … Ai Weiwei’s Entropy (Sleep), from SACRED (2013). Courtesy of David Levene / Guardian.

Dead Man Talking

Graham is a man very much alive. But his mind has convinced him that his brain is dead, and that he killed it himself.

From the New Scientist:

Name: Graham
Condition: Cotard’s syndrome

“When I was in hospital I kept on telling them that the tablets weren’t going to do me any good ’cause my brain was dead. I lost my sense of smell and taste. I didn’t need to eat, or speak, or do anything. I ended up spending time in the graveyard because that was the closest I could get to death.”

Nine years ago, Graham woke up and discovered he was dead.

He was in the grip of Cotard’s syndrome. People with this rare condition believe that they, or parts of their body, no longer exist.

For Graham, it was his brain that was dead, and he believed that he had killed it. Suffering from severe depression, he had tried to commit suicide by taking an electrical appliance with him into the bath.

Eight months later, he told his doctor his brain had died or was, at best, missing. “It’s really hard to explain,” he says. “I just felt like my brain didn’t exist any more. I kept on telling the doctors that the tablets weren’t going to do me any good because I didn’t have a brain. I’d fried it in the bath.”

Doctors found trying to rationalise with Graham was impossible. Even as he sat there talking, breathing – living – he could not accept that his brain was alive. “I just got annoyed. I didn’t know how I could speak or do anything with no brain, but as far as I was concerned I hadn’t got one.”

Baffled, they eventually put him in touch with neurologists Adam Zeman at the University of Exeter, UK, and Steven Laureys at the University of Liège in Belgium.

“It’s the first and only time my secretary has said to me: ‘It’s really important for you to come and speak to this patient because he’s telling me he’s dead,'” says Laureys.

Limbo state

“He was a really unusual patient,” says Zeman. Graham’s belief “was a metaphor for how he felt about the world – his experiences no longer moved him. He felt he was in a limbo state caught between life and death”.

No one knows how common Cotard’s syndrome may be. A study published in 1995 of 349 elderly psychiatric patients in Hong Kong found two with symptoms resembling Cotard’s (General Hospital Psychiatry, DOI: 10.1016/0163-8343(94)00066-M). But with quick, successful treatments now readily available for mental states such as depression – the condition from which Cotard’s appears to arise most often – researchers suspect the syndrome is exceptionally rare today. Most academic work on the syndrome is limited to single case studies like Graham’s.

Some people with Cotard’s have reportedly died of starvation, believing they no longer needed to eat. Others have attempted to get rid of their body using acid, which they saw as the only way they could free themselves of being the “walking dead”.

Graham’s brother and carers made sure he ate, and looked after him. But it was a joyless existence. “I didn’t want to face people. There was no point,” he says. “I didn’t feel pleasure in anything. I used to idolise my car, but I didn’t go near it. All the things I was interested in went away.”

Even the cigarettes he used to relish no longer gave him a hit. “I lost my sense of smell and my sense of taste. There was no point in eating because I was dead. It was a waste of time speaking as I never had anything to say. I didn’t even really have any thoughts. Everything was meaningless.”

Low metabolism

A peek inside Graham’s brain provided Zeman and Laureys with some explanation. They used positron emission tomography to monitor metabolism across his brain. It was the first PET scan ever taken of a person with Cotard’s. What they found was shocking: metabolic activity across large areas of the frontal and parietal brain regions was so low that it resembled that of someone in a vegetative state.

Graham says he didn’t really have any thoughts about his future during that time. “I had no other option other than to accept the fact that I had no way to actually die. It was a nightmare.”

Graveyard haunt

This feeling prompted him on occasion to visit the local graveyard. “I just felt I might as well stay there. It was the closest I could get to death. The police would come and get me, though, and take me back home.”

There were some unexplained consequences of the disorder. Graham says he used to have “nice hairy legs”. But after he got Cotard’s, all the hairs fell out. “I looked like a plucked chicken! Saves shaving them I suppose…”

It’s nice to hear him joke. Over time, and with a lot of psychotherapy and drug treatment, Graham has gradually improved and is no longer in the grip of the disorder. He is now able to live independently. “His Cotard’s has ebbed away and his capacity to take pleasure in life has returned,” says Zeman.

“I couldn’t say I’m really back to normal, but I feel a lot better now and go out and do things around the house,” says Graham. “I don’t feel that brain-dead any more. Things just feel a bit bizarre sometimes.” And has the experience changed his feeling about death? “I’m not afraid of death,” he says. “But that’s not to do with what happened – we’re all going to die sometime. I’m just lucky to be alive now.”

Read the entire article here.

Image courtesy of Wikimedia / Public domain.