All posts by Mike

Surveillance, British Style

While the revelations about the National Security Agency (NSA) snooping on the private communications of U.S. citizens are extremely troubling, the situation could be much worse. Cast a sympathetic thought toward Her Majesty’s subjects in the United Kingdom of Great Britain and Northern Ireland, where almost everyone eavesdrops on everyone else. Though the island nation of 60 million covers roughly the same area as Michigan, it is swathed in over 4 million CCTV (closed-circuit television) surveillance cameras.

From Slate:

We adore the English here in the States. They’re just so precious! They call traffic circles “roundabouts,” prostitutes “prozzies,” and they have a queen. They’re ever so polite and carry themselves with such admirable poise. We love their accents so much, we use them in historical films to give them a bit more gravitas. (Just watch The Last Temptation of Christ to see what happens when we don’t: Judas doesn’t sound very intimidating with a Brooklyn accent.)

What’s not so cute is the surveillance society they’ve built—but the U.S. government seems pretty enamored with it.

The United Kingdom is home to an intense surveillance system. Most of the legal framework for this comes from the Regulation of Investigatory Powers Act, which dates all the way back to the year 2000. RIPA is meant to support criminal investigation, preventing disorder, public safety, public health, and, of course, “national security.” If this extremely broad application of law seems familiar, it should: The United States’ own PATRIOT Act is remarkably similar in scope and application. Why should the United Kingdom have the best toys, after all?

This is one of the problems with being the United Kingdom’s younger sibling. We always want what Big Brother has. Unless it’s soccer. Wiretaps, though? We just can’t get enough!

The PATRIOT Act, broad as it is, doesn’t match RIPA’s incredible wiretap allowances. In 1994, the United States passed the Communications Assistance for Law Enforcement Act, which mandated that service providers give the government “technical assistance” in the use of wiretaps. RIPA goes a step further and insists that wiretap capability be implemented right into the system. If you’re a service provider and can’t set up plug-and-play wiretap capability within a short time, Johnny English comes knocking at your door to say, ” ‘Allo, guvna! I ‘ear tell you ‘aven’t put in me wiretaps yet. Blimey! We’ll jus’ ‘ave to give you a hefty fine! Ods bodkins!” Wouldn’t that be awful (the law, not the accent)? It would, and it’s just what the FBI is hoping for. CALEA is getting a rewrite that, if it passes, would give the FBI that very capability.

I understand. Older siblings always get the new toys, and it’s only natural that we want to have them as well. But why does it have to be legal toys for surveillance? Why can’t it be chocolate? The United Kingdom enjoys chocolate that’s almost twice as good as American chocolate. Literally, they get 20 percent solid cocoa in their chocolate bars, while we suffer with a measly 11 percent. Instead, we’re learning to shut off the Internet for entire families.

That’s right. In the United Kingdom, if you are just suspected of having downloaded illegally obtained material three times (it’s known as the “three strikes” law), your Internet is cut off. Not just for you, but for your entire household. Life without the Internet, let’s face it, sucks. You’re not just missing out on videos of cats falling into bathtubs. You’re missing out on communication, jobs, and being a 21st-century citizen. Maybe this is OK in the United Kingdom because you can move up north, become a farmer, and enjoy a few pints down at the pub every night. Or you can just get a new ISP, because the United Kingdom actually has a competitive market for ISPs. The United States, as an homage, has developed the so-called “copyright alert system.” It works much the same way as the U.K. law, but it provides for six “strikes” instead of three and has a limited appeals system, in which the burden of proof lies on the suspected customer. In the United States, though, the rights-holders monitor users for suspected copyright infringement on their own, without the aid of ISPs. So far, we haven’t adopted the U.K. system in which ISPs are expected to monitor traffic and dole out their three strikes at their discretion.

These are examples of more targeted surveillance of criminal activities, though. What about untargeted mass surveillance? On June 21, one of Edward Snowden’s leaks revealed that the Government Communications Headquarters, the United Kingdom’s NSA equivalent, has been engaging in a staggering amount of data collection from civilians. This development generated far less fanfare than the NSA news, perhaps because the legal framework for this data collection has existed for a very long time under RIPA, and we expect surveillance in the United Kingdom. (Or maybe Americans were just living down to the stereotype of not caring about other countries.) The NSA models follow the GCHQ’s very closely, though, right down to the oversight, or lack thereof.

Media have labeled the FISA court that regulates the NSA’s surveillance as a “rubber-stamp” court, but it’s no match for the omnipotence of the Investigatory Powers Tribunal, which manages oversight for MI5, MI6, and the GCHQ. The Investigatory Powers Tribunal is exempt from the United Kingdom’s Freedom of Information Act, so it doesn’t have to share a thing about its activities (FISA apparently does not have this luxury—yet). On top of that, members of the tribunal are appointed by the queen. The queen. The one with the crown who has jubilees and a castle and probably a court wizard. Out of 956 complaints to the Investigatory Powers Tribunal, five have been upheld. Now that’s a rubber-stamp court we can aspire to!

Or perhaps not. The future of U.S. surveillance looks very grim if we’re set on following the U.K.’s lead. Across the United Kingdom, an estimated 4.2 million CCTV cameras, some with facial-recognition capability, keep watch on nearly the entire nation. (This can lead to some Monty Python-esque high jinks.) Washington, D.C., took its first step toward strong camera surveillance in 2008, when several thousand were installed ahead of President Obama’s inauguration.

Read the entire article here.

Image: Royal coat of arms of Queen Elizabeth II of the United Kingdom, as used in England and Wales, and Scotland. Courtesy of Wikipedia.

Bella Italia: It’s All in the Hands

[tube]DW91Ec4DYkU[/tube]

Italians are famous, and infamous, for their eloquent and vigorous hand gestures. Isabella Poggi, a professor of psychology at Roma Tre University, has cataloged about 250 hand gestures used by Italians in everyday conversation. The gestures can reinforce a simple statement or emotion or convey quite complex meanings. Italy would not be the same without them.

Our favorite hand gesture is the fingers and thumb pinched together in the form of a spire, often used to mean “what on earth are you talking about?”; moving the hand slightly up and down while doing this adds emphasis and demands an explanation.

For a visual lexicon of the most popular gestures, jump here.

From the New York Times:

In the great open-air theater that is Rome, the characters talk with their hands as much as their mouths. While talking animatedly on their cellphones or smoking cigarettes or even while downshifting their tiny cars through rush-hour traffic, they gesticulate with enviably elegant coordination.

From the classic fingers pinched against the thumb that can mean “Whaddya want from me?” or “I wasn’t born yesterday” to a hand circled slowly, indicating “Whatever” or “That’ll be the day,” there is an eloquence to the Italian hand gesture. In a culture that prizes oratory, nothing deflates airy rhetoric more swiftly.

Some gestures are simple: the side of the hand against the belly means hungry; the index finger twisted into the cheek means something tastes good; and tapping one’s wrist is a universal sign for “hurry up.” But others are far more complex. They add an inflection — of fatalism, resignation, world-weariness — that is as much a part of the Italian experience as breathing.

Two open hands can ask a real question, “What’s happening?” But hands placed in prayer become a sort of supplication, a rhetorical question: “What do you expect me to do about it?” Ask when a Roman bus might arrive, and the universal answer is shrugged shoulders, an “ehh” that sounds like an engine turning over and two raised hands that say, “Only when Providence allows.”

To Italians, gesturing comes naturally. “You mean Americans don’t gesture? They talk like this?” asked Pasquale Guarrancino, a Roman taxi driver, freezing up and placing his arms flat against his sides. He had been sitting in his cab talking with a friend outside, each moving his hands in elaborate choreography. Asked to describe his favorite gesture, he said it was not fit for print.

In Italy, children and adolescents gesture. The elderly gesture. Some Italians joke that gesturing may even begin before birth. “In the ultrasound, I think the baby is saying, ‘Doctor, what do you want from me?’ ” said Laura Offeddu, a Roman and an elaborate gesticulator, as she pinched her fingers together and moved her hand up and down.

On a recent afternoon, two middle-aged men in elegant dark suits were deep in conversation outside the Giolitti ice cream parlor in downtown Rome, gesturing even as they held gelato in cones. One, who gave his name only as Alessandro, noted that younger people used a gesture that his generation did not: quotation marks to signify irony.

Sometimes gesturing can get out of hand. Last year, Italy’s highest court ruled that a man who inadvertently struck an 80-year-old woman while gesticulating in a piazza in the southern region Puglia was liable for civil damages. “The public street isn’t a living room,” the judges ruled, saying, “The habit of accompanying a conversation with gestures, while certainly licit, becomes illicit” in some contexts.

In 2008, Umberto Bossi, the colorful founder of the conservative Northern League, raised his middle finger during the singing of Italy’s national anthem. But prosecutors in Venice determined that the gesture, while obscene and the cause of widespread outrage, was not a crime.

Gestures have long been a part of Italy’s political spectacle. Former Prime Minister Silvio Berlusconi is a noted gesticulator. When he greeted President Obama and his wife, Michelle, at a meeting of the Group of 20 leaders in September 2009, he extended both hands, palms facing toward himself, and then pinched his fingers as he looked Mrs. Obama up and down — a gesture that might be interpreted as “va-va-voom.”

In contrast, Giulio Andreotti — Christian Democrat, seven-time prime minister and by far the most powerful politician of the Italian postwar era — was famous for keeping both hands clasped in front of him. The subtle, patient gesture functioned as a kind of deterrent, indicating the tremendous power he could deploy if he chose to.

Isabella Poggi, a professor of psychology at Roma Tre University and an expert on gestures, has identified around 250 gestures that Italians use in everyday conversation. “There are gestures expressing a threat or a wish or desperation or shame or pride,” she said. The only thing differentiating them from sign language is that they are used individually and lack a full syntax, Ms. Poggi added.

Far more than quaint folklore, gestures have a rich history. One theory holds that Italians developed them as an alternative form of communication during the centuries when they lived under foreign occupation — by Austria, France and Spain in the 14th through 19th centuries — as a way of communicating without their overlords understanding.

Another theory, advanced by Adam Kendon, the editor in chief of the journal Gesture, is that in overpopulated cities like Naples, gesturing became a way of competing, of marking one’s territory in a crowded arena. “To get attention, people gestured and used their whole bodies,” Ms. Poggi said, explaining the theory.

Read the entire article here.

Video courtesy of New York Times.

United States of Strange

With the United States turning another year older, we are reminded to ponder some of the lesser-known facets of this beautiful yet paradoxical place. All nations have their esoteric cultural wonders and benign local oddities: the British have bowler hats, the Royal Family and (courtesy of the Scots) kilts; Italians have Vespas and governments that last, on average, eight months; the French, well, they’re just French; the Germans love fast cars and lederhosen. But for sheer variety and volume of absurdity, the United States probably surpasses them all.

From the Telegraph:

Run by the improbably named Genghis Cohen, Machine Gun Vegas bills itself as the ‘world’s first luxury gun lounge’. It opened last year, and claims to combine “the look and feel of an ultra-lounge with the functionality of a state of the art indoor gun range”. The team of NRA-certified on-site instructors, however, may be its most unique appeal. All are female, and all are ex-US military personnel.

See other images and read the entire article here.

Image courtesy of the Telegraph.

Everywhere And Nowhere

Most physicists believe that dark matter exists, but no one has ever seen it directly; its existence has only been deduced. This is a rather unsettling state of affairs, since by most estimates dark matter and dark energy together account for 95 percent of the universe. The stuff we are made from, interact with and see on a daily basis — atoms, their constituents and their forces — is a mere 5 percent.

From the Atlantic:

Here’s a little experiment.

Hold up your hand.

Now put it back down.

In that window of time, your hand somehow interacted with dark matter — the mysterious stuff that comprises the vast majority of the universe. “Our best guess,” according to Dan Hooper, an astronomy professor at the University of Chicago and a theoretical astrophysicist at the Fermi National Accelerator Laboratory, “is that a million particles of dark matter passed through your hand just now.”

Dark matter, in other words, is not merely the stuff of black holes and deep space. It is all around us. Somehow. We’re pretty sure.

But if you did the experiment — as the audience at Hooper’s talk on dark matter and other cosmic mysteries did at the Aspen Ideas Festival today — you didn’t feel those million particles. We humans have no sense of their existence, Hooper said, in part because they don’t hew to the forces that regulate our movement in the world — gravity, electromagnetism, the forces we can, in some way, feel. Dark matter, instead, is “this ghostly, elusive stuff that dominates our universe,” Hooper said.

It’s everywhere. And it’s also, as far as human knowledge is concerned, nowhere.

And yet, despite its mysteries, we know it’s out there. “All astronomers are in complete conviction that there is dark matter,” said Richard Massey, the lead author of a recent study mapping the dark matter of the universe, and Hooper’s co-panelist. The evidence for its existence, Hooper agreed, is “overwhelming.” And yet it’s evidence based on deduction: through our examinations of the observable universe, we make assumptions about the unobservable version.

Dark matter, in other words, is aptly named. A full 95 percent of the universe — the dark matter, the stuff that both is and is not — is effectively unknown to us. “All the science that we’ve ever done only ever examines five percent of the universe,” Massey said. Which means that there are still mysteries to be unraveled, and dark truths to be brought to light.

And it also means, Massey pointed out, that for scientists, “the job security is great.”

You might be wondering, though: given how little we know about dark matter, how is it that Hooper knew that a million particles of the stuff passed through your hand as you raised and lowered it?

“I cheated a little,” Hooper admitted. He assumed a particular mass for the individual particles. “We know what the density of dark matter is on Earth from watching how the Milky Way rotates. And we know roughly how fast they’re going. So you take those two bits of information, and all you need to know is how much mass each individual particle has, and then I can get the million number. And I assumed a kind of traditional guess. But it could be 10,000 higher; it could be 10,000 lower.”

Read the entire article here.
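
Hooper’s figure lends itself to a quick back-of-envelope check: divide an assumed local dark matter density by an assumed particle mass to get a number density, multiply by the particles’ typical speed to get a flux, then multiply by the area of a hand and the time it is held up. Here is a minimal sketch of that arithmetic in Python; every number in it (a density of roughly 0.3 GeV per cubic centimeter, a speed of about 220 km/s, a "traditional" particle mass of 100 GeV, a hand of about 100 square centimeters raised for a second) is an assumed round value for illustration, not a figure taken from the article.

```python
# Back-of-envelope estimate of dark matter particles passing through a raised hand.
# All values are assumed round numbers for illustration, not figures from the article.

density_gev_per_cm3 = 0.3    # assumed local dark matter density, in GeV per cubic centimeter
particle_mass_gev = 100.0    # assumed "traditional guess" for the mass of one particle, in GeV
speed_cm_per_s = 220e5       # assumed speed of ~220 km/s, converted to centimeters per second
hand_area_cm2 = 100.0        # assumed cross-sectional area of a hand, in square centimeters
exposure_s = 1.0             # assumed time the hand is held up, in seconds

number_density = density_gev_per_cm3 / particle_mass_gev  # particles per cubic centimeter
flux = number_density * speed_cm_per_s                    # particles per square centimeter per second
count = flux * hand_area_cm2 * exposure_s                 # particles passing through the hand

print(f"roughly {count:.1e} particles")  # ~6.6e+06 with these assumptions: millions, as Hooper says
```

With those inputs the count comes out in the millions, the same ballpark as Hooper’s guess; and because the result scales inversely with the assumed particle mass, a heavier candidate particle lowers it and a lighter one raises it, which is exactly the caveat Hooper attaches to his number.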

Fifty Years After Gettysburg

In 1913 some 50,000 veterans from both sides of the U.S. Civil War gathered at Gettysburg, Pennsylvania, to commemorate the 50th anniversary of the battle. Photographers of the day were on hand to capture some fascinating and moving images, which are now preserved in the U.S. Library of Congress.

See more images here.

Image: The Blue and the Gray at Gettysburg: a Union veteran and a Confederate veteran shake hands at the Assembly Tent. Courtesy of U.S. Library of Congress.

Pretending to be Smart

Have you ever taken a date to a cerebral movie or the opera? Have you ever taken a classic work of literature to read at the beach? If so, you are not alone. But why are you doing it?

From the Telegraph:

Men try to impress their friends almost twice as much as women do by quoting Shakespeare and pretending to like jazz to seem more clever.

A fifth of all adults admitted they have tried to impress others by making out they are more cultured than they really are, but this rises to 41 per cent in London.

Scotland is the least pretentious country as only 14 per cent of the 1,000 UK adults surveyed had faked their intelligence there, according to Ask Jeeves research.

Typical methods of trying to seem cleverer ranged from deliberately reading a ‘serious’ novel on the beach to passing off other people’s witty remarks as one’s own and talking loudly about politics in front of others.

Two thirds put on the pretensions for friends, while 36 per cent did it to seem smarter in their workplace and 32 per cent tried to impress a potential partner.

One in five swapped their usual holiday read for something more serious on the beach and one in four went to an art gallery to look more cultured.

When it came to music tastes, 20 per cent have pretended to prefer Beethoven to Beyonce and many have referenced operas they have never seen.

A spokesman for Ask Jeeves said: “We were surprised by just how many people think they should go to such lengths in order to impress someone else.

“They obviously think they will make a better impression if they pretend to like Beethoven rather than admit they listen to Beyonce or read The Spectator rather than Loaded.

“Social media and the internet means it is increasingly easy to present this kind of false image about themselves.

“But in the end, if they are really going to be liked then it is going to be for the person they really are rather than the person they are pretending to be.”

Social media also plays a large part with people sharing Facebook posts on politics or re-tweeting clever tweets to raise their intellectual profile.

Men were the biggest offenders, with 26 per cent of men admitting to the acts of pretence compared to 14 per cent of women.

Top things people have done to seem smarter:

Repeated someone else’s joke as your own

Gone to an art gallery

Listened to classical music in front of others

Read a ‘serious’ book on the beach

Re-tweeted a clever tweet

Talked loudly about politics in front of others

Read a ‘serious’ magazine on public transport

Shared an intellectual article on Facebook

Quoted Shakespeare

Pretended to know about wine

Worn glasses with clear lenses

Mentioned an opera you’d ‘seen’

Pretended to like jazz

Read the entire article here.

Image: Opera. Courtesy of the New York Times.

Voting and Literacy

Voting is a right in the United States. But, as we know, that did not stop those in power in the past from restricting the right for the less fortunate, or for those of a different skin color (or gender). Some justifiably maintain that voting rights are still curtailed in some instances.

Louisiana in 1964 required voters to jump through some very high hurdles before they could even come close to a ballot box. The Louisiana State Literacy Test barred prospective voters if they recorded even one wrong answer. So, just for fun, take a look at the three pages of the test and see if you’d qualify to vote in Louisiana. You have 10 minutes. Remember, one wrong answer and you’re disenfranchised!

From Slate:

This week’s Supreme Court decision in Shelby County v. Holder overturned Section 4(b) of the 1965 Voting Rights Act, which mandated federal oversight of changes in voting procedure in jurisdictions that have a history of using a “test or device” to impede enfranchisement. Here is one example of such a test, used in Louisiana in 1964.

After the end of the Civil War, would-be black voters in the South faced an array of disproportionate barriers to enfranchisement. The literacy test—supposedly applicable to both white and black prospective voters who couldn’t prove a certain level of education but in actuality disproportionately administered to black voters—was a classic example of one of these barriers.

The website of the Civil Rights Movement Veterans, which collects materials related to civil rights, hosts a few samples of actual literacy tests used in Alabama, Louisiana, and Mississippi during the 1950s and 1960s.

In many cases, people working within the movement collected these in order to use them in voter education, which is how we ended up with this documentary evidence. Update: This test—a word-processed transcript of an original—was added by Jeff Schwartz, who worked with the Congress of Racial Equality in Plaquemines Parish, Louisiana, in the summer of 1964. Schwartz wrote about his encounters with the test in this blog post.

Most of the tests collected here are a battery of trivia questions related to civic procedure and citizenship. (Two from the Alabama test: “Name the attorney general of the United States” and “Can you be imprisoned, under Alabama law, for a debt?”)

But this Louisiana “literacy” test, singular among its fellows, has nothing to do with citizenship. Designed to put the applicant through mental contortions, the test’s questions are often confusingly worded. If some of them seem unanswerable, that effect was intentional. The (white) registrar would be the ultimate judge of whether an answer was correct.

Try this one: “Write every other word in this first line and print every third word in same line (original type smaller and first line ended at comma) but capitalize the fifth word that you write.”

Or this: “Write right from the left to the right as you see it spelled here.”

Read the entire article here.

Image: Louisiana Voter Literacy Test, circa 1964. Courtesy of the Civil Rights Movement Veterans website.

Seeking Clues to Suicide

Suicide still ranks in many cultures as one of the most common ways to die. The statistics are sobering — in 2012, more U.S. soldiers died by suicide than died in combat. Despite advances in the treatment of mental illness, little has made a dent in the numbers of those who take their own lives each year. Psychologist Matthew Nock hopes to change this through some innovative research.

From the New York Times:

For reasons that have eluded people forever, many of us seem bent on our own destruction. Recently more human beings have been dying by suicide annually than by murder and warfare combined. Despite the progress made by science, medicine and mental-health care in the 20th century — the sequencing of our genome, the advent of antidepressants, the reconsidering of asylums and lobotomies — nothing has been able to drive down the suicide rate in the general population. In the United States, it has held relatively steady since 1942. Worldwide, roughly one million people kill themselves every year. Last year, more active-duty U.S. soldiers killed themselves than died in combat; their suicide rate has been rising since 2004. Last month, the Centers for Disease Control and Prevention announced that the suicide rate among middle-aged Americans has climbed nearly 30 percent since 1999. In response to that widely reported increase, Thomas Frieden, the director of the C.D.C., appeared on PBS NewsHour and advised viewers to cultivate a social life, get treatment for mental-health problems, exercise and consume alcohol in moderation. In essence, he was saying, keep out of those demographic groups with high suicide rates, which include people with a mental illness like a mood disorder, social isolates and substance abusers, as well as elderly white males, young American Indians, residents of the Southwest, adults who suffered abuse as children and people who have guns handy.

But most individuals in every one of those groups never have suicidal thoughts — even fewer act on them — and no data exist to explain the difference between those who will and those who won’t. We also have no way of guessing when — in the next hour? in the next decade? — known risk factors might lead to an attempt. Our understanding of how suicidal thinking progresses, or how to spot and halt it, is little better now than it was two and a half centuries ago, when we first began to consider suicide a medical rather than philosophical problem and physicians prescribed, to ward it off, buckets of cold water thrown at the head.

“We’ve never gone out and observed, as an ecologist would or a biologist would go out and observe the thing you’re interested in for hours and hours and hours and then understand its basic properties and then work from that,” Matthew K. Nock, the director of Harvard University’s Laboratory for Clinical and Developmental Research, told me. “We’ve never done it.”

It was a bright December morning, and we were in his office on the 12th floor of the building that houses the school’s psychology department, a white concrete slab jutting above its neighbors like a watchtower. Below, Cambridge looked like a toy city — gabled roofs and steeples, a ribbon of road, windshields winking in the sun. Nock had just held a meeting with four members of his research team — he in his swivel chair, they on his sofa — about several of the studies they were running. His blue eyes matched his diamond-plaid sweater, and he was neatly shorn and upbeat. He seemed more like a youth soccer coach, which he is on Saturday mornings for his son’s first-grade team, than an expert in self-destruction.

At the meeting, I listened to Nock and his researchers discuss a study they were collaborating on with the Army. They were calling soldiers who had recently attempted suicide and asking them to explain what they had done and why. Nock hoped that sifting through the interview transcripts for repeated phrasings or themes might suggest predictive patterns that he could design tests to catch. A clinical psychologist, he had trained each of his researchers how to ask specific questions over the telephone. Adam Jaroszewski, an earnest 29-year-old in tortoiseshell glasses, told me that he had been nervous about calling subjects in the hospital, where they were still recovering, and probing them about why they tried to end their lives: Why that moment? Why that method? Could anything have happened to make them change their minds? Though the soldiers had volunteered to talk, Jaroszewski worried about the inflections of his voice: how could he put them at ease and sound caring and grateful for their participation without ceding his neutral scientific tone? Nock, he said, told him that what helped him find a balance between empathy and objectivity was picturing Columbo, the frumpy, polite, persistently quizzical TV detective played by Peter Falk. “Just try to be really, really curious,” Nock said.

That curiosity has made Nock, 39, one of the most original and influential suicide researchers in the world. In 2011, he received a MacArthur genius award for inventing new ways to investigate the hidden workings of a behavior that seems as impossible to untangle, empirically, as love or dreams.

Trying to study what people are thinking before they try to kill themselves is like trying to examine a shadow with a flashlight: the minute you spotlight it, it disappears. Researchers can’t ethically induce suicidal thinking in the lab and watch it develop. Uniquely human, it can’t be observed in other species. And it is impossible to interview anyone who has died by suicide. To understand it, psychologists have most often employed two frustratingly imprecise methods: they have investigated the lives of people who have killed themselves, and any notes that may have been left behind, looking for clues to what their thinking might have been, or they have asked people who have attempted suicide to describe their thought processes — though their mental states may differ from those of people whose attempts were lethal and their recollections may be incomplete or inaccurate. Such investigative methods can generate useful statistics and hypotheses about how a suicidal impulse might start and how it travels from thought to action, but that’s not the same as objective evidence about how it unfolds in real time.

Read the entire article here.

Image: 2007 suicide statistics for 15-24 year-olds. Courtesy of Crimson White, UA.

Circadian Rhythm in Vegetables

The vegetables you eat may be better for you depending on how and when they are exposed to light. Just as animals adhere to circadian rhythms, research shows that some plants may generate different levels of healthy nutritional metabolites based on the light cycle as well.

From ars technica:

When you buy vegetables at the grocery store, they are usually still alive. When you lock your cabbage and carrots in the dark recess of the refrigerator vegetable drawer, they are still alive. They continue to metabolize while we wait to cook them.

Why should we care? Well, plants that are alive adjust to the conditions surrounding them. Researchers at Rice University have shown that some plants have circadian rhythms, adjusting their production of certain chemicals based on their exposure to light and dark cycles. Understanding and exploiting these rhythms could help us maximize the nutritional value of the vegetables we eat.

According to Janet Braam, a professor of biochemistry at Rice, her team’s initial research looked at how Arabidopsis, a common plant model for scientists, responded to light cycles. “It adjusts its defense hormones before the time of day when insects attack,” Braam said. Arabidopsis is in the same plant family as the cruciferous vegetables—broccoli, cabbage, and kale—so Braam and her colleagues decided to look for a similar light response in our foods.

They bought some grocery store cabbage and brought it back to the lab so they could subject the cabbage to the same tests they gave their model plant, which involved offering up living, leafy vegetables to a horde of hungry caterpillars. First, half the cabbages were exposed to a normal light and dark cycle, the same schedule as the caterpillars, while the other half were exposed to the opposite light cycle.

The caterpillars tend to feed in the late afternoon, according to Braam, so the light signals the plants to increase production of glucosinolates, a chemical that the insects don’t like. The study found that cabbages that adjusted to the normal light cycle had far less insect damage than the jet-lagged cabbages.

While it’s cool to know that cabbages are still metabolizing away and responding to light stimulus days after harvest, Braam said that this process could affect the nutritional value of the cabbage. “We eat cabbage, in part, because these glucosinolates are anti-cancer compounds,” Braam said.

Glucosinolates are only found in the cruciferous vegetable family, but the Rice team wanted to see if other vegetables demonstrated similar circadian rhythms. They tested spinach, lettuce, zucchini, blueberries, carrots, and sweet potatoes. “Luckily, our caterpillar isn’t picky,” Braam said. “It’ll eat just about anything.”

Just like with the cabbage, the caterpillars ate far less of the vegetables trained on the normal light schedule. Even the fruits and roots increased production of some kind of anti-insect compound in response to light stimulus.

Metabolites affected by circadian rhythms could include vitamins and antioxidants. The Rice team is planning follow-up research to begin exploring how the cycling phenomenon affects known nutrients and whether the magnitude of the shifts is large enough to have an impact on our diets. “We’ve uncovered some very basic stimuli, but we haven’t yet figured out how to amplify that for human nutrition,” Braam said.

Read the entire article here.

UnGoogleable: The Height of Cool

So, it is no longer a surprise — our digital lives are tracked, correlated, stored and examined. The NSA (National Security Agency) does it to determine whether you are an unsavory type; Google does it to serve you better information and ads; and a whole host of other companies do it to sell you more things that you probably don’t need at prices you can’t afford. This, of course, raises deep and troubling questions about privacy. With this in mind, some are taking ownership of the issue and seeking to erase themselves from the vast digital Orwellian eye. To others, however, being untraceable online is a fashion statement rather than a victory for privacy.

From the Guardian:

“The chicest thing,” said fashion designer Phoebe Philo recently, “is when you don’t exist on Google. God, I would love to be that person!”

Philo, creative director of Céline, is not that person. As the London Evening Standard put it: “Unfortunately for the famously publicity-shy London designer – Paris born, Harrow-on-the-Hill raised – who has reinvented the way modern women dress, privacy may well continue to be a luxury.” Nobody who is oxymoronically described as “famously publicity-shy” will ever be unGoogleable. And if you’re not unGoogleable then, if Philo is right, you can never be truly chic, even if you were born in Paris. And if you’re not truly chic, then you might as well die – at least if you’re in fashion.

If she truly wanted to disappear herself from Google, Philo could start by changing her superb name to something less diverting. Prize-winning novelist AM Homes is an outlier in this respect. Google “am homes” and you’re in a world of blah US real estate rather than cutting-edge literature. But then Homes has thought a lot about privacy, having written a play about the most famously private person in recent history, JD Salinger, and had him threaten to sue her as a result.

And Homes isn’t the only one to make herself difficult to detect online. UnGoogleable bands are 10 a penny. The New York-based band !!! (known verbally as “chick chick chick” or “bang bang bang” – apparently “Exclamation point, exclamation point, exclamation point” proved too verbose for their meagre fanbase) must drive their business manager nuts. As must the band Merchandise, whose name – one might think – is a nominalist satire of commodification by the music industry. Nice work, Brad, Con, John and Rick.

If Philo renamed herself online as Google Maps or @, she might make herself more chic.

Welcome to anonymity chic – the antidote to an online world of exhibitionism. But let’s not go crazy: anonymity may be chic, but it is no business model. For years XXX Porn Site, my confusingly named alt-folk combo, has remained undiscovered. There are several bands called Girls (at least one of them including, confusingly, dudes) and each one has worried – after a period of chic iconoclasm – that such a putatively cool name means no one can find them online.

But still, maybe we should all embrace anonymity, given this week’s revelations that technology giants cooperated in Prism, a top-secret system at the US National Security Agency that collects emails, documents, photos and other material for secret service agents to review. It has also been a week in which Lindsay Mills, girlfriend of NSA whistleblower Edward Snowden, has posted on her blog (entitled: “Adventures of a world-traveling, pole-dancing super hero” with many photos showing her performing with the Waikiki Acrobatic Troupe) her misery that her fugitive boyfriend has fled to Hong Kong. Only a cynic would suggest that this blog post might help the Waikiki Acrobatic Troupe veteran’s career at this – serious face – difficult time. Better the dignity of silent anonymity than using the internet for that.

Furthermore, as social media diminishes us with not just information overload but the 24/7 servitude of liking, friending and status updating, this going under the radar reminds us that we might benefit from withdrawing the labour on which the founders of Facebook, Twitter and Instagram have built their billions. “Today our intense cultivation of a singular self is tied up in the drive to constantly produce and update,” argues Geert Lovink, research professor of interactive media at the Hogeschool van Amsterdam and author of Networks Without a Cause: A Critique of Social Media. “You have to tweet, be on Facebook, answer emails,” says Lovink. “So the time pressure on people to remain present and keep up their presence is a very heavy load that leads to what some call the psychopathology of online.”

Internet evangelists such as Clay Shirky and Charles Leadbeater hoped for something very different from this pathologised reality. In Shirky’s Here Comes Everybody and Leadbeater’s We-Think, both published in 2008, the nascent social media were to echo the anti-authoritarian, democratising tendencies of the 60s counterculture. Both men revelled in the fact that new web-based social tools helped single mothers looking online for social networks and pro-democracy campaigners in Belarus. Neither sufficiently realised that these tools could just as readily be co-opted by The Man. Or, if you prefer, Mark Zuckerberg.

Not that Zuckerberg is the devil in this story. Social media have changed the way we interact with other people in line with what the sociologist Zygmunt Bauman wrote in Liquid Love. For us “liquid moderns”, who have lost faith in the future, cannot commit to relationships and have few kinship ties, Zuckerberg created a new way of belonging, one in which we use our wits to create provisional bonds loose enough to stop suffocation, but tight enough to give a needed sense of security now that the traditional sources of solace (family, career, loving relationships) are less reliable than ever.

Read the entire article here.

The Mother of All Storms

Some regions of our planet are home to violent and destructive storms. However, one look at a recent mega-storm on Saturn may put it all in perspective — it could be much, much worse.

From ars technica:

Jupiter’s Great Red Spot may get most of the attention, but it’s hardly the only big weather event in the Solar System. Saturn, for example, has an odd hexagonal pattern in the clouds at its north pole, and when the planet tilted enough to illuminate it, the light revealed a giant hurricane embedded in the center of the hexagon. Scientists think the immense storm may have been there for years.

But Saturn is also home to transient storms that show up sporadically. The most notable of these are the Great White Spots, which can persist for months and alter the weather on a planetary scale. Great White Spots are rare, with only six having been observed since 1876. When one formed in 2010, we were lucky enough to have the Cassini orbiter in place to watch it from close up. Even though the head of the storm was roughly 7,000 km across, Cassini’s cameras were able to image it at resolutions where each pixel was only 14 km across, allowing an unprecedented view into the storm’s dynamics.

The storm turned out to be very violent, with convective features as big as 3,000 km across that could form and dissipate in as little as 10 hours. Winds of over 400 km/hour were detected, and the pressure gradient between the storm and the unaffected areas nearby was twice that of the one observed in the Great Red Spot of Jupiter. By carefully mapping the direction of the winds, the authors were able to conclude that the head of the White Spot was an anti-cyclone, with winds orbiting around a central feature.

Convection that brings warm material up from the depths of Saturn’s atmosphere appears to be key to driving these storms. The authors built an atmospheric model that could reproduce the White Spot and found that shutting down the energy injection from the lower atmosphere was enough to kill the storm. In addition, observations suggest that many areas of the storm contain freshly condensed particles, which may represent material that was brought up from the lower atmosphere and then condensed when it reached the cooler upper layers.

The Great White Spot was an anticyclone, and the authors’ model suggests that there’s only a very narrow band of winds on Saturn that enables the formation of a Great White Spot. The convective activity won’t trigger a White Spot anywhere outside the band between 31.5° and 32.4° N, which probably goes a long way toward explaining why the storms are so rare.

Read the entire article here.

Image: The huge storm churning through the atmosphere in Saturn’s northern hemisphere overtakes itself as it encircles the planet in this true-color view from NASA’s Cassini spacecraft. Courtesy of NASA/JPL.

Technology and Kids

There is no doubting that technology’s grasp finds us at ever younger ages. It is no longer just our teens who are constantly mesmerized by status updates on their mobiles, nor just our “in-betweeners” addicted to “facetiming” with their BFFs. Our technologies are fast becoming the tools of choice for kindergarteners and pre-K kids. Some parents lament.

From the New York Times:

A few months ago, I attended my daughter Josie’s kindergarten open house, the highlight of which was a video slide show featuring our moppets using iPads to practice their penmanship. Parental cooing ensued.

I happened to be sitting next to the teacher, and I asked her about the rumor I’d heard: that next year, every elementary-school kid in town would be provided his or her own iPad. She said this pilot program was being introduced only at the newly constructed school three blocks from our house, which Josie will attend next year. “You’re lucky,” she observed wistfully.

This seemed to be the consensus around the school-bus stop. The iPads are coming! Not only were our kids going to love learning, they were also going to do so on the cutting edge of innovation. Why, in the face of this giddy chatter, was I filled with dread?

It’s not because I’m a cranky Luddite. I swear. I recognize that iPads, if introduced with a clear plan, and properly supervised, can improve learning and allow students to work at their own pace. Those are big ifs in an era of overcrowded classrooms. But my hunch is that our school will do a fine job. We live in a town filled with talented educators and concerned parents.

Frankly, I find it more disturbing that a brand-name product is being elevated to the status of mandatory school supply. I also worry that iPads might transform the classroom from a social environment into an educational subway car, each student fixated on his or her personalized educational gadget.

But beneath this fretting is a more fundamental beef: the school system, without meaning to, is subverting my parenting, in particular my fitful efforts to regulate my children’s exposure to screens. These efforts arise directly from my own tortured history as a digital pioneer, and the war still raging within me between harnessing the dazzling gifts of technology versus fighting to preserve the slower, less convenient pleasures of the analog world.

What I’m experiencing is, in essence, a generational reckoning, that queasy moment when those of us whose impatient desires drove the tech revolution must face the inheritors of this enthusiasm: our children.

It will probably come as no surprise that I’m one of those annoying people fond of boasting that I don’t own a TV. It makes me feel noble to mention this — I am feeling noble right now! — as if I’m taking a brave stand against the vulgar superficiality of the age. What I mention less frequently is the reason I don’t own a TV: because I would watch it constantly.

My brothers and I were so devoted to television as kids that we created an entire lexicon around it. The brother who turned on the TV, and thus controlled the channel being watched, was said to “emanate.” I didn’t even know what “emanate” meant. It just sounded like the right verb.

This was back in the ’70s. We were latchkey kids living on the brink of a brave new world. In a few short years, we’d hurtled from the miraculous calculator (turn it over to spell out “boobs”!) to arcades filled with strobing amusements. I was one of those guys who spent every spare quarter mastering Asteroids and Defender, who found in video games a reliable short-term cure for the loneliness and competitive anxiety that plagued me. By the time I graduated from college, the era of personal computers had dawned. I used mine to become a closet Freecell Solitaire addict.

Midway through my 20s I underwent a reformation. I began reading, then writing, literary fiction. It quickly became apparent that the quality of my work rose in direct proportion to my ability to filter out distractions. I’ve spent the past two decades struggling to resist the endless pixelated enticements intended to capture and monetize every spare second of human attention.

Has this campaign succeeded? Not really. I’ve just been a bit slower on the uptake than my contemporaries. But even without a TV or smartphones, our household can feel dominated by computers, especially because I and my wife (also a writer) work at home. We stare into our screens for hours at a stretch, working and just as often distracting ourselves from work.

Read the entire article here.

Image courtesy of Wired.

Technology and Employment

Technology is altering the lives of us all. Often it is a positive influence, offering its users tremendous benefits from time-saving to life-extension. However, the relationship of technology to our employment is more complex and usually detrimental.

Many traditional forms of employment have already disappeared thanks to our technological tools; many others have changed beyond recognition, requiring new skills and knowledge. And this may be just the beginning.

From Technology Review:

Given his calm and reasoned academic demeanor, it is easy to miss just how provocative Erik Brynjolfsson’s contention really is. Brynjolfsson, a professor at the MIT Sloan School of Management, and his collaborator and coauthor Andrew McAfee have been arguing for the last year and a half that impressive advances in computer technology—from improved industrial robotics to automated translation services—are largely behind the sluggish employment growth of the last 10 to 15 years. Even more ominous for workers, the MIT academics foresee dismal prospects for many types of jobs as these powerful new technologies are increasingly adopted not only in manufacturing, clerical, and retail work but in professions such as law, financial services, education, and medicine.

That robots, automation, and software can replace people might seem obvious to anyone who’s worked in automotive manufacturing or as a travel agent. But Brynjolfsson and McAfee’s claim is more troubling and controversial. They believe that rapid technological change has been destroying jobs faster than it is creating them, contributing to the stagnation of median income and the growth of inequality in the United States. And, they suspect, something similar is happening in other technologically advanced countries.

Perhaps the most damning piece of evidence, according to Brynjolfsson, is a chart that only an economist could love. In economics, productivity—the amount of economic value created for a given unit of input, such as an hour of labor—is a crucial indicator of growth and wealth creation. It is a measure of progress. On the chart Brynjolfsson likes to show, separate lines represent productivity and total employment in the United States. For years after World War II, the two lines closely tracked each other, with increases in jobs corresponding to increases in productivity. The pattern is clear: as businesses generated more value from their workers, the country as a whole became richer, which fueled more economic activity and created even more jobs. Then, beginning in 2000, the lines diverge; productivity continues to rise robustly, but employment suddenly wilts. By 2011, a significant gap appears between the two lines, showing economic growth with no parallel increase in job creation. Brynjolfsson and McAfee call it the “great decoupling.” And Brynjolfsson says he is confident that technology is behind both the healthy growth in productivity and the weak growth in jobs.

It’s a startling assertion because it threatens the faith that many economists place in technological progress. Brynjolfsson and McAfee still believe that technology boosts productivity and makes societies wealthier, but they think that it can also have a dark side: technological progress is eliminating the need for many types of jobs and leaving the typical worker worse off than before. Brynjolfsson can point to a second chart indicating that median income is failing to rise even as the gross domestic product soars. “It’s the great paradox of our era,” he says. “Productivity is at record levels, innovation has never been faster, and yet at the same time, we have a falling median income and we have fewer jobs. People are falling behind because technology is advancing so fast and our skills and organizations aren’t keeping up.”

Brynjolfsson and McAfee are not Luddites. Indeed, they are sometimes accused of being too optimistic about the extent and speed of recent digital advances. Brynjolfsson says they began writing Race Against the Machine, the 2011 book in which they laid out much of their argument, because they wanted to explain the economic benefits of these new technologies (Brynjolfsson spent much of the 1990s sniffing out evidence that information technology was boosting rates of productivity). But it became clear to them that the same technologies making many jobs safer, easier, and more productive were also reducing the demand for many types of human workers.

Anecdotal evidence that digital technologies threaten jobs is, of course, everywhere. Robots and advanced automation have been common in many types of manufacturing for decades. In the United States and China, the world’s manufacturing powerhouses, fewer people work in manufacturing today than in 1997, thanks at least in part to automation. Modern automotive plants, many of which were transformed by industrial robotics in the 1980s, routinely use machines that autonomously weld and paint body parts—tasks that were once handled by humans. Most recently, industrial robots like Rethink Robotics’ Baxter (see “The Blue-Collar Robot,” May/June 2013), more flexible and far cheaper than their predecessors, have been introduced to perform simple jobs for small manufacturers in a variety of sectors. The website of a Silicon Valley startup called Industrial Perception features a video of the robot it has designed for use in warehouses picking up and throwing boxes like a bored elephant. And such sensations as Google’s driverless car suggest what automation might be able to accomplish someday soon.

A less dramatic change, but one with a potentially far larger impact on employment, is taking place in clerical work and professional services. Technologies like the Web, artificial intelligence, big data, and improved analytics—all made possible by the ever increasing availability of cheap computing power and storage capacity—are automating many routine tasks. Countless traditional white-collar jobs, such as many in the post office and in customer service, have disappeared. W. Brian Arthur, a visiting researcher at the Xerox Palo Alto Research Center’s intelligence systems lab and a former economics professor at Stanford University, calls it the “autonomous economy.” It’s far more subtle than the idea of robots and automation doing human jobs, he says: it involves “digital processes talking to other digital processes and creating new processes,” enabling us to do many things with fewer people and making yet other human jobs obsolete.

It is this onslaught of digital processes, says Arthur, that primarily explains how productivity has grown without a significant increase in human labor. And, he says, “digital versions of human intelligence” are increasingly replacing even those jobs once thought to require people. “It will change every profession in ways we have barely seen yet,” he warns.

Read the entire article here.

Image: Industrial robots. Courtesy of Techjournal.

What Makes Us Human

Psychologist Jerome Kagan leaves no stone unturned in his quest to determine what makes us distinctly human. His latest book, The Human Spark: The Science of Human Development, comes up with some fresh conclusions.

From the New Scientist:

What is it that makes humans special, that sets our species apart from all others? It must be something connected with intelligence – but what exactly? People have asked these questions for as long as we can remember. Yet the more we understand the minds of other animals, the more elusive the answers to these questions have become.

The latest person to take up the challenge is Jerome Kagan, a former professor at Harvard University. And not content with pinning down the “human spark” in the title of his new book, he then tries to explain what makes each of us unique.

As a pioneer in the science of developmental psychology, Kagan has an interesting angle. A life spent investigating how a fertilised egg develops into an adult human being provides him with a rich understanding of the mind and how it differs from that of our closest animal cousins.

Human and chimpanzee infants behave in remarkably similar ways for the first four to six months, Kagan notes. It is only during the second year of life that we begin to diverge profoundly. As the toddler’s frontal lobes expand and the connections between the brain sites increase, the human starts to develop the talents that set our species apart. These include “the ability to speak a symbolic language, infer the thoughts and feelings of others, understand the meaning of a prohibited action, and become conscious of their own feelings, intentions and actions”.

Becoming human, as Kagan describes it, is a complex dance of neurobiological changes and psychological advances. All newborns possess the potential to develop the universal human properties “inherent in their genomes”. What makes each of us individual is the unique backdrop of genetics, epigenetics, and the environment against which this development plays out.

Kagan’s research highlighted the role of temperament, which he notes is underpinned by at least 1500 genes, affording huge individual variation. This variation, in turn, influences the way we respond to environmental factors including family, social class, culture and historical era.

But what of that human spark? Kagan seems to locate it in a quartet of qualities: language, consciousness, inference and, especially, morality. This is where things start to get weird. He would like you to believe that morality is uniquely human, which, of course, bolsters his argument. Unfortunately, it also means he has to deny that a rudimentary morality has evolved in other social animals whose survival also depends on cooperation.

Instead, Kagan argues that morality is a distinctive property of our species, just as “fish do not have lungs”. No mention of evolution. So why are we moral, then? “The unique biology of the human brain motivates children and adults to act in ways that will allow them to arrive at the judgement that they are a good person.” That’s it?

Warming to his theme, Kagan argues that in today’s world, where traditional moral standards have been eroded and replaced by a belief in the value of wealth and celebrity, it is increasingly difficult to see oneself as a good person. He thinks this mismatch between our moral imperative and Western culture helps explain the “modern epidemic” of mental illness. Unwittingly, we have created an environment in which the human spark is fading.

Some of Kagan’s ideas are even more outlandish, surely none more so than the assertion that a declining interest in natural sciences may be a consequence of mothers becoming less sexually mysterious than they once were. More worryingly, he doesn’t seem to believe that humans are subject to the same forces of evolution as other animals.

Read the entire article here.

Sci-Fi Begets Cli-Fi

The world of fiction is populated with hundreds of different genres — most of which were invented by clever marketeers anxious to ensure vampire novels (teen / horror) don’t sit next to classic works (literary) on real or virtual (think Amazon) bookshelves. So, it should come as no surprise that a new category has recently emerged: cli-fi.

Short for climate fiction, cli-fi novels explore the dangers of environmental degradation and apocalyptic climate change. Not light reading for your summer break at the beach. But, then again, more books in this category may get us to think often and carefully about preserving our beaches — and the rest of the planet — for our kids.

From the Guardian:

A couple of days ago Dan Bloom, a freelance news reporter based in Taiwan, wrote on the Teleread blog that his word had been stolen from him. In 2012 Bloom had “produced and packaged” a novella called Polar City Red, about climate refugees in a post-apocalyptic Alaska in the year 2075. Bloom labelled the book “cli-fi” in the press release and says he coined that term in 2007, cli-fi being short for “climate fiction”, described as a sub-genre of sci-fi. Polar City Red bombed, selling precisely 271 copies, until National Public Radio (NPR) and the Christian Science Monitor picked up on the term cli-fi last month, writing Bloom out of the story. So Bloom has blogged his reply on Teleread, saying he’s simply pleased the term is now out there – it has gone viral since the NPR piece by Scott Simon. It’s not quite as neat as that – in recent months the term has been used increasingly in literary and environmental circles – but there’s no doubt it has broken out more widely. You can search for cli-fi on Amazon, instantly bringing up a plethora of books with titles such as 2042: The Great Cataclysm, or Welcome to the Greenhouse. Twitter has been abuzz.

Whereas 10 or 20 years ago it would have been difficult to identify even a handful of books that fell under this banner, there is now a growing corpus of novels setting out to warn readers of possible environmental nightmares to come. Barbara Kingsolver’s Flight Behaviour, the story of a forest valley filled with an apparent lake of fire, is shortlisted for the 2013 Women’s prize for fiction. Meanwhile, there’s Nathaniel Rich’s Odds Against Tomorrow, set in a future New York, about a mathematician who deals in worst-case scenarios. In Liz Jensen’s 2009 eco-thriller The Rapture, summer temperatures are asphyxiating and Armageddon is near; her most recent book, The Uninvited, features uncanny warnings from a desperate future. Perhaps the most high-profile cli-fi author is Margaret Atwood, whose 2009 The Year of the Flood features survivors of a biological catastrophe also central to her 2003 novel Oryx and Crake, a book Atwood sometimes preferred to call “speculative fiction”.

Engaging with this subject in fiction increases debate about the issue; finely constructed, intricate narratives help us broaden our understanding and explore imagined futures, encouraging us to think about the kind of world we want to live in. This can often seem difficult in our 24-hour news-on-loop society where the consequences of climate change may appear to be everywhere, but intelligent discussion of it often seems to be nowhere. Also, as the crime genre can provide the dirty thrill of, say, reading about a gruesome fictional murder set on a street the reader recognises, the best cli-fi novels allow us to be briefly but intensely frightened: climate chaos is closer, more immediate, hovering over our shoulder like that murderer wielding his knife. Outside of the narrative of a novel the issue can seem fractured, incoherent, even distant. As Gregory Norminton puts it in his introduction to an anthology on the subject, Beacons: Stories for Our Not-So-Distant Future: “Global warming is a predicament, not a story. Narrative only comes in our response to that predicament.” Which is as good an argument as any for engaging with those stories.

All terms are reductive, all labels simplistic – clearly, the likes of Kingsolver, Jensen and Atwood have a much broader canvas than this one issue. And there’s an argument for saying this is simply rebranding: sci-fi writers have been engaging with the climate-change debate for longer than literary novelists – Snow by Adam Roberts comes to mind – and I do wonder whether this is a term designed for squeamish writers and critics who dislike the box labelled “science fiction”. So the term is certainly imperfect, but it’s also valuable. Unlike sci-fi, cli-fi writing comes primarily from a place of warning rather than discovery. There are no spaceships hovering in the sky; no clocks striking 13. On the contrary, many of the horrors described seem oddly familiar.

Read the entire article after the jump.

Image: Aftermath of Superstorm Sandy. Courtesy of the Independent.

Us and Them: Group Affinity Begins Early

Research shows that children as young as four empathize with some people but not others. It’s all about the group: the peer group you belong to versus the rest. Thus, the uphill struggle to instill tolerance in the next generation needs to begin very early in life.

From the WSJ:

Here’s a question. There are two groups, Zazes and Flurps. A Zaz hits somebody. Who do you think it was, another Zaz or a Flurp?

It’s depressing, but you have to admit that it’s more likely that the Zaz hit the Flurp. That’s an understandable reaction for an experienced, world-weary reader of The Wall Street Journal. But here’s something even more depressing—4-year-olds give the same answer.

In my last column, I talked about some disturbing new research showing that preschoolers are already unconsciously biased against other racial groups. Where does this bias come from?

Marjorie Rhodes at New York University argues that children are “intuitive sociologists” trying to make sense of the social world. We already know that very young children make up theories about everyday physics, psychology and biology. Dr. Rhodes thinks that they have theories about social groups, too.

In 2012 she asked young children about the Zazes and Flurps. Even 4-year-olds predicted that people would be more likely to harm someone from another group than from their own group. So children aren’t just biased against other racial groups: They also assume that everybody else will be biased against other groups. And this extends beyond race, gender and religion to the arbitrary realm of Zazes and Flurps.

In fact, a new study in Psychological Science by Dr. Rhodes and Lisa Chalik suggests that this intuitive social theory may even influence how children develop moral distinctions.

Back in the 1980s, Judith Smetana and colleagues discovered that very young kids could discriminate between genuinely moral principles and mere social conventions. First, the researchers asked about everyday rules—a rule that you can’t be mean to other children, for instance, or that you have to hang up your clothes. The children said that, of course, breaking the rules was wrong. But then the researchers asked another question: What would you think if teachers and parents changed the rules to say that being mean and dropping clothes were OK?

Children as young as 2 said that, in that case, it would be OK to drop your clothes, but not to be mean. No matter what the authorities decreed, hurting others, even just hurting their feelings, was always wrong. It’s a strikingly robust result—true for children from Brazil to Korea. Poignantly, even abused children thought that hurting other people was intrinsically wrong.

This might leave you feeling more cheerful about human nature. But in the new study, Dr. Rhodes asked similar moral questions about the Zazes and Flurps. The 4-year-olds said it would always be wrong for Zazes to hurt the feelings of others in their group. But if teachers decided that Zazes could hurt Flurps’ feelings, then it would be OK to do so. Intrinsic moral obligations only extended to members of their own group.

The 4-year-olds demonstrate the deep roots of an ethical tension that has divided philosophers for centuries. We feel that our moral principles should be universal, but we simultaneously feel that there is something special about our obligations to our own group, whether it’s a family, clan or country.

Read the entire article after the jump.

Image: Us and Them, Pink Floyd. Courtesy of Pink Floyd / flickr.

Kolmanskop, Namibian Ghost Town

Ghost towns have a peculiar fascination. They hold the story of a once-glorious past and show us how the future took a very different and unexpected turn. Many ghost towns were abandoned by their residents as the economic fortunes of the local area took a turn for the worse — some from exhausted natural resources such as over-exploited mines, others from re-routed transportation, natural disaster or changes in demographics. One such town to have suffered from the inevitable boom and bust cycle of mining — in this case diamonds — is Kolmanskop in Namibia. The town is now being swallowed whole by the ever-shifting sands of the nearby Namib desert, which makes the eerie landscape a photographer’s paradise.

From Atlas Obscura:

People flocked to what became known as Kolmanskop, Namibia, after the discovery of diamonds in the area in 1908. As people arrived with high hopes, houses and other key buildings were built. The new town, which was German-influenced, saw the construction of ballrooms, casinos, theaters, ice factories, and hospitals, as well as the first X-ray station in the southern hemisphere.

Prior to World War I, over 2000 pounds of diamonds were sifted from the sands of the Namib desert. During the war, however, the price of diamonds dropped considerably. On top of this, larger diamonds were later found south of Kolmanskop, in Oranjemund. People picked up and chased after the precious stones. By 1956, the town was completely abandoned.

Today, the eerie ghost town is a popular tourist destination. Guided tours take visitors around the town and through the houses which, today, are filled only with sand.

Read the entire article here.

Image: Kolmanskop. Courtesy of Damien du Toit (coda). See more images from the flickr-stream here.

The College Application Essay

Most U.S. high school seniors have now finished their final year on the production line that is the educational system. Most will also have selected a college, and courses, from one of the thousands of U.S. institutions that offer higher education. Competition to enter many of these colleges is fierce, and admissions offices use a variety of techniques and measurements to filter applicants and to gauge a prospective student’s suitability. One such measure is the college application essay, which still features quite prominently alongside GPA, SAT, and ACT scores and, of course, the parental bank balance.

The New York Times recently featured several student essays that diverged from the norm — these were honest and risky, open and worldly. We excerpt below one such essay for Antioch College by Julian Cranberg:

Ever since I took my first PSAT as a first-semester junior, I have received a constant flow of magazines, brochures, booklets, postcards, etc. touting the virtues of various colleges. Simultaneously, my email account has been force-fed a five-per-week diet of newsletters, college “quizzes,” virtual campus tour links, application calendars, and invitations to “exclusive” over-the-phone question-and-answer sessions. I am a one-year veteran of college advertising.

They started out by sending me friendly yet impersonal compliments, such as “We’re impressed by your academic record,” or “You’ve impressed us, Julian.” One of the funniest yet most disturbing letters I received was printed on a single sheet of paper inside a priority DHL envelope, telling me I received it in this fashion because I was a “priority” to that college. Now, as application time is rolling around, they’ve become a bit more aggressive, hence “REMINDER – University of X Application Due” or “Important Deadline Notice”.

How is it that while I can only send one application to any school to which I am applying, it is okay for any school to send unbridled truckloads of mail my way, applying for my attention? If I have not already made it clear, it’s an annoyance, and, in fact, turns me and undoubtedly others off to applying to these certain schools. However, this annoyance is easy to ignore, and, if I wanted to, I could easily forget all about these mailings after recycling them or deleting them from my email. But beneath the simple annoyance of these mailings lies a pressing and unchallenged issue.

What do these colleges want to get out of these advertisements? For one reason or another, they want my application. This doesn’t mean that their only objective is to craft a better and more diverse incoming class. The more applications a college receives, the more selective they are considered, and the higher they are ranked. This outcome is no doubt figured into their calculations, if it is not, in some cases, the primary driving force behind their mailings.

And these mailings are expensive. Imagine what it would cost to mail a school magazine, with $2.39 postage, to thousands of students across the country every week. The combined postage charge of everything I have received from various colleges must be above $200. Small postcards and envelopes add up fast, especially considering the colossal pool of potential applicants to which they are being sent. Although vastly aiding the United States Postal Service in its time of need, it is nauseating to imagine the volume of money spent on this endeavor. Why, in an era of record-high student loan debt and unemployment, are colleges not reallocating these ludicrous funds to aid their own students instead of extending their arms far and wide to students they have never met?

I understand where the colleges are coming from. The precedent that schools should send mailings to students to “inform” them of what they have to offer has been set, and in this competitive world of colleges vying for the most applications, I only see more mailings to come in the future. It’s strange that the college process is always presented as a competition between students to get into the same colleges. It seems that another battle is also happening, where colleges are competing for the applications of the students.

High school seniors aren’t stupid. Neither are admissions offices. Don’t seniors want to go to school somewhere where they will fit and thrive and not just somewhere that is selective and will look good? Don’t applications offices want a pool of people who truly believe they would thrive in that college’s environment, and not have to deal with the many who thought those guys tossing the frisbee in the picture on the postcard they sent them looked pretty cool? I think it’s time to rethink what applying to college really means, for the folks on both sides, before we hit the impending boom in competition that I see coming. And let’s start by eliminating these silly mailings. Maybe we as seniors would then follow suit and choose intelligently where to apply.

More from the New York Times:

“I wonder if Princeton should be poorer.”

If you’re a high school senior trying to seduce the admissions officer reading your application essay, this may not strike you as the ideal opening line. But Shanti Kumar, a senior at the Bronx High School of Science, went ahead anyway when the university prompted her to react in writing to the idea of “Princeton in the nation’s service and in the service of all nations.”

Back in January, when I asked high school seniors to send in college application essays about money, class, working and the economy, I wasn’t sure what, if anything, would come in over the transom.

But 66 students submitted essays, and with the help of Harry Bauld, the author of “On Writing the College Application Essay,” we’ve selected four to publish in full online and in part in this column. That allowed us to be slightly more selective than Princeton itself was last year.

What these four writers have in common is an appetite for risk. Not only did they talk openly about issues that are emotionally complex and often outright taboo, but they took brave and counterintuitive positions on class, national identity and the application process itself. For anyone looking to inspire their own children or grandchildren who are seeking to go to college in the fall of 2014, these four essays would be a good place to start.

Perhaps the most daring essay of all came from Julian Cranberg, a 17-year-old from Brookline, Mass. One of the first rules of the college admissions process is that you don’t write about the college admissions process.

But Mr. Cranberg thumbed his nose at that convention, taking on the tremendous cost of the piles of mail schools send to potential students, and the waste that results from the effort. He figured that he received at least $200 worth of pitches in the past year or so.

“Why, in an era of record-high student loan debt and unemployment, are colleges not reallocating these ludicrous funds to aid their own students instead of extending their arms far and wide to students they have never met?” he asked in the essay.

Antioch College seemed to think that was a perfectly reasonable question and accepted him, though he will attend Oberlin College instead, to which he did not submit the essay.

“It’s a bold move to critique the very institution he was applying to,” said Mr. Bauld, who also teaches English at Horace Mann School in New York City. “But here’s somebody who knows he can make it work with intelligence and humor.”

Read the entire article here.

Amazon All the Time and Google Toilet Paper

Soon, courtesy of Amazon, Google and other retail giants, and of course lubricated by the ubiquitous UPS and FedEx trucks, you may be able to dispense with the weekly or even daily trip to the grocery store. Amazon is expanding a trial of its same-day grocery delivery service, and others are following suit with select local and regional tests.

You may recall the spectacular implosion of the online grocery delivery service Webvan — a dot-com darling — that came and went in the blink of an internet eye, finally going bankrupt in 2001. Well, times have changed, and now avaricious Amazon and its peers have their eyes trained on your groceries.

So now all you need to do is find a service to deliver your kids to and from school, an employer who will let you work from home, convince your spouse that “staycations” are cool, use Google Street View to become a virtual tourist, and you will never, ever, ever, EVER need to leave your house again!

From Slate:

The other day I ran out of toilet paper. You know how that goes. The last roll in the house sets off a ticking clock; depending on how many people you live with and their TP profligacy, you’re going to need to run to the store within a few hours, a day at the max, or you’re SOL. (Unless you’re a man who lives alone, in which case you can wait till the next equinox.) But it gets worse. My last roll of toilet paper happened to coincide with a shortage of paper towels, a severe run on diapers (you know, for kids!), and the last load of dishwashing soap. It was a perfect storm of household need. And, as usual, I was busy and in no mood to go to the store.

This quotidian catastrophe has a happy ending. In April, I got into the “pilot test” for Google Shopping Express, the search company’s effort to create an e-commerce service that delivers goods within a few hours of your order. The service, which is currently being offered in the San Francisco Bay Area, allows you to shop online at Target, Walgreens, Toys R Us, Office Depot, and several smaller, local stores, like Blue Bottle Coffee. Shopping Express combines most of those stores’ goods into a single interface, which means you can include all sorts of disparate items in the same purchase. Shopping Express also offers the same prices you’d find at the store. After you choose your items, you select a delivery window—something like “Anytime Today” or “Between 2 p.m. and 6 p.m.”—and you’re done. On the fateful day that I’d run out of toilet paper, I placed my order at around noon. Shortly after 4, a green-shirted Google delivery guy strode up to my door with my goods. I was back in business, and I never left the house.

Google is reportedly thinking about charging $60 to $70 a year for the service, making it a competitor to Amazon’s Prime subscription plan. But at this point the company hasn’t finalized pricing, and during the trial period, the whole thing is free. I’ve found it easy to use, cheap, and reliable. Similar to my experience when I first got Amazon Prime, it has transformed how I think about shopping. In fact, in the short time I’ve been using it, Shopping Express has replaced Amazon as my go-to source for many household items. I used to buy toilet paper, paper towels, and diapers through Amazon’s Subscribe & Save plan, which offers deep discounts on bulk goods if you choose a regular delivery schedule. I like that plan when it works, but subscribing to items whose use is unpredictable—like diapers for a newborn—is tricky. I often either run out of my Subscribe & Save items before my next delivery, or I get a new delivery while I still have a big load of the old stuff. Shopping Express is far simpler. You get access to low-priced big-box-store goods without all the hassle of big-box stores—driving, parking, waiting in line. And you get all the items you want immediately.

After using it for a few weeks, it’s hard to escape the notion that a service like Shopping Express represents the future of shopping. (Also the past of shopping—the return of profitless late-1990s’ services like Kozmo and WebVan, though presumably with some way of making money this time.) It’s not just Google: Yesterday, Reuters reported that Amazon is expanding AmazonFresh, its grocery delivery service, to big cities beyond Seattle, where it has been running for several years. Amazon’s move confirms the theory I floated a year ago, that the e-commerce giant’s long-term goal is to make same-day shipping the norm for most of its customers.

Amazon’s main competitive disadvantage, today, is shipping delays. While shopping online makes sense for many purchases, the vast majority of the world’s retail commerce involves stuff like toilet paper and dishwashing soap—items that people need (or think they need) immediately. That explains why Wal-Mart sells half a trillion dollars worth of goods every year, and Amazon sells only $61 billion. Wal-Mart’s customers return several times a week to buy what they need for dinner, and while they’re there, they sometimes pick up higher-margin stuff, too. By offering same-day delivery on groceries and household items, Amazon and Google are trying to edge in on that market.

As I learned while using Shopping Express, the plan could be a hit. If done well, same-day shipping erases the distinctions between the kinds of goods we buy online and those we buy offline. Today, when you think of something you need, you have to go through a mental checklist: Do I need it now? Can it wait two days? Is it worth driving for? With same-day shipping, you don’t have to do that. All shopping becomes online shopping.

Read the entire article here.

Image: Webvan truck. Courtesy of Wikipedia.

Stale Acronym Soup

If you have ever typed (sorry, tweeted) the acronyms LOL or YOLO, then you are guilty as charged of language pollution. The most irritating examples of thumbspeak follow below.

From the Guardian:

Thanks to the on-the-hoof style of chat-rooms and the curtailed nature of the text message and tweet, online abbreviations are now an established part of written English. The question of which is the most irritating, however, is a matter of scholarly debate. Here, by way of opening the discussion, are 10 contenders.

Linguists like to make a distinction between the denotative function of a sign – what it literally means – and the connotative, which is (roughly) what it tells you by implication. The denotative meanings of these abbreviations vary over a wide range. But pretty much all of them connote one thing, which is: “I am a douchebag.”

1) LOL

This is the daddy of them all. In the last decade it has effortlessly overtaken “The cheque’s in the post” and “I love you” as the most-often-told lie in human history. Out loud? Really? And, to complicate things, people are now saying LOL out loud, which is especially banjaxing since you can’t simultaneously say “LOL” and laugh aloud unless you can laugh through your arse. Or say “LOL” through your arse, I suppose, which makes a sort of pun because, linguistically speaking, LOL is now a form of phatic communication. See what I did there? Mega-LOL!

2) YOLO

You Only Live Once. But not for very much longer if you use this abbreviation anywhere near me when I’m holding a claw-hammer. This, as the distinguished internet scholar Matt Muir puts it, is “carpe diem for people with an IQ in double figures”. A friend of mine reports her children using this out loud. This has to end.

3) TBH

To Be Honest. We expect you to be honest, not to make some weary three-fingered gesture of reluctance at having to pony up an uncomfortable truth for an audience who probably can’t really take it. It’s out of the same drawer as “frankly” and “with respect”, and it should be returned to that drawer forthwith.

4) IMHO

In My Humble Opinion. The H in this acronym is always redundant, and the M is usually redundant too: it’s generally an opinion taken off-the-peg from people you follow on Twitter and by whom you hope to be retweeted.

5) JFGI

Just Fucking Google It. Well, charming. Glad I came to you for help. A wittier and more passive-aggressive version of this rude put-down is the website www.lmgtfy.com, which allows you to send your interlocutor a custom-made link saying “Let Me Google That For You” and doing so. My friend Stefan Magdalinski once sent me there, and I can say from first-hand experience that he’s a complete asshole.

6) tl;dr

It stands for “too long; didn’t read”. This abbreviation’s only redeeming feature is that it contains that murmuring under-butler of punctuation marks, the semicolon. On the other hand, it announces that the user is taking time out of his or her life to tell the world not that he disagrees with something, but that he’s ignorant of it. In your face, people who know stuff! In an ideal world there would be a one-character riposte that would convey that you’d stopped reading halfway through your interlocutor’s tedious five-character put-down.

Read the entire article here.

Great Literature and Human Progress

Professor of Philosophy Gregory Currie tackles a thorny issue in his latest article. The question he seeks to answer is, “does great literature make us better?” It’s highly likely that a poll in most nations would show that the majority of people believe literature does in fact propel us forward, intellectually, morally, emotionally and culturally. It seems like a no-brainer. But where is the hard evidence?

From the New York Times:

You agree with me, I expect, that exposure to challenging works of literary fiction is good for us. That’s one reason we deplore the dumbing-down of the school curriculum and the rise of the Internet and its hyperlink culture. Perhaps we don’t all read very much that we would count as great literature, but we’re apt to feel guilty about not doing so, seeing it as one of the ways we fall short of excellence. Wouldn’t reading about Anna Karenina, the good folk of Middlemarch and Marcel and his friends expand our imaginations and refine our moral and social sensibilities?

If someone now asks you for evidence for this view, I expect you will have one or both of the following reactions. First, why would anyone need evidence for something so obviously right? Second, what kind of evidence would he want? Answering the first question is easy: if there’s no evidence – even indirect evidence – for the civilizing value of literary fiction, we ought not to assume that it does civilize. Perhaps you think there are questions we can sensibly settle in ways other than by appeal to evidence: by faith, for instance. But even if there are such questions, surely no one thinks this is one of them.

What sort of evidence could we present? Well, we can point to specific examples of our fellows who have become more caring, wiser people through encounters with literature. Indeed, we are such people ourselves, aren’t we?

I hope no one is going to push this line very hard. Everything we know about our understanding of ourselves suggests that we are not very good at knowing how we got to be the kind of people we are. In fact we don’t really know, very often, what sorts of people we are. We regularly attribute our own failures to circumstance and the failures of others to bad character. But we can’t all be exceptions to the rule (supposing it is a rule) that people do bad things because they are bad people.

We are poor at knowing why we make the choices we do, and we fail to recognize the tiny changes in circumstances that can shift us from one choice to another. When it comes to other people, can you be confident that your intelligent, socially attuned and generous friend who reads Proust got that way partly because of the reading? Might it not be the other way around: that bright, socially competent and empathic people are more likely than others to find pleasure in the complex representations of human interaction we find in literature?

There’s an argument we often hear on the other side, illustrated earlier this year by a piece on The New Yorker’s Web site. Reminding us of all those cultured Nazis, Teju Cole notes the willingness of a president who reads novels and poetry to sign weekly drone strike permissions. What, he asks, became of “literature’s vaunted power to inspire empathy?” I find this a hard argument to like, and not merely because I am not yet persuaded by the moral case against drones. No one should be claiming that exposure to literature protects one against moral temptation absolutely, or that it can reform the truly evil among us. We measure the effectiveness of drugs and other medical interventions by thin margins of success that would not be visible without sophisticated statistical techniques; why assume literature’s effectiveness should be any different?

We need to go beyond the appeal to common experience and into the territory of psychological research, which is sophisticated enough these days to make a start in testing our proposition.

Psychologists have started to do some work in this area, and we have learned a few things so far. We know that if you get people to read a short, lowering story about a child murder they will afterward report feeling worse about the world than they otherwise would. Such changes, which are likely to be very short-term, show that fictions press our buttons; they don’t show that they refine us emotionally or in any other way.

We have learned that people are apt to pick up (purportedly) factual information stated or implied as part of a fictional story’s background. Oddly, people are more prone to do that when the story is set away from home: in a study conducted by Deborah Prentice and colleagues and published in 1997, Princeton undergraduates retained more from a story when it was set at Yale than when it was set on their own campus (don’t worry Princetonians, Yalies are just as bad when you do the test the other way around). Television, with its serial programming, is good for certain kinds of learning; according to a study from 2001 undertaken for the Kaiser Foundation, people who regularly watched the show “E.R.” picked up a good bit of medical information on which they sometimes acted. What we don’t have is compelling evidence that suggests that people are morally or socially better for reading Tolstoy.

Not nearly enough research has been conducted; nor, I think, is the relevant psychological evidence just around the corner. Most of the studies undertaken so far don’t draw on serious literature but on short snatches of fiction devised especially for experimental purposes. Very few of them address questions about the effects of literature on moral and social development, far too few for us to conclude that literature either does or doesn’t have positive moral effects.

There is a puzzling mismatch between the strength of opinion on this topic and the state of the evidence. In fact I suspect it is worse than that; advocates of the view that literature educates and civilizes don’t overrate the evidence — they don’t even think that evidence comes into it. While the value of literature ought not to be a matter of faith, it looks as if, for many of us, that is exactly what it is.

Read the entire article here.

Image: The Odyssey, Homer. Book cover. Courtesy of Goodreads.com

Worst Job in the World

Would you rather be a human automaton inside a Chinese factory making products for your peers, or a banquet attendant in ancient Rome? Thanks to Lapham’s Quarterly for this disturbing infographic, which shows that, for the average worker, times may not have changed as much over the last 2,000 years as we might have believed.

Visit the original infographic here.

Infographic courtesy of Lapham’s Quarterly.

Self-Assured Destruction (SAD)

The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.

Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation and unceasing human stupidity, we have reached an era that we have dubbed SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, as a sign of our so-called progress — in the era of SAD — it only takes one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.

From TomDispatch:

What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.

For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.

And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.

The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.

There have been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and first nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.

In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”

Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.

Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.

So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.

Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.

And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.

So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.

The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.

We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.

What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.

So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.

Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.

And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.

Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.

Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.

Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.

At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.

Read the entire article here.

Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.

Law, Common Sense and Your DNA

Paradoxically, the law and common sense often seem to be at odds. Justice may still be blind, at least in most open democracies, but there seems to be no question as to the stupidity of much of our law.

Some examples: in Missouri, it’s illegal to drive with an uncaged bear in the car; in Maine, it’s illegal to keep Christmas decorations up after January 14th; in New Jersey, it’s illegal to wear a bulletproof vest while committing murder; in Connecticut, a pickle is not an official, legal pickle unless it can bounce; in Louisiana, you can be fined $500 for having a pizza delivered to a friend without their knowledge.

So, today we celebrate a victory for common sense and justice over thoroughly ill-conceived and badly written law — the U.S. Supreme Court unanimously ruled that corporations cannot patent naturally occurring human genes.

Unfortunately, though, given the extremely high financial stakes, this is not likely to be the last we hear of big business seeking to patent or control the building blocks of life.

From the WSJ:

The Supreme Court unanimously ruled Thursday that human genes isolated from the body can’t be patented, a victory for doctors and patients who argued that such patents interfere with scientific research and the practice of medicine.

The court was handing down one of its most significant rulings in the age of molecular medicine, deciding who may own the fundamental building blocks of life.

The case involved Myriad Genetics Inc., which holds patents related to two genes, known as BRCA1 and BRCA2, that can indicate whether a woman has a heightened risk of developing breast cancer or ovarian cancer.

Justice Clarence Thomas, writing for the court, said the genes Myriad isolated are products of nature, which aren’t eligible for patents.

“Myriad did not create anything,” Justice Thomas wrote in an 18-page opinion. “To be sure, it found an important and useful gene, but separating that gene from its surrounding genetic material is not an act of invention.”

Even if a discovery is brilliant or groundbreaking, that doesn’t necessarily mean it’s patentable, the court said.

However, the ruling wasn’t a complete loss for Myriad. The court said that DNA molecules synthesized in a laboratory were eligible for patent protection. Myriad’s shares soared after the court’s ruling.

The court adopted the position advanced by the Obama administration, which argued that isolated forms of naturally occurring DNA weren’t patentable, but artificial DNA molecules were.

Myriad also has patent claims on artificial genes, known as cDNA.

The high court’s ruling was a win for a coalition of cancer patients, medical groups and geneticists who filed a lawsuit in 2009 challenging Myriad’s patents. Thanks to those patents, the Salt Lake City company has been the exclusive U.S. commercial provider of genetic tests for breast cancer and ovarian cancer.

“Today, the court struck down a major barrier to patient care and medical innovation,” said Sandra Park of the American Civil Liberties Union, which represented the groups challenging the patents. “Because of this ruling, patients will have greater access to genetic testing and scientists can engage in research on these genes without fear of being sued.”

Myriad didn’t immediately respond to a request for comment.

The challengers argued the patents have allowed Myriad to dictate the type and terms of genetic screening available for the diseases, while also dissuading research by other laboratories.

Read the entire article here.

Image: Gene showing the coding region in a segment of eukaryotic DNA. Courtesy of Wikipedia.

Innocent Until Proven Guilty, But Always Under Suspicion

It is strange to see the reactions to a remarkable disclosure such as that by the leaker/whistleblower Edward Snowden about the National Security Agency (NSA) peering into all our daily, digital lives. One strange reaction comes from the political left: the left desires a broad and activist government, ready to protect us all, yet decries the NSA’s snooping. Another odd reaction comes from the political right: the right wants government out of people’s lives, yet embraces the idea that the NSA should be looking for virtual skeletons inside people’s digital closets.

But let’s humanize this for a second. Somewhere inside the bowels of the NSA there is (or was) a person, or a small group of people, who actively determines what to look for in your digital communications trail. This person sets some parameters in a computer program and the technology does the rest, sifting through vast mountains of data looking for matches and patterns. Today that filter may be set to flag certain permutations of data: the zone of the originating call, the region of the recipient, keywords or code words embedded in the data traffic. Tomorrow, however, a rather zealous NSA employee may well set the filter to look for different items: keywords signaling a particular political affiliation, a preference for certain TV shows or bars, likes and dislikes of certain foods or celebrities.

We have begun the slide down a very dangerous, slippery slope that imperils our core civil liberties. The First Amendment protects our speech and assembly, but now we know that someone or some group may be evaluating the quality of that speech and determining a course of action if they disagree or if they find us assembling with others with whom they disagree. The Fourth Amendment prohibits unreasonable search — well, it looks like this one is falling by the wayside in light of the NSA program. We presume the secret FISA court, overseeing the secret program, determines in secret what may or may not be deemed “reasonable”.

Regardless of Edward Snowden’s motivations (and his girlfriend’s reaction), this event raises extremely serious issues that citizens must contemplate and openly discuss. It raises questions about the exercise of power, about government overreach and about the appropriate balance between security and privacy. It also raises questions about due process and about the long-held right that presumes us to be innocent first and above all else. It raises a fundamental question about U.S. law and the Constitution and to whom it does and does not apply.

The day before the PRISM program exploded into the national consciousness, only a handful of people — in secret — were determining answers to these constitutional and societal questions. Now, thanks to Mr. Snowden, we can all participate in that debate, and rightly so — while being watched, of course.

From Slate:

Every April, I try to wade through mounds of paperwork to file my taxes. Like most Americans, I’m trying to follow the law and pay all of the taxes that I owe without getting screwed in the process. I try and make sure that every donation I made is backed by proof, every deduction is backed by logic and documentation that I’ll be able to make sense of seven years later. Because, like many Americans, I completely and utterly dread the idea of being audited. Not because I’ve done anything wrong, but the exact opposite. I know that I’m filing my taxes to the best of my ability and yet, I also know that if I became a target of interest from the IRS, they’d inevitably find some checkbox I forgot to check or some subtle miscalculation that I didn’t see. And so what makes an audit intimidating and scary is not because I have something to hide but because proving oneself to be innocent takes time, money, effort, and emotional grit.

Sadly, I’m getting to experience this right now as Massachusetts refuses to believe that I moved to New York mid-last-year. It’s mind-blowing how hard it is to summon up the paperwork that “proves” to them that I’m telling the truth. When it was discovered that Verizon (and presumably other carriers) was giving metadata to government officials, my first thought was: Wouldn’t it be nice if the government would use that metadata to actually confirm that I was in NYC, not Massachusetts? But that’s the funny thing about how data is used by our current government. It’s used to create suspicion, not to confirm innocence.

The frameworks of “innocent until proven guilty” and “guilty beyond a reasonable doubt” are really, really important to civil liberties, even if they mean that some criminals get away. These frameworks put the burden on the powerful entity to prove that someone has done something wrong. Because it’s actually pretty easy to generate suspicion, even when someone is wholly innocent. And still, even with this protection, innocent people are sentenced to jail and even given the death penalty. Because if someone has a vested interest in you being guilty, it’s not impossible to paint that portrait, especially if you have enough data.

It’s disturbing to me how often I watch as someone’s likeness is constructed in ways that contorts the image of who they are. This doesn’t require a high-stakes political issue. This is playground stuff. In the world of bullying, I’m astonished at how often schools misinterpret situations and activities to construct narratives of perpetrators and victims. Teens get really frustrated when they’re positioned as perpetrators, especially when they feel as though they’ve done nothing wrong. Once the stakes get higher, all hell breaks loose. In Sticks and Stones, Slate senior editor Emily Bazelon details how media and legal involvement in bullying cases means that they often spin out of control, such as they did in South Hadley. I’m still bothered by the conviction of Dharun Ravi in the highly publicized death of Tyler Clementi. What happens when people are tarred and feathered as symbols for being imperfect?

Of course, it’s not just one’s own actions that can be used against one’s likeness. Guilt-through-association is a popular American pastime. Remember how the media used Billy Carter to embarrass Jimmy Carter? Of course, it doesn’t take the media or require an election cycle for these connections to be made. Throughout school, my little brother had to bear the brunt of teachers who despised me because I was a rather rebellious student. So when the Boston Marathon bombing occurred, it didn’t surprise me that the media went hogwild looking for any connection to the suspects. Over and over again, I watched as the media took friendships and song lyrics out of context to try to cast the suspects as devils. By all accounts, it looks as though the brothers are guilty of what they are accused of, but that doesn’t make their friends and other siblings evil or justify the media’s decision to portray the whole lot in such a negative light.

So where does this get us? People often feel immune from state surveillance because they’ve done nothing wrong. This rhetoric is perpetuated on American TV. And yet the same media who tells them they have nothing to fear will turn on them if they happen to be in close contact with someone who is of interest to—or if they themselves are the subject of—state interest. And it’s not just about now, but it’s about always.

And here’s where the implications are particularly devastating when we think about how inequality, racism, and religious intolerance play out. As a society, we generate suspicion of others who aren’t like us, particularly when we believe that we’re always under threat from some outside force. And so the more that we live in doubt of other people’s innocence, the more that we will self-segregate. And if we’re likely to believe that people who aren’t like us are inherently suspect, we won’t try to bridge those gaps. This creates societal ruptures and undermines any ability to create a meaningful republic. And it reinforces any desire to spy on the “other” in the hopes of finding something that justifies such an approach. But, like I said, it doesn’t take much to make someone appear suspect.

Read the entire article here.

Image: U.S. Constitution. Courtesy of Wikipedia.

Living Long and Prospering on Ikaria

It’s safe to suggest that most of us above a certain age — let’s say 30 — wish to stay young. It is even safer to suggest, in the absence of a solution to that first wish, that many of us wish to age gracefully and happily. Yet most of us, especially in the West, age in a less dignified manner, amid colorful medicines, lengthy tubes, and unpronounceable procedures. We are collectively living longer, but the quality of those extra years leaves much to be desired.

In a quest to understand the process of aging more thoroughly, researchers regularly descend on areas the world over that are known to have higher-than-average populations of healthy older people. These have become known as “Blue Zones”. One such place is a small, idyllic (there’s a clue right there) Greek island called Ikaria.

From the Guardian:

Gregoris Tsahas has smoked a packet of cigarettes every day for 70 years. High up in the hills of Ikaria, in his favourite cafe, he draws on what must be around his half-millionth fag. I tell him smoking is bad for the health and he gives me an indulgent smile, which suggests he’s heard the line before. He’s 100 years old and, aside from appendicitis, has never known a day of illness in his life.

Tsahas has short-cropped white hair, a robustly handsome face and a bone-crushing handshake. He says he drinks two glasses of red wine a day, but on closer interrogation he concedes that, like many other drinkers, he has underestimated his consumption by a couple of glasses.

The secret of a good marriage, he says, is never to return drunk to your wife. He’s been married for 60 years. “I’d like another wife,” he says. “Ideally one about 55.”

Tsahas is known at the cafe as a bit of a gossip and a joker. He goes there twice a day. It’s a 1km walk from his house over uneven, sloping terrain. That’s four hilly kilometres a day. Not many people half his age manage that far in Britain.

In Ikaria, a Greek island in the far east of the Mediterranean, about 30 miles from the Turkish coast, characters such as Gregoris Tsahas are not exceptional. With its beautiful coves, rocky cliffs, steep valleys and broken canopy of scrub and olive groves, Ikaria looks similar to any number of other Greek islands. But there is one vital difference: people here live much longer than the population on other islands and on the mainland. In fact, people here live on average 10 years longer than those in the rest of Europe and America – around one in three Ikarians lives into their 90s. Not only that, but they also have much lower rates of cancer and heart disease, suffer significantly less depression and dementia, maintain a sex life into old age and remain physically active deep into their 90s. What is the secret of Ikaria? What do its inhabitants know that the rest of us don’t?

The island is named after Icarus, the young man in Greek mythology who flew too close to the sun and plunged into the sea, according to legend, close to Ikaria. Thoughts of plunging into the sea are very much in my mind as the propeller plane from Athens comes in to land. There is a fierce wind blowing – the island is renowned for its wind – and the aircraft appears to stall as it turns to make its final descent, tipping this way and that until, at the last moment, the pilot takes off upwards and returns to Athens. Nor are there any ferries, owing to a strike. “They’re always on strike,” an Athenian back at the airport tells me.

Stranded in Athens for the night, I discover that a fellow thwarted passenger is Dan Buettner, author of a book called The Blue Zones, which details the five small areas in the world where the population outlive the American and western European average by around a decade: Okinawa in Japan, Sardinia, the Nicoya peninsula in Costa Rica, Loma Linda in California and Ikaria.

Tall and athletic, 52-year-old Buettner, who used to be a long-distance cyclist, looks a picture of well-preserved youth. He is a fellow with National Geographic magazine and became interested in longevity while researching Okinawa’s aged population. He tells me there are several other passengers on the plane who are interested in Ikaria’s exceptional demographics. “It would have been ironic, don’t you think,” he notes drily, “if a group of people looking for the secret of longevity crashed into the sea and died.”

Chatting to locals on the plane the following day, I learn that several have relations who are centenarians. One woman says her aunt is 111. The problem for demographers with such claims is that they are often very difficult to stand up. Going back to Methuselah, history is studded with exaggerations of age. In the last century, longevity became yet another battleground in the cold war. The Soviet authorities let it be known that people in the Caucasus were living deep into their hundreds. But subsequent studies have shown these claims lacked evidential foundation.

Since then, various societies and populations have reported advanced ageing, but few are able to supply convincing proof. “I don’t believe Korea or China,” Buettner says. “I don’t believe the Hunza Valley in Pakistan. None of those places has good birth certificates.”

However, Ikaria does. It has also been the subject of a number of scientific studies. Aside from the demographic surveys that Buettner helped organise, there was also the University of Athens’ Ikaria Study. One of its members, Dr Christina Chrysohoou, a cardiologist at the university’s medical school, found that the Ikarian diet featured a lot of beans and not much meat or refined sugar. The locals also feast on locally grown and wild greens, some of which contain 10 times more antioxidants than are found in red wine, as well as potatoes and goat’s milk.

Chrysohoou thinks the food is distinct from that eaten on other Greek islands with lower life expectancy. “Ikarians’ diet may have some differences from other islands’ diets,” she says. “The Ikarians drink a lot of herb tea and small quantities of coffee; daily calorie consumption is not high. Ikaria is still an isolated island, without tourists, which means that, especially in the villages in the north, where the highest longevity rates have been recorded, life is largely unaffected by the westernised way of living.”

But she also refers to research that suggests the Ikarian habit of taking afternoon naps may help extend life. One extensive study of Greek adults showed that regular napping reduced the risk of heart disease by almost 40%. What’s more, Chrysohoou’s preliminary studies revealed that 80% of Ikarian males between the ages of 65 and 100 were still having sex. And, of those, a quarter did so with “good duration” and “achievement”. “We found that most males between 65 and 88 reported sexual activity, but after the age of 90, very few continued to have sex.”

Read the entire article here.

Image: Agios Giorgis Beach, Ikaria. Courtesy of Island-Ikaria travel guide.

Iain (M.) Banks

On June 9, 2013, we lost Iain Banks to cancer. He was a passionate human(ist) and a literary great.

Luckily he left us with a startling collection of resonant and complex works, most notably his series of Culture novels, which prophesy a distant future that, should it come to pass, will surely bear his name as a founding member. Mr. Banks, you will be greatly missed.

From the Guardian:

The writer Iain Banks, who has died aged 59, had already prepared his many admirers for his death. On 3 April he announced on his website that he had inoperable gall bladder cancer, giving him, at most, a year to live. The announcement was typically candid and rueful. It was also characteristic in another way: Banks had a large web-attentive readership who liked to follow his latest reflections as well as his writings. Particularly in his later years, he frequently projected his thoughts via the internet. There can have been few novelists of recent years who were more aware of what their readers thought of their books; there is a frequent sense in his novels of an author teasing, testing and replying to a readership with which he was pretty familiar.

His first published novel, The Wasp Factory, appeared in 1984, when he was 30 years old, though it had been rejected by six publishers before being accepted by Macmillan. It was an immediate succès de scandale. The narrator is the 16-year-old Frank Cauldhame, who lives with his taciturn father in an isolated house on the north-east coast of Scotland. Frank lives in a world of private rituals, some of which involve torturing animals, and has committed several murders. The explanation of his isolation and his obsessiveness is shockingly revealed in one of the culminating plot twists for which Banks was to become renowned.

It was followed by Walking on Glass (1985), composed of three separate narratives whose connections are deliberately made obscure until near the end of the novel. One of these seems to be a science fiction narrative and points the way to Banks’s strong interest in this genre. Equally, multiple narration would continue to feature in his work.

The next year’s novel, The Bridge, featured three separate stories told in different styles: one a realist narrative about Alex, a manager in an engineering company, who crashes his car on the Forth road bridge; another the story of John Orr, an amnesiac living on a city-sized version of the bridge; and a third, the first-person narrative of the Barbarian, retelling myths and legends in colloquial Scots. In combining fantasy and allegory with minutely located naturalistic narrative, it was clearly influenced by Alasdair Gray’s Lanark (1981). It remained the author’s own avowed favourite.

His first science fiction novel, Consider Phlebas, was published in 1987, though he had drafted it soon after completing The Wasp Factory. In it he created The Culture, a galaxy-hopping society run by powerful but benevolent machines and possessed of what its inventor called “well-armed liberal niceness”. It would feature in most of his subsequent sci-fi novels. Its enemies are the Idirans, a religious, humanoid race who resent the benign powers of the Culture. In this conflict, good and ill are not simply apportioned. Banks provided a heady mix of, on the one hand, action and intrigue on a cosmic scale (his books were often called “space operas”), and, on the other, ruminations on the clash of ideas and ideologies.

For the rest of his career literary novels would alternate with works of science fiction, the latter appearing under the name “Iain M Banks” (the “M” standing for Menzies). Banks sometimes spoke of his science fiction books as a writerly vacation from the demands of literary fiction, where he could “pull out the stops”, as he himself put it. Player of Games (1988) was followed by Use of Weapons (1990). The science fiction employed some of the narrative trickery that characterised his literary fiction: Use of Weapons, for instance, featured two interleaved narratives, one of which moved forward in time and the other backwards. Their connectedness only became clear with a final, somewhat outrageous, twist of the narrative. His many fans came to relish these tricks.

Read the entire article here.

Image: Iain Banks. Courtesy of BBC.

MondayMap: The Double Edge of Climate Change

So the changing global climate will imperil our coasts, flood low-lying lands, fuel more droughts, increase weather extremes, and generally make the planet more toasty. But, a new study — for the first time — links increasing levels of CO2 to an increase in global vegetation. Perhaps this portends our eventual fate — ceding the Earth back to the plants — unless humans make some drastic behavioral changes.

From the New Scientist:

The planet is getting lusher, and we are responsible. Carbon dioxide generated by human activity is stimulating photosynthesis and causing a beneficial greening of the Earth’s surface.

For the first time, researchers claim to have shown that the increase in plant cover is due to this “CO2 fertilisation effect” rather than other causes. However, it remains unclear whether the effect can counter any negative consequences of global warming, such as the spread of deserts.

Recent satellite studies have shown that the planet is harbouring more vegetation overall, but pinning down the cause has been difficult. Factors such as higher temperatures, extra rainfall, and an increase in atmospheric CO2 – which helps plants use water more efficiently – could all be boosting vegetation.

To home in on the effect of CO2, Randall Donohue of Australia’s national research institute, the CSIRO in Canberra, monitored vegetation at the edges of deserts in Australia, southern Africa, the US Southwest, North Africa, the Middle East and central Asia. These are regions where there is ample warmth and sunlight, but only just enough rainfall for vegetation to grow, so any change in plant cover must be the result of a change in rainfall patterns or CO2 levels, or both.

If CO2 levels were constant, then the amount of vegetation per unit of rainfall ought to be constant, too. However, the team found that this figure rose by 11 per cent in these areas between 1982 and 2010, mirroring the rise in CO2 (Geophysical Research Letters, doi.org/mqx). Donohue says this lends “strong support” to the idea that CO2 fertilisation drove the greening.

Climate change studies have predicted that many dry areas will get drier and that some deserts will expand. Donohue’s findings make this less certain.

However, the greening effect may not apply to the world’s driest regions. Beth Newingham of the University of Idaho, Moscow, recently published the result of a 10-year experiment involving a greenhouse set up in the Mojave desert of Nevada. She found “no sustained increase in biomass” when extra CO2 was pumped into the greenhouse. “You cannot assume that all these deserts respond the same,” she says. “Enough water needs to be present for the plants to respond at all.”

The extra plant growth could have knock-on effects on climate, Donohue says, by increasing rainfall, affecting river flows and changing the likelihood of wildfires. It will also absorb more CO2 from the air, potentially damping down global warming but also limiting the CO2 fertilisation effect itself.

Read the entire article here.

Image: Global vegetation mapped: Normalized Difference Vegetation Index (NDVI) from Nov. 1, 2007, to Dec. 1, 2007, during autumn in the Northern Hemisphere. This monthly average is based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The greenness values depict vegetation density; higher values (dark greens) show land areas with plenty of leafy green vegetation, such as the Amazon Rainforest. Lower values (beige to white) show areas with little or no vegetation, including sand seas and Arctic areas. Areas with moderate amounts of vegetation are pale green. Land areas with no data appear gray, and water appears blue. Courtesy of NASA.
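
For the technically curious: the greenness in the map above is the Normalized Difference Vegetation Index, a simple ratio comparing how strongly the surface reflects near-infrared light versus red light (healthy leaves reflect a lot of the former and absorb much of the latter). Below is a minimal, illustrative sketch of that calculation in Python. The reflectance numbers are made up for illustration; this is just the basic formula, not NASA’s actual MODIS processing.

```python
# Minimal sketch of the Normalized Difference Vegetation Index (NDVI).
# Reflectance values are hypothetical; real NDVI maps are computed per pixel
# from satellite red and near-infrared bands.

def ndvi(nir: float, red: float) -> float:
    """NDVI = (NIR - Red) / (NIR + Red), ranging roughly from -1 to +1."""
    if nir + red == 0:
        return 0.0  # guard against division by zero (e.g., no-data pixels)
    return (nir - red) / (nir + red)

# Hypothetical surface reflectances (fraction of incoming light reflected).
samples = {
    "dense rainforest": (0.50, 0.05),  # strong NIR, little red -> high NDVI
    "sparse scrub":     (0.30, 0.15),  # moderate vegetation
    "desert sand":      (0.30, 0.28),  # NIR and red similar -> near zero
}

for name, (nir, red) in samples.items():
    print(f"{name:>16}: NDVI = {ndvi(nir, red):+.2f}")
```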

Cos Things Break, Don’t They

Most things, natural or manufactured, break after a while. And most photographers spend an inordinate amount of time ensuring that their subject — usually an object — is represented in the most wholesome light possible, literally and metaphorically. However, for one enterprising photographer, it’s all about things in their broken form, albeit displayed exquisitely in collages of their constituent pieces.

From the Guardian:

Canadian photographer Todd McLellan makes visible the inner workings of everyday products by dismantling, carefully arranging the components and photographing them. His book, Things Come Apart, presents a unique view of items such as chainsaws and iPods, transforming ordinary objects into works of art.

See the entire gallery here.

Image: Raleigh bicycle from the 80s. Number of parts: 893. Courtesy of Todd McLellan/Thames & Hudson / Guardian.

PRISM

From the news reports first aired a couple of days ago and posted here, we now know that the U.S. National Security Agency (NSA) has collected, and is still collecting, vast amounts of data related to our phone calls. But it seems that this is only the very tip of a very large, nasty iceberg. Our government is sifting through our online communications as well: email, online chat, photos, videos, and social networking data.

From the Washington Post:

Through a top-secret program authorized by federal judges working under the Foreign Intelligence Surveillance Act (FISA), the U.S. intelligence community can gain access to the servers of nine Internet companies for a wide range of digital data. Documents describing the previously undisclosed program, obtained by The Washington Post, show the breadth of U.S. electronic surveillance capabilities in the wake of a widely publicized controversy over warrantless wiretapping of U.S. domestic telephone communications in 2005.

Read the entire article here.

Image: From the PRISM Powerpoint presentation – The PRISM program collects a wide range of data from the nine companies, although the details vary by provider. Courtesy of Washington Post.