Big Data and Even Bigger Problems

First, a definition. Big data: typically a collection of large and complex datasets that are too cumbersome to process and analyze using traditional computational approaches and database applications. Usually the big data moniker will be accompanied by an IT vendor’s pitch for a shiny new software (and possibly hardware) solution able to crunch through petabytes (one petabyte is a million gigabytes) of data and produce a visualizable result that mere mortals can decipher.

Many companies see big data and related solutions as a panacea for a range of business challenges: customer service, medical diagnostics, product development, shipping and logistics, climate change studies, genomic analysis and so on. A great example was the last U.S. election. Many political wonks — from both sides of the aisle — agreed that President Obama’s re-election was significantly aided by big data. So, with that in mind, many are now looking to apply big data to more important problems.

From Technology Review:

As chief scientist for President Obama’s reëlection effort, Rayid Ghani helped revolutionize the use of data in politics. During the final 18 months of the campaign, he joined a sprawling team of data and software experts who sifted, collated, and combined dozens of pieces of information on each registered U.S. voter to discover patterns that let them target fund-raising appeals and ads.

Now, with Obama again ensconced in the Oval Office, some veterans of the campaign’s data squad are applying lessons from the campaign to tackle social issues such as education and environmental stewardship. Edgeflip, a startup Ghani founded in January with two other campaign members, plans to turn the ad hoc data analysis tools developed for Obama for America into software that can make nonprofits more effective at raising money and recruiting volunteers.

Ghani isn’t the only one thinking along these lines. In Chicago, Ghani’s hometown and the site of Obama for America headquarters, some campaign members are helping the city make available records of utility usage and crime statistics so developers can build apps that attempt to improve life there. It’s all part of a bigger idea to engineer social systems by scanning the numerical exhaust from mundane activities for patterns that might bear on everything from traffic snarls to human trafficking. Among those pursuing such humanitarian goals are startups like DataKind as well as large companies like IBM, which is redrawing bus routes in Ivory Coast (see “African Bus Routes Redrawn Using Cell-Phone Data”), and Google, with its flu-tracking software (see “Sick Searchers Help Track Flu”).

Ghani, who is 35, has had a longstanding interest in social causes, like tutoring disadvantaged kids. But he developed his data-mining savvy during 10 years as director of analytics at Accenture, helping retail chains forecast sales, creating models of consumer behavior, and writing papers with titles like “Data Mining for Business Applications.”

Before joining the Obama campaign in July 2011, Ghani wasn’t even sure his expertise in machine learning and predicting online prices could have an impact on a social cause. But the campaign’s success in applying such methods on the fly to sway voters is now recognized as having been potentially decisive in the election’s outcome (see “A More Perfect Union”).

“I realized two things,” says Ghani. “It’s doable at the massive scale of the campaign, and that means it’s doable in the context of other problems.”

At Obama for America, Ghani helped build statistical models that assessed each voter along five axes: support for the president; susceptibility to being persuaded to support the president; willingness to donate money; willingness to volunteer; and likelihood of casting a vote. These models allowed the campaign to target door knocks, phone calls, TV spots, and online ads to where they were most likely to benefit Obama.

One of the most important ideas he developed, dubbed “targeted sharing,” now forms the basis of Edgeflip’s first product. It’s a Facebook app that prompts people to share information from a nonprofit, but only with those friends predicted to respond favorably. That’s a big change from the usual scattershot approach of posting pleas for money or help and hoping they’ll reach the right people.

Edgeflip’s app, like the one Ghani conceived for Obama, will ask people who share a post to provide access to their list of friends. This will pull in not only friends’ names but also personal details, like their age, that can feed models of who is most likely to help.

Say a hurricane strikes the southeastern United States and the Red Cross needs clean-up workers. The app would ask Facebook users to share the Red Cross message, but only with friends who live in the storm zone, are young and likely to do manual labor, and have previously shown interest in content shared by that user. But if the same person shared an appeal for donations instead, he or she would be prompted to pass it along to friends who are older, live farther away, and have donated money in the past.
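
As a rough illustration of the idea (and emphatically not Edgeflip’s actual model, whose features and weights are not public), targeted sharing amounts to scoring each friend against the appeal at hand and prompting a share only to those who clear a threshold. The feature names, weights, and cutoff in this Python sketch are invented for demonstration.

from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    lives_in_storm_zone: bool
    age: int
    engaged_with_sharer_before: bool

def volunteer_appeal_score(friend: Friend) -> float:
    """Hypothetical score for how likely a friend is to answer a clean-up appeal."""
    score = 0.0
    if friend.lives_in_storm_zone:
        score += 0.5   # proximity matters most for manual labor
    if friend.age < 40:
        score += 0.3   # assume younger friends are more likely to volunteer
    if friend.engaged_with_sharer_before:
        score += 0.2   # prior engagement with this sharer predicts response
    return score

def pick_recipients(friends: list[Friend], threshold: float = 0.6) -> list[Friend]:
    """Return only the friends whose score clears the threshold."""
    return [f for f in friends if volunteer_appeal_score(f) >= threshold]

friends = [
    Friend("Ana", True, 28, True),    # score 1.0 -> prompted to share
    Friend("Bob", False, 62, False),  # score 0.0 -> left alone
]
print([f.name for f in pick_recipients(friends)])  # ['Ana']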

Michael Slaby, a senior technology official for Obama who hired Ghani for the 2012 election season, sees great promise in the targeted sharing technique. “It’s one of the most compelling innovations to come out of the campaign,” says Slaby. “It has the potential to make online activism much more efficient and effective.”

For instance, Ghani has been working with Fidel Vargas, CEO of the Hispanic Scholarship Fund, to increase that organization’s analytical savvy. Vargas thinks social data could predict which scholarship recipients are most likely to contribute to the fund after they graduate. “Then you’d be able to give away scholarships to qualified students who would have a higher probability of giving back,” he says. “Everyone would be much better off.”

Ghani sees a far bigger role for technology in the social sphere. He imagines online petitions that act like open-source software, getting passed around and improved. Social programs, too, could get constantly tested and improved. “I can imagine policies being designed a lot more collaboratively,” he says. “I don’t know if the politicians are ready to deal with it.” He also thinks there’s a huge amount of untapped information out there about childhood obesity, gang membership, and infant mortality, all ready for big data’s touch.

Read the entire article here.

Infographic courtesy of visua.ly. See the original here.

Your Home As Eco-System

For centuries biologists, zoologists and ecologists have been mapping the wildlife that surrounds us in the great outdoors. Now a group led by microbiologist Noah Fierer at the University of Colorado Boulder is pursuing flora and fauna in one of the last unexplored eco-systems — the home. (Not for the faint of heart).

From the New York Times:

On a sunny Wednesday, with a faint haze hanging over the Rockies, Noah Fierer eyed the field site from the back of his colleague’s Ford Explorer. Two blocks east of a strip mall in Longmont, one of the world’s last underexplored ecosystems had come into view: a sandstone-colored ranch house, code-named Q. A pair of dogs barked in the backyard.

Dr. Fierer, 39, a microbiologist at the University of Colorado Boulder and self-described “natural historian of cooties,” walked across the front lawn and into the house, joining a team of researchers inside. One swabbed surfaces with sterile cotton swabs. Others logged the findings from two humming air samplers: clothing fibers, dog hair, skin flakes, particulate matter and microbial life.

Ecologists like Dr. Fierer have begun peering into an intimate, overlooked world that barely existed 100,000 years ago: the great indoors. They want to know what lives in our homes with us and how we “colonize” spaces with other species — viruses, bacteria, microbes. Homes, they’ve found, contain identifiable ecological signatures of their human inhabitants. Even dogs exert a significant influence on the tiny life-forms living on our pillows and television screens. Once ecologists have more thoroughly identified indoor species, they hope to come up with strategies to scientifically manage homes, by eliminating harmful taxa and fostering species beneficial to our health.

But the first step is simply to take a census of what’s already living with us, said Dr. Fierer; only then can scientists start making sense of their effects. “We need to know what’s out there first. If you don’t know that, you’re wandering blind in the wilderness.”

Here’s an undeniable fact: We are an indoor species. We spend close to 90 percent of our lives in drywalled caves. Yet traditionally, ecologists ventured outdoors to observe nature’s biodiversity, in the Amazon jungles, the hot springs of Yellowstone or the subglacial lakes of Antarctica. (“When you train as an ecologist, you imagine yourself tromping around in the forest,” Dr. Fierer said. “You don’t imagine yourself swabbing a toilet seat.”)

But as humdrum as a home might first appear, it is a veritable wonderland. Ecology does not stop at the front door; a home to you is also home to an incredible array of wildlife.

Besides the charismatic fauna commonly observed in North American homes — dogs, cats, the occasional freshwater fish — ants and roaches, crickets and carpet bugs, mites and millions upon millions of microbes, including hundreds of multicellular species and thousands of unicellular species, also thrive in them. The “built environment” doubles as a complex ecosystem that evolves under the selective pressure of its inhabitants, their behavior and the building materials. As microbial ecologists swab DNA from our homes, they’re creating an atlas of life much as 19th-century naturalists like Alfred Russel Wallace once logged flora and fauna on the Malay Archipelago.

Take an average kitchen. In a study published in February in the journal Environmental Microbiology, Dr. Fierer’s lab examined 82 surfaces in four Boulder kitchens. Predictable patterns emerged. Bacterial species associated with human skin, like Staphylococcaceae or Corynebacteriaceae, predominated. Evidence of soil showed up on the floor, and species associated with raw produce (Enterobacteriaceae, for example) appeared on countertops. Microbes common in moist areas — including sphingomonads, some strains infamous for their ability to survive in the most toxic sites — splashed in a kind of jungle above the faucet.

A hot spot of unrivaled biodiversity was discovered on the stove exhaust vent, probably the result of forced air and settling. The counter and refrigerator, places seemingly as disparate as temperate and alpine grasslands, shared a similar assemblage of microbial species — probably less because of temperature and more a consequence of cleaning. Dr. Fierer’s lab also found a few potential pathogens, like Campylobacter, lurking on the cupboards. There was evidence of the bacterium on a microwave panel, too, presumably a microbial “fingerprint” left by a cook handling raw chicken.

If a kitchen represents a temperate forest, few of its plants would be poison ivy. Most of the inhabitants are relatively benign. In any event, eradicating them is neither possible nor desirable. Dr. Fierer wants to make visible this intrinsic, if unseen, aspect of everyday life. “For a lot of the general public, they don’t care what’s in soil,” he said. “People care more about what’s on their pillowcase.” (Spoiler alert: The microbes living on your pillowcase are not all that different from those living on your toilet seat. Both surfaces come in regular contact with exposed skin.)

Read the entire article after the jump.

Image: Animals commonly found in the home. Courtesy of North Carolina State University.

You Can Check Out Anytime You Like…

“… But You Can Never Leave”. So goes one of the most memorable lyrical phrases from The Eagles (Hotel California).

Of late, it seems that this state of affairs also applies to a vast collection of people on Facebook; many wish to leave but lack the social capital or wisdom or backbone to do so.

From the Washington Post:

Bad news, everyone. We’re trapped. We may well be stuck here for the rest of our lives. I hope you brought canned goods.

A dreary line of tagged pictures and status updates stretches before us from here to the tomb.

Like life, Facebook seems to get less exciting the longer we spend there. And now everyone hates Facebook, officially.

Last week, Pew reported that 94 percent of teenagers are on Facebook, but that they are miserable about it. Then again, when are teenagers anything else? Pew’s focus groups of teens complained about the drama, said Twitter felt more natural, said that it seemed like a lot of effort to keep up with everyone you’d ever met, found the cliques and competition for friends offputting –

All right, teenagers. You have a point. And it doesn’t get better.

The trouble with Facebook is that 94 percent of people are there. Anything with 94 Percent of People involved ceases to have a personality and becomes a kind of public utility. There’s no broad generalization you can make about people who use flush toilets. Sure, toilets are a little odd, and they become quickly ridiculous when you stare at them long enough, the way a word used too often falls apart into meaningless letters under scrutiny, but we don’t think of them as peculiar. Everyone’s got one. The only thing weirder than having one of those funny porcelain thrones in your home would be not having one.

Facebook is like that, and not just because we deposit the same sort of thing in both. It used to define a particular crowd. But it’s no longer the bastion of college students and high schoolers avoiding parental scrutiny. Mom’s there. Heck, Velveeta Cheesy Skillets are there.

It’s just another space in which all the daily drama of actual life plays out. All the interactions that used only to be annoying to the people in the room with you at the time are now played out indelibly in text and pictures that can be seen from great distances by anyone who wants to take an afternoon and stalk you. Oscar Wilde complained about married couples who flirted with each other, saying that it was like washing clean linen in public. Well, just look at the wall exchanges of You Know The Couple I Mean. “Nothing is more irritating than not being invited to a party you wouldn’t be seen dead at,” Bill Vaughan said. On Facebook, that’s magnified to parties in entirely different states.

Facebook has been doing its best to approximate our actual social experience — that creepy foray into chairs aside. But what it forgot was that our actual social experience leaves much to be desired. After spending time with Other People smiling politely at news of what their sonograms are doing, we often want to rush from the room screaming wordlessly and bang our heads into something.

Hell is other people, updating their statuses with news that Yay The Strange Growth Checked Out Just Fine.

This is the point where someone says, “Well, if it’s that annoying, why don’t you unsubscribe?”

But you can’t.

Read the entire article here.

Image: Facebook logo courtesy of Mirror / Facebook.

Frankenlanguage

An interesting story on the adoption of pop culture words into our common lexicon. Beware! The next blockbuster sci-fi movie that you see may influence your next choice of noun.

From the Guardian:

Water cooler conversation at a dictionary company tends towards the odd. A while ago I was chatting with one of my colleagues about our respective defining batches. “I’m not sure,” he said, “what to do about the plural of ‘hobbit’. There are some citations for ‘hobbitses’, but I think they may be facetious uses. Have any thoughts?”

I did: “We enter ‘hobbit’ into the dictionary?” You learn something new every day.

Pop culture is a goldmine of neologisms, and science fiction and fantasy is one rich seam that has been contributing to English for hundreds of years. Yes, hundreds: because what is Gulliver’s Travels but a fantasy satire of 18th-century travel novels? And what is Frankenstein but science fiction? The name of Mary Shelley’s monster lives on both as its own word and as a combining form used in words like “frankenfood”. And Swift’s fantasy novel was so evocative, we adopted a number of words from it, such as “Lilliputian”, the tongue-twisting “Brobdingnagian”, and – surprise – “yahoo”.

Don’t be surprised. Many words have their origins in science fiction and fantasy writing, but have been so far removed from their original contexts that we’ve forgotten. George Orwell gave us “doublespeak”; Carl Sagan is responsible for the term “nuclear winter”; and Isaac Asimov coined “microcomputer” and “robotics”. And, yes, “blaster”, as in “Hokey religions and ancient weapons are no match for a good blaster at your side, kid.”

Which brings us to the familiar and more modern era of sci-fi and fantasy, ones filled with tricorders, lightsabers, dark lords in fiery mountain fortresses, and space cowboys. Indeed, we have whole cable channels devoted to sci-fi and fantasy shows, and the big blockbuster movie this season is Star Trek (again). So why haven’t we seen “tricorder” and “lightsaber” entered into the dictionary? When will the dictionary give “Quidditch” its due? Whither “gorram”?

All fields have their own vocabulary and, as often happens, that vocabulary is often isolated to that field. When an ad executive talks about a “deck”, they are not referring to the same “deck” that poker players use, or the same “deck” that sailors work on. When specialized vocabulary does appear outside of its particular field and in more general literature, it’s often long after its initial point of origin. This process is no different with words from science fiction and fantasy. “Tricorder”, for instance, is used in print, but most often only to refer to the medical diagnostic device used in the Star Trek movies. It’s not quite generic enough to merit entry as a general vocabulary word.

In some cases, the people who gave us the word aren’t keen to see it taken outside of its intended world and used with an extended meaning. Consequently, some coinages don’t get into print as often as you’d think: “Jedi mind trick” only appears four times in the Corpus of Contemporary American English. That corpus contains over 450 million indexed words.

Savvy writers of each genre also liked to resurrect and breathe new life into old words. JRR Tolkien not only gave us “hobbit”, he also popularized the plural “dwarves”, which has appeared in English with increasing frequency since the publication of The Hobbit in 1937. “Eldritch”, which dates to the 1500s, is linked in the modern mind almost exclusively to the stories of HP Lovecraft. The verb “terraform” that was most recently popularized by Joss Whedon’s show Firefly dates back to the 1940s, though it was uncommon until Firefly aired. Prior to 1977, storm troopers were Nazis.

Even new words can look old: JK Rowling’s “muggle” is a coinage of her own devising – but there are earlier, rarer “muggles” entered into the Oxford English Dictionary (one meaning “a tail resembling that of a fish”, and another meaning “a young woman or sweetheart”), along with a “dumbledore” (“a bumble-bee”) and a “hagrid” (a variant of “hag-ridden” meaning “afflicted by nightmares”).

More interesting to the lexicographer is that, in spite of the devoted following that sci-fi and fantasy each have – of the top 10 highest-grossing film franchises in history, at least five of them are science fiction or fantasy – we haven’t adopted more sci-fi and fantasy words into general use. Perhaps, in the case of sci-fi, we just need to wait for technology to improve to the point that we can talk with our co-workers about jumping into hyperspace or hanging out on the holodeck.

Read the entire article here.

Charting the Rise (and Fall) of Humanity

Rob Wile over at Business Insider has posted a selection of graphs that in his words “will restore your faith in humanity”. This should put many cynics on the defensive — after all, his charts clearly show that conflict is on the decline and democracy is on the rise. But look more closely and you’ll see that slavery is still with us, poverty and social injustice abound, the wealthy are wealthier, and conspicuous consumption is rising.

From Business Insider:

Lately, it feels like the news has been dominated by tragedies: natural disasters, evil people, and sometimes just carelessness.

But it would be a mistake to become cynical.

We’ve put together 31 charts that we think will help restore your faith in humanity.

2) Democracy’s in. Autocracy’s out.

3) Slavery is disappearing.

Read the entire article here.

Revisiting Drake

In 1960 radio astronomer Frank Drake began the first systematic search for intelligent signals emanating from space. He was not successful, but his pioneering efforts paved the way for numerous other programs, including SETI (Search for Extra-Terrestrial Intelligence). The Drake Equation is named for him and, put simply, gives an estimate of the number of active, communicative extraterrestrial civilizations in our own galaxy. Drake postulated the equation as a way to get the scientific community engaged in the search for life beyond our home planet.

The Drake equation is:

N = R* × fp × ne × fl × fi × fc × L

where:

N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone); and

R* = the average rate of star formation per year in our galaxy

fp = the fraction of those stars that have planets

ne = the average number of planets that can potentially support life per star that has planets

fl = the fraction of planets that could support life that actually develop life at some point

fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)

fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space

L = the length of time for which such civilizations release detectable signals into space
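
To make the arithmetic concrete, here is a minimal Python sketch of the equation. The input values are purely illustrative choices for demonstration, not anyone’s published estimates.

# A minimal sketch of the Drake equation. All parameter values below are
# illustrative placeholders, not observed or published figures.

def drake_equation(r_star, f_p, n_e, f_l, f_i, f_c, lifetime):
    """Return N, the estimated number of detectable civilizations in the galaxy."""
    return r_star * f_p * n_e * f_l * f_i * f_c * lifetime

n = drake_equation(
    r_star=1.0,       # average rate of star formation per year
    f_p=0.5,          # fraction of stars with planets
    n_e=2.0,          # habitable planets per star that has planets
    f_l=1.0,          # fraction of those planets that develop life
    f_i=0.01,         # fraction of those that develop intelligent life
    f_c=0.01,         # fraction of civilizations releasing detectable signals
    lifetime=10_000,  # years such signals keep being released
)
print(n)  # -> 1.0 with these illustrative inputs

Because the terms simply multiply, the result is dominated by whichever factors are least constrained, which is why better planet statistics from Kepler sharpen only part of the estimate.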

Now, based on recent discoveries of hundreds of extra-solar planets, or exoplanets (those beyond our solar system), by the Kepler space telescope and by ground-based observatories, researchers are fine-tuning the original Drake Equation for the 21st century.

From the New Scientist:

An iconic tool in the search for extraterrestrial life is getting a 21st-century reboot – just as our best planet-hunting telescope seems to have died. Though the loss of NASA’s Kepler telescope is a blow, the reboot could mean we find signs of life on extrasolar planets within a decade.

The new tool takes the form of an equation. In 1961 astronomer Frank Drake scribbled his now-famous equation for calculating the number of detectable civilisations in the Milky Way. The Drake equation includes a number of terms that at the time seemed unknowable – including the very existence of planets beyond our solar system.

But the past two decades have seen exoplanets pop up like weeds, particularly in the last few years thanks in large part to the Kepler space telescope. Launched in 2009, Kepler has found more than 130 worlds and detected 3000 or so more possibles. The bounty has given astronomers the first proper census of planets in one region of our galaxy, allowing us to make estimates of the total population of life-friendly worlds across the Milky Way.

With that kind of data in hand, Sara Seager at the Massachusetts Institute of Technology reckons the Drake equation is ripe for a revamp. Her version narrows a few of the original terms to account for our new best bets of finding life, based in part on what Kepler has revealed. If the original Drake equation was a hatchet, the new Seager equation is a scalpel.

Seager presented her work this week at a conference in Cambridge, Massachusetts, entitled “Exoplanets in the Post-Kepler Era”. The timing could not be more prescient. Last week Kepler suffered a surprise hardware failure that knocked out its ability to see planetary signals clearly. If it can’t be fixed, the mission is over.

“When we talked about the post-Kepler era, we thought that would be three to four years from now,” co-organiser David Charbonneau of the Harvard-Smithsonian Center for Astrophysics said last week. “We now know the post-Kepler era probably started two days ago.”

But Kepler has collected data for four years, slightly longer than the mission’s original goal, and so far only the first 18 months’ worth have been analysed. That means it may have already gathered enough information to give alien-hunters a fighting chance.

The original Drake equation includes seven terms, which multiplied together give the number of intelligent alien civilisations we could hope to detect (see diagram). Kepler was supposed to pin down two terms: the fraction of stars that have planets, and the number of those planets that are habitable.

To do that, Kepler had been staring unflinchingly at some 150,000 stars near the constellation Cygnus, looking for periodic changes in brightness caused by a planet crossing, or transiting, a star’s face as seen from Earth. This method tells us a planet’s size and its rough distance from its host star.

Size gives a clue to a planet’s composition, which tells us whether it is rocky like Earth or gassy like Neptune. Before Kepler, only a few exoplanets had been identified as small enough to be rocky, because other search methods were better suited to spotting larger, gas giant worlds.
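
The geometry behind this is worth a quick aside (mine, not the article’s): the fractional dip in starlight during a transit is roughly the square of the planet-to-star radius ratio, so size falls straight out of the light curve. A back-of-the-envelope check for an Earth-size planet crossing a Sun-like star:

# Rough transit-depth illustration; radii are round published values in kilometres.
R_EARTH_KM = 6_371
R_SUN_KM = 695_700

depth = (R_EARTH_KM / R_SUN_KM) ** 2
print(f"An Earth-size planet dims a Sun-like star by ~{depth:.2%}")
# about 0.01 percent, which is why Kepler needed such precise, uninterrupted photometry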

“Kepler is the single most revolutionary project that has ever been undertaken in exoplanets,” says Charbonneau. “It broke open the piggybank and rocky planets poured out.” A planet’s distance from its star is also crucial, because that tells us whether the temperature is right for liquid water – and so perhaps life – to exist.

But with Kepler’s recent woes, hopes of finding enough potentially habitable planets, or Earth twins, to satisfy the Drake equation have dimmed. The mission was supposed to run for three-and-a-half years, which should have been enough to pinpoint Earth-sized planets with years of a similar length. After the telescope came online, the mission team realised that other sun-like stars are more active than ours, and they bounce around too much in the telescope’s field of view. To find enough Earths, they would need seven or eight years of data.

Read the entire article here.

Image courtesy of the BBC. Drake Equation courtesy of Wikipedia.

Violence to the English Language

If you are an English speaker and are over the age of 39, you may be pondering the fate of the English language. As the younger generations fill cyberspace with terabytes of misspelled texts and tweets, do you not wonder if gorgeous, grammatical language will survive? Are the technophobes and anti-Twitterites doomed to a future world of #hashtag-driven conversation and ADHD-like literature? Those of us who care are reminded of George Orwell’s 1946 essay “Politics and the English Language”, in which he decried the swelling ugliness of the language at the time.

Orwell opens his essay thus,

Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language — so the argument runs — must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.

My, how Orwell would squirm in his Oxfordshire grave were he to be exposed to his mother tongue, as tweeted, in 2013.

From the Guardian:

Some while ago, with reference to Orwell’s essay on “Politics and the English language”, I addressed the language of the internet, an issue that stubbornly refuses to go away. Perhaps now, more than ever, we need to consider afresh what’s happening to English prose in cyberspace.

To paraphrase Orwell, the English of the world wide web – loose, informal, and distressingly dyspeptic – is not really the kind people want to read in a book, a magazine, or even a newspaper. But there’s an assumption that, because it’s part of the all-conquering internet, we cannot do a thing about it. Twenty-first century civilisation has been transformed in a way without precedent since the invention of moveable type. English prose, so one argument runs, must adapt to the new lexicon with all its grammatical violations and banality. Language is normative; it has – some will say – no choice. The violence the internet does to the English language is simply the cost of doing business in the digital age.

From this, any struggle against the abuse and impoverishment of English online (notably, in blogs and emails) becomes what Orwell called “a sentimental archaism”. Behind this belief lies the recognition that language is a natural growth and not an instrument we can police for better self-expression. To argue differently is to line up behind Jonathan Swift and the prescriptivists (see Swift’s essay “A Proposal for Correcting, Improving and Ascertaining the English Tongue”).

If you refer to “Politics and the English Language” (a famous essay actually commissioned for in-house consumption by Orwell’s boss, the Observer editor David Astor) you will find that I have basically adapted his more general concerns about language to the machinations of cyberspace and the ebb and flow of language on the internet.

And why not? First, he puts it very well. Second, among Orwell’s heirs (the writers, bloggers and journalists of today), there’s still a subconscious, half-admitted anxiety about what’s happening to English prose in the unpoliced cyber-wilderness. This, too, is a recurrent theme with deep roots. As long ago as 1946, Orwell said that English was “in a bad way”. Look it up: the examples he cited are both amusingly archaic, but also appropriately gruesome.

Sixty-something years on, in 2013, quite a lot of people would probably concede a similar anxiety: or at least some mild dismay at the overall crassness of English prose in the age of global communications.

Read the entire article here.

Image: Politics and the English language, book cover. Courtesy of George Orwell estate / Apple.

From RNA Chemistry to Cell Biology

Each day we inch towards a better scientific understanding of how life is thought to have begun on our planet. Over the last decade researchers have shown how molecules like the nucleotides that make up complex chains of RNA (ribonucleic acid) and DNA (deoxyribonucleic acid) may have formed in the primaeval chemical soup of the early Earth. But it’s altogether a much greater leap to get from RNA (or DNA) to even a simple biological cell. Some recent work sheds more light, and suggests that the chemical-to-biological chasm between long strands of RNA and a complex cell may not be as wide as once thought.

From ars technica:

Origin of life researchers have made impressive progress in recent years, showing that simple chemicals can combine to make nucleotides, the building blocks of DNA and RNA. Given the right conditions, these nucleotides can combine into ever-longer stretches of RNA. A lot of work has demonstrated that RNAs can perform all sorts of interesting chemistry, specifically binding other molecules and catalyzing reactions.

So the case for life getting its start in an RNA world has gotten very strong in the past decade, but the difference between a collection of interesting RNAs and anything like a primitive cell—surrounded by membranes, filled with both RNA and proteins, and running a simple metabolism—remains a very wide chasm. Or so it seems. A set of papers that came out in the past several days suggest that the chasm might not be as large as we’d tend to think.

Ironing out metabolism

A lot of the basic chemistry that drives the cell is based on electron transport, typically involving proteins that contain an iron atom. These reactions not only create some of the basic chemicals that are necessary for life, they’re also essential to powering the cell. Both photosynthesis and the breakdown of sugars involve the transfer of electrons to and from proteins that contain an iron atom.

DNA and RNA tend to have nothing to do with iron, interacting with magnesium instead. But some researchers at Georgia Tech have considered that fact a historical accident. Since photosynthesis put so much oxygen into the atmosphere, most of the iron has been oxidized into a state where it’s not soluble in water. If you go back to before photosynthesis was around, the oceans were filled with dissolved iron. Previously, the group had shown that, in oxygen-free and iron rich conditions, RNAs would happily work with iron instead and that its presence could speed up their catalytic activity.

Now the group is back with a new paper showing that if you put a bunch of random RNAs into the same conditions, some of them can catalyze electron transfer reactions. By “random,” I mean RNAs that are currently used by cells to do completely unrelated things (specifically, ribosomal and transfer RNAs). The reactions they catalyze are very simple, but remember: these RNAs don’t normally function as a catalyst at all. It wouldn’t surprise me if, after a number of rounds of evolutionary selection, an iron-RNA combination could be found that catalyzes a reaction that’s a lot closer to modern metabolism.

All of which suggests that the basics of a metabolism could have gotten started without proteins around.

Proteins build membranes

Clearly, proteins showed up at some point. They certainly didn’t look much like the proteins we see today, which may have hundreds or thousands of amino acids linked together. In fact, they may not have looked much like proteins at all, if a paper from Jack Szostak’s group is any indication. Szostak’s found that just two amino acids linked together may have catalytic activity. Some of that activity can help them engage in competition over another key element of the first cells: membrane material.

The work starts with a two amino acid long chemical called a peptide. If that peptide happens to be serine linked to histidine (two amino acids in use by life today), it has an interesting chemical activity: very slowly and poorly, it links other amino acids together to form more peptides. This weak activity is especially true if the amino acids are phenylalanine and leucine, two water-hating chemicals. Once they’re linked, they will precipitate out of a water solution.

The authors added a fatty acid membrane, figuring that it would soak up the reaction product. That definitely worked, with the catalytic efficiency of serine-histidine going up as a result. But something else happened as well: membranes that incorporated the reaction product started growing. It turns out that its presence in the membrane made it an efficient scrounger of other membrane material. As they grew, these membranes extended as long filaments that would break up into smaller parts with a gentle agitation and then start growing all over again.

In fact, the authors could set up a bit of a Darwinian competition between membranes based on how much starting catalyst each had. All of which suggests that proteins might have found their way into the cell as very simple chemicals that, at least initially, weren’t in any way connected to genetic and biochemical functions performed by RNA. But any cell-like things that evolved an RNA that made short proteins could have a big advantage over its competition.

Read the entire article here.

Documentary Filmmaker or Smartphone Voyeur?

Yesterday’s murderous atrocity on a busy street in Woolwich, South East London has shocked many proud and stoic Londoners to the core. For two reasons. First, that a heinous act such as this can continue to be wrought by one human against another in honor of misguided and barbaric politics and under the guise of distorted religious fanaticism. Second, that many witnesses at close range recorded the unfolding scene on their smartphones for later dissemination via social media, but did nothing to prevent the ensuing carnage or to aid the victim and those few who did run to help.

Our thoughts go to the family and friends of the victim. Words cannot express the sadness.

To the perpetrators: you and your ideas will be consigned to the trash heap of history. To the voyeurs: you are complicit through your inaction; it would have been wiser to have used your smartphones as projectiles or to call the authorities, rather than to watch and record and tweet the bloodshed. You should be troubled and ashamed.

Your State Bird

The official national bird of the United States is the Bald Eagle. For that matter, it’s also the official animal. Thankfully it was removed from the endangered species list a mere five years ago. Aside from the bird itself, Americans love the symbolism that the eagle implies — strength, speed, leadership and achievement. But do Americans know their state birds? A recent article from the bird-lovers over at Slate will refresh your memory, and also recommend a more fitting alternative for each state.

From Slate:

I drove over a bridge from Maryland into Virginia today and on the big “Welcome to Virginia” sign was an image of the state bird, the northern cardinal—with a yellow bill. I should have scoffed, but it hardly registered. Everyone knows that state birds are a big joke. There are a million cardinals, a scattering of robins, and just a general lack of thought put into the whole thing.

States should have to put more thought into their state bird than I put into picking my socks in the morning. “Ugh, state bird? I dunno, what’re the guys next to us doing? Cardinal? OK, let’s do that too. Yeah put it on all the signs. Nah, no time to research the bill color, let’s just go.” It’s the official state bird! Well, since all these jackanape states are too busy passing laws requiring everyone to own guns or whatever to consider what their state bird should be, I guess I’ll have to do it.

1. Alabama. Official state bird: yellowhammer

Right out of the gate with this thing. Yellowhammer? C’mon. I Asked Jeeves and it told me that Yellowhammer is some backwoods name for a yellow-shafted flicker. The origin story dates to the Civil War, when some Alabama troops wore yellow-trimmed uniforms. Sorry, but that’s dumb, mostly because it’s just a coincidence and has nothing to do with the actual bird. If you want a woodpecker, go for something with a little more cachet, something that’s at least a full species.

What it should be: red-cockaded woodpecker

2. Alaska. Official state bird: willow ptarmigan

Willow Ptarmigans are the dumbest-sounding birds on Earth, sorry. They sound like rejected Star Wars aliens, angrily standing outside the Mos Eisley Cantina because their IDs were rejected. Why go with these dopes, Alaska, when you’re the best state to see the most awesome falcon on Earth?

What it should be: gyrfalcon

3. Arizona. Official state bird: cactus wren

Cactus Wren is like the only boring bird in the entire state. I can’t believe it.

What it should be: red-faced warbler

4. Arkansas. Official state bird: northern mockingbird

Christ. What makes this even less funny is that there are like eight other states with mockingbird as their official bird. I’m convinced that the guy whose job it was to report to the state’s legislature on what the official bird should be forgot until the day it was due and he was in line for a breakfast sandwich at Burger King. In a panic he walked outside and selected the first bird he could find, a dirty mockingbird singing its stupid head off on top of a dumpster.

What it should be: painted bunting

5. California. Official state bird: California quail

… Or perhaps the largest, most radical bird on the continent?

What it should be: California condor

6. Colorado. Official state bird: lark bunting

I’m actually OK with this. A nice choice. But why not go with one of the birds that are (or are pretty much) endemic in your state?

What it should be: brown-capped rosy-finch or Gunnison sage-grouse

Read the entire article here.

Image: Bald Eagle, Kodiak Alaska, 2010. Courtesy of Yathin S Krishnappa / Wikipedia.

Friendships of Utility

The average Facebook user is said to have 142 “friends”, and many active members have over 500. This certainly seems to be a textbook case of quantity over quality in the increasingly competitive status wars and popularity stakes of online neo- or pseudo-celebrity. That said, and regardless of your relationship with online social media, the one good thing to come from the likes — a small pun intended — of Facebook is that social scientists can now dissect and analyze your online behaviors and relationships as never before.

So, while Facebook and its peers may not represent a qualitative leap in human relationships, the data and experiences that come from them may help future generations figure out what is truly important.

From the Wall Street Journal:

Facebook has made an indelible mark on my generation’s concept of friendship. The average Facebook user has 142 friends (many people I know have upward of 500). Without Facebook many of us “Millennials” wouldn’t know what our friends are up to or what their babies or boyfriends look like. We wouldn’t even remember their birthdays. Is this progress?

Aristotle wrote that friendship involves a degree of love. If we were to ask ourselves whether all of our Facebook friends were those we loved, we’d certainly answer that they’re not. These days, we devote equal if not more time to tracking the people we have had very limited human interaction with than to those whom we truly love. Aristotle would call the former “friendships of utility,” which, he wrote, are “for the commercially minded.”

I’d venture to guess that at least 90% of Facebook friendships are those of utility. Knowing this instinctively, we increasingly use Facebook as a vehicle for self-promotion rather than as a means to stay connected to those whom we love. Instead of sharing our lives, we compare and contrast them, based on carefully calculated posts, always striving to put our best face forward.

Friendship also, as Aristotle described it, can be based on pleasure. All of the comments, well-wishes and “likes” we can get from our numerous Facebook friends may give us pleasure. But something feels false about this. Aristotle wrote: “Those who love for the sake of pleasure do so for the sake of what is pleasant to themselves, and not insofar as the other is the person loved.” Few of us expect the dozens of Facebook friends who wish us a happy birthday ever to share a birthday celebration with us, let alone care for us when we’re sick or in need.

One thing’s for sure, my generation’s friendships are less personal than my parents’ or grandparents’ generation. Since we can rely on Facebook to manage our friendships, it’s easy to neglect more human forms of communication. Why visit a person, write a letter, deliver a card, or even pick up the phone when we can simply click a “like” button?

The ultimate form of friendship is described by Aristotle as “virtuous”—meaning the kind that involves a concern for our friend’s sake and not for our own. “Perfect friendship is the friendship of men who are good, and alike in virtue . . . . But it is natural that such friendships should be infrequent; for such men are rare.”

Those who came before the Millennial generation still say as much. My father and grandfather always told me that the number of such “true” friends can be counted on one hand over the course of a lifetime. Has Facebook increased our capacity for true friendship? I suspect Aristotle would say no.

Ms. Kelly joined Facebook in 2004 and quit in 2013.

Read the entire article here.

MondayMap: Global Intolerance

Following on from last week’s MondayMap post on intolerance and hatred within the United States — according to tweets on the social media site Twitter — we expand our view this week to cover the globe. This map is based on a more detailed, global research study of people’s attitudes to having neighbors of a different race.

From the Washington Post:

When two Swedish economists set out to examine whether economic freedom made people any more or less racist, they knew how they would gauge economic freedom, but they needed to find a way to measure a country’s level of racial tolerance. So they turned to something called the World Values Survey, which has been measuring global attitudes and opinions for decades.

Among the dozens of questions that World Values asks, the Swedish economists found one that, they believe, could be a pretty good indicator of tolerance for other races. The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society. (The study concluded that economic freedom had no correlation with racial tolerance, but it does appear to correlate with tolerance toward homosexuals.)

Unfortunately, the Swedish economists did not include all of the World Values Survey data in their final research paper. So I went back to the source, compiled the original data and mapped it out on the infographic above. In the bluer countries, fewer people said they would not want neighbors of a different race; in red countries, more people did.
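
For what it’s worth, the metric behind the map is simple to reproduce: for each country, take the share of respondents who listed “people of a different race” among unwanted neighbors. A minimal sketch with invented sample rows (not the actual World Values Survey data) might look like this:

from collections import defaultdict

# Each record: (country, named "people of a different race" as unwanted neighbors)
responses = [
    ("Sweden", False), ("Sweden", False), ("Sweden", True),
    ("India", True), ("India", True), ("India", False),
]

totals = defaultdict(lambda: [0, 0])   # country -> [flagged_count, respondent_count]
for country, flagged in responses:
    totals[country][0] += int(flagged)
    totals[country][1] += 1

for country, (flagged, respondents) in totals.items():
    print(f"{country}: {100 * flagged / respondents:.1f}% would not want such a neighbor")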

If we treat this data as indicative of racial tolerance, then we might conclude that people in the bluer countries are the least likely to express racist attitudes, while the people in red countries are the most likely.

Update: Compare the results to this map of the world’s most and least diverse countries.

Before we dive into the data, a couple of caveats. First, it’s entirely likely that some people lied when answering this question; it would be surprising if they hadn’t. But the operative question, unanswerable, is whether people in certain countries were more or less likely to answer the question honestly. For example, while the data suggest that Swedes are more racially tolerant than Finns, it’s possible that the two groups are equally tolerant but that Finns are just more honest. The willingness to state such a preference out loud, though, might be an indicator of racial attitudes in itself. Second, the survey is not conducted every year; some of the results are very recent and some are several years old, so we’re assuming the results are static, which might not be the case.

• Anglo and Latin countries most tolerant. People in the survey were most likely to embrace a racially diverse neighbor in the United Kingdom and its Anglo former colonies (the United States, Canada, Australia and New Zealand) and in Latin America. The only real exceptions were oil-rich Venezuela, where income inequality sometimes breaks along racial lines, and the Dominican Republic, perhaps because of its adjacency to troubled Haiti. Scandinavian countries also scored high.

• India, Jordan, Bangladesh and Hong Kong by far the least tolerant. In only four of 81 surveyed countries, more than 40 percent of respondents said they would not want a neighbor of a different race. This included 43.5 percent of Indians, 51.4 percent of Jordanians and an astonishingly high 71.8 percent of Hong Kongers and 71.7 percent of Bangladeshis.

Read more about this map here.

Pain Ray

We humans are capable of the most sublime creations, from soaring literary inventions to intensely moving music and gorgeous works of visual art. This stands in stark and paradoxical contrast to our range of inventions that enable efficient mass destruction, torture and death. The latest in this sad catalog of human tools of terror is the “pain ray”, otherwise known by its military euphemism as an Active Denial weapon. The good news is that it only delivers intense pain, rather than death. How inventive we humans really are — we should be so proud.

[tube]J1w4g2vr7B4[/tube]

From the New Scientist:

THE pain, when it comes, is unbearable. At first it’s comparable to a hairdryer blast on the skin. But within a couple of seconds, most of the body surface feels roasted to an excruciating degree. Nobody has ever resisted it: the deep-rooted instinct to writhe and escape is too strong.

The source of this pain is an entirely new type of weapon, originally developed in secret by the US military – and now ready for use. It is a genuine pain ray, designed to subdue people in war zones, prisons and riots. Its name is Active Denial. In the last decade, no other non-lethal weapon has had as much research and testing, and some $120 million has already been spent on development in the US.

Many want to shelve this pain ray before it is fired for real but the argument is far from cut and dried. Active Denial’s supporters claim that its introduction will save lives: the chances of serious injury are tiny, they claim, and it causes less harm than tasers, rubber bullets or batons. It is a persuasive argument. Until, that is, you bring the dark side of human nature into the equation.

The idea for Active Denial can be traced back to research on the effects of radar on biological tissue. Since the 1940s, researchers have known that the microwave radiation produced by radar devices at certain frequencies could heat the skin of bystanders. But attempts to use such microwave energy as a non-lethal weapon only began in the late 1980s, in secret, at the Air Force Research Laboratory (AFRL) at Kirtland Air Force Base in Albuquerque, New Mexico.

The first question facing the AFRL researchers was whether microwaves could trigger pain without causing skin damage. Radiation equivalent to that used in oven microwaves, for example, was out of the question since it penetrates deep into objects, and causes cells to break down within seconds.

The AFRL team found that the key was to use millimetre waves, very-short-wavelength microwaves, with a frequency of about 95 gigahertz. By conducting tests on human volunteers, they discovered that these waves would penetrate only the outer 0.4 millimetres of skin, because they are absorbed by water in surface tissue. So long as the beam power was capped – keeping the energy per square centimetre of skin below a certain level – the tissue temperature would not exceed 55 °C, which is just below the threshold for damaging cells (Bioelectromagnetics, vol 18, p 403).
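
As a quick sanity check (mine, not the article’s), the quoted 95-gigahertz frequency does correspond to a wavelength of roughly three millimetres, which is where the “millimetre wave” label comes from:

# Wavelength = speed of light / frequency
SPEED_OF_LIGHT_M_S = 3.0e8
FREQUENCY_HZ = 95e9          # 95 GHz, as reported for Active Denial

wavelength_mm = SPEED_OF_LIGHT_M_S / FREQUENCY_HZ * 1000
print(f"wavelength = {wavelength_mm:.1f} mm")   # about 3.2 mm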

The sensation, however, was extremely painful, because the outer skin holds pain receptors called thermal nociceptors. These respond rapidly to threats and trigger reflexive “repel” reactions when stimulated (see diagram).

To build a weapon, the next step was to produce a high-power beam capable of reaching hundreds of metres. At the time, it was possible to beam longer-wavelength microwaves over great distances – as with radar systems – but it was not feasible to use the same underlying technology to produce millimetre waves.

Working with the AFRL, the military contractor Raytheon Company, based in Waltham, Massachusetts, built a prototype with a key bit of hardware: a gyrotron, a device for amplifying millimetre microwaves. Gyrotrons generate a rotating ring of electrons, held in a magnetic field by powerful cryogenically cooled superconducting magnets. The frequency at which these electrons rotate matches the frequency of millimetre microwaves, causing a resonating effect. The souped-up millimetre waves then pass to an antenna, which fires the beam.

The first working prototype of the Active Denial weapon, dubbed “System 0”, was completed in 2000. At 7.5 tonnes, it was too big to be easily transported. A few years later, it was followed by mobile versions that could be carried on heavy vehicles.

Today’s Active Denial device, designed for military use, looks similar to a large, flat satellite dish mounted on a truck. The microwave beam it produces has a diameter of about 2 metres and can reach targets several hundred metres away. It fires in bursts of about 3 to 5 seconds.

Those who have been at the wrong end of the beam report that the pain is impossible to resist. “You might think you can withstand getting blasted. Your body disagrees quite strongly,” says Spencer Ackerman, a reporter for Wired magazine’s blog, Danger Room. He stood in the beam at an event arranged for the media last year. “One second my shoulder and upper chest were at a crisp, early-spring outdoor temperature on a Virginia field. Literally the next second, they felt like they were roasted, with what can be likened to a super-hot tingling feeling. The sensation causes your nerves to take control of your feeble consciousness, so it wasn’t like I thought getting out of the way of the beam was a good idea – I did what my body told me to do.” There’s also little chance of shielding yourself; the waves penetrate clothing.

Read the entire article here.

Related video courtesy of CBS 60 Minutes.

Please Press 1 to Avoid Phone Menu Hell

Good customer service once meant that a store or service employee would know you by name. This person would know your previous purchasing habits and your preferences; this person would know the names of your kids and your dog. Great customer service once meant that an employee could use this knowledge to anticipate your needs or personalize a specific deal. Well, this type of service still exists — in some places — but many businesses have outsourced it to offshore call center personnel or to machines, or both. Service may seem personal, but it’s not — service is customized to suit your profile, but it’s not personal in the same sense that once held true.

And, to rub more salt into the customer service wound, businesses now use their automated phone systems seemingly to shield themselves from you, rather than to provide you with the service you want. After all, when was the last time you managed to speak to a real customer service employee after making it through “please press 1 for English“, the poor choice of muzak or sponsored ads, and the never-ending phone menus?

Now thanks to an enterprising and extremely patient soul there is an answer to phone menu hell.

Welcome to Please Press 1. Founded by Nigel Clarke (an alumnus of the 400-year-old Dame Alice Owens School in London), Please Press 1 provides shortcuts through the customer service phone menus of many of the top businesses in Britain [ed: we desperately need this service in the United States].


From the MailOnline:

A frustrated IT manager who has spent seven years making 12,000 calls to automated phone centres has launched a new website listing ‘short cut’ codes which can shave up to eight minutes off calls.

Nigel Clarke, 53, has painstakingly catalogued the intricate phone menus of hundreds of leading multi-national companies – some of which have up to 80 options.

He has now formulated his results into the website pleasepress1.com, which lists which number options to press to reach the desired department.

The father-of-three, from Fawkham, Kent, reckons the free service can save consumers more than eight minutes by cutting out up to seven menu options.

For example, a Lloyds TSB home insurance customer who wishes to report a water leak would normally have to wade through 78 menu options over seven levels to get through to the correct department.

But the new service informs callers that the combination 1-3-2-1-1-5-4 will get them straight through – saving over four minutes of waiting.
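
One way to picture what pleasepress1.com catalogues: each company’s phone system is a tree of numbered options, and a shortcut such as 1-3-2-1-1-5-4 is simply the path from the root of that tree to the department you want. Here is a minimal sketch; the menu labels are invented for illustration and do not reflect Lloyds TSB’s real menu.

# A toy phone-menu tree; only the first few levels are modeled, with made-up labels.
menu = {
    "1": {"label": "Insurance", "children": {
        "3": {"label": "Home insurance", "children": {
            "2": {"label": "Make a claim", "children": {}},
        }},
    }},
}

def follow_shortcut(tree: dict, shortcut: str) -> str:
    """Walk the tree along a dash-separated digit shortcut, returning the deepest
    label reached (stops early once the path runs past the modeled levels)."""
    label = "Main menu"
    node = {"children": tree}
    for digit in shortcut.split("-"):
        children = node.get("children", {})
        if digit not in children:
            break
        node = children[digit]
        label = node["label"]
    return label

print(follow_shortcut(menu, "1-3-2-1-1-5-4"))  # -> "Make a claim"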

Mr Clarke reckons the service could save consumers up to one billion minutes a year.

He said: ‘Everyone knows that calling your insurance or gas company is a pain but for most, it’s not an everyday problem.

‘However, the cumulative effect of these calls is really quite devastating when you’re moving house or having an issue.

‘I’ve been working in IT for over 30 years and nothing gets me riled up like having my time wasted through inefficient design.

‘This is why I’ve devoted the best part of seven years to solving this issue.’

Mr Clarke describes call centre menu options as the ‘modern equivalent of Dante’s circles of hell’.

He cites the HMRC as one of the worst offenders, where callers can take up to six minutes to reach the correct department.

As one of the UK’s busiest call centres, the Revenue receives 79 million calls per year, or a potential 4.3 million working hours just navigating menus.

Mr Clarke believes that with better menu design, at least three million caller hours could be saved here alone.

He began his quest seven years ago as a self-confessed ‘call centre menu enthusiast’.

‘The idea began with the frustration of being met with a seemingly endless list of menu options,’ he said.

‘Whether calling my phone, insurance or energy company, they each had a different and often worse way of trying to “help” me.

‘I could sit there for minutes that seemed like hours, trying to get through their phone menus only to end up at the wrong place and having to redial and start again.’

He began noting down the menu options and soon realised he could shave several minutes off the waiting time.

Mr Clarke said: ‘When I called numbers regularly, I started keeping notes of the options to press. The numbers didn’t change very often and then it hit me.

Read the entire article here and visit Please Press 1, here.

Images courtesy of Time and Please Press 1.

The Internet of Things and Your (Lack of) Privacy

Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet enabled devices. Yet, as these devices become more powerful and interconnected, and enabled with location-based awareness, such as GPS (Global Positioning System) services, we are likely to face an increasingly acute dilemma — connectedness or privacy?

From the Guardian:

The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.

Drones are another "thing" moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We're not yet up to a real-time Google Earth equivalent, but it's not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.

In the longer term, the Internet of Things means ubiquitous surveillance. If an object "knows" you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Read the entire article here.

Image: Big Brother, 1984. Poster. Courtesy of Telegraph.

Ultra-Conservation of Words

Linguists have traditionally held that words in a language have an average lifespan of around 8,000 years. Words change and are often discarded or replaced over time as the language evolves and co-opts other words from other tongues. English has been particularly adept at collecting many new words from different languages, which partly explains its global popularity.

Recently, however, linguists have found that a small group of words have a lifespan that far exceeds the usual understanding. These 15,000- to 20,000-year-old ultra-conserved words may be the linguistic precursors to common cognates — words with similar sound and meaning — that now span many different language families containing hundreds of languages.

From the Washington Post:

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.

A team of researchers has come up with a list of two dozen “ultraconserved words” that have survived 150 centuries. It includes some predictable entries: “mother,” “not,” “what,” “to hear” and “man.” It also contains surprises: “to flow,” “ashes” and “worm.”

The existence of the long-lived words suggests there was a “proto-Eurasiatic” language that was the common ancestor to about 700 contemporary languages that are the native tongues of more than half the world’s people.

“We’ve never heard this language, and it’s not written down anywhere,” said Mark Pagel, an evolutionary theorist at the University of Reading in England who headed the study published Monday in the Proceedings of the National Academy of Sciences. “But this ancestral language was spoken and heard. People sitting around campfires used it to talk to each other.”

In all, “proto-Eurasiatic” gave birth to seven language families. Several of the world’s important language families, however, fall outside that lineage, such as the one that includes Chinese and Tibetan; several African language families, and those of American Indians and Australian aborigines.

That a spoken sound carrying a specific meaning could remain unchanged over 15,000 years is a controversial idea for most historical linguists.

“Their general view is pessimistic,” said William Croft, a professor of linguistics at the University of New Mexico who studies the evolution of language and was not involved in the study. “They basically think there’s too little evidence to even propose a family like Eurasiatic.” In Croft’s view, however, the new study supports the plausibility of an ancestral language whose audible relics cross tongues today.

Pagel and three collaborators studied “cognates,” which are words that have the same meaning and a similar sound in different languages. Father (English), padre (Italian), pere (French), pater (Latin) and pitar (Sanskrit) are cognates. Those words, however, are from languages in one family, the Indo-European. The researchers looked much further afield, examining seven language families in all.

Read the entire article here and be sure to check out the interactive audio.

Age is All in the Mind (Hypothalamus)

Researchers are continuing to make great progress in unraveling the complexities of aging. While some fingers point to the shortening of telomeres — end caps — in our chromosomal DNA as a contributing factor, other research points to the hypothalamus. This small sub-region of the brain has been found to play a major role in aging and death (though, at the moment only in mice).

From the New Scientist:

The brain's mechanism for controlling ageing has been discovered – and manipulated to shorten and extend the lives of mice. Drugs to slow ageing could follow.

Tick tock, tick tock… A mechanism that controls ageing, counting down to inevitable death, has been identified in the hypothalamus – a part of the brain that controls most of the basic functions of life.

By manipulating this mechanism, researchers have both shortened and lengthened the lifespan of mice. The discovery reveals several new drug targets that, if not quite an elixir of youth, may at least delay the onset of age-related disease.

The hypothalamus is an almond-sized puppetmaster in the brain. “It has a global effect,” says Dongsheng Cai at the Albert Einstein College of Medicine in New York. Sitting on top of the brain stem, it is the interface between the brain and the rest of the body, and is involved in, among other things, controlling our automatic response to the world around us, our hormone levels, sleep-wake cycles, immunity and reproduction.

While investigating ageing processes in the brain, Cai and his colleagues noticed that ageing mice produce increasing levels of nuclear factor kB (NF-kB) – a protein complex that plays a major role in regulating immune responses. NF-kB is barely active in the hypothalamus of 3 to 4-month-old mice but becomes very active in old mice, aged 22 to 24 months.

To see whether it was possible to affect ageing by manipulating levels of this protein complex, Cai’s team tested three groups of middle-aged mice. One group was given gene therapy that inhibits NF-kB, the second had gene therapy to activate NF-kB, while the third was left to age naturally.

This last group lived, as expected, between 600 and 1000 days. Mice with activated NF-kB all died within 900 days, while the animals with NF-kB inhibition lived for up to 1100 days.

Crucially, the mice that lived the longest not only increased their lifespan but also remained mentally and physically fit for longer. Six months after receiving gene therapy, all the mice were given a series of tests involving cognitive and physical ability.

In all of the tests, the mice that subsequently lived the longest outperformed the controls, while the short-lived mice performed the worst.

Post-mortem examinations of muscle and bone in the longest-living rodents also showed that they had many chemical and physical qualities of younger mice.

Further investigation revealed that NF-kB reduces the level of a chemical produced by the hypothalamus called gonadotropin-releasing hormone (GnRH) – better known for its involvement in the regulation of puberty and fertility, and the production of eggs and sperm.

To see if they could control lifespan using this hormone, the team gave another group of mice – 20 to 24 months old – daily subcutaneous injections of GnRH for five to eight weeks. These mice lived longer too, by a length of time similar to that of mice with inhibited NF-kB.

GnRH injections also resulted in new neurons in the brain. What’s more, when injected directly into the hypothalamus, GnRH influenced other brain regions, reversing widespread age-related decline and further supporting the idea that the hypothalamus could be a master controller for many ageing processes.

GnRH injections even delayed ageing in the mice that had been given gene therapy to activate NF-kB and would otherwise have aged more quickly than usual. None of the mice in the study showed serious side effects.

So could regular doses of GnRH keep death at bay? Cai hopes to find out how different doses affect lifespan, but says the hormone is unlikely to prolong life indefinitely since GnRH is only one of many factors at play. “Ageing is the most complicated biological process,” he says.

Read the entire article after the jump.

Image: Location of Hypothalamus. Courtesy of Colorado State University / Wikipedia.

MondayMap: Intolerance and Hatred

A fascinating map of tweets espousing hatred and racism across the United States. The data analysis and map were developed by researchers at Humboldt State University.

From the Guardian:

[T]he students and professors at Humboldt State University who produced this map read the entirety of the 150,000 geo-coded tweets they analysed.

Using humans rather than machines means that this research was able to avoid the basic pitfall of most semantic analysis where a tweet stating ‘the word homo is unacceptable’ would still be classed as hate speech. The data has also been ‘normalised’, meaning that the scale accounts for the total twitter traffic in each county so that the final result is something that shows the frequency of hateful words on Twitter. The only question that remains is whether the views of US Twitter users can be a reliable indication of the views of US citizens.
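The "normalisation" the researchers mention is worth spelling out: the raw count of hateful tweets in a county is divided by that county's total geocoded tweet volume, so populous, tweet-heavy counties don't light up the map simply because they tweet more. A rough sketch of that step, assuming per-county tallies are already in hand (all figures below are invented):

```python
# Rough sketch of per-county normalisation: hateful tweets as a share of
# all geocoded tweets in that county. All figures below are invented.

county_totals = {"County A": 12000, "County B": 800, "County C": 45000}
county_hateful = {"County A": 30, "County B": 12, "County C": 45}

def hate_rate(hateful, totals):
    """Return hateful tweets as a fraction of all geocoded tweets, per county."""
    return {
        county: hateful.get(county, 0) / total
        for county, total in totals.items()
        if total > 0
    }

rates = hate_rate(county_hateful, county_totals)
for county, rate in sorted(rates.items(), key=lambda kv: kv[1], reverse=True):
    print(f"{county}: {rate:.3%}")
```

On these made-up numbers the county with the fewest hateful tweets actually shows the highest rate once volume is accounted for — which is exactly the distortion normalisation is meant to expose.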

See the interactive map and read the entire article here.

Big Data at the Personal Level

Stephen Wolfram, physicist, mathematician and complexity theorist, has taken big data ideas to an entirely new level — he’s quantifying himself and his relationships. He calls this discipline personal analytics.

A record of every phone call and computer keystroke he's made may be rather useful to the FBI or to marketers, but the data becomes truly valuable only when it is tracked for physiological and medical purposes. Then again, who wants their every move tracked 24 hours a day, even for medical science?
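For a flavour of what a personal-analytics query looks like, the 39 percent figure quoted below is essentially the share of logged days on which a call overlapped 9 p.m. Here is a toy sketch under simplifying assumptions (call start times only, durations ignored), with an invented log rather than Wolfram's real data:

```python
# Toy personal-analytics sketch: from a log of call start times, estimate
# the chance of being on a call during a given hour. The timestamps are
# invented, and call durations are ignored for simplicity.

from datetime import datetime

call_log = [
    datetime(2013, 5, 1, 21, 15),
    datetime(2013, 5, 2, 10, 5),
    datetime(2013, 5, 2, 21, 40),
    datetime(2013, 5, 3, 14, 20),
]

def prob_on_call(calls, hour):
    """Fraction of logged days with at least one call starting in `hour`."""
    days = {c.date() for c in calls}
    hits = {c.date() for c in calls if c.hour == hour}
    return len(hits) / len(days) if days else 0.0

print(f"P(on the phone at 9 p.m.) ~= {prob_on_call(call_log, 21):.0%}")
```

A decade-long log like Wolfram's would use actual call durations and vastly more data, but the shape of the calculation is the same: count, condition, divide.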

From ars technica:

Don’t be surprised if Stephen Wolfram, the renowned complexity theorist, software company CEO, and night owl, wants to schedule a work call with you at 9 p.m. In fact, after a decade of logging every phone call he makes, Wolfram knows the exact probability he’ll be on the phone with someone at that time: 39 percent.

Wolfram, a British-born physicist who earned a doctorate at age 20, is obsessed with data and the rules that explain it. He is the creator of the software Mathematica and of Wolfram Alpha, the nerdy “computational knowledge engine” that can tell you the distance to the moon right now, in units including light-seconds.

Now Wolfram wants to apply the same techniques to people’s personal data, an idea he calls “personal analytics.” He started with himself. In a blog post last year, Wolfram disclosed and analyzed a detailed record of his life stretching back three decades, including documents, hundreds of thousands of e-mails, and 10 years of computer keystrokes, a tally of which is e-mailed to him each morning so he can track his productivity the day before.

Last year, his company released its first consumer product in this vein, called Personal Analytics for Facebook. In under a minute, the software generates a detailed study of a person’s relationships and behavior on the site. My own report was revealing enough. It told me which friend lives at the highest latitude (Wicklow, Ireland) and the lowest (Brisbane, Australia), the percentage who are married (76.7 percent), and everyone’s local time. More of my friends are Scorpios than any other sign of the zodiac.

It looks just like a dashboard for your life, which Wolfram says is exactly the point. In a phone call that was recorded and whose start and stop time was entered into Wolfram’s life log, he discussed why personal analytics will make people more efficient at work and in their personal lives.

What do you typically record about yourself?

E-mails, documents, and normally, if I was in front of my computer, it would be recording keystrokes. I have a motion sensor for the room that records when I pace up and down. Also a pedometer, and I am trying to get an eye-tracking system set up, but I haven’t done that yet. Oh, and I’ve been wearing a sensor to measure my posture.

Do you think that you’re the most quantified person on the planet?

I couldn’t imagine that that was the case until maybe a year ago, when I collected together a bunch of this data and wrote a blog post on it. I was expecting that there would be people who would come forward and say, “Gosh, I’ve got way more than you.” But nobody’s come forward. I think by default that may mean I’m it, so to speak.

You coined this term “personal analytics.” What does it mean?

There’s organizational analytics, which is looking at an organization and trying to understand what the data says about its operation. Personal analytics is what you can figure out applying analytics to the person, to understand the operation of the person.

Read the entire article after the jump.

Image courtesy of Stephen Wolfram.

More CO2 is Good, Right?

Yesterday, May 10, 2013, scientists published new measurements of atmospheric carbon dioxide (CO2). For the first time in human history, the daily average CO2 level reached 400 parts per million (ppm). This is particularly troubling since CO2 is the most significant long-lived, heat-trapping gas that human activity adds to the atmosphere. The sobering milestone was recorded at the Mauna Loa Observatory in Hawaii, where monitoring has been underway since 1958.

This has many climate scientists redoubling their efforts to warn of the consequences of climate change, which is believed to be driven by human activity and specifically the generation of atmospheric CO2 in ever-increasing quantities. But not to be outdone, the venerable Wall Street Journal — seldom known for its well-reasoned scientific journalism — chimed in with an op-ed on the subject. According to the WSJ we have nothing to worry about, because increased levels of CO2 are good for certain crops and because the Earth historically had much higher levels of CO2 (albeit long before humans existed).

Ashutosh Jogalekar over at The Curious Wavefunction dissects the WSJ article line by line:

Since we were discussing the differences between climate change “skeptics” and “deniers” (or “denialists”, whatever you want to call them) the other day this piece is timely. The Wall Street Journal is not exactly known for reasoned discussion of climate change, but this Op-Ed piece may set a new standard even for its own naysayers and skeptics. It’s a piece by William Happer and Harrison Schmitt that’s so one-sided, sparse on detail, misleading and ultimately pointless that I am wondering if it’s a spoof.

Happer and Schmitt’s thesis can be summed up in one line: More CO2 in the atmosphere is a good thing because it’s good for one particular type of crop plant. That’s basically it. No discussion of the downsides, not even a pretense of a balanced perspective. Unfortunately it’s not hard to classify their piece as a denialist article because it conforms to some of the classic features of denial; it’s entirely one sided, it’s very short on detail, it does a poor job even with the little details that it does present and it simply ignores the massive amount of research done on the topic. In short it’s grossly misleading.

First of all Happer and Schmitt simply dismiss any connection that might exist between CO2 levels and rising temperatures, in the process consigning a fair amount of basic physics and chemistry to the dustbin. There are no references and no actual discussion of why they don’t believe there’s a connection. That’s a shoddy start to put it mildly; you would expect a legitimate skeptic to start with some actual evidence and references. Most of the article after that consists of a discussion of the differences between so-called C3 plants (like rice) and C4 plants (like corn and sugarcane). This is standard stuff found in college biochemistry textbooks, nothing revealing here. But Happer and Schmitt leverage a fundamental difference between the two – the fact that C4 plants can utilize CO2 more efficiently than C3 plants under certain conditions – into an argument for increasing CO2 levels in the atmosphere.

This of course completely ignores all the other potentially catastrophic effects that CO2 could have on agriculture, climate, biodiversity etc. You don’t even have to be a big believer in climate change to realize that focusing on only a single effect of a parameter on a complicated system is just bad science. Happer and Schmitt’s argument is akin to the argument that everyone should get themselves addicted to meth because one of meth’s effects is euphoria. So ramping up meth consumption will make everyone feel happier, right?

But even if you consider that extremely narrowly defined effect of CO2 on C3 and C4 plants, there’s still a problem. What’s interesting is that the argument has been countered by Matt Ridley in the pages of this very publication:

But it is not quite that simple. Surprisingly, the C4 strategy first became common in the repeated ice ages that began about four million years ago. This was because the ice ages were a very dry time in the tropics and carbon-dioxide levels were very low—about half today’s levels. C4 plants are better at scavenging carbon dioxide (the source of carbon for sugars) from the air and waste much less water doing so. In each glacial cold spell, forests gave way to seasonal grasslands on a huge scale. Only about 4% of plant species use C4, but nearly half of all grasses do, and grasses are among the newest kids on the ecological block.

So whereas rising temperatures benefit C4, rising carbon-dioxide levels do not. In fact, C3 plants get a greater boost from high carbon dioxide levels than C4. Nearly 500 separate experiments confirm that if carbon-dioxide levels roughly double from preindustrial levels, rice and wheat yields will be on average 36% and 33% higher, while corn yields will increase by only 24%.

So no, the situation is more subtle than the authors think. In fact I am surprised that, given that C4 plants actually do grow better at higher temperatures, Happer and Schmitt missed an opportunity for making the case for a warmer planet. In any case, there’s a big difference between improving yields of C4 plants under controlled greenhouse conditions and expecting these yields to improve without affecting other components of the ecosystem by doing a giant planetary experiment.

Read the entire article after the jump.

Image courtesy of Sierra Club.


Menu Engineering

We live in a world of brands, pitches, advertising, promotions, PR, consumer research, product placement, focus groups, and 24/7 spin. So, it should come as no surprise that even that ubiquitous and utilitarian listing of food and drink items from your local restaurant — the menu — would come in for some 21st century marketing treatment.

Fast food chains have been optimizing the look and feel of their menus for years, often right down to the font, the (artificial) colors and the placement of menu items. Now, many upscale restaurants are following suit. Some call it menu engineering.

From the Guardian:

It’s not always easy trying to read a menu while hungry like the wolf, woozy from aperitif and exchanging pleasantries with a dining partner. The eyes flit about like a pinball, pinging between set meal options, side dishes and today’s specials. Do I want comforting treats or something healthy? What’s cheap? Will I end up bitterly coveting my companion’s dinner? Is it immoral to fuss over such petty, first-world dilemmas? Oh God, the waiter’s coming over.

Why is it so hard to decide what to have? New research from Bournemouth University shows that most menus crowbar in far more dishes than people want to choose from. And when it comes to choosing food and drink, as an influential psychophysicist by the name of Howard Moskowitz once said: “The mind knows not what the tongue wants.”

Malcolm Gladwell cites an interesting nugget from his work for Nescafé. When asked what kind of coffee they like, most Americans will say: “a dark, rich, hearty roast”. But actually, only 25-27% want that. Most prefer weak, milky coffee. Judgement is clouded by aspiration, peer pressure and marketing messages.

The burden of choice

Perhaps this is part of the joy of a tasting or set menu – the removal of responsibility. And maybe the recent trend for tapas-style sharing plates has been so popular because it relieves the decision-making pressure if all your eggs are not in one basket. Is there a perfect amount of choice?

Bournemouth University’s new study has sought to answer this very question. “We were trying to establish the ideal number of starters, mains and puddings on a menu,” says Professor John Edwards. The study’s findings show that restaurant customers, across all ages and genders, do have an optimum number of menu items, below which they feel there’s too little choice and above which it all becomes disconcerting. In fast-food joints, people wanted six items per category (starters, chicken dishes, fish, vegetarian and pasta dishes, grills and classic meat dishes, steaks and burgers, desserts), while in fine dining establishments, they preferred seven starters and desserts, and 10 main courses, thank you very much.

Nightmare menu layouts

Befuddling menu design doesn’t help. A few years back, the author William Poundstone rather brilliantly annotated the menu from Balthazar in New York to reveal the marketing bells and whistles it uses to herd customers into parting with the maximum amount of cash. Professor Brian Wansink, author of Slim by Design: Mindless Eating Solutions for Everyday Life, has extensively researched menu psychology, or as he puts it, menu engineering. “What ends up initially catching the eye,” he says, “has an unfair advantage over anything a person sees later on.” There’s some debate about how people’s eyes naturally travel around menus, but Wansink reckons “we generally scan the menu in a z-shaped fashion starting at the top-left hand corner.” Whatever the pattern, though, we’re easily interrupted by items being placed in boxes, next to pictures or icons, bolded or in a different colour.

The language of food

The Oxford experimental psychologist Charles Spence has an upcoming review paper on the effect the name of a dish has on diners. “Give it an ethnic label,” he says, “such as an Italian name, and people will rate the food as more authentic.” Add an evocative description, and people will make far more positive comments about a dish’s appeal and taste. “A label directs a person’s attention towards a feature in a dish, and hence helps bring out certain flavours and textures,” he says.

But we are seeing a backlash against the menu cliches (drizzled, homemade, infused) that have arisen from this thinking. For some time now, at Fergus Henderson’s acclaimed restaurant, St John, they have let the ingredients speak for themselves, in simple lists. And if you eat at one of Russell Norman’s Polpo group of restaurants in London, you will see almost no adjectives (or boxes and other “flim-flam”, as he calls it), and he’s doing a roaring trade. “I’m particularly unsympathetic to florid descriptions,” he says.

However, Norman’s menus employ their own, subtle techniques to reel diners in. Take his flagship restaurant Polpo’s menu. Venetian dishes are printed on Italian butchers’ paper, which goes with the distressed, rough-hewn feel of the place. “I don’t use a huge amount of Italian,” he says, “but I occasionally use it so that customers say ‘what is that?'” He picks an easy-to-pronounce word like suppli (rice balls), to start a conversation between diner and waiter.

Read the entire article here.

Image courtesy of Multyshades.

Your Weekly Groceries

Photographer Peter Menzel traveled to over 20 countries to compile his culinary atlas Hungry Planet. But this is no ordinary cookbook or trove of local delicacies. The book is a visual catalog of a family’s average weekly grocery shopping.

It is both enlightening and sobering to see the nutritional inventory of a Western family juxtaposed with that of a sub-Saharan African family. It puts into perspective the internal debate within the United States of the 1 percent versus the 99 percent. Those of us lucky enough to have been born in one of the world’s richer nations, even if we are part of the 99 percent, are still truly among the haves rather than the have-nots.

For more on Menzel’s book jump over to Amazon.

The Melander family from Bargteheide, Germany, who spend around £320 [$480] on a week’s worth of food.


The Aboubakar family from Darfur, Sudan, in the Breidjing refugee camp in Chad. Their weekly food, which feeds six people, costs 79p [$1.19].


The Revis family from Raleigh in North Carolina. Their weekly shopping costs £219 [$328.50].


The Namgay family from Shingkhey, Bhutan, with a week’s worth of food that costs them around £3.20 [$4.80].

Images courtesy of Peter Menzel / Barcroft Media.

Media Multi-Tasking, School Work and Poor Memory

It’s official — teens can’t stay off social media for more than 15 minutes. It’s no secret that many kids aged between 8 and 18 spend most of their time texting, tweeting and checking their real-time social status. The profound psychological and sociological consequences of this behavior will only start to become apparent ten to fifteen years from now. In the meantime, researchers are finding a general degradation in kids’ memory skills from using social media and multi-tasking while studying.

From Slate:

Living rooms, dens, kitchens, even bedrooms: Investigators followed students into the spaces where homework gets done. Pens poised over their “study observation forms,” the observers watched intently as the students—in middle school, high school, and college, 263 in all—opened their books and turned on their computers.

For a quarter of an hour, the investigators from the lab of Larry Rosen, a psychology professor at California State University–Dominguez Hills, marked down once a minute what the students were doing as they studied. A checklist on the form included: reading a book, writing on paper, typing on the computer—and also using email, looking at Facebook, engaging in instant messaging, texting, talking on the phone, watching television, listening to music, surfing the Web. Sitting unobtrusively at the back of the room, the observers counted the number of windows open on the students’ screens and noted whether the students were wearing earbuds.

Although the students had been told at the outset that they should “study something important, including homework, an upcoming examination or project, or reading a book for a course,” it wasn’t long before their attention drifted: Students’ “on-task behavior” started declining around the two-minute mark as they began responding to arriving texts or checking their Facebook feeds. By the time the 15 minutes were up, they had spent only about 65 percent of the observation period actually doing their schoolwork.

“We were amazed at how frequently they multitasked, even though they knew someone was watching,” Rosen says. “It really seems that they could not go for 15 minutes without engaging their devices,” he adds. “It was kind of scary, actually.”

Concern about young people’s use of technology is nothing new, of course. But Rosen’s study, published in the May issue of Computers in Human Behavior, is part of a growing body of research focused on a very particular use of technology: media multitasking while learning. Attending to multiple streams of information and entertainment while studying, doing homework, or even sitting in class has become common behavior among young people—so common that many of them rarely write a paper or complete a problem set any other way.

But evidence from psychology, cognitive science, and neuroscience suggests that when students multitask while doing schoolwork, their learning is far spottier and shallower than if the work had their full attention. They understand and remember less, and they have greater difficulty transferring their learning to new contexts. So detrimental is this practice that some researchers are proposing that a new prerequisite for academic and even professional success—the new marshmallow test of self-discipline—is the ability to resist a blinking inbox or a buzzing phone.

The media multitasking habit starts early. In “Generation M2: Media in the Lives of 8- to 18-Year-Olds,” a survey conducted by the Kaiser Family Foundation and published in 2010, almost a third of those surveyed said that when they were doing homework, “most of the time” they were also watching TV, texting, listening to music, or using some other medium. The lead author of the study was Victoria Rideout, then a vice president at Kaiser and now an independent research and policy consultant. Although the study looked at all aspects of kids’ media use, Rideout told me she was particularly troubled by its findings regarding media multitasking while doing schoolwork.

“This is a concern we should have distinct from worrying about how much kids are online or how much kids are media multitasking overall. It’s multitasking while learning that has the biggest potential downside,” she says. “I don’t care if a kid wants to tweet while she’s watching American Idol, or have music on while he plays a video game. But when students are doing serious work with their minds, they have to have focus.”

For older students, the media multitasking habit extends into the classroom. While most middle and high school students don’t have the opportunity to text, email, and surf the Internet during class, studies show the practice is nearly universal among students in college and professional school. One large survey found that 80 percent of college students admit to texting during class; 15 percent say they send 11 or more texts in a single class period.

During the first meeting of his courses, Rosen makes a practice of calling on a student who is busy with his phone. “I ask him, ‘What was on the slide I just showed to the class?’ The student always pulls a blank,” Rosen reports. “Young people have a wildly inflated idea of how many things they can attend to at once, and this demonstration helps drive the point home: If you’re paying attention to your phone, you’re not paying attention to what’s going on in class.” Other professors have taken a more surreptitious approach, installing electronic spyware or planting human observers to record whether students are taking notes on their laptops or using them for other, unauthorized purposes.

Read the entire article here.

Image courtesy of Examiner.

The Academic Con Artist

Strangely, we don’t normally associate the hushed halls and ivory towers of academia with lies and frauds. We are more inclined to see con artists on street corners hawking dodgy wares or doing much the same from corner offices on Wall Street, for much princelier sums, of course, and with much more catastrophic consequences.

Humans being humans, cheating does go on in academic circles as well. We know that some students cheat — they plagiarize and fabricate work, they have others write their papers. More notably, some academics do this as well, but on a grander scale. And, while much cheating is probably minor and inconsequential, some fraud is intricate and grandiose, spanning many years of work, affecting subsequent work, diverting grants and research funds, altering policy and widely held public opinion. Meet one of its principal actors — Diederik Stapel, social psychologist and academic con artist.

From the New York Times:

One summer night in 2011, a tall, 40-something professor named Diederik Stapel stepped out of his elegant brick house in the Dutch city of Tilburg to visit a friend around the corner. It was close to midnight, but his colleague Marcel Zeelenberg had called and texted Stapel that evening to say that he wanted to see him about an urgent matter. The two had known each other since the early ’90s, when they were Ph.D. students at the University of Amsterdam; now both were psychologists at Tilburg University. In 2010, Stapel became dean of the university’s School of Social and Behavioral Sciences and Zeelenberg head of the social psychology department. Stapel and his wife, Marcelle, had supported Zeelenberg through a difficult divorce a few years earlier. As he approached Zeelenberg’s door, Stapel wondered if his colleague was having problems with his new girlfriend.

Zeelenberg, a stocky man with a shaved head, led Stapel into his living room. “What’s up?” Stapel asked, settling onto a couch. Two graduate students had made an accusation, Zeelenberg explained. His eyes began to fill with tears. “They suspect you have been committing research fraud.”

Stapel was an academic star in the Netherlands and abroad, the author of several well-regarded studies on human attitudes and behavior. That spring, he published a widely publicized study in Science about an experiment done at the Utrecht train station showing that a trash-filled environment tended to bring out racist tendencies in individuals. And just days earlier, he received more media attention for a study indicating that eating meat made people selfish and less social.

His enemies were targeting him because of changes he initiated as dean, Stapel replied, quoting a Dutch proverb about high trees catching a lot of wind. When Zeelenberg challenged him with specifics — to explain why certain facts and figures he reported in different studies appeared to be identical — Stapel promised to be more careful in the future. As Zeelenberg pressed him, Stapel grew increasingly agitated.

Finally, Zeelenberg said: “I have to ask you if you’re faking data.”

“No, that’s ridiculous,” Stapel replied. “Of course not.”

That weekend, Zeelenberg relayed the allegations to the university rector, a law professor named Philip Eijlander, who often played tennis with Stapel. After a brief meeting on Sunday, Eijlander invited Stapel to come by his house on Tuesday morning. Sitting in Eijlander’s living room, Stapel mounted what Eijlander described to me as a spirited defense, highlighting his work as dean and characterizing his research methods as unusual. The conversation lasted about five hours. Then Eijlander politely escorted Stapel to the door but made it plain that he was not convinced of Stapel’s innocence.

That same day, Stapel drove to the University of Groningen, nearly three hours away, where he was a professor from 2000 to 2006. The campus there was one of the places where he claimed to have collected experimental data for several of his studies; to defend himself, he would need details from the place. But when he arrived that afternoon, the school looked very different from the way he remembered it being five years earlier. Stapel started to despair when he realized that he didn’t know what buildings had been around at the time of his study. Then he saw a structure that he recognized, a computer center. “That’s where it happened,” he said to himself; that’s where he did his experiments with undergraduate volunteers. “This is going to work.”

On his return trip to Tilburg, Stapel stopped at the train station in Utrecht. This was the site of his study linking racism to environmental untidiness, supposedly conducted during a strike by sanitation workers. In the experiment described in the Science paper, white volunteers were invited to fill out a questionnaire in a seat among a row of six chairs; the row was empty except for the first chair, which was taken by a black occupant or a white one. Stapel and his co-author claimed that white volunteers tended to sit farther away from the black person when the surrounding area was strewn with garbage. Now, looking around during rush hour, as people streamed on and off the platforms, Stapel could not find a location that matched the conditions described in his experiment.

“No, Diederik, this is ridiculous,” he told himself at last. “You really need to give it up.”

After he got home that night, he confessed to his wife. A week later, the university suspended him from his job and held a news conference to announce his fraud. It became the lead story in the Netherlands and would dominate headlines for months. Overnight, Stapel went from being a respected professor to perhaps the biggest con man in academic science.

Read the entire article after the jump.

Image courtesy of FBI.

Lesson: Fail Often, Fail Fast

One of our favorite thinkers, Nassim Nicholas Taleb, calls this tinkering — the iterative process by which ideas and actions can take root and become successful. Evolution is a wonderful example of this tinkering — repetitive failure and incremental progress. Many entrepreneurs in Silicon Valley take this to heart.

Tech entrepreneur Michele Serro describes some key elements of successful tinkering below.

From the Wall Street Journal:

If there was ever a cliche about entrepreneurialism, it’s this: Joe or Jane McEntrepreneur were trying to book a flight/find flattering support garments/rent a car and were profoundly dissatisfied with the experience. Incensed, they set out to design a better way — and did, earning millions in the process.

It seems that, for entrepreneurs, it’s dissatisfaction rather than necessity that is the mother of invention. And while this cliche certainly has its foundation in truth, it’s woefully incomplete. The full truth is, the average startup iterates multiple times before they find the right product, often drawing on one or many approaches along the way before finding traction. Here are five of the most common I’ve come across within the startup community.

Algebra. There’s an old yarn you learn in film school about the power of the pithy pitch (say that five times fast). The story goes that when screenwriters were shopping the original Alien movie, they allegedly got the green light when they summed it up to studio execs by saying “It’s Jaws. In space.”

In many ways, the same thing is happening in the startup world. “It’s Facebook. But for pets,” or “It’s Artsy meets Dropbox meets Fab.” Our tendency to do this speaks to the fact that there are very few — if any — truly new ideas. Most entrepreneurs are applying old ideas to new industries, or combining two seemingly unrelated ideas (or existing businesses) together – whether they’re doing it consciously, or not.

Subtraction. Many great ideas begin with a seemingly straightforward question: “How could I make this easier?” Half the genius of some of the greatest entrepreneurs — Steve Jobs springs immediately to mind — is the ability to remove the superfluous, unnecessary or unwieldy from an existing system, product or experience. A good exercise when you are in search of an idea is simply to ask yourself “What is it about an existing product, service, or experience that could — and therefore should — be less of a hassle?”

Singularity. There’s an old saying that goes: “Figure out what you love to do and you’ll never work a day in your life.” Entrepreneurs are born out of the desire to spend one’s life pursuing a passion — assuming that they’re fortunate enough to have identified it early. The fact is that any kind of startup is really, really hard work. No matter how fast a vesting schedule or how convivial an office culture, the only thing that can truly sustain you through the bad days is having a deep, personal interest in your area of focus. The most successful entrepreneurs genuinely love what they do, and not simply because of the potential payoff. I once met a pair of British entrepreneurs living in France who loved nothing more than spending all day in a pub — meeting up with friends, watching a soccer game, and giving each other the requisite hard time about just about everything.

For their entrepreneurial class as part of their MBA coursework at Insead, they decided to draft the business plan for an English-style microbrewery in Paris — mainly because the research phase would involve a lot of sitting around in bars. But during the process of launching their fictitious company, they realized there really was an opportunity to make a living doing exactly what they loved, and went on to successfully launch seven such pubs, sprinkled all over the city.

When hiring at Doorsteps, I start by asking people what they would do with their lives if every career paid the same. If the gap between their truest desires and the job on offer is simply too wide, I encourage them to keep looking. Not because they can’t be successful with us, too, but because they’ll likely be even more successful elsewhere — when they are driven by passion as much as profit.

Optimization. Sometimes entrepreneurs benefit by letting someone else lay the groundwork for their ideas. Indeed, a great many startups are born by simply building a better mousetrap; that’s to say observing a compelling business already in existence but that’s struggling to find traction. These entrepreneurs have the ability to recognize that the idea itself is sound but the execution is flawed. In this case, they simply address the oversight of the previous version. Instagram quite famously beat Hipstamatic to the jaw-dropping $1 billion prize by understanding the role social needed to play in the app’s experience. By the time Hipstamatic realized their error, Instagram had almost four times as many users, largely muscling them out of a competitive niche market.

Read the entire article following the jump.

First Came Phishing, Now We Have Catfishing

The internet has revolutionized retailing, the music business, and the media landscape. It has anointed countless entrepreneurial millionaires and billionaires and helped launch arrays of new businesses in all spheres of life.

Of course, due to the peculiarities of human nature, the internet has also become an enabler of, and a new home for, less upstanding ventures such as online pornography, spamming, identity theft and phishing.

Now comes "catfishing": posting false information online with the intent of reeling someone in (usually found on online dating sites). While this behavior is nothing new in the vast catalog of human deviousness, the internet has enabled an explosion in "catfishers". The fascinating infographic below gives a neat summary.

Infographic courtesy of Checkmate.

What’s In a Name?

Recently, we posted a fascinating story about a legal ruling in Iceland that allowed parents to set aside centuries of Icelandic history by naming their girl "Blaer" — a traditionally male name. You see, Iceland has an official body — the Icelandic Naming Committee — that regulates and decides whether a given name is acceptable (by Icelandic standards).

Well, this got us thinking about rules and conventions in other nations. For instance, New Zealand will not allow parents to name a child “Pluto”, however “Number 16 Bus Shelter” and “Violence” recently got the thumbs up. Some misguided or innovative (depending upon your perspective) New Zealanders have unsuccessfully tried to name their offspring: “*” (yes, asterisk), “.” (period or full-stop), “V”, and “Emperor”.

Not to be outdone, a U.S. citizen recently legally changed his name to “In God” (first name) “We Trust” (last name). Humans are indeed a strange species.

From CNN:

Lucifer cannot be born in New Zealand.

And there’s no place for Christ or a Messiah either.

In New Zealand, parents have to run by the government any name they want to bestow on their baby.

And each year, there’s a bevy of unusual ones too bizarre to pass the taste test.

The country’s Registrar of Births, Deaths and Marriages shared that growing list with CNN on Wednesday.

Four words:

What were they thinking?

In the past 12 years, the agency had to turn down not one, not two, but six sets of parents who wanted to name their child “Lucifer.”

Also shot down were parents who wanted to grace their child with the name “Messiah.” That happened twice.

“Christ,” too, was rejected.

Specific rules

As the agency put it, acceptable names must not cause offense to a reasonable person, not be unreasonably long and should not resemble an official title and rank.

It’s no surprise then that the names nixed most often since 2001 are “Justice” (62 times) and “King” (31 times).

Some of the other entries scored points in the creativity department — but clearly didn’t take into account the lifetime of pain they’d bring.

“Mafia No Fear.” “4Real.” “Anal.”

Oh, come on!

Then there were the parents who preferred brevity through punctuation. The ones who picked "*" (the asterisk) or "." (period).

Slipping through

Still, some quirky names do make it through.

In 2008, the country made international news when the naming agency allowed a set of twins to be named "Benson" and "Hedges" — a popular cigarette brand — and OK'd the names "Violence" and "Number 16 Bus Shelter."

Asked about those examples, Michael Mead of the Internal Affairs Department (under which the agency falls) said, “All names registered with the Department since 1995 have conformed to these rules.”

And what happens when parents don’t conform?

Four years ago, a 9-year-old girl was taken away from her parents by the state so that her name could be changed from “Talula Does the Hula From Hawaii.”

Not alone

To be sure, New Zealand is not the only country to act as editor for some parents' wacky ideas.

Sweden also has a naming law and has nixed attempts to name children “Superman,” “Metallica,” and the oh-so-easy-to-pronounce “Brfxxccxxmnpcccclllmmnprxvclmnckssqlbb11116.”

In 2009, the Dominican Republic contemplated banning unusual names after a host of parents began naming their children after cars or fruit.

In the United States, however, naming fights have centered on adults.

In 2008, a judge allowed an Illinois school bus driver to legally change his first name to “In God” and his last name to “We Trust.”

But the same year, an appeals court in New Mexico ruled against a man — named Variable — who wanted to change his name to “F— Censorship!”

Here is a list of some of the names banned in New Zealand since 2001 — and how many times they came up:

Justice: 62
King: 31
Princess: 28
Prince: 27
Royal: 25
Duke: 10
Major: 9
Bishop: 9
Majesty: 7
J: 6
Lucifer: 6
using brackets around middle names: 4
Knight: 4
Lady: 3
using back slash between names: 8
Judge: 3
Royale: 2
Messiah: 2
T: 2
I: 2
Queen: 2
II: 2
Sir: 2
III: 2
Jr: 2
E: 2
V: 2
Justus: 2
Master: 2
Constable: 1
Queen Victoria: 1
Regal: 1
Emperor: 1
Christ: 1
Juztice: 1
3rd: 1
C J: 1
G: 1
Roman numerals III: 1
General: 1
Saint: 1
Lord: 1
. (full stop): 1
89: 1
Eminence: 1
M: 1
VI: 1
Mafia No Fear: 1
2nd: 1
Majesti: 1
Rogue: 1
4real: 1
* (star symbol): 1
5th: 1
S P: 1
C: 1
Sargent: 1
Honour: 1
D: 1
Minister: 1
MJ: 1
Chief: 1
Mr: 1
V8: 1
President: 1
MC: 1
Anal: 1
A.J: 1
Baron: 1
L B: 1
H-Q: 1
Queen V: 1

Read the entire article following the jump.

Anti-Eco-Friendly Consumption

It should come as no surprise that those who deny the science of climate change and humanity's impact on the environment would also shy away from purchasing products and services that are friendly to the environment.

A recent study shows how political persuasion sways light bulb purchases: conservatives are more likely to buy incandescent bulbs, while moderates and liberals lean towards more eco-friendly alternatives.

Joe Barton, U.S. Representative from Texas, sums up the issue of light bulb choice quite neatly, “… it is about personal freedom”. All the while our children shake their heads in disbelief.

Presumably many climate change skeptics prefer to purchase items that are harmful to the environment and also to humans just to make a political statement. This might include continuing to purchase products containing dangerous levels of unpronounceable acronyms and questionable chemicals: rBGH (recombinant Bovine Growth Hormone) in milk, BPA (Bisphenol A) in plastic utensils and bottles, KBrO3 (Potassium Bromate) in highly processed flour, BHA (Butylated Hydroxyanisole) food preservative, Azodicarbonamide in dough.

Freedom truly does come at a cost.

From the Guardian:

Eco-friendly labels on energy-saving bulbs are a turn-off for conservative shoppers, a new study has found.

The findings, published this week in the Proceedings of the National Academy of Sciences, suggest that it could be counterproductive to advertise the environmental benefits of efficient bulbs in the US. This could make it even more difficult for America to adopt energy-saving technologies as a solution to climate change.

Consumers took their ideological beliefs with them when they went shopping, and conservatives switched off when they saw labels reading “protect the environment”, the researchers said.

The study looked at the choices of 210 consumers, about two-thirds of them women. All were briefed on the benefits of compact fluorescent (CFL) bulbs over old-fashioned incandescents.

When both bulbs were priced the same, shoppers across the political spectrum were uniformly inclined to choose CFL bulbs over incandescents, even those with environmental labels, the study found.

But when the fluorescent bulb cost more – $1.50 instead of $0.50 for an incandescent – the conservatives who reached for the CFL bulb chose the one without the eco-friendly label.

“The more moderate and conservative participants preferred to bear a long-term financial cost to avoid purchasing an item associated with valuing environmental protections,” the study said.

The findings suggest the extreme political polarisation over environment and climate change had now expanded to energy-savings devices – which were once supported by right and left because of their money-saving potential.

“The research demonstrates how promoting the environment can negatively affect adoption of energy efficiency in the United States because of the political polarisation surrounding environmental issues,” the researchers said.

Earlier this year Harvard academic Theda Skocpol produced a paper tracking how climate change and the environment became a defining issue for conservatives, and for Republican-elected officials.

Conservative activists elevated opposition to the science behind climate change, and to action on climate change, to core beliefs, Skocpol wrote.

There was even a special place for incandescent bulbs. Republicans in Congress two years ago fought hard to repeal a law phasing out incandescent bulbs – even over the objections of manufacturers who had already switched their product lines to the new energy-saving technology.

Republicans at the time cast the battle of the bulb as an issue of liberty. “This is about more than just energy consumption. It is about personal freedom,” said Joe Barton, the Texas Republican behind the effort to keep the outdated bulbs burning.

Read the entire article following the jump.

Image courtesy of Housecraft.

YBAs Twenty-Five Years On

That a small group of Young British Artists (YBAs) made an impact on the art scene in the UK and across the globe over the last 25 years is without question. Yet whether the public at large will, 10, 25 or 50 years from now (and beyond), recognize a Damien Hirst spin painting, Tracey Emin’s “My Bed” or a Sarah Lucas self-portrait — “The Artist Eating a Banana” springs to mind — remains an open question.

The group first came to prominence in the late 1980s, mostly through works and events designed to shock the sensibilities of the then dreadfully boring and insular British art scene. In that aim they certainly succeeded, and some, notably Hirst, have since become art superstars. So, while the majority of artists never experience fame within their own lifetimes, many YBAs have managed to buck convention. Whether their art will live long and prosper, though, is debatable.

Jonathan Jones, over at the On Art blog, chimes in with a different and altogether kinder opinion.

From the Guardian:

It’s 25 years since an ambitious unknown called Damien Hirst curated an exhibition of his friends and contemporaries called Freeze. This is generally taken as the foundation of the art movement that by the 1990s got the label “YBA”. Promoted by exhibitions such as Brilliant!, launched into public debate by the Turner prize and eventually set in stone at the Royal Academy with Sensation, Young British Art still shapes our cultural scene. A Damien Hirst spin painting closed the Olympics.

Even where artists are obviously resisting the showmanship and saleability of the Hirst generation (and such resistance has been the key to fashionable esteem for at least a decade), that generation’s ideas – that art should be young and part of popular culture – remain dominant. Artists on this year’s Turner shortlist may hate the thought that they are YBAs but they really are, in their high valuation of youth and pop. If we are all Thatcherites now, our artists are definitely all YBAs. Except for David Hockney.

From “classic” YBAs like Sarah Lucas and Marc Quinn to this year’s art school graduates, the drive to be new, modern, young and brave that Freeze announced in 1988 still shapes British art. And where has that left us? Where is British art, after 25 years of being young?

Let’s start with the best – and the worst. None of the artists who exploded on to the scene back then were as exciting and promising as Damien Hirst. He orchestrated the whole idea of a movement, and really it was a backdrop for his own daring imagination. Hirst’s animals in formaldehyde were provocations and surrealist dreams. He spun pop art in a new, visceral direction.

Today he is a national shame – our most famous artist has become a hack painter and kitsch sculptor who goes to inordinate lengths to demonstrate his lack of talent. Never has promise been more spectacularly misleading.

And what of the mood he created? Some of the artists who appeared in Freeze, such as Mat Collishaw, still make excellent work. But as for enduring masterpieces that will stand the test of time – how many of those has British art produced since 1988?

Well – the art of Sarah Lucas is acridly memorable. That of Rachel Whiteread is profound. The works of Jake and Dinos Chapman will keep scholars chortling in the library a century or two from now.

What is an artistic masterpiece anyway? Britain has never been good at creating sublime works in marble. But consider the collection of Georgian satirical prints in the Prints and Drawings room at the British Museum. Artists such as Gillray and Rowlandson are our heritage: rude, crude and subversive. Think about Hogarth too – an edgy artist critics snootily dismiss as a so-so painter.

Face it, all ye who rail at modern British art: YBA art and its living aftermath, from pickled fish to David Shrigley, fits beautifully into the Great British tradition of Hogarthian hilarity.

The difference is that while Hogarth had a chip on his shoulder about European art lording it over local talent, the YBA revolution made London world-famous as an art city, with Glasgow coming up in the side lane.

Warts and all, this has been the best 25 years in the history of British art. It never mattered more.

Read the entire article after the jump.

Image: My Bed by Tracey Emin. Courtesy of Tracey Emin / The Saatchi Gallery.