All posts by Mike

The Death of Photojournalism

Really, it was only a matter of time. First, digital cameras killed off their film-dependent predecessors and sounded the death knell for Kodak. Now social media and the #hashtag are doing the same to the professional photographer.

Camera-enabled smartphones are ubiquitous, making everyone a photographer. And, with almost everyone jacked into at least one social network or photo-sharing site, it takes only one point and a couple of clicks to get a fresh image posted to the internet. Ironically, the newsprint media, despite being in the business of news, have failed to recognize this news until recently.

So, now with an eye to cutting costs, and to making images more immediate and compelling — via citizens — news organizations are re-tooling their staffs in four ways: first, fire the photographers; second, re-train reporters to take photographs with their smartphones; third, video, video, video; fourth, rely on the ever-willing public to snap images, post, tweet, #hashtag and like — for free, of course.

From Cult of Mac:

The Chicago Sun-Times, one of the remnants of traditional paper journalism, has let go its entire photography staff of 28 people. Now its reporters will start receiving “iPhone photography basics” training to start producing their own photos and videos.

The move is part of a growing trend towards publications using the iPhone as a replacement for fancy, expensive DSLRs. It’s also a sign of how traditional journalism is being changed by technology like the iPhone and the advent of digital publishing.


When Hurricane Sandy hit New York City, reporters for Time used the iPhone to take photos in the field and upload them to the publication’s Instagram account. Even the cover photo used on the corresponding issue of Time was taken on an iPhone.

Sun-Times photographer Alex Garcia argues that the “idea that freelancers and reporters could replace a photo staff with iPhones is idiotic at worst, and hopelessly uninformed at best.” Garcia believes that reporters are incapable of writing articles and also producing quality media, but he’s fighting an uphill battle.

Big newspaper companies aren’t making anywhere near the amount of money they used to due to the popularity of online publications and blogs. Free news is a click away nowadays. Getting rid of professional photographers and equipping reporters with iPhones is another way to cut costs.

The iPhone has a better camera than most digital point-and-shoots, and more importantly, it is in everyone’s pocket. It’s a great camera that’s always with you, and that makes it an invaluable tool for any journalist. There will always be a need for videographers and pro photographers that can make studio-level work, but the iPhone is proving to be an invaluable tool for reporters in the modern world.

Read the entire article here.

Image: Kodak 1949-56 Retina IIa 35mm Camera. Courtesy of Wikipedia / Kodak.

Surveillance of the People for the People

The U.S. government is spying on your phone calls with the hushed assistance of companies like Verizon. While the National Security Agency (NSA) may not be listening to your actual conversations (yet), its agents are actively gathering data about your calls: who you call, from where you call, when you call, how long the call lasts.

Here’s the top secret court order delineating the government’s unfettered powers of domestic surveillance.

The price of freedom is becoming ever steeper, and with broad clandestine activities like this underway — with no specific target — our precious freedoms continue to erode. Surely this must delight our foes, who will relish our self-inflicted curtailment of civil liberties — its societal consequences reach much further than those of any improvised explosive device (IED), however heinous and destructive.

From the Guardian:

The National Security Agency is currently collecting the telephone records of millions of US customers of Verizon, one of America’s largest telecoms providers, under a top secret court order issued in April.

The order, a copy of which has been obtained by the Guardian, requires Verizon on an “ongoing, daily basis” to give the NSA information on all telephone calls in its systems, both within the US and between the US and other countries.

The document shows for the first time that under the Obama administration the communication records of millions of US citizens are being collected indiscriminately and in bulk – regardless of whether they are suspected of any wrongdoing.

The secret Foreign Intelligence Surveillance Court (Fisa) granted the order to the FBI on April 25, giving the government unlimited authority to obtain the data for a specified three-month period ending on July 19.

Under the terms of the blanket order, the numbers of both parties on a call are handed over, as is location data, call duration, unique identifiers, and the time and duration of all calls. The contents of the conversation itself are not covered.

The disclosure is likely to reignite longstanding debates in the US over the proper extent of the government’s domestic spying powers.

Under the Bush administration, officials in security agencies had disclosed to reporters the large-scale collection of call records data by the NSA, but this is the first time significant and top-secret documents have revealed the continuation of the practice on a massive scale under President Obama.

The unlimited nature of the records being handed over to the NSA is extremely unusual. Fisa court orders typically direct the production of records pertaining to a specific named target who is suspected of being an agent of a terrorist group or foreign state, or a finite set of individually named targets.

The Guardian approached the National Security Agency, the White House and the Department of Justice for comment in advance of publication on Wednesday. All declined. The agencies were also offered the opportunity to raise specific security concerns regarding the publication of the court order.

The court order expressly bars Verizon from disclosing to the public either the existence of the FBI’s request for its customers’ records, or the court order itself.

“We decline comment,” said Ed McFadden, a Washington-based Verizon spokesman.

The order, signed by Judge Roger Vinson, compels Verizon to produce to the NSA electronic copies of “all call detail records or ‘telephony metadata’ created by Verizon for communications between the United States and abroad” or “wholly within the United States, including local telephone calls”.

The order directs Verizon to “continue production on an ongoing daily basis thereafter for the duration of this order”. It specifies that the records to be produced include “session identifying information”, such as “originating and terminating number”, the duration of each call, telephone calling card numbers, trunk identifiers, International Mobile Subscriber Identity (IMSI) number, and “comprehensive communication routing information”.

The information is classed as “metadata”, or transactional information, rather than communications, and so does not require individual warrants to access. The document also specifies that such “metadata” is not limited to the aforementioned items. A 2005 court ruling judged that cell site location data – the nearest cell tower a phone was connected to – was also transactional data, and so could potentially fall under the scope of the order.

While the order itself does not include either the contents of messages or the personal information of the subscriber of any particular cell number, its collection would allow the NSA to build easily a comprehensive picture of who any individual contacted, how and when, and possibly from where, retrospectively.

It is not known whether Verizon is the only cell-phone provider to be targeted with such an order, although previous reporting has suggested the NSA has collected cell records from all major mobile networks. It is also unclear from the leaked document whether the three-month order was a one-off, or the latest in a series of similar orders.

Read the entire article here.

Beware! RoboBee May Be Watching You

History will probably show that humans are the cause of the mass disappearance and death of honey bees around the world.

So, while ecologists try to understand why and how to reverse bee death and colony collapse, engineers are busy building alternatives to our once nectar-loving friends. Meet RoboBee, also known as the Micro Air Vehicles Project.

From Scientific American:

We take for granted the effortless flight of insects, thinking nothing of swatting a pesky fly and crushing its wings. But this insect is a model of complexity. After 12 years of work, researchers at the Harvard School of Engineering and Applied Sciences have succeeded in creating a fly-like robot. And in early May, they announced that their tiny RoboBee (yes, it’s called a RoboBee even though it’s based on the mechanics of a fly) took flight. In the future, that could mean big things for everything from disaster relief to colony collapse disorder.

The RoboBee isn’t the only miniature flying robot in existence, but the 80-milligram, quarter-sized robot is certainly one of the smallest. “The motivations are really thinking about this as a platform to drive a host of really challenging open questions and drive new technology and engineering,” says Harvard professor Robert Wood, the engineering team lead for the project.

When Wood and his colleagues first set out to create a robotic fly, there were no off the shelf parts for them to use. “There were no motors small enough, no sensors that could fit on board. The microcontrollers, the microprocessors–everything had to be developed fresh,” says Wood. As a result, the RoboBee project has led to numerous innovations, including vision sensors for the bot, high power density piezoelectric actuators (ceramic strips that expand and contract when exposed to an electrical field), and a new kind of rapid manufacturing that involves layering laser-cut materials that fold like a pop-up book. The actuators assist with the bot’s wing-flapping, while the vision sensors monitor the world in relation to the RoboBee.

“Manufacturing took us quite awhile. Then it was control, how do you design the thing so we can fly it around, and the next one is going to be power, how we develop and integrate power sources,” says Wood. In a paper recently published by Science, the researchers describe the RoboBee’s power quandary: it can fly for just 20 seconds–and that’s while it’s tethered to a power source. “Batteries don’t exist at the size that we would want,” explains Wood. The researchers explain further in the report: ” If we implement on-board power with current technologies, we estimate no more than a few minutes of untethered, powered flight. Long duration power autonomy awaits advances in small, high-energy-density power sources.”

The RoboBees don’t last a particularly long time–Wood says the flight time is “on the order of tens of minutes”–but they can keep flapping their wings long enough for the Harvard researchers to learn everything they need to know from each successive generation of bots. For commercial applications, however, the RoboBees would need to be more durable.

Read the entire article here.

Image courtesy of Micro Air Vehicles Project, Harvard.

Leadership and the Tyranny of Big Data

“There are three kinds of lies: lies, damned lies, and statistics”, goes the adage popularized by author Mark Twain.

Most people take for granted that numbers can be persuasive — just take a look at your bank balance. Also, most accept the notion that data can be used, misused, misinterpreted, re-interpreted and distorted to support or counter almost any argument. Just listen to a politician quote polling numbers and then hear an opposing politician make a contrary argument using the very same statistics. Or, better still, familiarize yourself with the pseudo-science of economics.

Authors Kenneth Cukier (data editor for The Economist) and Viktor Mayer-Schönberger (professor of Internet governance) examine this phenomenon in their book Big Data: A Revolution That Will Transform How We Live, Work, and Think. They eloquently present the example of Robert McNamara, U.S. defense secretary during the Vietnam war, who (in)famously used his detailed spreadsheets — including the daily body count — to manage and measure progress. After the war, many U.S. generals described this over-reliance on numbers as a misguided dictatorship of data, one that led commanders to make ill-informed decisions — based solely on the figures — and to fudge them.

This classic example leads them to a timely and important caution: as the range and scale of big data become ever greater, it may offer us great benefits, but it can and will also be used to mislead.

From Technology Review:

Big data is poised to transform society, from how we diagnose illness to how we educate children, even making it possible for a car to drive itself. Information is emerging as a new economic input, a vital resource. Companies, governments, and even individuals will be measuring and optimizing everything possible.

But there is a dark side. Big data erodes privacy. And when it is used to make predictions about what we are likely to do but haven’t yet done, it threatens freedom as well. Yet big data also exacerbates a very old problem: relying on the numbers when they are far more fallible than we think. Nothing underscores the consequences of data analysis gone awry more than the story of Robert McNamara.

McNamara was a numbers guy. Appointed the U.S. secretary of defense when tensions in Vietnam rose in the early 1960s, he insisted on getting data on everything he could. Only by applying statistical rigor, he believed, could decision makers understand a complex situation and make the right choices. The world in his view was a mass of unruly information that—if delineated, denoted, demarcated, and quantified—could be tamed by human hand and fall under human will. McNamara sought Truth, and that Truth could be found in data. Among the numbers that came back to him was the “body count.”

McNamara developed his love of numbers as a student at Harvard Business School and then as its youngest assistant professor at age 24. He applied this rigor during the Second World War as part of an elite Pentagon team called Statistical Control, which brought data-driven decision making to one of the world’s largest bureaucracies. Before this, the military was blind. It didn’t know, for instance, the type, quantity, or location of spare airplane parts. Data came to the rescue. Just making armament procurement more efficient saved $3.6 billion in 1943. Modern war demanded the efficient allocation of resources; the team’s work was a stunning success.

At war’s end, the members of this group offered their skills to corporate America. The Ford Motor Company was floundering, and a desperate Henry Ford II handed them the reins. Just as they knew nothing about the military when they helped win the war, so too were they clueless about making cars. Still, the so-called “Whiz Kids” turned the company around.

McNamara rose swiftly up the ranks, trotting out a data point for every situation. Harried factory managers produced the figures he demanded—whether they were correct or not. When an edict came down that all inventory from one car model must be used before a new model could begin production, exasperated line managers simply dumped excess parts into a nearby river. The joke at the factory was that a fellow could walk on water—atop rusted pieces of 1950 and 1951 cars.

McNamara epitomized the hyper-rational executive who relied on numbers rather than sentiments, and who could apply his quantitative skills to any industry he turned them to. In 1960 he was named president of Ford, a position he held for only a few weeks before being tapped to join President Kennedy’s cabinet as secretary of defense.

As the Vietnam conflict escalated and the United States sent more troops, it became clear that this was a war of wills, not of territory. America’s strategy was to pound the Viet Cong to the negotiation table. The way to measure progress, therefore, was by the number of enemy killed. The body count was published daily in the newspapers. To the war’s supporters it was proof of progress; to critics, evidence of its immorality. The body count was the data point that defined an era.

McNamara relied on the figures, fetishized them. With his perfectly combed-back hair and his flawlessly knotted tie, McNamara felt he could comprehend what was happening on the ground only by staring at a spreadsheet—at all those orderly rows and columns, calculations and charts, whose mastery seemed to bring him one standard deviation closer to God.

In 1977, two years after the last helicopter lifted off the rooftop of the U.S. embassy in Saigon, a retired Army general, Douglas Kinnard, published a landmark survey called The War Managers that revealed the quagmire of quantification. A mere 2 percent of America’s generals considered the body count a valid way to measure progress. “A fake—totally worthless,” wrote one general in his comments. “Often blatant lies,” wrote another. “They were grossly exaggerated by many units primarily because of the incredible interest shown by people like McNamara,” said a third.

Read the entire article after the jump.

Image: Robert McNamara at a cabinet meeting, 22 Nov 1967. Courtesy of Wikipedia / Public domain.

MondayMap: Your Taxes and Google Street View

The fear of an annual tax audit brings many people to their knees. It’s one of many techniques that government authorities use to milk their citizens of every last penny of tax. Well, authorities now have an even more powerful weapon to add to their tax-collecting arsenal — Google Street View. And, if you are reading this from Lithuania, you will know what we are talking about.

From the Wall Street Journal:

One day last summer, a woman was about to climb into a hammock in the front yard of a suburban house here when a photographer for the Google Inc. Street View service snapped her picture.

The apparently innocuous photograph is now being used as evidence in a tax-evasion case brought by Lithuanian authorities against the undisclosed owners of the home.

Some European countries have been going after Google, complaining that the search giant is invading the privacy of their citizens. But tax inspectors here have turned to the prying eyes of Street View for their own purposes.

After Google’s car-borne cameras were driven through the Vilnius area last year, the tax men in this small Baltic nation got busy. They have spent months combing through footage looking for unreported taxable wealth.

“We were very impressed,” said Modestas Kaseliauskas, head of the State Tax Authority. “We realized that we could do more with less and in shorter time.”

More than 100 people have been identified so far after investigators compared Street View images of about 500 properties with state property registries looking for undeclared construction.

Two recent cases netted $130,000 in taxes and penalties after investigators found houses photographed by Google that weren’t on official maps.

From aerial surveillance to dedicated iPhone apps, cash-strapped governments across Europe are employing increasingly unconventional measures against tax cheats to raise revenue. In some countries, authorities have tried to enlist citizens to help keep watch. Customers in Greece, for instance, are insisting on getting receipts for what they buy.

For Lithuania, which only two decades ago began its transition away from communist central planning and remains one of the poorest countries in the European Union, Street View has been a big help. After the global financial crisis struck in 2008, belt tightening cut the tax authority’s budget by a third. A quarter of its employees were let go, leaving it with fewer resources just as it was being asked to do more.

“We were pressured to increase tax revenue,” said the authority’s Mr. Kaseliauskas.

Street View has let Mr. Kaseliauskas’s team see things it would have otherwise missed. Its images are better—and cheaper—than aerial photos, which authorities complain often aren’t clear enough to be useful.

Sitting in their city office 10 miles away, they were able to detect that, contrary to official records, the house with the hammock existed and that, in one photograph, three cars were parked in the driveway.

An undeclared semidetached house owned by the former board chairman of Bank Snoras, Raimundas Baranauskas, was recently identified using Street View and is estimated by the government to be worth about $260,000. Authorities knew Mr. Baranauskas owned land there, but not buildings. A quick look online led to the discovery of several houses on his land, in a quiet residential street of Vilnius.

Read the entire article here.

Image courtesy of (who else?) Google Maps.

Ai Weiwei – China’s Warhol

Artist Ai Weiwei has suffered far more at the hands of the Chinese authorities than Andy Warhol ever did from his brushes with FBI surveillance. Yet the two are remarkably similar: brash and polarizing views, distinctive art and creative processes, masterful self-promotion, savvy media manipulation and global ubiquity. This is all the more astounding given Ai Weiwei’s arrest, detentions and prohibition on travel outside of Beijing. He’s even made it to the Venice Biennale this year — only his art, of course.

From the Guardian:

To some, he is verging on a saint and martyr, singlehandedly standing against the forces of Chinese political repression. For others he is a canny manipulator, utterly in control of his reputation and place in the art world and market. For others still, he is all these things: an artist who outdoes even Andy Warhol in his ubiquity, his nimbleness at self-promotion and his use of every medium at his disposal to promulgate his work and his activism.

Whatever your views on the Chinese artist Ai Weiwei, one thing is clear: he is everywhere, from the Hampstead theatre in London, where Howard Brenton’s play about the 81 days Ai spent in detention in 2011 is underway, to the web, where the video for his heavy metal song Dumbass is circulating, to the Venice Biennale, where not one but three of his large-scale works are on display – perhaps the most exposure for any single artist at the international festival.

One of the works, Bang, a forest of hundreds of tangled wooden stools, is the most prominent piece in the German national pavilion. Then, in the Zuecca Project Space on the island of Giudecca, is his installation Straight: 150 tons of crushed rebar from schools flattened in the Sichuan earthquake of 2008, recovered by the artist and his team, who bought the crumpled steel rods as scrap before painstakingly straightening them and piling them up in a wave-like sculptural arrangement.

By far the most revealing about Ai’s own experience, though, is the third piece, SACRED. Situated in the church of Sant’Antonin, it consists of six large iron boxes, into which visitors can peek to see sculptures recreating scenes from the artist’s detention. Here is a miniature Ai being interrogated; here a miniature Ai showers or sits on the lavatory while two uniformed guards stand over him. Other scenes show him sleeping and eating – always in the same tiny space, always under double guard. (The music video refers to some of these scenes with a lightly satirical tone that is absent from the sculpture.)

According to Greg Hilty of London’s Lisson Gallery, under whose auspices SACRED is being shown, and who saw Ai in China a week ago, the work is a form of “therapy or exorcism – it was something he had to get out. It is an experience that we might see as newsworthy, but for him, he was the one in it.”

Read the entire article here.

Image: Waking nightmare … Ai Weiwei’s Entropy (Sleep), from SACRED (2013). Courtesy of David Levene / Guardian.

Dead Man Talking

Graham is a man very much alive. But his mind has convinced him that his brain is dead and that he killed it.

From the New Scientist:

Name: Graham
Condition: Cotard’s syndrome

“When I was in hospital I kept on telling them that the tablets weren’t going to do me any good ’cause my brain was dead. I lost my sense of smell and taste. I didn’t need to eat, or speak, or do anything. I ended up spending time in the graveyard because that was the closest I could get to death.”

Nine years ago, Graham woke up and discovered he was dead.

He was in the grip of Cotard’s syndrome. People with this rare condition believe that they, or parts of their body, no longer exist.

For Graham, it was his brain that was dead, and he believed that he had killed it. Suffering from severe depression, he had tried to commit suicide by taking an electrical appliance with him into the bath.

Eight months later, he told his doctor his brain had died or was, at best, missing. “It’s really hard to explain,” he says. “I just felt like my brain didn’t exist any more. I kept on telling the doctors that the tablets weren’t going to do me any good because I didn’t have a brain. I’d fried it in the bath.”

Doctors found trying to rationalise with Graham was impossible. Even as he sat there talking, breathing – living – he could not accept that his brain was alive. “I just got annoyed. I didn’t know how I could speak or do anything with no brain, but as far as I was concerned I hadn’t got one.”

Baffled, they eventually put him in touch with neurologists Adam Zeman at the University of Exeter, UK, and Steven Laureys at the University of Liège in Belgium.

“It’s the first and only time my secretary has said to me: ‘It’s really important for you to come and speak to this patient because he’s telling me he’s dead,'” says Laureys.

Limbo state

“He was a really unusual patient,” says Zeman. Graham’s belief “was a metaphor for how he felt about the world – his experiences no longer moved him. He felt he was in a limbo state caught between life and death”.

No one knows how common Cotard’s syndrome may be. A study published in 1995 of 349 elderly psychiatric patients in Hong Kong found two with symptoms resembling Cotard’s (General Hospital Psychiatry, DOI: 10.1016/0163-8343(94)00066-M). But with successful and quick treatments for mental states such as depression – the condition from which Cotard’s appears to arise most often – readily available, researchers suspect the syndrome is exceptionally rare today. Most academic work on the syndrome is limited to single case studies like Graham.

Some people with Cotard’s have reportedly died of starvation, believing they no longer needed to eat. Others have attempted to get rid of their body using acid, which they saw as the only way they could free themselves of being the “walking dead”.

Graham’s brother and carers made sure he ate, and looked after him. But it was a joyless existence. “I didn’t want to face people. There was no point,” he says, “I didn’t feel pleasure in anything. I used to idolise my car, but I didn’t go near it. All the things I was interested in went away.”

Even the cigarettes he used to relish no longer gave him a hit. “I lost my sense of smell and my sense of taste. There was no point in eating because I was dead. It was a waste of time speaking as I never had anything to say. I didn’t even really have any thoughts. Everything was meaningless.”

Low metabolism

A peek inside Graham’s brain provided Zeman and Laureys with some explanation. They used positron emission tomography to monitor metabolism across his brain. It was the first PET scan ever taken of a person with Cotard’s. What they found was shocking: metabolic activity across large areas of the frontal and parietal brain regions was so low that it resembled that of someone in a vegetative state.

Graham says he didn’t really have any thoughts about his future during that time. “I had no other option other than to accept the fact that I had no way to actually die. It was a nightmare.”

Graveyard haunt

This feeling prompted him on occasion to visit the local graveyard. “I just felt I might as well stay there. It was the closest I could get to death. The police would come and get me, though, and take me back home.”

There were some unexplained consequences of the disorder. Graham says he used to have “nice hairy legs”. But after he got Cotard’s, all the hairs fell out. “I looked like a plucked chicken! Saves shaving them I suppose…”

It’s nice to hear him joke. Over time, and with a lot of psychotherapy and drug treatment, Graham has gradually improved and is no longer in the grip of the disorder. He is now able to live independently. “His Cotard’s has ebbed away and his capacity to take pleasure in life has returned,” says Zeman.

“I couldn’t say I’m really back to normal, but I feel a lot better now and go out and do things around the house,” says Graham. “I don’t feel that brain-dead any more. Things just feel a bit bizarre sometimes.” And has the experience changed his feeling about death? “I’m not afraid of death,” he says. “But that’s not to do with what happened – we’re all going to die sometime. I’m just lucky to be alive now.”

Read the entire article here.

Image courtesy of Wikimedia / Public domain.

Big Data and Even Bigger Problems

First, a definition. Big data: typically a collection of large and complex datasets that are too cumbersome to process and analyze using traditional computational approaches and database applications. Usually the big data moniker is accompanied by an IT vendor’s pitch for a shiny new software (and possibly hardware) solution able to crunch through petabytes (one petabyte is a million gigabytes) of data and produce a visualizable result that mere mortals can decipher.

Many companies see big data and related solutions as a panacea for a range of business challenges: customer service, medical diagnostics, product development, shipping and logistics, climate change studies, genomic analysis and so on. A great example was the last U.S. election. Many political wonks — from both sides of the aisle — agreed that big data significantly aided President Obama in winning re-election. So, with that in mind, many are now looking at more important problems for big data to solve.

From Technology Review:

As chief scientist for President Obama’s reëlection effort, Rayid Ghani helped revolutionize the use of data in politics. During the final 18 months of the campaign, he joined a sprawling team of data and software experts who sifted, collated, and combined dozens of pieces of information on each registered U.S. voter to discover patterns that let them target fund-raising appeals and ads.

Now, with Obama again ensconced in the Oval Office, some veterans of the campaign’s data squad are applying lessons from the campaign to tackle social issues such as education and environmental stewardship. Edgeflip, a startup Ghani founded in January with two other campaign members, plans to turn the ad hoc data analysis tools developed for Obama for America into software that can make nonprofits more effective at raising money and recruiting volunteers.

Ghani isn’t the only one thinking along these lines. In Chicago, Ghani’s hometown and the site of Obama for America headquarters, some campaign members are helping the city make available records of utility usage and crime statistics so developers can build apps that attempt to improve life there. It’s all part of a bigger idea to engineer social systems by scanning the numerical exhaust from mundane activities for patterns that might bear on everything from traffic snarls to human trafficking. Among those pursuing such humanitarian goals are startups like DataKind as well as large companies like IBM, which is redrawing bus routes in Ivory Coast (see “African Bus Routes Redrawn Using Cell-Phone Data”), and Google, with its flu-tracking software (see “Sick Searchers Help Track Flu”).

Ghani, who is 35, has had a longstanding interest in social causes, like tutoring disadvantaged kids. But he developed his data-mining savvy during 10 years as director of analytics at Accenture, helping retail chains forecast sales, creating models of consumer behavior, and writing papers with titles like “Data Mining for Business Applications.”

Before joining the Obama campaign in July 2011, Ghani wasn’t even sure his expertise in machine learning and predicting online prices could have an impact on a social cause. But the campaign’s success in applying such methods on the fly to sway voters is now recognized as having been potentially decisive in the election’s outcome (see “A More Perfect Union”).

“I realized two things,” says Ghani. “It’s doable at the massive scale of the campaign, and that means it’s doable in the context of other problems.”

At Obama for America, Ghani helped build statistical models that assessed each voter along five axes: support for the president; susceptibility to being persuaded to support the president; willingness to donate money; willingness to volunteer; and likelihood of casting a vote. These models allowed the campaign to target door knocks, phone calls, TV spots, and online ads to where they were most likely to benefit Obama.

One of the most important ideas he developed, dubbed “targeted sharing,” now forms the basis of Edgeflip’s first product. It’s a Facebook app that prompts people to share information from a nonprofit, but only with those friends predicted to respond favorably. That’s a big change from the usual scattershot approach of posting pleas for money or help and hoping they’ll reach the right people.

Edgeflip’s app, like the one Ghani conceived for Obama, will ask people who share a post to provide access to their list of friends. This will pull in not only friends’ names but also personal details, like their age, that can feed models of who is most likely to help.

Say a hurricane strikes the southeastern United States and the Red Cross needs clean-up workers. The app would ask Facebook users to share the Red Cross message, but only with friends who live in the storm zone, are young and likely to do manual labor, and have previously shown interest in content shared by that user. But if the same person shared an appeal for donations instead, he or she would be prompted to pass it along to friends who are older, live farther away, and have donated money in the past.
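
As a thought experiment, here is a minimal sketch, in Python, of what that kind of targeted sharing might look like under the hood. It is not Edgeflip’s product or the campaign’s actual code: the friend attributes, weights and thresholds below are invented purely to illustrate the idea of scoring friends differently for different appeals.

from dataclasses import dataclass

@dataclass
class Friend:
    name: str
    age: int
    miles_from_storm: float
    engages_with_sharer: float  # 0..1: how often this friend responds to the sharer's posts
    has_donated_before: bool

def volunteer_score(f):
    # Hypothetical weights: favor young, nearby friends who already engage with the sharer.
    score = f.engages_with_sharer
    if f.miles_from_storm < 100:
        score += 0.5
    if f.age < 35:
        score += 0.3
    return score

def donation_score(f):
    # Hypothetical weights: favor older friends, farther from the storm, with a giving history.
    score = f.engages_with_sharer
    if f.miles_from_storm >= 100:
        score += 0.3
    if f.age >= 35:
        score += 0.3
    if f.has_donated_before:
        score += 0.5
    return score

def pick_targets(friends, scorer, top_n=5):
    # Prompt the user to share the appeal only with the friends predicted to respond best to it.
    return sorted(friends, key=scorer, reverse=True)[:top_n]

friends = [
    Friend("Ana", 27, 40.0, 0.8, False),
    Friend("Bert", 61, 900.0, 0.6, True),
    Friend("Cleo", 33, 15.0, 0.2, False),
]
print([f.name for f in pick_targets(friends, volunteer_score, top_n=2)])  # ['Ana', 'Cleo']
print([f.name for f in pick_targets(friends, donation_score, top_n=2)])   # ['Bert', 'Ana']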

Michael Slaby, a senior technology official for Obama who hired Ghani for the 2012 election season, sees great promise in the targeted sharing technique. “It’s one of the most compelling innovations to come out of the campaign,” says Slaby. “It has the potential to make online activism much more efficient and effective.”

For instance, Ghani has been working with Fidel Vargas, CEO of the Hispanic Scholarship Fund, to increase that organization’s analytical savvy. Vargas thinks social data could predict which scholarship recipients are most likely to contribute to the fund after they graduate. “Then you’d be able to give away scholarships to qualified students who would have a higher probability of giving back,” he says. “Everyone would be much better off.”

Ghani sees a far bigger role for technology in the social sphere. He imagines online petitions that act like open-source software, getting passed around and improved. Social programs, too, could get constantly tested and improved. “I can imagine policies being designed a lot more collaboratively,” he says. “I don’t know if the politicians are ready to deal with it.” He also thinks there’s a huge amount of untapped information out there about childhood obesity, gang membership, and infant mortality, all ready for big data’s touch.

Read the entire article here.

Infographic courtesy of visua.ly. See the original here.

Your Home As Eco-System

For centuries biologists, zoologists and ecologists have been mapping the wildlife that surrounds us in the great outdoors. Now a group led by microbiologist Noah Fierer at the University of Colorado Boulder is pursuing flora and fauna in one of the last unexplored eco-systems — the home. (Not for the faint of heart).

From the New York Times:

On a sunny Wednesday, with a faint haze hanging over the Rockies, Noah Fierer eyed the field site from the back of his colleague’s Ford Explorer. Two blocks east of a strip mall in Longmont, one of the world’s last underexplored ecosystems had come into view: a sandstone-colored ranch house, code-named Q. A pair of dogs barked in the backyard.

Dr. Fierer, 39, a microbiologist at the University of Colorado Boulder and self-described “natural historian of cooties,” walked across the front lawn and into the house, joining a team of researchers inside. One swabbed surfaces with sterile cotton swabs. Others logged the findings from two humming air samplers: clothing fibers, dog hair, skin flakes, particulate matter and microbial life.

Ecologists like Dr. Fierer have begun peering into an intimate, overlooked world that barely existed 100,000 years ago: the great indoors. They want to know what lives in our homes with us and how we “colonize” spaces with other species — viruses, bacteria, microbes. Homes, they’ve found, contain identifiable ecological signatures of their human inhabitants. Even dogs exert a significant influence on the tiny life-forms living on our pillows and television screens. Once ecologists have more thoroughly identified indoor species, they hope to come up with strategies to scientifically manage homes, by eliminating harmful taxa and fostering species beneficial to our health.

But the first step is simply to take a census of what’s already living with us, said Dr. Fierer; only then can scientists start making sense of their effects. “We need to know what’s out there first. If you don’t know that, you’re wandering blind in the wilderness.”

Here’s an undeniable fact: We are an indoor species. We spend close to 90 percent of our lives in drywalled caves. Yet traditionally, ecologists ventured outdoors to observe nature’s biodiversity, in the Amazon jungles, the hot springs of Yellowstone or the subglacial lakes of Antarctica. (“When you train as an ecologist, you imagine yourself tromping around in the forest,” Dr. Fierer said. “You don’t imagine yourself swabbing a toilet seat.”)

But as humdrum as a home might first appear, it is a veritable wonderland. Ecology does not stop at the front door; a home to you is also home to an incredible array of wildlife.

Besides the charismatic fauna commonly observed in North American homes — dogs, cats, the occasional freshwater fish — ants and roaches, crickets and carpet bugs, mites and millions upon millions of microbes, including hundreds of multicellular species and thousands of unicellular species, also thrive in them. The “built environment” doubles as a complex ecosystem that evolves under the selective pressure of its inhabitants, their behavior and the building materials. As microbial ecologists swab DNA from our homes, they’re creating an atlas of life much as 19th-century naturalists like Alfred Russel Wallace once logged flora and fauna on the Malay Archipelago.

Take an average kitchen. In a study published in February in the journal Environmental Microbiology, Dr. Fierer’s lab examined 82 surfaces in four Boulder kitchens. Predictable patterns emerged. Bacterial species associated with human skin, like Staphylococcaceae or Corynebacteriaceae, predominated. Evidence of soil showed up on the floor, and species associated with raw produce (Enterobacteriaceae, for example) appeared on countertops. Microbes common in moist areas — including sphingomonads, some strains infamous for their ability to survive in the most toxic sites — splashed in a kind of jungle above the faucet.

A hot spot of unrivaled biodiversity was discovered on the stove exhaust vent, probably the result of forced air and settling. The counter and refrigerator, places seemingly as disparate as temperate and alpine grasslands, shared a similar assemblage of microbial species — probably less because of temperature and more a consequence of cleaning. Dr. Fierer’s lab also found a few potential pathogens, like Campylobacter, lurking on the cupboards. There was evidence of the bacterium on a microwave panel, too, presumably a microbial “fingerprint” left by a cook handling raw chicken.

If a kitchen represents a temperate forest, few of its plants would be poison ivy. Most of the inhabitants are relatively benign. In any event, eradicating them is neither possible nor desirable. Dr. Fierer wants to make visible this intrinsic, if unseen, aspect of everyday life. “For a lot of the general public, they don’t care what’s in soil,” he said. “People care more about what’s on their pillowcase.” (Spoiler alert: The microbes living on your pillowcase are not all that different from those living on your toilet seat. Both surfaces come in regular contact with exposed skin.)

Read the entire article after the jump.

Image: Animals commonly found in the home. Courtesy of North Carolina State University.

You Can Check Out Anytime You Like…

“… But You Can Never Leave”. So goes one of the most memorable lyrical phrases from the Eagles (Hotel California).

Of late, it seems that this state of affairs also applies to a vast collection of people on Facebook; many wish to leave but lack the social capital or wisdom or backbone to do so.

From the Washington Post:

Bad news, everyone. We’re trapped. We may well be stuck here for the rest of our lives. I hope you brought canned goods.

A dreary line of tagged pictures and status updates stretches before us from here to the tomb.

Like life, Facebook seems to get less exciting the longer we spend there. And now everyone hates Facebook, officially.

Last week, Pew reported that 94 percent of teenagers are on Facebook, but that they are miserable about it. Then again, when are teenagers anything else? Pew’s focus groups of teens complained about the drama, said Twitter felt more natural, said that it seemed like a lot of effort to keep up with everyone you’d ever met, found the cliques and competition for friends offputting –

All right, teenagers. You have a point. And it doesn’t get better.

The trouble with Facebook is that 94 percent of people are there. Anything with 94 Percent of People involved ceases to have a personality and becomes a kind of public utility. There’s no broad generalization you can make about people who use flush toilets. Sure, toilets are a little odd, and they become quickly ridiculous when you stare at them long enough, the way a word used too often falls apart into meaningless letters under scrutiny, but we don’t think of them as peculiar. Everyone’s got one. The only thing weirder than having one of those funny porcelain thrones in your home would be not having one.

Facebook is like that, and not just because we deposit the same sort of thing in both. It used to define a particular crowd. But it’s no longer the bastion of college students and high schoolers avoiding parental scrutiny. Mom’s there. Heck, Velveeta Cheesy Skillets are there.

It’s just another space in which all the daily drama of actual life plays out. All the interactions that used only to be annoying to the people in the room with you at the time are now played out indelibly in text and pictures that can be seen from great distances by anyone who wants to take an afternoon and stalk you. Oscar Wilde complained about married couples who flirted with each other, saying that it was like washing clean linen in public. Well, just look at the wall exchanges of You Know The Couple I Mean. “Nothing is more irritating than not being invited to a party you wouldn’t be seen dead at,” Bill Vaughan said. On Facebook, that’s magnified to parties in entirely different states.

Facebook has been doing its best to approximate our actual social experience — that creepy foray into chairs aside. But what it forgot was that our actual social experience leaves much to be desired. After spending time with Other People smiling politely at news of what their sonograms are doing, we often want to rush from the room screaming wordlessly and bang our heads into something.

Hell is other people, updating their statuses with news that Yay The Strange Growth Checked Out Just Fine.

This is the point where someone says, “Well, if it’s that annoying, why don’t you unsubscribe?”

But you can’t.

Read the entire article here.

Image: Facebook logo courtesy of Mirror / Facebook.

Frankenlanguage

An interesting story on the adoption of pop culture words into our common lexicon. Beware! The next blockbuster sci-fi movie that you see may influence your next choice of noun.

From the Guardian:

Water cooler conversation at a dictionary company tends towards the odd. A while ago I was chatting with one of my colleagues about our respective defining batches. “I’m not sure,” he said, “what to do about the plural of ‘hobbit’. There are some citations for ‘hobbitses’, but I think they may be facetious uses. Have any thoughts?”

I did: “We enter ‘hobbit’ into the dictionary?” You learn something new every day.

Pop culture is a goldmine of neologisms, and science fiction and fantasy is one rich seam that has been contributing to English for hundreds of years. Yes, hundreds: because what is Gulliver’s Travels but a fantasy satire of 18th-century travel novels? And what is Frankenstein but science fiction? The name of Mary Shelley’s monster lives on both as its own word and as a combining form used in words like “frankenfood”. And Swift’s fantasy novel was so evocative, we adopted a number of words from it, such as “Lilliputian”, the tongue-twisting “Brobdingnagian”, and – surprise – “yahoo”.

Don’t be surprised. Many words have their origins in science fiction and fantasy writing, but have been so far removed from their original contexts that we’ve forgotten. George Orwell gave us “doublespeak”; Carl Sagan is responsible for the term “nuclear winter”; and Isaac Asimov coined “microcomputer” and “robotics”. And, yes, “blaster”, as in “Hokey religions and ancient weapons are no match for a good blaster at your side, kid.”

Which brings us to the familiar and more modern era of sci-fi and fantasy, ones filled with tricorders, lightsabers, dark lords in fiery mountain fortresses, and space cowboys. Indeed, we have whole cable channels devoted to sci-fi and fantasy shows, and the big blockbuster movie this season is Star Trek (again). So why haven’t we seen “tricorder” and “lightsaber” entered into the dictionary? When will the dictionary give “Quidditch” its due? Whither “gorram”?

All fields have their own vocabulary and, as often happens, that vocabulary is often isolated to that field. When an ad executive talks about a “deck”, they are not referring to the same “deck” that poker players use, or the same “deck” that sailors work on. When specialized vocabulary does appear outside of its particular field and in more general literature, it’s often long after its initial point of origin. This process is no different with words from science fiction and fantasy. “Tricorder”, for instance, is used in print, but most often only to refer to the medical diagnostic device used in the Star Trek movies. It’s not quite generic enough to merit entry as a general vocabulary word.

In some cases, the people who gave us the word aren’t keen to see it taken outside of its intended world and used with an extended meaning. Consequently, some coinages don’t get into print as often as you’d think: “Jedi mind trick” only appears four times in the Corpus of Contemporary American English. That corpus contains over 450 million indexed words.

Savvy writers of each genre also liked to resurrect and breathe new life into old words. JRR Tolkien not only gave us “hobbit”, he also popularized the plural “dwarves”, which has appeared in English with increasing frequency since the publication of The Hobbit in 1937. “Eldritch”, which dates to the 1500s, is linked in the modern mind almost exclusively to the stories of HP Lovecraft. The verb “terraform” that was most recently popularized by Joss Whedon’s show Firefly dates back to the 1940s, though it was uncommon until Firefly aired. Prior to 1977, storm troopers were Nazis.

Even new words can look old: JK Rowling’s “muggle” is a coinage of her own devising – but there are earlier, rarer “muggles” entered into the Oxford English Dictionary (one meaning “a tail resembling that of a fish”, and another meaning “a young woman or sweetheart”), along with a “dumbledore” (“a bumble-bee”) and a “hagrid” (a variant of “hag-ridden” meaning “afflicted by nightmares”).

More interesting to the lexicographer is that, in spite of the devoted following that sci-fi and fantasy each have – of the top 10 highest-grossing film franchises in history, at least five of them are science fiction or fantasy – we haven’t adopted more sci-fi and fantasy words into general use. Perhaps, in the case of sci-fi, we just need to wait for technology to improve to the point that we can talk with our co-workers about jumping into hyperspace or hanging out on the holodeck.

Read the entire article here.

Charting the Rise (and Fall) of Humanity

Rob Wile over at Business Insider has posted a selection of graphs that in his words “will restore your faith in humanity”. This should put many cynics on the defensive — after all, his charts clearly show that conflict is on the decline and democracy is on the rise. But look more closely and you’ll see that slavery is still with us, poverty and social injustice abound, the wealthy are wealthier, and conspicuous consumption is rising.

From Business Insider:

Lately, it feels like the news has been dominated by tragedies: natural disasters, evil people, and sometimes just carelessness.

But it would be a mistake to become cynical.

We’ve put together 31 charts that we think will help restore your faith in humanity.

Two of the chart titles give a flavor: “Democracy’s in. Autocracy’s out.” and “Slavery is disappearing.”

Read the entire article here.

Revisiting Drake

In 1960 radio astronomer Frank Drake began the first systematic search for intelligent signals emanating from space. He was not successful, but his pioneering efforts paved the way for numerous other programs, including SETI (the Search for Extra-Terrestrial Intelligence). The Drake Equation is named for him and, put simply, gives an estimate of the number of active extraterrestrial civilizations in our galaxy whose communications we might be able to detect. Drake postulated the equation as a way to get the scientific community engaged in the search for life beyond our home planet. The equation, its terms, and a rough worked example follow below.

The Drake equation is:

N = R* × fp × ne × fl × fi × fc × L

where:

N = the number of civilizations in our galaxy with which communication might be possible (i.e. which are on our current past light cone); and

R* = the average rate of star formation per year in our galaxy

fp = the fraction of those stars that have planets

ne = the average number of planets that can potentially support life per star that has planets

fl = the fraction of planets that could support life that actually develop life at some point

fi = the fraction of planets with life that actually go on to develop intelligent life (civilizations)

fc = the fraction of civilizations that develop a technology that releases detectable signs of their existence into space

L = the length of time for which such civilizations release detectable signals into space
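
For readers who like to plug in numbers, here is a rough worked example in Python. The factor values are illustrative guesses only (they are not estimates endorsed by Drake or by anyone quoted below), so treat the result as a toy calculation rather than a prediction.

def drake(R_star, f_p, n_e, f_l, f_i, f_c, L):
    # N = R* x fp x ne x fl x fi x fc x L, exactly as defined above.
    return R_star * f_p * n_e * f_l * f_i * f_c * L

N = drake(
    R_star=7.0,   # new stars formed per year in the Milky Way (illustrative)
    f_p=0.5,      # fraction of stars with planets (illustrative)
    n_e=2.0,      # potentially habitable planets per planet-bearing star (illustrative)
    f_l=0.1,      # fraction of habitable planets that develop life (guess)
    f_i=0.01,     # fraction of those that develop intelligent life (guess)
    f_c=0.1,      # fraction that release detectable signals (guess)
    L=10_000.0,   # years such civilizations keep signalling (guess)
)
print(round(N, 1))  # 7.0 with these guesses

With these particular numbers the product works out to about seven detectable civilizations; nudge any single factor and the answer swings by orders of magnitude, which is rather the point of the exercise.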

Now, based on recent discoveries of hundreds of extra-solar planets, or exoplanets (those beyond our solar system), by the Kepler space telescope and by Earth-bound observatories, researchers are fine-tuning the original Drake Equation for the 21st century.

From the New Scientist:

An iconic tool in the search for extraterrestrial life is getting a 21st-century reboot – just as our best planet-hunting telescope seems to have died. Though the loss of NASA’s Kepler telescope is a blow, the reboot could mean we find signs of life on extrasolar planets within a decade.

The new tool takes the form of an equation. In 1961 astronomer Frank Drake scribbled his now-famous equation for calculating the number of detectable civilisations in the Milky Way. The Drake equation includes a number of terms that at the time seemed unknowable – including the very existence of planets beyond our solar system.

But the past two decades have seen exoplanets pop up like weeds, particularly in the last few years thanks in large part to the Kepler space telescope. Launched in 2009, Kepler has found more than 130 worlds and detected 3000 or so more possibles. The bounty has given astronomers the first proper census of planets in one region of our galaxy, allowing us to make estimates of the total population of life-friendly worlds across the Milky Way.

With that kind of data in hand, Sara Seager at the Massachusetts Institute of Technology reckons the Drake equation is ripe for a revamp. Her version narrows a few of the original terms to account for our new best bets of finding life, based in part on what Kepler has revealed. If the original Drake equation was a hatchet, the new Seager equation is a scalpel.

Seager presented her work this week at a conference in Cambridge, Massachusetts, entitled “Exoplanets in the Post-Kepler Era”. The timing could not be more prescient. Last week Kepler suffered a surprise hardware failure that knocked out its ability to see planetary signals clearly. If it can’t be fixed, the mission is over.

“When we talked about the post-Kepler era, we thought that would be three to four years from now,” co-organiser David Charbonneau of the Harvard-Smithsonian Center for Astrophysics said last week. “We now know the post-Kepler era probably started two days ago.”

But Kepler has collected data for four years, slightly longer than the mission’s original goal, and so far only the first 18 months’ worth have been analysed. That means it may have already gathered enough information to give alien-hunters a fighting chance.

The original Drake equation includes seven terms, which multiplied together give the number of intelligent alien civilisations we could hope to detect (see diagram). Kepler was supposed to pin down two terms: the fraction of stars that have planets, and the number of those planets that are habitable.

To do that, Kepler had been staring unflinchingly at some 150,000 stars near the constellation Cygnus, looking for periodic changes in brightness caused by a planet crossing, or transiting, a star’s face as seen from Earth. This method tells us a planet’s size and its rough distance from its host star.
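
As an aside, the arithmetic behind those two take-aways is simple enough to sketch. The Python below is textbook transit geometry plus Kepler's third law, not the mission's actual processing pipeline, and it assumes a Sun-like host star and a circular orbit.

import math

SUN_RADIUS_KM = 696_000.0
EARTH_RADIUS_KM = 6_371.0

def planet_radius_km(transit_depth, star_radius_km=SUN_RADIUS_KM):
    # The fractional dip in starlight is roughly (Rp / R*)^2, so invert it for the planet's radius.
    return star_radius_km * math.sqrt(transit_depth)

def orbital_distance_au(period_years, star_mass_solar=1.0):
    # Kepler's third law: a^3 = M * P^2, with a in AU, P in years, M in solar masses.
    return (star_mass_solar * period_years ** 2) ** (1.0 / 3.0)

# An Earth-like signal: a dip of about 0.0084 percent of the star's light, once per year.
depth = (EARTH_RADIUS_KM / SUN_RADIUS_KM) ** 2
print(planet_radius_km(depth) / EARTH_RADIUS_KM)  # ~1.0 Earth radii
print(orbital_distance_au(1.0))                   # ~1.0 AU, i.e. an Earth-like orbit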

Size gives a clue to a planet’s composition, which tells us whether it is rocky like Earth or gassy like Neptune. Before Kepler, only a few exoplanets had been identified as small enough to be rocky, because other search methods were better suited to spotting larger, gas giant worlds.

“Kepler is the single most revolutionary project that has ever been undertaken in exoplanets,” says Charbonneau. “It broke open the piggybank and rocky planets poured out.” A planet’s distance from its star is also crucial, because that tells us whether the temperature is right for liquid water – and so perhaps life – to exist.

But with Kepler’s recent woes, hopes of finding enough potentially habitable planets, or Earth twins, to satisfy the Drake equation have dimmed. The mission was supposed to run for three-and-a-half years, which should have been enough to pinpoint Earth-sized planets with years of a similar length. After the telescope came online, the mission team realised that other sun-like stars are more active than ours, and they bounce around too much in the telescope’s field of view. To find enough Earths, they would need seven or eight years of data.

Read the entire article here.

Image courtesy of the BBC. Drake Equation courtesy of Wikipedia.

Violence to the English Language

If you are an English speaker over the age of 39, you may be pondering the fate of the English language. As the younger generations fill cyberspace with terabytes of misspelled texts and tweets, do you not wonder whether gorgeous, grammatical language will survive? Are the technophobes and anti-Twitterites doomed to a future world of #hashtag-driven conversation and ADHD-like literature? Those of us who care are reminded of George Orwell’s 1946 essay “Politics and the English Language”, in which he decried the swelling ugliness of the language of his time.

Orwell opens his essay thus,

Most people who bother with the matter at all would admit that the English language is in a bad way, but it is generally assumed that we cannot by conscious action do anything about it. Our civilization is decadent and our language — so the argument runs — must inevitably share in the general collapse. It follows that any struggle against the abuse of language is a sentimental archaism, like preferring candles to electric light or hansom cabs to aeroplanes. Underneath this lies the half-conscious belief that language is a natural growth and not an instrument which we shape for our own purposes.

My, how Orwell would squirm in his Oxfordshire grave were he to be exposed to his mother tongue, as tweeted, in 2013.

From the Guardian:

Some while ago, with reference to Orwell’s essay on “Politics and the English language”, I addressed the language of the internet, an issue that stubbornly refuses to go away. Perhaps now, more than ever, we need to consider afresh what’s happening to English prose in cyberspace.

To paraphrase Orwell, the English of the world wide web – loose, informal, and distressingly dyspeptic – is not really the kind people want to read in a book, a magazine, or even a newspaper. But there’s an assumption that, because it’s part of the all-conquering internet, we cannot do a thing about it. Twenty-first century civilisation has been transformed in a way without precedent since the invention of moveable type. English prose, so one argument runs, must adapt to the new lexicon with all its grammatical violations and banality. Language is normative; it has – some will say – no choice. The violence the internet does to the English language is simply the cost of doing business in the digital age.

From this, any struggle against the abuse and impoverishment of English online (notably, in blogs and emails) becomes what Orwell called “a sentimental archaism”. Behind this belief lies the recognition that language is a natural growth and not an instrument we can police for better self-expression. To argue differently is to line up behind Jonathan Swift and the prescriptivists (see Swift’s essay “A Proposal for Correcting, Improving and Ascertaining the English Tongue”).

If you refer to “Politics and the English Language” (a famous essay actually commissioned for in-house consumption by Orwell’s boss, the Observer editor David Astor) you will find that I have basically adapted his more general concerns about language to the machinations of cyberspace and the ebb and flow of language on the internet.

And why not? First, he puts it very well. Second, among Orwell’s heirs (the writers, bloggers and journalists of today), there’s still a subconscious, half-admitted anxiety about what’s happening to English prose in the unpoliced cyber-wilderness. This, too, is a recurrent theme with deep roots. As long ago as 1946, Orwell said that English was “in a bad way”. Look it up: the examples he cited are at once amusingly archaic and appropriately gruesome.

Sixty-something years on, in 2013, quite a lot of people would probably concede a similar anxiety: or at least some mild dismay at the overall crassness of English prose in the age of global communications.

Read the entire article here.

Image: Politics and the English language, book cover. Courtesy of George Orwell estate / Apple.

From RNA Chemistry to Cell Biology

Each day we inch towards a better scientific understanding of how life is thought to have begun on our planet. Over the last decade researchers have shown how molecules like the nucleotides that make up complex chains of RNA (ribonucleic acid) and DNA (deoxyribonucleic acid) may have formed in the primaeval chemical soup of the early Earth. But it’s altogether a much greater leap to get from RNA (or DNA) to even a simple biological cell. Some recent work sheds more light and suggests that the chemical-to-biological chasm between long strands of RNA and a complex cell may not be as hard to cross as once thought.

From ars technica:

Origin of life researchers have made impressive progress in recent years, showing that simple chemicals can combine to make nucleotides, the building blocks of DNA and RNA. Given the right conditions, these nucleotides can combine into ever-longer stretches of RNA. A lot of work has demonstrated that RNAs can perform all sorts of interesting chemistry, specifically binding other molecules and catalyzing reactions.

So the case for life getting its start in an RNA world has gotten very strong in the past decade, but the difference between a collection of interesting RNAs and anything like a primitive cell—surrounded by membranes, filled with both RNA and proteins, and running a simple metabolism—remains a very wide chasm. Or so it seems. A set of papers that came out in the past several days suggest that the chasm might not be as large as we’d tend to think.

Ironing out metabolism

A lot of the basic chemistry that drives the cell is based on electron transport, typically involving proteins that contain an iron atom. These reactions not only create some of the basic chemicals that are necessary for life, they’re also essential to powering the cell. Both photosynthesis and the breakdown of sugars involve the transfer of electrons to and from proteins that contain an iron atom.

DNA and RNA tend to have nothing to do with iron, interacting with magnesium instead. But some researchers at Georgia Tech have considered that fact a historical accident. Since photosynthesis put so much oxygen into the atmosphere, most of the iron has been oxidized into a state where it’s not soluble in water. If you go back to before photosynthesis was around, the oceans were filled with dissolved iron. Previously, the group had shown that, in oxygen-free and iron-rich conditions, RNAs would happily work with iron instead and that its presence could speed up their catalytic activity.

Now the group is back with a new paper showing that if you put a bunch of random RNAs into the same conditions, some of them can catalyze electron transfer reactions. By “random,” I mean RNAs that are currently used by cells to do completely unrelated things (specifically, ribosomal and transfer RNAs). The reactions they catalyze are very simple, but remember: these RNAs don’t normally function as a catalyst at all. It wouldn’t surprise me if, after a number of rounds of evolutionary selection, an iron-RNA combination could be found that catalyzes a reaction that’s a lot closer to modern metabolism.

All of which suggests that the basics of a metabolism could have gotten started without proteins around.

Proteins build membranes

Clearly, proteins showed up at some point. They certainly didn’t look much like the proteins we see today, which may have hundreds or thousands of amino acids linked together. In fact, they may not have looked much like proteins at all, if a paper from Jack Szostak’s group is any indication. Szostak’s group found that just two amino acids linked together may have catalytic activity. Some of that activity can help them engage in competition over another key element of the first cells: membrane material.

The work starts with a two amino acid long chemical called a peptide. If that peptide happens to be serine linked to histidine (two amino acids in use by life today), it has an interesting chemical activity: very slowly and poorly, it links other amino acids together to form more peptides. This weak activity is especially true if the amino acids are phenylalanine and leucine, two water-hating chemicals. Once they’re linked, they will precipitate out of a water solution.

The authors added a fatty acid membrane, figuring that it would soak up the reaction product. That definitely worked, with the catalytic efficiency of serine-histidine going up as a result. But something else happened as well: membranes that incorporated the reaction product started growing. It turns out that its presence in the membrane made it an efficient scrounger of other membrane material. As they grew, these membranes extended as long filaments that would break up into smaller parts with a gentle agitation and then start growing all over again.

In fact, the authors could set up a bit of a Darwinian competition between membranes based on how much starting catalyst each had. All of which suggests that proteins might have found their way into the cell as very simple chemicals that, at least initially, weren’t in any way connected to genetic and biochemical functions performed by RNA. But any cell-like things that evolved an RNA that made short proteins could have a big advantage over its competition.

Read the entire article here.

Documentary Filmmaker or Smartphone Voyeur?

Yesterday’s murderous atrocity on a busy street in Woolwich, South East London has shocked many proud and stoic Londoners to the core. For two reasons. First, that a heinous act such as this can continue to be wrought by one human against another in honor of misguided and barbaric politics and under the guise of distorted religious fanaticism. Second, that many witnesses at close range recorded the unfolding scene on their smartphones for later dissemination via social media, but did nothing to prevent the ensuing carnage or to aid the victim and those few who did run to help.

Our thoughts go to the family and friends of the victim. Words cannot express the sadness.

To the perpetrators: you and your ideas will be consigned to the trash heap of history. To the voyeurs: you are complicit through your inaction; it would have been wiser to have used your smartphones as projectiles or to call the authorities, rather than to watch and record and tweet the bloodshed. You should be troubled and ashamed.

Your State Bird

The official national bird of the United States is the Bald Eagle. For that matter, it’s also the official animal. Thankfully it was removed from the endangered species list a mere 5 years ago. Aside from the bird itself, Americans love the symbolism that the eagle embodies — strength, speed, leadership and achievement. But do Americans know their state bird? A recent article from the bird-lovers over at Slate will refresh your memory, and also recommend a more relevant alternative.

From Slate:

I drove over a bridge from Maryland into Virginia today and on the big “Welcome to Virginia” sign was an image of the state bird, the northern cardinal—with a yellow bill. I should have scoffed, but it hardly registered. Everyone knows that state birds are a big joke. There are a million cardinals, a scattering of robins, and just a general lack of thought put into the whole thing.

States should have to put more thought into their state bird than I put into picking my socks in the morning. “Ugh, state bird? I dunno, what’re the guys next to us doing? Cardinal? OK, let’s do that too. Yeah put it on all the signs. Nah, no time to research the bill color, let’s just go.” It’s the official state bird! Well, since all these jackanape states are too busy passing laws requiring everyone to own guns or whatever to consider what their state bird should be, I guess I’ll have to do it.

1. Alabama. Official state bird: yellowhammer

Right out of the gate with this thing. Yellowhammer? C’mon. I Asked Jeeves and it told me that Yellowhammer is some backwoods name for a yellow-shafted flicker. The origin story dates to the Civil War, when some Alabama troops wore yellow-trimmed uniforms. Sorry, but that’s dumb, mostly because it’s just a coincidence and has nothing to do with the actual bird. If you want a woodpecker, go for something with a little more cachet, something that’s at least a full species.

What it should be: red-cockaded woodpecker

2. Alaska. Official state bird: willow ptarmigan

Willow Ptarmigans are the dumbest-sounding birds on Earth, sorry. They sound like rejected Star Wars aliens, angrily standing outside the Mos Eisley Cantina because their IDs were rejected. Why go with these dopes, Alaska, when you’re the best state to see the most awesome falcon on Earth?

What it should be: gyrfalcon

3. Arizona. Official state bird: cactus wren

Cactus Wren is like the only boring bird in the entire state. I can’t believe it.

What it should be: red-faced warbler

4. Arkansas. Official state bird: northern mockingbird

Christ. What makes this even less funny is that there are like eight other states with mockingbird as their official bird. I’m convinced that the guy whose job it was to report to the state’s legislature on what the official bird should be forgot until the day it was due and he was in line for a breakfast sandwich at Burger King. In a panic he walked outside and selected the first bird he could find, a dirty mockingbird singing its stupid head off on top of a dumpster.

What it should be: painted bunting

5. California. Official state bird: California quail

… Or perhaps the largest, most radical bird on the continent?

What it should be: California condor

6. Colorado. Official state bird: lark bunting

I’m actually OK with this. A nice choice. But why not go with one of the birds that are (or are pretty much) endemic in your state?

What it should be: brown-capped rosy-finch or Gunnison sage-grouse

Read the entire article here.

Image: Bald Eagle, Kodiak Alaska, 2010. Courtesy of Yathin S Krishnappa / Wikipedia.

Friendships of Utility

The average Facebook user is said to have 142 “friends”, and many active members have over 500. This certainly seems to be a textbook case of quantity over quality in the increasingly competitive status wars and popularity stakes of online neo- or pseudo-celebrity. That said, and regardless of your relationship with online social media, the one good thing to come from the likes — a small pun intended — of Facebook is that social scientists can now dissect and analyze your online behaviors and relationships as never before.

So, while Facebook and its peers may not represent a qualitative leap in human relationships, the data and experiences that come from them may help future generations figure out what is truly important.

From the Wall Street Journal:

Facebook has made an indelible mark on my generation’s concept of friendship. The average Facebook user has 142 friends (many people I know have upward of 500). Without Facebook many of us “Millennials” wouldn’t know what our friends are up to or what their babies or boyfriends look like. We wouldn’t even remember their birthdays. Is this progress?

Aristotle wrote that friendship involves a degree of love. If we were to ask ourselves whether all of our Facebook friends were those we loved, we’d certainly answer that they’re not. These days, we devote equal if not more time to tracking the people we have had very limited human interaction with than to those whom we truly love. Aristotle would call the former “friendships of utility,” which, he wrote, are “for the commercially minded.”

I’d venture to guess that at least 90% of Facebook friendships are those of utility. Knowing this instinctively, we increasingly use Facebook as a vehicle for self-promotion rather than as a means to stay connected to those whom we love. Instead of sharing our lives, we compare and contrast them, based on carefully calculated posts, always striving to put our best face forward.

Friendship also, as Aristotle described it, can be based on pleasure. All of the comments, well-wishes and “likes” we can get from our numerous Facebook friends may give us pleasure. But something feels false about this. Aristotle wrote: “Those who love for the sake of pleasure do so for the sake of what is pleasant to themselves, and not insofar as the other is the person loved.” Few of us expect the dozens of Facebook friends who wish us a happy birthday ever to share a birthday celebration with us, let alone care for us when we’re sick or in need.

One thing’s for sure, my generation’s friendships are less personal than those of my parents’ or grandparents’ generations. Since we can rely on Facebook to manage our friendships, it’s easy to neglect more human forms of communication. Why visit a person, write a letter, deliver a card, or even pick up the phone when we can simply click a “like” button?

The ultimate form of friendship is described by Aristotle as “virtuous”—meaning the kind that involves a concern for our friend’s sake and not for our own. “Perfect friendship is the friendship of men who are good, and alike in virtue . . . . But it is natural that such friendships should be infrequent; for such men are rare.”

Those who came before the Millennial generation still say as much. My father and grandfather always told me that the number of such “true” friends can be counted on one hand over the course of a lifetime. Has Facebook increased our capacity for true friendship? I suspect Aristotle would say no.

Ms. Kelly joined Facebook in 2004 and quit in 2013.

Read the entire article here.

MondayMap: Global Intolerance

Following on from last week’s MondayMap post on intolerance and hatred within the United States — according to tweets on the social media site Twitter — we expand our view this week to cover the globe. This map is based on a more detailed, global research study of people’s attitudes to having neighbors of a different race.

From the Washington Post:

When two Swedish economists set out to examine whether economic freedom made people any more or less racist, they knew how they would gauge economic freedom, but they needed to find a way to measure a country’s level of racial tolerance. So they turned to something called the World Values Survey, which has been measuring global attitudes and opinions for decades.

Among the dozens of questions that World Values asks, the Swedish economists found one that, they believe, could be a pretty good indicator of tolerance for other races. The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society. (The study concluded that economic freedom had no correlation with racial tolerance, but it does appear to correlate with tolerance toward homosexuals.)

Unfortunately, the Swedish economists did not include all of the World Values Survey data in their final research paper. So I went back to the source, compiled the original data and mapped it out on the infographic above. In the bluer countries, fewer people said they would not want neighbors of a different race; in red countries, more people did.

If we treat this data as indicative of racial tolerance, then we might conclude that people in the bluer countries are the least likely to express racist attitudes, while the people in red countries are the most likely.

Update: Compare the results to this map of the world’s most and least diverse countries.

Before we dive into the data, a couple of caveats. First, it’s entirely likely that some people lied when answering this question; it would be surprising if they hadn’t. But the operative question, unanswerable, is whether people in certain countries were more or less likely to answer the question honestly. For example, while the data suggest that Swedes are more racially tolerant than Finns, it’s possible that the two groups are equally tolerant but that Finns are just more honest. The willingness to state such a preference out loud, though, might be an indicator of racial attitudes in itself. Second, the survey is not conducted every year; some of the results are very recent and some are several years old, so we’re assuming the results are static, which might not be the case.

• Anglo and Latin countries most tolerant. People in the survey were most likely to embrace a racially diverse neighbor in the United Kingdom and its Anglo former colonies (the United States, Canada, Australia and New Zealand) and in Latin America. The only real exceptions were oil-rich Venezuela, where income inequality sometimes breaks along racial lines, and the Dominican Republic, perhaps because of its adjacency to troubled Haiti. Scandinavian countries also scored high.

• India, Jordan, Bangladesh and Hong Kong by far the least tolerant. In only three of 81 surveyed countries, more than 40 percent of respondents said they would not want a neighbor of a different race. This included 43.5 percent of Indians, 51.4 percent of Jordanians and an astonishingly high 71.8 percent of Hong Kongers and 71.7 percent of Bangladeshis.

Read more about this map here.

Pain Ray

We humans are capable of the most sublime creations, from soaring literary inventions to intensely moving music and gorgeous works of visual art. This stands in stark and paradoxical contrast to our range of inventions that enable efficient mass destruction, torture and death. The latest in this sad catalog of human tools of terror is the “pain ray”, otherwise known by its military euphemism as an Active Denial weapon. The good news is that it only delivers intense pain, rather than death. How inventive we humans really are — we should be so proud.

[tube]J1w4g2vr7B4[/tube]

From the New Scientist:

THE pain, when it comes, is unbearable. At first it’s comparable to a hairdryer blast on the skin. But within a couple of seconds, most of the body surface feels roasted to an excruciating degree. Nobody has ever resisted it: the deep-rooted instinct to writhe and escape is too strong.

The source of this pain is an entirely new type of weapon, originally developed in secret by the US military – and now ready for use. It is a genuine pain ray, designed to subdue people in war zones, prisons and riots. Its name is Active Denial. In the last decade, no other non-lethal weapon has had as much research and testing, and some $120 million has already been spent on development in the US.

Many want to shelve this pain ray before it is fired for real but the argument is far from cut and dried. Active Denial’s supporters claim that its introduction will save lives: the chances of serious injury are tiny, they claim, and it causes less harm than tasers, rubber bullets or batons. It is a persuasive argument. Until, that is, you bring the dark side of human nature into the equation.

The idea for Active Denial can be traced back to research on the effects of radar on biological tissue. Since the 1940s, researchers have known that the microwave radiation produced by radar devices at certain frequencies could heat the skin of bystanders. But attempts to use such microwave energy as a non-lethal weapon only began in the late 1980s, in secret, at the Air Force Research Laboratory (AFRL) at Kirtland Air Force Base in Albuquerque, New Mexico.

The first question facing the AFRL researchers was whether microwaves could trigger pain without causing skin damage. Radiation equivalent to that used in oven microwaves, for example, was out of the question since it penetrates deep into objects, and causes cells to break down within seconds.

The AFRL team found that the key was to use millimetre waves, very-short-wavelength microwaves, with a frequency of about 95 gigahertz. By conducting tests on human volunteers, they discovered that these waves would penetrate only the outer 0.4 millimetres of skin, because they are absorbed by water in surface tissue. So long as the beam power was capped – keeping the energy per square centimetre of skin below a certain level – the tissue temperature would not exceed 55 °C, which is just below the threshold for damaging cells (Bioelectromagnetics, vol 18, p 403).
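
As a quick check on the numbers quoted above, the wavelength corresponding to a 95 gigahertz beam follows directly from the speed of light, which is why these are called millimetre waves. A minimal sketch:

```python
# Wavelength = speed of light / frequency
c = 3.0e8   # speed of light, metres per second (approximate)
f = 95e9    # beam frequency in hertz (95 GHz, the figure quoted in the article)

wavelength_mm = (c / f) * 1000  # convert metres to millimetres
print(f"Wavelength: {wavelength_mm:.1f} mm")  # about 3.2 mm -- millimetre waves indeed
```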

The sensation, however, was extremely painful, because the outer skin holds pain receptors called thermal nociceptors. These respond rapidly to threats and trigger reflexive “repel” reactions when stimulated (see diagram).

To build a weapon, the next step was to produce a high-power beam capable of reaching hundreds of metres. At the time, it was possible to beam longer-wavelength microwaves over great distances – as with radar systems – but it was not feasible to use the same underlying technology to produce millimetre waves.

Working with the AFRL, the military contractor Raytheon Company, based in Waltham, Massachusetts, built a prototype with a key bit of hardware: a gyrotron, a device for amplifying millimetre microwaves. Gyrotrons generate a rotating ring of electrons, held in a magnetic field by powerful cryogenically cooled superconducting magnets. The frequency at which these electrons rotate matches the frequency of millimetre microwaves, causing a resonating effect. The souped-up millimetre waves then pass to an antenna, which fires the beam.

The first working prototype of the Active Denial weapon, dubbed “System 0”, was completed in 2000. At 7.5 tonnes, it was too big to be easily transported. A few years later, it was followed by mobile versions that could be carried on heavy vehicles.

Today’s Active Denial device, designed for military use, looks similar to a large, flat satellite dish mounted on a truck. The microwave beam it produces has a diameter of about 2 metres and can reach targets several hundred metres away. It fires in bursts of about 3 to 5 seconds.

Those who have been at the wrong end of the beam report that the pain is impossible to resist. “You might think you can withstand getting blasted. Your body disagrees quite strongly,” says Spencer Ackerman, a reporter for Wired magazine’s blog, Danger Room. He stood in the beam at an event arranged for the media last year. “One second my shoulder and upper chest were at a crisp, early-spring outdoor temperature on a Virginia field. Literally the next second, they felt like they were roasted, with what can be likened to a super-hot tingling feeling. The sensation causes your nerves to take control of your feeble consciousness, so it wasn’t like I thought getting out of the way of the beam was a good idea – I did what my body told me to do.” There’s also little chance of shielding yourself; the waves penetrate clothing.

Read the entire article here.

Related video courtesy of CBS 60 Minutes.

Please Press 1 to Avoid Phone Menu Hell

Good customer service once meant that a store or service employee would know you by name. This person would know your previous purchasing habits and your preferences; this person would know the names of your kids and your dog. Great customer service once meant that an employee could use this knowledge to anticipate your needs or personalize a specific deal. Well, this type of service still exists — in some places — but many businesses have outsourced it to offshore call center personnel or to machines, or both. Service may seem personal, but it’s not — it is customized to suit your profile, not personal in the sense that it once was.

And, to rub more salt into the customer service wound, businesses now use their automated phone systems seemingly to shield themselves from you, rather than to provide you with the service you want. After all, when was the last time you managed to speak to a real customer service employee after making it through “please press 1 for English”, the poor choice of muzak or sponsored ads and the never-ending phone menus?

Now thanks to an enterprising and extremely patient soul there is an answer to phone menu hell.

Welcome to Please Press 1. Founded by Nigel Clarke (an alumnus of the 400-year-old Dame Alice Owen’s School in London), Please Press 1 provides shortcuts for customer service phone menus for many of the top businesses in Britain [ed: we desperately need this service in the United States].

From the MailOnline:

A frustrated IT manager who has spent seven years making 12,000 calls to automated phone centres has launched a new website listing ‘short cut’ codes which can shave up to eight minutes off calls.

Nigel Clarke, 53, has painstakingly catalogued the intricate phone menus of hundreds of leading multi-national companies – some of which have up to 80 options.

He has now formulated his results into the website pleasepress1.com, which lists which number options to press to reach the desired department.

The father-of-three, from Fawkham, Kent, reckons the free service can save consumers more than eight minutes by cutting out up to seven menu options.

For example, a Lloyds TSB home insurance customer who wishes to report a water leak would normally have to wade through 78 menu options over seven levels to get through to the correct department.

But the new service informs callers that the combination 1-3-2-1-1-5-4 will get them straight through – saving over four minutes of waiting.
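
At heart, the service is a lookup from a company and a task to a key sequence. The sketch below shows one plausible way to store and query such shortcuts; the data structure and function names are assumptions for illustration, and only the Lloyds TSB sequence comes from the article.

```python
# A minimal phone-menu shortcut lookup. Only the Lloyds TSB entry is taken from
# the article; the structure itself is an assumption for illustration.
shortcuts = {
    ("Lloyds TSB home insurance", "report a water leak"): [1, 3, 2, 1, 1, 5, 4],
}

def shortcut_for(company, task):
    """Return the key presses that skip the menus, or None if no shortcut is known."""
    keys = shortcuts.get((company, task))
    return "-".join(str(k) for k in keys) if keys else None

print(shortcut_for("Lloyds TSB home insurance", "report a water leak"))  # 1-3-2-1-1-5-4
```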

Mr Clarke reckons the service could save consumers up to one billion minutes a year.

He said: ‘Everyone knows that calling your insurance or gas company is a pain but for most, it’s not an everyday problem.

‘However, the cumulative effect of these calls is really quite devastating when you’re moving house or having an issue.

‘I’ve been working in IT for over 30 years and nothing gets me riled up like having my time wasted through inefficient design.

‘This is why I’ve devoted the best part of seven years to solving this issue.’

Mr Clarke describes call centre menu options as the ‘modern equivalent of Dante’s circles of hell’.

He cites the HMRC as one of the worst offenders, where callers can take up to six minutes to reach the correct department.

As one of the UK’s busiest call centres, the Revenue receives 79 million calls per year, or a potential 4.3 million working hours just navigating menus.

Mr Clarke believes that with better menu design, at least three million caller hours could be saved here alone.

He began his quest seven years ago as a self-confessed ‘call centre menu enthusiast’.

‘The idea began with the frustration of being met with a seemingly endless list of menu options,’ he said.

‘Whether calling my phone, insurance or energy company, they each had a different and often worse way of trying to “help” me.

‘I could sit there for minutes that seemed like hours, trying to get through their phone menus only to end up at the wrong place and having to redial and start again.’

He began noting down the menu options and soon realised he could shave several minutes off the waiting time.

Mr Clarke said: ‘When I called numbers regularly, I started keeping notes of the options to press. The numbers didn’t change very often and then it hit me.

Read the entire article here and visit Please Press 1, here.

Images courtesy of Time and Please Press 1.

The Internet of Things and Your (Lack of) Privacy

Ubiquitous connectivity for, and between, individuals and businesses is widely held to be beneficial for all concerned. We can connect rapidly and reliably with family, friends and colleagues from almost anywhere to anywhere via a wide array of internet-enabled devices. Yet, as these devices become more powerful and interconnected, and enabled with location-based awareness, such as GPS (Global Positioning System) services, we are likely to face an increasingly acute dilemma — connectedness or privacy?

From the Guardian:

The internet has turned into a massive surveillance tool. We’re constantly monitored on the internet by hundreds of companies — both familiar and unfamiliar. Everything we do there is recorded, collected, and collated – sometimes by corporations wanting to sell us stuff and sometimes by governments wanting to keep an eye on us.

Ephemeral conversation is over. Wholesale surveillance is the norm. Maintaining privacy from these powerful entities is basically impossible, and any illusion of privacy we maintain is based either on ignorance or on our unwillingness to accept what’s really going on.

It’s about to get worse, though. Companies such as Google may know more about your personal interests than your spouse, but so far it’s been limited by the fact that these companies only see computer data. And even though your computer habits are increasingly being linked to your offline behaviour, it’s still only behaviour that involves computers.

The Internet of Things refers to a world where much more than our computers and cell phones is internet-enabled. Soon there will be internet-connected modules on our cars and home appliances. Internet-enabled medical devices will collect real-time health data about us. There’ll be internet-connected tags on our clothing. In its extreme, everything can be connected to the internet. It’s really just a matter of time, as these self-powered wireless-enabled computers become smaller and cheaper.

Lots has been written about the “Internet of Things” and how it will change society for the better. It’s true that it will make a lot of wonderful things possible, but the “Internet of Things” will also allow for an even greater amount of surveillance than there is today. The Internet of Things gives the governments and corporations that follow our every move something they don’t yet have: eyes and ears.

Soon everything we do, both online and offline, will be recorded and stored forever. The only question remaining is who will have access to all of this information, and under what rules.

We’re seeing an initial glimmer of this from how location sensors on your mobile phone are being used to track you. Of course your cell provider needs to know where you are; it can’t route your phone calls to your phone otherwise. But most of us broadcast our location information to many other companies whose apps we’ve installed on our phone. Google Maps certainly, but also a surprising number of app vendors who collect that information. It can be used to determine where you live, where you work, and who you spend time with.

Another early adopter was Nike, whose Nike+ shoes communicate with your iPod or iPhone and track your exercising. More generally, medical devices are starting to be internet-enabled, collecting and reporting a variety of health data. Wiring appliances to the internet is one of the pillars of the smart electric grid. Yes, there are huge potential savings associated with the smart grid, but it will also allow power companies – and anyone they decide to sell the data to – to monitor how people move about their house and how they spend their time.

Drones are another “thing” moving onto the internet. As their price continues to drop and their capabilities increase, they will become a very powerful surveillance tool. Their cameras are powerful enough to see faces clearly, and there are enough tagged photographs on the internet to identify many of us. We’re not yet up to a real-time Google Earth equivalent, but it’s not more than a few years away. And drones are just a specific application of CCTV cameras, which have been monitoring us for years, and will increasingly be networked.

Google’s internet-enabled glasses – Google Glass – are another major step down this path of surveillance. Their ability to record both audio and video will bring ubiquitous surveillance to the next level. Once they’re common, you might never know when you’re being recorded in both audio and video. You might as well assume that everything you do and say will be recorded and saved forever.

In the near term, at least, the sheer volume of data will limit the sorts of conclusions that can be drawn. The invasiveness of these technologies depends on asking the right questions. For example, if a private investigator is watching you in the physical world, she or he might observe odd behaviour and investigate further based on that. Such serendipitous observations are harder to achieve when you’re filtering databases based on pre-programmed queries. In other words, it’s easier to ask questions about what you purchased and where you were than to ask what you did with your purchases and why you went where you did. These analytical limitations also mean that companies like Google and Facebook will benefit more from the Internet of Things than individuals – not only because they have access to more data, but also because they have more sophisticated query technology. And as technology continues to improve, the ability to automatically analyse this massive data stream will improve.

In the longer term, the Internet of Things means ubiquitous surveillance. If an object “knows” you have purchased it, and communicates via either Wi-Fi or the mobile network, then whoever or whatever it is communicating with will know where you are. Your car will know who is in it, who is driving, and what traffic laws that driver is following or ignoring. No need to show ID; your identity will already be known. Store clerks could know your name, address, and income level as soon as you walk through the door. Billboards will tailor ads to you, and record how you respond to them. Fast food restaurants will know what you usually order, and exactly how to entice you to order more. Lots of companies will know whom you spend your days – and nights – with. Facebook will know about any new relationship status before you bother to change it on your profile. And all of this information will be saved, correlated, and studied. Even now, it feels a lot like science fiction.

Read the entire article here.

Image: Big Brother, 1984. Poster. Courtesy of Telegraph.

Ultra-Conservation of Words

Linguists have traditionally held that words in a language have an average lifespan of around 8,000 years. Words change and are often discarded or replaced over time as the language evolves and co-opts other words from other tongues. English has been particularly adept at collecting many new words from different languages, which partly explains its global popularity.

Recently, however, linguists have found that a small group of words have a lifespan that far exceeds the usual understanding. These 15,000- to 20,000-year-old ultra-conserved words may be the linguistic precursors to common cognates — words with similar sound and meaning — that now span many different language families containing hundreds of languages.

From the Washington Post:

You, hear me! Give this fire to that old man. Pull the black worm off the bark and give it to the mother. And no spitting in the ashes!

It’s an odd little speech. But if you went back 15,000 years and spoke these words to hunter-gatherers in Asia in any one of hundreds of modern languages, there is a chance they would understand at least some of what you were saying.

A team of researchers has come up with a list of two dozen “ultraconserved words” that have survived 150 centuries. It includes some predictable entries: “mother,” “not,” “what,” “to hear” and “man.” It also contains surprises: “to flow,” “ashes” and “worm.”

The existence of the long-lived words suggests there was a “proto-Eurasiatic” language that was the common ancestor to about 700 contemporary languages that are the native tongues of more than half the world’s people.

“We’ve never heard this language, and it’s not written down anywhere,” said Mark Pagel, an evolutionary theorist at the University of Reading in England who headed the study published Monday in the Proceedings of the National Academy of Sciences. “But this ancestral language was spoken and heard. People sitting around campfires used it to talk to each other.”

In all, “proto-Eurasiatic” gave birth to seven language families. Several of the world’s important language families, however, fall outside that lineage, such as the one that includes Chinese and Tibetan; several African language families, and those of American Indians and Australian aborigines.

That a spoken sound carrying a specific meaning could remain unchanged over 15,000 years is a controversial idea for most historical linguists.

“Their general view is pessimistic,” said William Croft, a professor of linguistics at the University of New Mexico who studies the evolution of language and was not involved in the study. “They basically think there’s too little evidence to even propose a family like Eurasiatic.” In Croft’s view, however, the new study supports the plausibility of an ancestral language whose audible relics cross tongues today.

Pagel and three collaborators studied “cognates,” which are words that have the same meaning and a similar sound in different languages. Father (English), padre (Italian), pere (French), pater (Latin) and pitar (Sanskrit) are cognates. Those words, however, are from languages in one family, the Indo-European. The researchers looked much further afield, examining seven language families in all.

Read the entire article here and be sure to check out the interactive audio.

Age is All in the Mind (Hypothalamus)

Researchers are continuing to make great progress in unraveling the complexities of aging. While some fingers point to the shortening of telomeres — end caps — in our chromosomal DNA as a contributing factor, other research points to the hypothalamus. This small sub-region of the brain has been found to play a major role in aging and death (though, at the moment only in mice).

From the New Scientist:

The brain’s mechanism for controlling ageing has been discovered – and manipulated to shorten and extend the lives of mice. Drugs to slow ageing could follow

Tick tock, tick tock… A mechanism that controls ageing, counting down to inevitable death, has been identified in the hypothalamus – a part of the brain that controls most of the basic functions of life.

By manipulating this mechanism, researchers have both shortened and lengthened the lifespan of mice. The discovery reveals several new drug targets that, if not quite an elixir of youth, may at least delay the onset of age-related disease.

The hypothalamus is an almond-sized puppetmaster in the brain. “It has a global effect,” says Dongsheng Cai at the Albert Einstein College of Medicine in New York. Sitting on top of the brain stem, it is the interface between the brain and the rest of the body, and is involved in, among other things, controlling our automatic response to the world around us, our hormone levels, sleep-wake cycles, immunity and reproduction.

While investigating ageing processes in the brain, Cai and his colleagues noticed that ageing mice produce increasing levels of nuclear factor kB (NF-kB) – a protein complex that plays a major role in regulating immune responses. NF-kB is barely active in the hypothalamus of 3 to 4-month-old mice but becomes very active in old mice, aged 22 to 24 months.

To see whether it was possible to affect ageing by manipulating levels of this protein complex, Cai’s team tested three groups of middle-aged mice. One group was given gene therapy that inhibits NF-kB, the second had gene therapy to activate NF-kB, while the third was left to age naturally.

This last group lived, as expected, between 600 and 1000 days. Mice with activated NF-kB all died within 900 days, while the animals with NF-kB inhibition lived for up to 1100 days.

Crucially, the mice that lived the longest not only increased their lifespan but also remained mentally and physically fit for longer. Six months after receiving gene therapy, all the mice were given a series of tests involving cognitive and physical ability.

In all of the tests, the mice that subsequently lived the longest outperformed the controls, while the short-lived mice performed the worst.

Post-mortem examinations of muscle and bone in the longest-living rodents also showed that they had many chemical and physical qualities of younger mice.

Further investigation revealed that NF-kB reduces the level of a chemical produced by the hypothalamus called gonadotropin-releasing hormone (GnRH) – better known for its involvement in the regulation of puberty and fertility, and the production of eggs and sperm.

To see if they could control lifespan using this hormone, the team gave another group of mice – 20 to 24 months old – daily subcutaneous injections of GnRH for five to eight weeks. These mice lived longer too, by a length of time similar to that of mice with inhibited NF-kB.

GnRH injections also resulted in new neurons in the brain. What’s more, when injected directly into the hypothalamus, GnRH influenced other brain regions, reversing widespread age-related decline and further supporting the idea that the hypothalamus could be a master controller for many ageing processes.

GnRH injections even delayed ageing in the mice that had been given gene therapy to activate NF-kB and would otherwise have aged more quickly than usual. None of the mice in the study showed serious side effects.

So could regular doses of GnRH keep death at bay? Cai hopes to find out how different doses affect lifespan, but says the hormone is unlikely to prolong life indefinitely since GnRH is only one of many factors at play. “Ageing is the most complicated biological process,” he says.

Read the entire article after the jump.

Image: Location of Hypothalamus. Courtesy of Colorado State University / Wikipedia.

MondayMap: Intolerance and Hatred

A fascinating map of tweets espousing hatred and racism across the United States. The data analysis and map were developed by researchers at Humboldt State University.

From the Guardian:

[T]he students and professors at Humboldt State University who produced this map read the entirety of the 150,000 geo-coded tweets they analysed.

Using humans rather than machines means that this research was able to avoid the basic pitfall of most semantic analysis where a tweet stating ‘the word homo is unacceptable’ would still be classed as hate speech. The data has also been ‘normalised’, meaning that the scale accounts for the total twitter traffic in each county so that the final result is something that shows the frequency of hateful words on Twitter. The only question that remains is whether the views of US Twitter users can be a reliable indication of the views of US citizens.
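
That normalisation step is simple to express in code. Here is a minimal sketch, with invented per-county counts, showing why dividing by total traffic matters: two counties with the same raw number of hateful tweets can have very different rates.

```python
# Invented per-county counts: (hateful tweets, all geocoded tweets).
counties = {
    "County A": (30, 12000),
    "County B": (30, 120000),
}

# Normalising by total traffic shows County A is ten times "hotter" than County B,
# even though the raw counts of hateful tweets are identical.
for name, (hateful, total) in counties.items():
    rate = hateful / total
    print(f"{name}: {rate * 10000:.1f} hateful tweets per 10,000 tweets")
```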

See the interactive map and read the entire article here.

Big Data at the Personal Level

Stephen Wolfram, physicist, mathematician and complexity theorist, has taken big data ideas to an entirely new level — he’s quantifying himself and his relationships. He calls this discipline personal analytics.

While a record of every phone call and computer keystroke he’s made may be rather useful to the FBI or to marketers, such personal data could become extremely valuable once it is tracked for physiological and medical purposes. But then again, who wants their every move tracked 24 hours a day, even for medical science?

From ars technica:

Don’t be surprised if Stephen Wolfram, the renowned complexity theorist, software company CEO, and night owl, wants to schedule a work call with you at 9 p.m. In fact, after a decade of logging every phone call he makes, Wolfram knows the exact probability he’ll be on the phone with someone at that time: 39 percent.
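
A figure like that 39 percent falls out of a simple frequency count over a long call log. The sketch below is a minimal, assumed approach — the sample data is invented and covers only three days — but it shows the shape of the calculation: for each day, check whether any call was in progress at the chosen clock time.

```python
from datetime import datetime, time

# Invented sample log of (start, end) call times; a real log would span a decade.
calls = [
    (datetime(2012, 3, 1, 20, 50), datetime(2012, 3, 1, 21, 10)),
    (datetime(2012, 3, 2, 14, 0),  datetime(2012, 3, 2, 14, 25)),
    (datetime(2012, 3, 3, 20, 55), datetime(2012, 3, 3, 21, 40)),
]

def on_call_probability(calls, at, total_days):
    """Fraction of days on which some call was in progress at clock time `at`."""
    days = set()
    for start, end in calls:
        moment = datetime.combine(start.date(), at)
        if start <= moment <= end:
            days.add(start.date())
    return len(days) / total_days

# Two of the three invented days had a call in progress at 9 p.m.
print(on_call_probability(calls, time(21, 0), total_days=3))  # 0.666...
```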

Wolfram, a British-born physicist who earned a doctorate at age 20, is obsessed with data and the rules that explain it. He is the creator of the software Mathematica and of Wolfram Alpha, the nerdy “computational knowledge engine” that can tell you the distance to the moon right now, in units including light-seconds.

Now Wolfram wants to apply the same techniques to people’s personal data, an idea he calls “personal analytics.” He started with himself. In a blog post last year, Wolfram disclosed and analyzed a detailed record of his life stretching back three decades, including documents, hundreds of thousands of e-mails, and 10 years of computer keystrokes, a tally of which is e-mailed to him each morning so he can track his productivity the day before.

Last year, his company released its first consumer product in this vein, called Personal Analytics for Facebook. In under a minute, the software generates a detailed study of a person’s relationships and behavior on the site. My own report was revealing enough. It told me which friend lives at the highest latitude (Wicklow, Ireland) and the lowest (Brisbane, Australia), the percentage who are married (76.7 percent), and everyone’s local time. More of my friends are Scorpios than any other sign of the zodiac.

It looks just like a dashboard for your life, which Wolfram says is exactly the point. In a phone call that was recorded and whose start and stop time was entered into Wolfram’s life log, he discussed why personal analytics will make people more efficient at work and in their personal lives.

What do you typically record about yourself?

E-mails, documents, and normally, if I was in front of my computer, it would be recording keystrokes. I have a motion sensor for the room that records when I pace up and down. Also a pedometer, and I am trying to get an eye-tracking system set up, but I haven’t done that yet. Oh, and I’ve been wearing a sensor to measure my posture.

Do you think that you’re the most quantified person on the planet?

I couldn’t imagine that that was the case until maybe a year ago, when I collected together a bunch of this data and wrote a blog post on it. I was expecting that there would be people who would come forward and say, “Gosh, I’ve got way more than you.” But nobody’s come forward. I think by default that may mean I’m it, so to speak.

You coined this term “personal analytics.” What does it mean?

There’s organizational analytics, which is looking at an organization and trying to understand what the data says about its operation. Personal analytics is what you can figure out applying analytics to the person, to understand the operation of the person.

Read the entire article after the jump.

Image courtesy of Stephen Wolfram.

More CO2 is Good, Right?

Yesterday, May 10, 2013, scientists published new measures of atmospheric carbon dioxide (CO2). For the first time in human history CO2 levels reached an average of 400 parts per million (ppm). This is particularly troubling since CO2 is the principal long-lived heat-trapping gas in the atmosphere. The sobering milestone was recorded from the Mauna Loa Observatory in Hawaii — monitoring has been underway at the site since the mid-1950s.

This has many climate scientists re-doubling their efforts to warn of the consequences of climate change, which is believed to be driven by human activity and specifically the generation of atmospheric CO2 in ever increasing quantities. But not to be outdone, the venerable Wall Street Journal — seldom known for its well-reasoned scientific journalism — chimed in with an op-ed on the subject. According to the WSJ we have nothing to worry about because increased levels of CO2 are good for certain crops and because the Earth historically had much higher levels of CO2 (though long before humanity arrived).

Ashutosh Jogalekar over at The Curious Wavefunction dissects the WSJ article line by line:

Since we were discussing the differences between climate change “skeptics” and “deniers” (or “denialists”, whatever you want to call them) the other day this piece is timely. The Wall Street Journal is not exactly known for reasoned discussion of climate change, but this Op-Ed piece may set a new standard even for its own naysayers and skeptics. It’s a piece by William Happer and Harrison Schmitt that’s so one-sided, sparse on detail, misleading and ultimately pointless that I am wondering if it’s a spoof.

Happer and Schmitt’s thesis can be summed up in one line: More CO2 in the atmosphere is a good thing because it’s good for one particular type of crop plant. That’s basically it. No discussion of the downsides, not even a pretense of a balanced perspective. Unfortunately it’s not hard to classify their piece as a denialist article because it conforms to some of the classic features of denial; it’s entirely one sided, it’s very short on detail, it does a poor job even with the little details that it does present and it simply ignores the massive amount of research done on the topic. In short it’s grossly misleading.

First of all Happer and Schmitt simply dismiss any connection that might exist between CO2 levels and rising temperatures, in the process consigning a fair amount of basic physics and chemistry to the dustbin. There are no references and no actual discussion of why they don’t believe there’s a connection. That’s a shoddy start to put it mildly; you would expect a legitimate skeptic to start with some actual evidence and references. Most of the article after that consists of a discussion of the differences between so-called C3 plants (like rice) and C4 plants (like corn and sugarcane). This is standard stuff found in college biochemistry textbooks, nothing revealing here. But Happer and Schmitt leverage a fundamental difference between the two – the fact that C4 plants can utilize CO2 more efficiently than C3 plants under certain conditions – into an argument for increasing CO2 levels in the atmosphere.

This of course completely ignores all the other potentially catastrophic effects that CO2 could have on agriculture, climate, biodiversity etc. You don’t even have to be a big believer in climate change to realize that focusing on only a single effect of a parameter on a complicated system is just bad science. Happer and Schmitt’s argument is akin to the argument that everyone should get themselves addicted to meth because one of meth’s effects is euphoria. So ramping up meth consumption will make everyone feel happier, right?

But even if you consider that extremely narrowly defined effect of CO2 on C3 and C4 plants, there’s still a problem. What’s interesting is that the argument has been countered by Matt Ridley in the pages of this very publication:

But it is not quite that simple. Surprisingly, the C4 strategy first became common in the repeated ice ages that began about four million years ago. This was because the ice ages were a very dry time in the tropics and carbon-dioxide levels were very low—about half today’s levels. C4 plants are better at scavenging carbon dioxide (the source of carbon for sugars) from the air and waste much less water doing so. In each glacial cold spell, forests gave way to seasonal grasslands on a huge scale. Only about 4% of plant species use C4, but nearly half of all grasses do, and grasses are among the newest kids on the ecological block.

So whereas rising temperatures benefit C4, rising carbon-dioxide levels do not. In fact, C3 plants get a greater boost from high carbon dioxide levels than C4. Nearly 500 separate experiments confirm that if carbon-dioxide levels roughly double from preindustrial levels, rice and wheat yields will be on average 36% and 33% higher, while corn yields will increase by only 24%.

So no, the situation is more subtle than the authors think. In fact I am surprised that, given that C4 plants actually do grow better at higher temperatures, Happer and Schmitt missed an opportunity for making the case for a warmer planet. In any case, there’s a big difference between improving yields of C4 plants under controlled greenhouse conditions and expecting these yields to improve without affecting other components of the ecosystem by doing a giant planetary experiment.

Read the entire article after the jump.

Image courtesy of Sierra Club.

Menu Engineering

We live in a world of brands, pitches, advertising, promotions, PR, consumer research, product placement, focus groups, and 24/7 spin. So, it should come as no surprise that even that ubiquitous and utilitarian listing of food and drink items from your local restaurant — the menu — would come in for some 21st century marketing treatment.

Fast food chains have been optimizing the look and feel of their menus for years, often right down to the font, color (artificial) and placement of menu items. Now, many upscale restaurants are following suit. Some call it menu engineering.

From the Guardian:

It’s not always easy trying to read a menu while hungry like the wolf, woozy from aperitif and exchanging pleasantries with a dining partner. The eyes flit about like a pinball, pinging between set meal options, side dishes and today’s specials. Do I want comforting treats or something healthy? What’s cheap? Will I end up bitterly coveting my companion’s dinner? Is it immoral to fuss over such petty, first-world dilemmas? Oh God, the waiter’s coming over.

Why is it so hard to decide what to have? New research from Bournemouth University shows that most menus crowbar in far more dishes than people want to choose from. And when it comes to choosing food and drink, as an influential psychophysicist by the name of Howard Moskowitz once said: “The mind knows not what the tongue wants.”

Malcolm Gladwell cites an interesting nugget from his work for Nescafé. When asked what kind of coffee they like, most Americans will say: “a dark, rich, hearty roast”. But actually, only 25-27% want that. Most prefer weak, milky coffee. Judgement is clouded by aspiration, peer pressure and marketing messages.

The burden of choice

Perhaps this is part of the joy of a tasting or set menu – the removal of responsibility. And maybe the recent trend for tapas-style sharing plates has been so popular because it relieves the decision-making pressure if all your eggs are not in one basket. Is there a perfect amount of choice?

Bournemouth University’s new study has sought to answer this very question. “We were trying to establish the ideal number of starters, mains and puddings on a menu,” says Professor John Edwards. The study’s findings show that restaurant customers, across all ages and genders, do have an optimum number of menu items, below which they feel there’s too little choice and above which it all becomes disconcerting. In fast-food joints, people wanted six items per category (starters, chicken dishes, fish, vegetarian and pasta dishes, grills and classic meat dishes, steaks and burgers, desserts), while in fine dining establishments, they preferred seven starters and desserts, and 10 main courses, thank you very much.

Nightmare menu layouts

Befuddling menu design doesn’t help. A few years back, the author William Poundstone rather brilliantly annotated the menu from Balthazar in New York to reveal the marketing bells and whistles it uses to herd customers into parting with the maximum amount of cash. Professor Brian Wansink, author of Slim by Design: Mindless Eating Solutions for Everyday Life, has extensively researched menu psychology, or as he puts it, menu engineering. “What ends up initially catching the eye,” he says, “has an unfair advantage over anything a person sees later on.” There’s some debate about how people’s eyes naturally travel around menus, but Wansink reckons “we generally scan the menu in a Z-shaped fashion, starting at the top left-hand corner.” Whatever the pattern, though, we’re easily interrupted by items being placed in boxes, next to pictures or icons, bolded or in a different colour.

The language of food

The Oxford experimental psychologist Charles Spence has an upcoming review paper on the effect the name of a dish has on diners. “Give it an ethnic label,” he says, “such as an Italian name, and people will rate the food as more authentic.” Add an evocative description, and people will make far more positive comments about a dish’s appeal and taste. “A label directs a person’s attention towards a feature in a dish, and hence helps bring out certain flavours and textures,” he says.

But we are seeing a backlash against the menu cliches (drizzled, homemade, infused) that have arisen from this thinking. For some time now, at Fergus Henderson’s acclaimed restaurant, St John, they have let the ingredients speak for themselves, in simple lists. And if you eat at one of Russell Norman’s Polpo group of restaurants in London, you will see almost no adjectives (or boxes and other “flim-flam”, as he calls it), and he’s doing a roaring trade. “I’m particularly unsympathetic to florid descriptions,” he says.

However, Norman’s menus employ their own, subtle techniques to reel diners in. Take his flagship restaurant Polpo’s menu. Venetian dishes are printed on Italian butchers’ paper, which goes with the distressed, rough-hewn feel of the place. “I don’t use a huge amount of Italian,” he says, “but I occasionally use it so that customers say ‘what is that?'” He picks an easy-to-pronounce word like suppli (rice balls) to start a conversation between diner and waiter.

Read the entire article here.

Image courtesy of Multyshades.

Your Weekly Groceries

Photographer Peter Menzel traveled to over 20 countries to compile his culinary atlas Hungry Planet. But this is no ordinary cookbook or trove of local delicacies. The book is a visual catalog of a family’s average weekly grocery shopping.

It is both enlightening and sobering to see the nutritional inventory of a Western family juxtaposed with that of a sub-Saharan African family. It puts into perspective the debate within the United States over the 1 percent versus the 99 percent. Those of us lucky enough to have been born in one of the world’s richer nations, even if we are part of the 99 percent, are still truly among the haves rather than the have-nots.

For more on Menzel’s book jump over to Amazon.

The Melander family from Bargteheide, Germany, who spend around £320 [$480] on a week’s worth of food.

 

The Aboubakar family from Darfur, Sudan, in the Breidjing refugee camp in Chad. Their weekly food, which feeds six people, costs 79p [$1.19].

 

The Revis family from Raleigh in North Carolina. Their weekly shopping costs £219 [$328.50].

 

The Namgay family from Shingkhey, Bhutan, with a week’s worth of food that costs them around £3.20 [$4.80].

Images courtesy of Peter Menzel / Barcroft Media.