Category Archives: Environs

Millionaires are So Yesterday

Not far from London’s beautiful Hampstead Heath lies The Bishops Avenue. From the 1930s until the mid-1970s this mile-long street was the archetypal symbol of new wealth; its nouveau riche millionaires made it the most sought-after — and best-known — residential address in the nation (of course, “old money” still preferred its stately mansions and castles). But since then, The Bishops Avenue has changed, with many properties now in the hands of billionaires, hedge fund investors and oil-rich plutocrats.

From the Telegraph:

You can tell when a property is out of your price bracket if the estate agent’s particulars come not on a sheet of A4 but are presented in a 50-page hardback coffee-table book, with a separate section for the staff quarters.

Other giveaway signs, in case you were in any doubt, are the fact the lift is leather-lined, there are 62 internal CCTV cameras, a private cinema, an indoor swimming pool, sauna, steam room, and a series of dressing rooms – “for both summer and winter”, the estate agent informs me – which are larger than many central London flats.

But then any property on The Bishops Avenue in north London is out of most people’s price bracket – such as number 62, otherwise known as Jersey House, which is on the market for £38 million. I am being shown around by Grant Alexson, from Knight Frank estate agents, both of us in our socks to ensure that we do not grubby the miles of carpets or marble floors in the bathrooms (all of which have televisions set into the walls).

My hopes of picking up a knock-down bargain had been raised after the news this week that one property on The Bishops Avenue, Dryades, had been repossessed. The owners, the family of the former Pakistan privatisation minister Waqar Ahmed Khan, were unable to settle a row with their lender, Deutsche Bank.

It is not the only property in the hands of the receivers on this mile-long stretch. One was tied up in a Lehman Brothers property portfolio and remains boarded up. Meanwhile, the Saudi royal family, which bought 10 properties during the First Gulf War as boltholes in case Saddam Hussein invaded, has offloaded the entire package for a reported £80 million in recent weeks. And the most expensive property on the market, Heath Hall, had £35 million knocked off the asking price (taking it down to a mere £65 million).

This has all thrown the spotlight once again on this strange road, which has been nicknamed “Millionaires’ Row” since the 1930s – when a million meant something. Now, it is called “Billionaires’ Row”. It was designed, from its earliest days, to be home to the very wealthy. One of the first inhabitants was George Sainsbury, son of the supermarket founder; another was William Lyle, who used his sugar fortune to build a vast mansion in the Arts and Crafts style. Stars such as Gracie Fields also lived here.

But between the wars, the road became the butt of Music Hall comedians who joked about it being full of “des-reses” for the nouveaux riches such as Billy Butlin. Evelyn Waugh, the master of social nuance, made sure his swaggering newspaper baron Lord Copper of Scoop resided here. It was the 1970s, however, that saw the road vault from being home to millionaires to a pleasure ground for international plutocrats, who used their shipping or oil wealth to snap up properties, knock them down and build monstrous mansions in “Hollywood Tudor” style. Worse were the pastiches of Classical temples, the most notorious of which was built by the Turkish industrialist Halis Toprak, who decided the bath big enough to fit 20 people was not enough of a statement. So he slapped “Toprak Mansion” on the portico (causing locals to dub it “Top Whack Mansion”). It was sold a couple of years ago to the Kazakhstani billionairess Horelma Peramam, who renamed it Royal Mansion.

Perhaps the most famous of recent inhabitants was Lakshmi Mittal, the steel magnate and for a long time Britain’s richest man. But he sold Summer Palace for £38 million in 2011 to move to the much grander Kensington Palace Gardens, in the heart of London. The cast list became even more varied with the arrival of Salman Rushdie, who hid behind bullet-proof glass, and tycoon Asil Nadir, whose address is now HM Belmarsh Prison.

Of course, you would be hard-pressed to discover who owns these properties or how much anyone paid. These are not run-of-the-mill transactions between families moving home. Official Land Registry records reveal a complex web of deals between offshore companies. Miss Peramam holds Royal Mansion in the name of Hartwood Resources Company, registered in the British Virgin Islands, and the records suggest she paid closer to £40 million than the £50 million reported.

Alexson says the complexity of the deals is not just about avoiding stamp duty (which is now at 7 per cent for properties over £2 million). “Discretion first, tax second,” he argues. “Look, some of the Middle Eastern families own £500 billion. Stamp duty is not an issue for them.” Still, new tax rules this year, which increase stamp duty to 15 per cent if the property is bought through an offshore vehicle, have had an effect, according to Alexson, who says that the last five houses he sold were bought by an individual, not a company.

But there is little sign of these individuals on the road itself. Walking down the main stretch of the Avenue from the beautiful Hampstead Heath to the booming A1, which bisects the road, more than 10 of the 39 houses are either boarded up or in a state of severe disrepair. Behind the high gates and walls, moss and weeds climb over the balustrades. Many others are clearly uninhabited, except for a crew of builders and a security guard. (Barnet council defends all the building work it has sanctioned, with Alexson pointing out that the new developments are invariably rectifying the worst atrocities of the 1980s.)

Read the entire article here.

Image: Toprak Mansion (now known as Royal Mansion), The Bishops Avenue. Courtesy of Daily Mail.

Hotels of the Future

Fantastic — in the original sense of the word — designs for some futuristic hotels, some of which have arrived in the present.

See more designs here.

Image: The Heart hotel, designed by Arina Agieieva and Dmitry Zhuikov, is a proposed design for a New York hotel. The project aims to draw local residents and hotel visitors closer together by embedding the hotel into city life; bedrooms are found in the converted offices that flank the core of the structure – its heart – and leisure facilities are available for the use of everyone. Courtesy of Telegraph.

Mid-21st Century Climate

Call it what you may, but regardless of labels most climate scientists agree that our future weather is likely to be more extreme: more prolonged and more violent.

From ars technica:

If there was one overarching point that the fifth Intergovernmental Panel on Climate Change report took pains to stress, it was that the degree of change in the global climate system since the mid-1950s is unusual in scope. Depending on what exactly you measure, the planet hasn’t seen conditions like these for decades to millennia. But that conclusion leaves us with a question: when exactly can we expect the climate to look radically new, with features that have no historical precedent?

The answer, according to a modeling study published in this week’s issue of Nature, is “very soon”—as soon as 2047 under a “business-as-usual” emission scenario and only 22 years later under a reduced emissions scenario. Tropical countries will likely be the first to enter this new age of climatic erraticness and could experience extreme temperatures monthly after 2050. This, the authors argue, underscores the need for robust efforts targeted not only at protecting those vulnerable countries but also the rich biodiversity that they harbor.

Developing an index, one model at a time

Before attempting to peer into the future, the authors, led by the University of Hawaii’s Camilo Mora, first had to ensure that they could accurately replicate the recent past. To do so, they pooled together the predictive capabilities of 39 different models, using near-surface air temperature as their indicator of choice.

For each model, they established the bounds of natural climate variability as the minimum and maximum values attained between 1860 and 2005. Simultaneously crunching the outputs from all of these models proved to be the right decision, as Mora and his colleagues consistently found that a multi-model average best fit the real data.

Next, they turned to two widely used emission scenarios, or Representative Concentration Pathways (RCP) as they’re known in modeling vernacular, to predict the arrival of different climates over a period extending from 2006 to 2100. The first scenario, RCP45, assumes a concerted mitigation initiative and anticipates CO2 concentrations of up to 538 ppm by 2100 (up from the current 393 ppm). The second, RCP85, is the trusty “business-as-usual” scenario that anticipates concentrations of up to 936 ppm by the same year.

Timing the new normals

While testing the sensitivity of their index, Mora and his colleagues concluded that the length of the reference period—the number of years between 1860 and 2005 used as a basis for establishing the limits of historical climate variability—had no effect on the ultimate outcome. A longer period would include more instances of temperature extremes, both low and high, so you would expect that it would yield a broader range of limits. That would mean that any projections of extreme future events might not seem so extreme by comparison.

In practice, it didn’t matter whether the authors used 20 years or 140 years as the length of their reference period. What did matter, they found, was the number of consecutive years where the climate was out of historical bounds. This makes intuitive sense: if you consider fewer consecutive years, the departure from “normal” will come sooner.
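
To make that concrete, here is a minimal sketch of a departure-year calculation in the spirit of the index described above; the variable names and the simple run-length rule are illustrative assumptions, not the study’s actual code.

```python
# Toy "climate departure" calculation: find the first year that begins a
# run of n_consecutive years outside the bounds of historical variability.
# Illustrative only -- not the code used by Mora and colleagues.

def departure_year(years, temps, ref_start, ref_end, n_consecutive):
    # Bounds of natural variability: min/max over the reference period.
    ref = [t for y, t in zip(years, temps) if ref_start <= y <= ref_end]
    lo, hi = min(ref), max(ref)

    run = 0  # length of the current run of out-of-bounds years
    for y, t in zip(years, temps):
        run = run + 1 if (t < lo or t > hi) else 0
        if run == n_consecutive:
            return y - n_consecutive + 1  # first year of the run
    return None  # no departure within the series
```

Requiring fewer consecutive years means a shorter run suffices, so the computed departure date comes sooner, which is exactly the sensitivity described above.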

Rather than pick one arbitrary number of consecutive years versus another, the authors simply used all of the possible values from each of the 39 models. That accounts for the relatively large standard deviations in the estimated starting dates of exceptional climates—18 years for the RCP45 scenario and 14 years for the RCP85 scenario. That means that the first clear climate shift could occur as early as 2033 or as late as 2087.

Though temperature served as the main proxy for climate in their study, the authors also analyzed four other variables for the atmosphere and two for the ocean. These included evaporation, transpiration, sensible heat flux (the conductive transfer of heat from the planet’s surface to the atmosphere) and precipitation, as well as sea surface temperature and surface pH in the ocean.

Replacing temperature with, or considering it alongside, any of the other four atmospheric variables did not change the timing of climate departures. This is because temperature is the most sensitive variable and therefore also the earliest to exceed the normal bounds of historical variability.

When examining the ocean through the prism of sea surface temperature, the researchers determined that it would reach its tipping point by 2051 or 2072 under the RCP85 and RCP45 scenarios, respectively. However, when they considered both sea surface temperature and surface pH together, the estimated tipping point was moved all the way up to this decade.

Seawater pH has an extremely narrow range of historical variability, and it moved out of this range 5 years ago, which caused the year of the climate departure to jump forward several decades. This may be an extreme case, but it serves as a stark reminder that the ocean is already on the edge of uncharted territory.

Read the entire article here.

Image courtesy of Salon.

Nuclear Near Miss

Just over 50 years ago the United States Air Force came within a hair’s breadth of destroying much of the southeastern United States. While on a routine flight along the eastern seaboard, a malfunctioning B-52 bomber accidentally dropped two 4-megaton hydrogen bombs over Goldsboro, North Carolina on 23 January 1961.

Had either one of these bombs exploded — with a force over 200 times that of the bomb dropped over Hiroshima — the effects would have been calamitous.

From the Guardian:

A secret document, published in declassified form for the first time by the Guardian today, reveals that the US Air Force came dramatically close to detonating an atom bomb over North Carolina that would have been 260 times more powerful than the device that devastated Hiroshima.

The document, obtained by the investigative journalist Eric Schlosser under the Freedom of Information Act, gives the first conclusive evidence that the US was narrowly spared a disaster of monumental proportions when two Mark 39 hydrogen bombs were accidentally dropped over Goldsboro, North Carolina on 23 January 1961. The bombs fell to earth after a B-52 bomber broke up in mid-air, and one of the devices behaved precisely as a nuclear weapon was designed to behave in warfare: its parachute opened, its trigger mechanisms engaged, and only one low-voltage switch prevented untold carnage.

Each bomb carried a payload of 4 megatons – the equivalent of 4 million tons of TNT explosive. Had the device detonated, lethal fallout could have been deposited over Washington, Baltimore, Philadelphia and as far north as New York city – putting millions of lives at risk.

Though there has been persistent speculation about how narrow the Goldsboro escape was, the US government has repeatedly publicly denied that its nuclear arsenal has ever put Americans’ lives in jeopardy through safety flaws. But in the newly-published document, a senior engineer in the Sandia national laboratories responsible for the mechanical safety of nuclear weapons concludes that “one simple, dynamo-technology, low voltage switch stood between the United States and a major catastrophe”.

Writing eight years after the accident, Parker F Jones found that the bombs that dropped over North Carolina, just three days after John F Kennedy made his inaugural address as president, were inadequate in their safety controls and that the final switch that prevented disaster could easily have been shorted by an electrical jolt, leading to a nuclear burst. “It would have been bad news – in spades,” he wrote.

Jones dryly entitled his secret report “Goldsboro Revisited or: How I learned to Mistrust the H-Bomb” – a quip on Stanley Kubrick’s 1964 satirical film about nuclear holocaust, Dr Strangelove or: How I Learned to Stop Worrying and Love the Bomb.

The accident happened when a B-52 bomber got into trouble, having embarked from Seymour Johnson Air Force base in Goldsboro for a routine flight along the East Coast. As it went into a tailspin, the hydrogen bombs it was carrying became separated. One fell into a field near Faro, North Carolina, its parachute draped in the branches of a tree; the other plummeted into a meadow off Big Daddy’s Road.

Read the entire article here.

Image: Nuclear weapon test Romeo (yield 11 Mt) on Bikini Atoll. The test was part of Operation Castle. Romeo was the first nuclear test conducted on a barge. The barge was located in the Bravo crater. Courtesy of Wikipedia.

Post-Apocalyptic Transportation

What better way to get around your post-apocalyptic neighborhood after the end-of-times than on a trusty bicycle? Let’s face it, a simple human-powered, ride-share bike is likely to fare much better in a dystopian landscape than a gas-guzzling truck, an electric vehicle or even a fuel-efficient moped.

From Slate:

There’s something post-apocalyptic about Citi Bike, the bike-sharing program that debuted a few months ago in parts of New York City. Or perhaps better terms would be “pre-post-apocalyptic” and “pre-dystopian.” Because these bikes basically are designed for the end of the world.

Bike-sharing programs have arisen around the world—from Washington, D.C., to Hangzhou, China. The New York bikes are almost disturbingly durable: human-powered, solar-charged, and with aluminum frames so sturdy that during stress testing the bike broke the testing equipment. Sure, riding one through Midtown Manhattan is like entering a speedboat race on a manatee. And yes, they’re geared so that it feels like you’re at a very goofy spinning class when riding up Second Avenue. But if you think post-apocalyptically, that gear ratio means a very efficient bike for carrying heavy loads. With the help of the local blacksmith, as long as he’s not too busy making helmets for fight-to-the-death cage matches, you could find a way to attach a hitch to the back of a Citi Bike and that could carry, say, a laser cannon, or a seat for the local warlord. Now all that’s left to do is to attach a hipster to one of the bikes, perhaps with an iron neck collar. Voila! The Citi Bike has become the Escalade SUV in the cannibal culture that arises after peak oil.

A Citi Bike would have made perfect sense in Cormac McCarthy’s The Road. Imagine:

The man put the boy on the handlebars of the bicycle.

It had once been blue. Streaks of cerulean remained in the spectral lines of dulled gray aluminum. It was heavy and his leg ached as he pedaled.

Who used to ride this bicycle, Papa?

No one person rode this bicycle. The bike was shared by everyone who could pay.

Did the people who shared the bikes carry the fire?

They thought they carried the fire.

Turns out the lack of bikes in end-of-the-world narratives has been identified as a cultural issue of significance; there’s even a TV Tropes page called “No Bikes in the Apocalypse” that takes this sort of story to task for forgetting that bicycles would work just fine if there wasn’t any gasoline.

The real problem is that your typical grizzled mutant-killing protagonist wearing a bandolier and carrying a shotgun would look ridiculous huffing up a hill on an 18-speed Trek. And think about where movies are made. In Hollywood, bikes are for immature losers, like Steve Carell’s character in The 40-Year-Old Virgin. Heroes don’t downshift. In a good Hollywood post-apocalypse, if the hero doesn’t have a jacked-up Dodge Charger with guns mounted on top, he (it’s always a he) trudges through the ashes, cradling his gun—or, if they haven’t all been eaten, he scores himself a horse.

There is one notable recent exception (not including Premium Rush, which wasn’t post-apocalyptic, and which no one saw): In the film version of World War Z, Brad Pitt and company have to sneak from a bunker to an airplane on creaky old bicycles. (The zombies are sensitive to noise.) It was obvious from the moment you first saw these old janky bikes that they were going to cause trouble. Future citizens facing a zombie pandemic should note the Citi Bikes tend to whirr, not rattle, so they would be perfect for slow, quiet travel through stumbling sleepy zombies.

The NYC bike-share program is also very much for-profit – it’s owned by Alta Bike Share, a global company that builds out programs like this – and super-mega-ultra-branded by Citicorp, down to the i in Citi Bike. It almost feels like they’re tempting fate, because nothing satisfies the consumer of science fiction like the failed optimism of a logotype creeping out from under dangling scraps of fabric and glue (see this Onion AV Club article on brands in post-apocalyptic films). It’s magic when brands poke through under a pile of bones.

Why is this? Well, from a narrative-efficiency viewpoint it’s a pretty elegant way to set up your world. Marketing is so relentlessly positive, the smiles so big, that the sight of a skeleton wearing Lululemon or holding an iPhone does a lot of your expository work for you. Which, back in reality, is one of the things that makes branded stadiums slightly disturbing. The Coliseum got its name from the colossal statue of Nero that adjoined it. (The statue was given a number of different heads over the years, depending on who was in power.) As the Venerable Bede wrote in the eighth century: “as long as the Colossus stands, so shall Rome; when the Colossus falls, Rome shall fall; when Rome falls, so falls the world.” The stadium is still there, the statue is gone, and today photos of the Coliseum, and its cheesy fake gladiators posing with tourists, serve as a global shorthand for “empires eventually fall.” (If you want to know who’s in charge of a culture, look at what they name their stadiums.) Citi Bikes thus also seems particularly well-suited for a sort of Hunger Games-style future: 1) The economy crashes utterly 2) poor, hungry people compete in hyperviolent Citi Bike chariot races at Madison Square Garden, now renamed Velodrome 17.

A trundling Citi Bike would make sense in just about any post-apocalyptic or dystopian book or movie. In the post-humanity 1949 George R. Stewart classic Earth Abides, about a Berkeley student who survives a plague, the bikes would have been very practical as people rebuilt society across generations, especially after electricity stopped working. And Walter M. Miller Jr.’s legendary 1960 A Canticle for Leibowitz, about monks rebuilding the world after “the Flame Deluge,” could easily have featured monks pedaling around the empty desert after that deluge. Riding a Citi Bike (likely renamed something like “urbem vehentem”) would probably have been a tremendous, abbot-level privilege, and the repair manual would have been an illuminated manuscript. It’s gotten so that when I ride a Citi Bike I invariably end up thinking of all the buildings with their windows shattered, gray snow falling on people trudging in rags on their way to the rat market to buy a nice rat for Thanksgiving.

You have to wonder if “sharing” could survive. Probably not. I mean, at some level working headlights are more liability than asset, especially if you’re worried about being eaten. But the charging stations? As reliable sources of a steady flow of electricity, it’s pretty easy to imagine local chieftains taking those over, and lines of desperate people lining up to charge their cracked mobile devices so that they can look one last time at pictures of the people they lost, trading whatever of value they still possess for one last hour with their smartphones. It will be like the blackout, but forever.

If you prefer a nice total-surveillance dystopia to an absolute apocalypse, Citi Bikes are eminently trackable—they have a GPS-driven beacon installed in case they need to be retrieved. Three million rides have been made, 3 million swipes of the little Citi Bike keyfob that is used to keep track of who has which bike for how long. The service knows who you are and where you ride, and data visualizations show where people are traveling.

It’s only a matter of time, then, before a 24-style TV show gives us a bike-riding serial killer being tracked around New York City, clusters of incognito cops waiting by the docking station for their target to dock his bike with a ca-chunk. But that sort of government surveillance is almost passé in the age of the NSA.

Read the entire article here.

Image: Citi Bike, New York City. Courtesy of Velojoy.

The Rim Fire

One of the largest wildfires in California history — the Rim Fire — threatens some of the most spectacular vistas in the U.S. Yet, as it reshapes parts of Yosemite and its surroundings, it is forcing another reshaping: a fundamental rethinking of the wildland-urban interface (WUI) and the role of human activity in catalyzing natural processes.

From Wired:

For nearly two weeks, the nation has been transfixed by wildfire spreading through Yosemite National Park, threatening to pollute San Francisco’s water supply and destroy some of America’s most cherished landscapes. As terrible as the Rim Fire seems, though, the question of its long-term effects, and whether in some ways it could actually be ecologically beneficial, is a complicated one.

Some parts of Yosemite may be radically altered, entering entirely new ecological states. Yet others may be restored to historical conditions that prevailed for thousands of years, from the last Ice Age’s end until the 19th century, when short-sighted fire management disrupted natural fire cycles and transformed the landscape.

In certain areas, “you could absolutely consider it a rebooting, getting the system back to the way it used to be,” said fire ecologist Andrea Thode of Northern Arizona University. “But where there’s a high-severity fire in a system that wasn’t used to having high-severity fires, you’re creating a new system.”

The Rim Fire now covers 300 square miles, making it the largest fire in Yosemite’s recent history and the sixth-largest in California’s. It’s also the latest in a series of exceptionally large fires that over the last several years have burned across the western and southwestern United States.

Fire is a natural, inevitable phenomenon, and one to which western North American ecologies are well-adapted, and even require to sustain themselves. The new fires, though, fueled by drought, a warming climate and forest mismanagement — in particular the buildup of small trees and shrubs caused by decades of fire suppression — may reach sizes and intensities too severe for existing ecosystems to withstand.

The Rim Fire may offer some of both patterns. At high elevations, vegetatively dominated by shrubs and short-needled conifers that produce a dense, slow-to-burn mat of ground cover, fires historically occurred every few hundred years, and they were often intense, reaching the crowns of trees. In such areas, the current fire will fit the usual cycle, said Thode.

Decades- and centuries-old seeds, which have remained dormant in the ground awaiting a suitable moment, will be cracked open by the heat, explained Thode. Exposed to moisture, they’ll begin to germinate and start a process of vegetative succession that results again in forests.

At middle elevations, where most of the Rim Fire is currently concentrated, a different fire dynamic prevails. Those forests are dominated by long-needled conifers that produce a fluffy, fast-burning ground cover. Left undisturbed, fires occur regularly.

“Up until the middle of the 20th century, the forests of that area would burn very frequently. Fires would go through them every five to 12 years,” said Carl Skinner, a U.S. Forest Service ecologist who specializes in relationships between fire and vegetation in northern California. “Because the fires burned as frequently as they did, it kept fuels from accumulating.”

A desire to protect houses, commercial timber and conservation lands by extinguishing these small, frequent fires changed the dynamic. Without fire, dead wood accumulated and small trees grew, creating a forest that’s both exceptionally flammable and structurally suited for transferring flames from ground to tree-crown level, at which point small burns can become infernos.

Though since the 1970s some fires have been allowed to burn naturally in the western parts of Yosemite, that’s not the case where the Rim Fire now burns, said Skinner. An open question, then, is just how big and hot it will burn.

Where the fire is extremely intense, incinerating soil seed banks and root structures from which new trees would quickly sprout, the forest won’t come back, said Skinner. Those areas will become dominated by dense, fast-growing shrubs that burn naturally every few years, killing young trees and creating a sort of ecological lock-in.

If the fire burns at lower intensities, though, it could result in a sort of ecological recalibration, said Skinner. In his work with fellow U.S. Forest Service ecologist Eric Knapp at the Stanislaus-Tuolumne Experimental Forest, Skinner has found that Yosemite’s contemporary, fire-suppressed forests are actually far more homogeneous and less diverse than a century ago.

The fire could “move the forests in a trajectory that’s more like the historical,” said Skinner, both reducing the likelihood of large future fires and generating a mosaic of habitats that contain richer plant and animal communities.

“It may well be that, across a large landscape, certain plants and animals are adapted to having a certain amount of young forest recovering after disturbances,” said forest ecologist Dan Binkley of Colorado State University. “If we’ve had a century of fires, the landscape might not have enough of this.”

Read the entire article here.

Image: Rim Fire, August 2013. Courtesy of Earth Observatory, NASA.

Ethical Meat and Idiotic Media

Lab-grown meat is now possible, but it is not yet available on an industrial scale to satisfy the human desire for burgers, steak and ribs. While this does represent a breakthrough, it’s likely to be a while before the last cow or chicken or pig is slaughtered. Of course, the mainstream media picked up this important event and immediately labeled it with captivating headlines featuring the word “frankenburger”. Perhaps a well-intentioned lab will someday come up with an intelligent form of media organization.

From the New York Times (dot earth):

I first explored livestock-free approaches to keeping meat on menus in 2008 in a piece titled “Can People Have Meat and a Planet, Too?”

It’s been increasingly clear since then that there are both environmental and — obviously — ethical advantages to using technology to sustain omnivory on a crowding planet. This presumes humans will not all soon shift to a purely vegetarian lifestyle, even though there are signs of what you might call “peak meat” (consumption, that is) in prosperous societies (Mark Bittman wrote a nice piece on this). Given dietary trends as various cultures rise out of poverty, I would say it’s a safe bet meat will remain a favored food for decades to come.

Now non-farmed meat is back in the headlines, with a patty of in-vitro beef — widely dubbed a “frankenburger” — fried and served in London earlier today.

The beef was grown in a lab by a pioneer in this arena — Mark Post of Maastricht University in the Netherlands. My colleague Henry Fountain has reported the details in a fascinating news article. Here’s an excerpt followed by my thoughts on next steps in what I see as an important area of research and development:

According to the three people who ate it, the burger was dry and a bit lacking in flavor. One taster, Josh Schonwald, a Chicago-based author of a book on the future of food, said “the bite feels like a conventional hamburger” but that the meat tasted “like an animal-protein cake.”

But taste and texture were largely beside the point: The event, arranged by a public relations firm and broadcast live on the Web, was meant to make a case that so-called in-vitro, or cultured, meat deserves additional financing and research…

Dr. Post, one of a handful of scientists working in the field, said there was still much research to be done and that it would probably take 10 years or more before cultured meat was commercially viable. Reducing costs is one major issue — he estimated that if production could be scaled up, cultured beef made as this one burger was made would cost more than $30 a pound.

The two-year project to make the one burger, plus extra tissue for testing, cost $325,000. On Monday it was revealed that Sergey Brin, one of the founders of Google, paid for the project. Dr. Post said Mr. Brin got involved because “he basically shares the same concerns about the sustainability of meat production and animal welfare.”

The enormous potential environmental benefits of shifting meat production, where feasible, from farms to factories were estimated in “Environmental Impacts of Cultured Meat Production,” a 2011 study in Environmental Science and Technology.

Read the entire article here.

Image: Professor Mark Post holds the world’s first lab-grown hamburger. Courtesy of Reuters/David Parry / The Atlantic.

A Smarter Smart Grid

If you live somewhere rather toasty, you know how painful your electricity bills can be during the summer months. So wouldn’t it be good to have a system automatically find you the cheapest electricity when you need it most? Welcome to the artificially intelligent, smarter smart grid.

From the New Scientist:

An era is coming in which artificially intelligent systems can manage your energy consumption to save you money and make the electricity grid even smarter.

IF YOU’RE tired of keeping track of how much you’re paying for energy, try letting artificial intelligence do it for you. Several start-up companies aim to help people cut costs, flex their muscles as consumers to promote green energy, and usher in a more efficient energy grid – all by unleashing smart software on everyday electricity usage.

Several states in the US have deregulated energy markets, in which customers can choose between several energy providers competing for their business. But the different tariff plans, limited-time promotional rates and other products on offer can be confusing to the average consumer.

A new company called Lumator aims to cut through the morass and save consumers money in the process. Their software system, designed by researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, asks new customers to enter their energy preferences – how they want their energy generated, and the prices they are willing to pay. The software also gathers any available metering measurements, in addition to data on how the customer responds to emails about opportunities to switch energy provider.

A machine-learning system digests that information and scans the market for the most suitable electricity supply deal. As it becomes familiar with the customer’s habits it is programmed to automatically switch energy plans as the best deals become available, without interrupting supply.
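
As a rough sketch of what such plan-switching logic might look like (purely hypothetical: the article does not describe Lumator’s actual scoring model, data fields or thresholds), the decision could be framed as filtering tariffs by the customer’s stated preferences and switching only when the projected savings clear a threshold:

```python
# Hypothetical plan-switching rule in the spirit of the system described
# above. The fields, filter, and threshold are illustrative assumptions.

from dataclasses import dataclass

@dataclass
class Tariff:
    name: str
    rate_usd_per_kwh: float    # quoted price
    renewable_fraction: float  # 0.0 to 1.0

def next_tariff(tariffs, current, monthly_kwh, min_renewable, threshold_usd):
    """Return the tariff to switch to, or the current one if not worth it."""
    eligible = [t for t in tariffs if t.renewable_fraction >= min_renewable]
    if not eligible:
        return current
    cheapest = min(eligible, key=lambda t: t.rate_usd_per_kwh)
    savings = (current.rate_usd_per_kwh - cheapest.rate_usd_per_kwh) * monthly_kwh
    # Only switch when projected monthly savings clear the threshold,
    # guarding against churn on introductory rates that drift upward.
    return cheapest if savings > threshold_usd else current
```

A threshold check like the final line is one way to capture Reddy’s point below about low introductory prices and customer inertia.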

“This ensures that customers aren’t taken advantage of by low introductory prices that drift upward over time, expecting customer inertia to prevent them from switching again as needed,” says Lumator’s founder and CEO Prashant Reddy.

The goal is not only to save customers time and money – Lumator claims it can save people between $10 and $30 a month on their bills – but also to help introduce more renewable energy into the grid. Reddy says power companies have little idea whether or not their consumers want to get their energy from renewables. But by keeping customer preferences on file and automatically switching to a new service when those preferences are met, Reddy hopes renewable energy suppliers will see the demand more clearly.

A firm called Nest, based in Palo Alto, California, has another way to save people money. It makes Wi-Fi-enabled thermostats that integrate machine learning to understand users’ habits. Energy companies in southern California and Texas offer deals to customers if they allow Nest to make small adjustments to their thermostats when the supplier needs to reduce customer demand.

“The utility company gives us a call and says they’re going to need help tomorrow as they’re expecting a heavy load,” says Matt Rogers, one of Nest’s founders. “We provide about 5 megawatts of load shift, but each home has a personalised demand response. The entire programme is based on data collected by Nest.”

Rogers says that about 5000 Nest users have opted in to such load-balancing programmes.
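
Those figures are easy to sanity-check, and a per-home adjustment might look something like the toy rule below (everything here is an illustrative assumption; the article does not describe Nest’s actual demand-response logic):

```python
# Back-of-the-envelope check of the quoted numbers, plus a toy version of
# a "personalised" demand-response rule. Names and values are assumptions.

MEGAWATTS_SHIFTED = 5.0
HOMES_OPTED_IN = 5000

# ~1 kW per home on average: roughly one air conditioner cycling off.
print(MEGAWATTS_SHIFTED * 1000 / HOMES_OPTED_IN, "kW per home")

def event_setpoint(normal_f: float, tolerance_f: float) -> float:
    """Raise the cooling setpoint during a grid event, capped at what
    this particular household has said it will tolerate."""
    return normal_f + min(tolerance_f, 4.0)
```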

Read the entire article here.

Image courtesy of Treehugger.

En Vie: Bio-Fabrication Expo

En Vie, French for “alive,” is an exposition like no other. It’s a fantastical place defined through a rich collaboration of material scientists, biologists, architects, designers and engineers. The premise of En Vie is quite elegant — put these disparate minds together and ask them to imagine what the future will look like. And it’s quite a magical world: a world where biological fabrication replaces traditional mechanical and chemical fabrication, where shoes grow from plants, furniture from fungi, and bees construct vases. The En Vie exhibit is open at the Space Foundation EDF in Paris, France until September 1.

From ars technica:

The natural world has, over millions of years, evolved countless ways to ensure its survival. The industrial revolution, in contrast, has given us just a couple hundred years to play catch-up using technology. And while we’ve been busily degrading the Earth since that revolution, nature continues to outdo us in the engineering of materials that are stronger, tougher, and multipurpose.

Take steel for example. According to the World Steel Association, for every ton produced, 1.8 tons of carbon dioxide is emitted into the atmosphere. In total in 2010, the iron and steel industries, combined, were responsible for 6.7 percent of total global CO2 emissions. Then there’s the humble spider, which produces silk that is—weight for weight—stronger than steel. Webs spun by Darwin’s bark spider in Madagascar, meanwhile, are 10 times tougher than steel and more durable than Kevlar, the synthetic fiber used in bulletproof vests. Material scientists savvy to this have ensured biomimicry is now high on the agenda at research institutions, and an exhibit currently on at the Space Foundation EDF in Paris is doing its best to popularize the notion that we should not just be salvaging the natural world but also learning from it.

En Vie (Alive), curated by Reader and Deputy Director of the Textile Futures Research Center at Central Saint Martins College Carole Collet, is an exposition for what happens when material scientists, architects, biologists, and engineers come together with designers to ask what the future will look like. According to them, it will be a world where plants grow our products, biological fabrication replaces traditional manufacturing, and genetically reprogrammed bacteria build new materials, energy, or even medicine.

It’s a fantastical place where plants are magnetic, a vase is built by 60,000 bees, furniture is made from fungi, and shoes from cellulose. You can print algae onto rice paper, then eat it, or encourage gourds to grow in the shape of plastic components found in things like torches or radios (you’ll have to wait a few months for the finished product, though). These are not fanciful designs but real products, grown or fashioned with nature’s direct help.

In other parts of the exhibit, biology is the inspiration and shows what might be. Eskin, for instance, provides visitors with a simulation of how a building’s exterior could mimic and learn from the human body in keeping it warm and cool.

Alive shows that, speculative or otherwise, design has a real role to play in bringing different research fields together, which will be essential if there’s any hope of propelling the field into mass commercialization.

“More than any other point in history, advances in science and engineering are making it feasible to mimic natural processes in the laboratory, which makes it a very exciting time,” Craig Vierra, Professor and Assistant Chair, Biological Sciences at University of the Pacific, tells Wired.co.uk. In his California lab, Vierra has for the past few years been growing spider silk proteins from bacteria in order to engineer fibers that are close, if not quite ready, to give steel a run for its money. The technique involves purifying the spider silk proteins away from the bacteria proteins before concentrating these using a freeze-dryer in order to render them into powder form. A solvent is then added, and the material is spun into fiber using wet spinning techniques and stretched to three times its original length.

“Although the mechanical properties of the synthetic spider fibers haven’t quite reached those of natural fibers, research scientists are rapidly approaching this level of performance. Our laboratory has been working on improving the composition of the spinning dope and spinning parameters of the fibers to enhance their performance.”

Vierra is a firm believer that nature will save us.

“Mother Nature has provided us with some of the most outstanding biomaterials that can be used for a plethora of applications in the textile industry. In addition to these, modern technological advances will also allow us to create new biocomposite materials that rely on the fundamentals of natural processes, elevating the numbers and types of materials that are available. But, more importantly, we can generate eco-friendly materials.

“As the population size increases, the availability of natural resources will become more scarce and limiting for humans. It will force society to develop new methods and strategies to produce larger quantities of materials at a faster pace to meet the demands of the world. We simply must find more cost-efficient methods to manufacture materials that are non-toxic for the environment. Many of the materials being synthesized today are very dangerous after they degrade and enter the environment, which is severely impacting the wildlife and disrupting the ecology of the animals on the planet.”

According to Vierra, the fact that funding in the field has become extremely competitive over the past ten years is proof of the quality of research today. “The majority of scientists are expected to justify how their research has a direct, immediate tie to applications in society in order to receive funding.”

We really have no alternative but to continue down this route, he argues. Without advances in material science, we will continue to produce “inferior materials” and damage the environment. “Ultimately, this will affect the way humans live and operate in society.”

We’re agreed that the field is a vital and rapidly growing one. But what value, if any, can a design-led project bring to the table, aside from highlighting the related issues? Vierra has assessed a handful of the incredible designs on display at Alive for us to see which he thinks could become a future biomanufacturing reality.

Read the entire article here.

Image: Radiant Soil, En Vie Exposition. Courtesy of Philip Beesley, En Vie / Wired.

Only Three Feet

Three feet. Three feet is nothing, you say. Three feet is less than the difference between the shallow and deep ends of most swimming pools. Well, when those three feet are the predicted rise in mean ocean level, they become a little more significant. And when that rise is predicted to happen within the next 87 years, by 2100, it’s, well, how do you say, catastrophic.

A rise like that and you can kiss goodbye to your retirement home in Miami, and for that matter, kiss goodbye to much of southern Florida, and many coastal communities around the world.

From the New York Times:

An international team of scientists has found with near certainty that human activity is the cause of most of the temperature increases of recent decades, and warns that sea levels could rise by more than three feet by the end of the century if emissions continue at a runaway pace.

The scientists, whose findings are reported in a summary of the next big United Nations climate report, largely dismiss a recent slowdown in the pace of warming, which is often cited by climate change contrarians, as probably related to short-term factors. The report emphasizes that the basic facts giving rise to global alarm about future climate change are more established than ever, and it reiterates that the consequences of runaway emissions are likely to be profound.

“It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010,” the draft report says. “There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level, and changed some climate extremes in the second half of the 20th century.”

The “extremely likely” language is stronger than in the last major United Nations report, published in 2007, and it means the authors of the draft document are now 95 percent to 100 percent confident that human activity is the primary influence on planetary warming. In the 2007 report, they said they were 90 percent to 100 percent certain on that issue.

On another closely watched issue, however, the authors retreated slightly from their 2007 position.

On the question of how much the planet could warm if carbon dioxide levels in the atmosphere doubled, the previous report had largely ruled out any number below 3.6 degrees Fahrenheit. The new draft says the rise could be as low as 2.7 degrees, essentially restoring a scientific consensus that prevailed from 1979 to 2007.

Most scientists see only an outside chance that the warming will be as low as either of those numbers, with the published evidence suggesting that an increase above 5 degrees Fahrenheit is likely if carbon dioxide doubles.

The new document is not final and will not become so until an intensive, closed-door negotiating session among scientists and government leaders in Stockholm in late September. But if the past is any guide, most of the core findings of the document will survive that final review.

The document was leaked over the weekend after it was sent to a large group of people who had signed up to review it. It was first reported on in detail by the Reuters news agency, and The New York Times obtained a copy independently to verify its contents.

It was prepared by the Intergovernmental Panel on Climate Change, a large, international group of scientists appointed by the United Nations. The group does no original research, but instead periodically assesses and summarizes the published scientific literature on climate change.

“The text is likely to change in response to comments from governments received in recent weeks and will also be considered by governments and scientists at a four-day approval session at the end of September,” the panel’s spokesman, Jonathan Lynn, said in a statement Monday. “It is therefore premature and could be misleading to attempt to draw conclusions from it.”

The intergovernmental panel won the Nobel Peace Prize along with Al Gore in 2007 for seeking to educate the world’s citizens about the risks of global warming. But it has also become a political target for climate contrarians, who helped identify several minor errors in the last big report from 2007. This time, the group adopted rigorous procedures in hopes of preventing such mistakes.

On sea level, one of the biggest single worries about climate change, the new report goes well beyond the one from 2007, which largely sidestepped the question of how much the ocean could rise this century.

The new report lays out several scenarios. In the most optimistic, the world’s governments would prove far more successful at getting emissions under control than they have been in the recent past, helping to limit the total warming.

In that circumstance, sea level could be expected to rise as little as 10 inches by the end of the century, the report found. That is a bit more than the eight-inch rise in the 20th century, which proved manageable even though it caused severe erosion along the world’s shorelines.

Read the entire article here.

Image courtesy of the Telegraph.

CSA

No, it’s not another network cop show. CSA began life as community supported agriculture — neighbors buying fresh produce from collectives of local growers and farmers. Now CSA has grown to include art — community supported art — exposing neighbors to local color and creativity.

From the New York Times:

For years, Barbara Johnstone, a professor of linguistics at Carnegie Mellon University here, bought shares in a C.S.A. — a community-supported agriculture program — and picked up her occasional bags of tubers or tomatoes or whatever the member farms were harvesting.

Her farm shares eventually lapsed. (“Too much kale,” she said.) But on a recent summer evening, she showed up at a C.S.A. pickup location downtown and walked out carrying a brown paper bag filled with a completely different kind of produce. It was no good for eating, but it was just as homegrown and sustainable as what she used to get: contemporary art, fresh out of local studios.

“It’s kind of like Christmas in the middle of July,” said Ms. Johnstone, who had just gone through her bag to see what her $350 share had bought. The answer was a Surrealistic aluminum sculpture (of a pig’s jawbone, by William Kofmehl III), a print (a deadpan image appropriated from a lawn-care book, by Kim Beck) and a ceramic piece (partly about slavery, by Alexi Morrissey).

Without even having to change the abbreviation, the C.S.A. idea has fully made the leap from agriculture to art. After the first program started four years ago in Minnesota, demonstrating that the concept worked just as well for art lovers as for locavores, community-supported art programs are popping up all over the country: in Pittsburgh, now in its first year; Miami; Brooklyn; Lincoln, Neb.; Fargo, N.D.

The goal, borrowed from the world of small farms, is a deeper-than-commerce connection between people who make things and people who buy them. The art programs are designed to be self-supporting: Money from shares is used to pay the artists, who are usually chosen by a jury, to produce a small work in an edition of 50 or however many shares have been sold. The shareholders are often taking a leap of faith. They don’t know in advance what the artists will make and find out only at the pickup events, which are as much about getting to know the artists as collecting the fruits of their shares.

The C.S.A.’s have flourished in larger cities as a kind of organic alternative to the dominance of the commercial gallery system and in smaller places as a way to make up for the dearth of galleries, as a means of helping emerging artists and attracting people who are interested in art but feel they have neither the means nor the connections to collect it.

“A lot of our people who bought shares have virtually no real experience with contemporary art,” said Dayna Del Val, executive director of the Arts Partnership in Fargo, which began a C.S.A. last year, selling 50 shares at $300 each for pieces from nine local artists. “They’re going to a big-box store and buying prints of Monet’s ‘Water Lilies,’ if they have anything.”

Read the entire article here.

Image courtesy of Daily Camera.

Citizens and Satellites: SkyTruth

Daily we are reminded how much our world has changed and how it continues to transform. Technology certainly aids those who seek to profit from Earth’s resources, as they drill, cut, dig, and explode. Some use it wisely, while others leave our fragile home covered in scars of pollution and exploitation — often unseen.

For those who care passionately about the planet, satellite surveillance has become an essential tool — in powerful yet unexpected ways.

From the Washington Post:

Somewhere in the South Pacific, thousands of miles from the nearest landfall, there is a fishing ship. Let’s say you’re on it. Go onto the open deck, scream, jump around naked, fire a machine gun into the air — who will ever know? You are about as far from anyone as it is possible to be.

But you know what you should do? You should look up and wave.

Because 438 miles above you, moving at 17,000 miles per hour, a polar-orbiting satellite is taking your photograph. A man named John Amos is looking at you. He knows the name and size of your ship, how fast you’re moving and, perhaps, if you’re dangling a line in the water, what type of fish you’re catching.

Sheesh, you’re thinking, Amos must be some sort of highly placed international official in maritime law. … Nah.

He’s a 50-year-old geologist who heads a tiny nonprofit called SkyTruth in tiny Shepherdstown, W.Va., year-round population 805.

Amos is looking at these ships to monitor illegal fishing in Chilean waters. He’s doing it from a quiet, shaded street, populated mostly with old houses, where the main noises are (a) birds and (b) the occasional passing car. His office, in a one-story building, shares a toilet with a knitting shop.

With a couple of clicks on the keyboard, Amos switches his view from the South Pacific to Tioga County, Pa., where SkyTruth is cataloguing, with a God’s-eye view, the number and size of fracking operations. Then it’s over to Appalachia for a 40-year history of what mountaintop-removal mining has wrought, all through aerial and satellite imagery, 59 counties covering four states.

“You can track anything in the world from anywhere in the world,” Amos is saying, a smile coming into his voice. “That’s the real revolution.”

Amos is, by many accounts, reshaping the postmodern environmental movement. He is among the first, if not the only, scientist to take the staggering array of satellite data that have accumulated over 40 years, turn it into maps with overlays of radar or aerial flyovers, then fan it out to environmental agencies, conservation nonprofit groups and grass-roots activists. This arms the little guys with the best data they’ve ever had to challenge oil, gas, mining and fishing corporations over how they’re changing the planet.

His satellite analysis of the gulf oil spill in 2010, posted on SkyTruth’s Web site, almost single-handedly forced BP and the U.S. government to acknowledge that the spill was far worse than either was saying.

He was the first to document how many Appalachian mountains have been decapitated in mining operations (about 500) because no state or government organization had ever bothered to find out, and no one else had, either. His work was used in the Environmental Protection Agency’s rare decision to block a major new mine in West Virginia, a decision still working its way through the courts.

“John’s work is absolutely cutting-edge,” says Kert Davies, research director of Greenpeace. “No one else in the nonprofit world is watching the horizon, looking for how to use satellite imagery and innovative new technology.”

“I can’t think of anyone else who’s doing what John is,” says Peter Aengst, regional director for the Wilderness Society’s Northern Rockies office.

Amos’s complex maps “visualize what can’t be seen with the human eye — the big-picture, long-term impact of environment damage,” says Linda Baker, executive director of the Upper Green River Alliance, an activist group in Wyoming that has used his work to illustrate the growth of oil drilling.

This distribution of satellite imagery is part of a vast, unparalleled democratization of humanity’s view of the world, an event not unlike cartography in the age of Magellan, the unknowable globe suddenly brought small.

Read the entire article here.

Image: Detail from a September 2012 satellite image of natural gas drilling infrastructure on public lands near Pinedale, Wyoming. Courtesy of SkyTruth.

Earth as the New Venus

New research models show just how precarious our planet’s climate really is. Runaway greenhouse warming would make the predicted 2-6 foot rise in average sea levels over the next 50-100 years seem like a puddle at the local splash pool.

From ars technica:

With the explosion of exoplanet discoveries, researchers have begun to seriously revisit what it takes to make a planet habitable, defined as being able to support liquid water. At a basic level, the amount of light a planet receives sets its temperature. But real worlds aren’t actually basic—they have atmospheres, reflect some of that light back into space, and experience various feedbacks that affect the temperature.

Attempts to incorporate all those complexities into models of other planets have produced some unexpected results. Some even suggest that Earth teeters on the edge of experiencing a runaway greenhouse, one that would see its oceans boil off. The fact that large areas of the planet are covered in ice may make that conclusion seem a bit absurd, but a second paper looks at the problem from a somewhat different angle—and comes to the same conclusion. If it weren’t for clouds and our nitrogen-rich atmosphere, the Earth might be an uninhabitable hell right now.

The new work focuses on a very simple model of an atmosphere: a linear column of nothing but water vapor. This clearly doesn’t capture the complex dynamics of weather and the different amounts of light to reach the poles, but it does include things like the amount of light scattered back out into space and the greenhouse impact of the water vapor. These sorts of calculations are simple enough that they were first done decades ago, but the authors note that this particular problem hadn’t been revisited in 25 years. Our knowledge of how water vapor absorbs both visible and infrared light has improved over that time.

Water vapor, like other greenhouse gasses, allows visible light to reach the surface of a planet, but it absorbs most of the infrared light that gets emitted back toward space. Only a narrow window, centered around 10 micrometer wavelengths, makes it back out to space. Once the incoming energy gets larger than the amount that can escape, the end result is a runaway greenhouse: heat evaporates more surface water, which absorbs more infrared, trapping even more heat. At some point, the atmosphere gets so filled with water vapor that light no longer even reaches the surface, instead getting absorbed by the atmosphere itself.
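
To see why that feedback has no stable endpoint, consider a toy energy-balance iteration. This is a deliberately crude sketch: the Stefan-Boltzmann emission law is standard physics, but the fixed radiation ceiling and the relaxation constant are illustrative assumptions, not the paper’s model.

```python
# Toy model of the runaway feedback described above. The "ceiling" stands
# in for the limit on outgoing infrared imposed by a vapor-opaque
# atmosphere; its value and the relaxation constant are made up.

SIGMA = 5.67e-8  # Stefan-Boltzmann constant, W m^-2 K^-4

def equilibrate(absorbed_wm2, ceiling_wm2, t_kelvin=288.0, steps=500):
    for _ in range(steps):
        # Outgoing radiation cannot exceed the water-vapor ceiling.
        outgoing = min(SIGMA * t_kelvin**4, ceiling_wm2)
        imbalance = absorbed_wm2 - outgoing
        if abs(imbalance) < 0.1:
            return t_kelvin           # budget balances: stable climate
        t_kelvin += 0.05 * imbalance  # warm or cool toward balance
    return float("inf")               # never balances: runaway greenhouse

print(equilibrate(absorbed_wm2=240, ceiling_wm2=282))  # settles near 255 K
print(equilibrate(absorbed_wm2=300, ceiling_wm2=282))  # runaway
```

Once absorbed sunlight exceeds the ceiling, no temperature can close the budget, so the loop never converges; that is the runaway state.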

The model shows that, once temperatures reach 1,800K, a second window through the water vapor opens up at about four microns, which allows additional energy to escape into space. The authors suggest that this could be used when examining exoplanets, as high emissions in this region could be taken as an indication that the planet was undergoing a runaway greenhouse.

The authors also used the model to look at what Earth would be like if it had a cloud-free, water atmosphere. The surprise was that the updated model indicated that this alternate-Earth atmosphere would absorb 30 percent more energy than previous estimates suggested. That’s enough to make a runaway greenhouse atmosphere stable at the Earth’s distance from the Sun.

So, why is the Earth so relatively temperate? The authors added a few additional factors to their model to find out. Additional greenhouse gasses like carbon dioxide and methane made runaway heating more likely, while nitrogen scattered enough light to make it less likely. The net result is that, under an Earth-like atmosphere composition, our planet should experience a runaway greenhouse. (In fact, greenhouse gasses can lower the barrier between a temperate climate and a runaway greenhouse, although only at concentrations much higher than we’ll reach even if we burn all the fossil fuels available.) But we know it hasn’t. “A runaway greenhouse has manifestly not occurred on post-Hadean Earth,” the authors note. “It would have sterilized Earth (there is observer bias).”

So, what’s keeping us cool? The authors suggest two things. The first is that our atmosphere isn’t uniformly saturated with water; some areas are less humid and allow more heat to radiate out into space. The other factor is the existence of clouds. Depending on their properties, clouds can either insulate or reflect sunlight back into space. On balance, however, it appears they are key to keeping our planet’s climate moderate.

But clouds won’t help us out indefinitely. Long before the Sun expands and swallows the Earth, the amount of light it emits will rise enough to make a runaway greenhouse more likely. The authors estimate that, with an all-water atmosphere, we’ve got about 1.5 billion years until the Earth is sterilized by skyrocketing temperatures. If other greenhouse gasses are present, then that day will come even sooner.

The authors don’t expect that this will be the last word on exoplanet conditions—in fact, they revisited waterlogged atmospheres in the hopes of stimulating greater discussion of them. But the key to understanding exoplanets will ultimately involve adapting the planetary atmospheric models we’ve built to understand the Earth’s climate. With full, three-dimensional circulation of the atmosphere, these models can provide a far more complete picture of the conditions that could prevail under a variety of circumstances. Right now, they’re specialized to model the Earth, but work is underway to change that.

Read the entire article here.

Image: Venus shrouded in perennial clouds of carbon dioxide, sulfur dioxide and sulfuric acid, as seen by the MESSENGER probe during its Venus flyby. Courtesy of Wikipedia.

MondayMap: Feeding the Mississippi

The system of streams and tributaries that feeds the great Mississippi river is a complex interconnected web covering more than a third of the United States. A new mapping tool puts it all in one intricate chart.

From Slate:

A new online tool released by the Department of the Interior this week allows users to select any major stream and trace it up to its sources or down to its watershed. The above map, exported from the tool, highlights all the major tributaries that feed into the Mississippi River, illustrating the river’s huge catchment area of approximately 1.15 million square miles, or 37 percent of the land area of the continental U.S. Use the tool to see where the streams around you are getting their water (and pollution).
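
Under the hood, a trace like this is straightforward graph traversal: each stream segment records the segment it flows into, and finding a river’s sources means walking that graph in reverse. The sketch below uses a tiny invented network, not the real hydrography data behind the tool.

```python
# Tracing a river to its sources as a reverse graph walk.
# The network here is a toy subset, not real hydrography data.
from collections import defaultdict

# flows_into[segment] = the downstream segment it empties into
flows_into = {
    "Missouri": "Mississippi (lower)",
    "Ohio": "Mississippi (lower)",
    "Arkansas": "Mississippi (lower)",
    "Mississippi (upper)": "Mississippi (lower)",
    "Platte": "Missouri",
    "Yellowstone": "Missouri",
    "Tennessee": "Ohio",
}

# Invert the mapping so we can walk upstream from any segment.
upstream = defaultdict(list)
for source, mouth in flows_into.items():
    upstream[mouth].append(source)

def trace_up(segment, depth=0):
    """Depth-first walk printing every tributary above `segment`."""
    print("  " * depth + segment)
    for tributary in sorted(upstream[segment]):
        trace_up(tributary, depth + 1)

trace_up("Mississippi (lower)")
```

A downstream trace is the same idea in the other direction: follow `flows_into` from your starting segment until you reach the sea.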

See a larger version of the map here.

Image: Map of the Mississippi river system. Courtesy of Nationalatlas.gov.

Helping the Honeybees

Agricultural biotechnology giant Monsanto is joining efforts to help the honeybee. Honeybees the world over have been suffering from a widespread and catastrophic condition often referred to as colony collapse disorder.

From Technology Review:

Beekeepers are desperately battling colony collapse disorder, a complex condition that has been killing bees in large swaths and could ultimately have a massive effect on people, since honeybees pollinate a significant portion of the food that humans consume.

A new weapon in that fight could be RNA molecules that kill a troublesome parasite by disrupting the way its genes are expressed. Monsanto and others are developing the molecules as a means to kill the parasite, a mite that feeds on honeybees.

The killer molecule, if it proves to be efficient and passes regulatory hurdles, would offer welcome respite. Bee colonies have been dying in alarming numbers for several years, and many factors are contributing to this decline. But while beekeepers struggle with malnutrition, pesticides, viruses, and other issues in their bee stocks, one problem that seems to be universal is the Varroa mite, an arachnid that feeds on the blood of developing bee larvae.

“Hives can survive the onslaught of a lot of these insults, but with Varroa, they can’t last,” says Alan Bowman, a University of Aberdeen molecular biologist in Scotland, who is studying gene silencing as a means to control the pest.

The Varroa mite debilitates colonies by hampering the growth of young bees and increasing the lethality of the viruses that it spreads. “Bees can quite happily survive with these viruses, but now, in the presence of Varroa, these viruses become lethal,” says Bowman. Once a hive is infested with Varroa, it will die within two to four years unless a beekeeper takes active steps to control it, he says.

One of the weapons beekeepers can use is a pesticide that kills mites, but “there’s always the concern that mites will become resistant to the very few mitocides that are available,” says Tom Rinderer, who leads research on honeybee genetics at the U.S. Department of Agriculture Research Service in Baton Rouge, Louisiana. And new pesticides to kill mites are not easy to come by, in part because mites and bees are found in neighboring branches of the animal tree. “Pesticides are really difficult for chemical companies to develop because of the relatively close relationship between the Varroa and the bee,” says Bowman.

RNA interference could be a more targeted and effective way to combat the mites. It is a natural process in plants and animals that normally defends against viruses and potentially dangerous bits of DNA that move within genomes. Based upon their nucleotide sequence, interfering RNAs signal the destruction of the specific gene products, thus providing a species-specific self-destruct signal. In recent years, biologists have begun to explore this process as a possible means to turn off unwanted genes in humans (see “Gene-Silencing Technique Targets Scarring”) and to control pests in agricultural plants (see “Crops that Shut Down Pests’ Genes”).  Using the technology to control pests in agricultural animals would be a new application.

In 2011 Monsanto, the maker of herbicides and genetically engineered seeds, bought an Israeli company called Beeologics, which had developed an RNA interference technology that can be fed to bees through sugar water. The idea is that when a nurse bee spits this sugar water into each cell of a honeycomb where a queen bee has laid an egg, the resulting larvae will consume the RNA interference treatment. With the right sequence in the interfering RNA, the treatment will be harmless to the larvae, but when a mite feeds on it, the pest will ingest its own self-destruct signal.

The RNA interference technology would not be carried from generation to generation. “It’s a transient effect; it’s not a genetically modified organism,” says Bowman.

Monsanto says it has identified a few self-destruct triggers to explore by looking at genes that are fundamental to the biology of the mite. “Something in reproduction or egg laying or even just basic housekeeping genes can be a good target provided they have enough difference from the honeybee sequence,” says Greg Heck, a researcher at Monsanto.
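
That screening criterion is easy to illustrate. Everything below is invented: the sequences are placeholders rather than real Varroa or honeybee genes, and exact substring matching is a crude stand-in for how RNA interference actually recognizes its targets.

```python
# Crude sketch of screening an RNAi trigger for species specificity.
# Sequences are invented placeholders, not real Varroa or Apis genes.

MITE_TARGET = "ATGGCGTACGTTAGCCGATTACGGATCCTTAA"   # hypothetical mite gene
BEE_TRANSCRIPTS = [
    "ATGGCGTACCTTAGACGATTACGGCTCCTTAA",            # hypothetical bee genes
    "ATGAAACCCGGGTTTACGTAGCATCGATCGAT",
]

def silences(trigger, transcript):
    """Stand-in for RNAi targeting: the trigger must match a
    contiguous stretch of the transcript exactly."""
    return trigger in transcript

candidate = "GTTAGCCGATTACGGA"  # 16-mer lifted from the mite gene above

hits_mite = silences(candidate, MITE_TARGET)
spares_bees = not any(silences(candidate, t) for t in BEE_TRANSCRIPTS)
print("silences the mite target:", hits_mite)
print("spares every bee transcript:", spares_bees)
print("passes the crude screen:", hits_mite and spares_bees)
```

A real pipeline would scan whole genomes and tolerate partial matches, but the logic is the same: hit the pest, miss the host.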

Read the entire article here.

Image: Honeybee, Apis mellifera. Courtesy of Wikipedia.

MondayMap: U.S. Interstate Highway System

It’s summer, which means lots of people driving every-which-way for family vacations.

So, this is a good time to refresh your memory with a map of the arteries that distribute lifeblood across the United States — the U.S. Interstate Highway System. The network of highways, stretching around 46,800 miles from coast to coast, is sometimes referred to as the Eisenhower Interstate System. President Eisenhower signed the Federal-Aid Highway Act on June 29, 1956, making the current system possible.

Thus the father of the Interstate System is also responsible for the never-ending choruses of: “are we there yet?”, “how much further?”, “I need to go to the bathroom”, and “can we stop at the next Starbucks (from the adults) / McDonalds (from the kids)?”.

Get a full-size map here.

Map courtesy of WikiCommons.

Surveillance, British Style

While the revelations about the National Security Agency (NSA) snooping on private communications of U.S. citizens are extremely troubling, the situation could be much worse. Cast a sympathetic thought toward Her Majesty’s subjects in the United Kingdom of Great Britain and Northern Ireland, where almost everyone eavesdrops on everyone else. While the island nation of 60 million covers roughly the same area as Michigan, it is swathed in over 4 million CCTV (closed circuit television) surveillance cameras.

From Slate:

We adore the English here in the States. They’re just so precious! They call traffic circles “roundabouts,” prostitutes “prozzies,” and they have a queen. They’re ever so polite and carry themselves with such admirable poise. We love their accents so much, we use them in historical films to give them a bit more gravitas. (Just watch The Last Temptation of Christ to see what happens when we don’t: Judas doesn’t sound very intimidating with a Brooklyn accent.)

What’s not so cute is the surveillance society they’ve built—but the U.S. government seems pretty enamored with it.

The United Kingdom is home to an intense surveillance system. Most of the legal framework for this comes from the Regulation of Investigatory Powers Act, which dates all the way back to the year 2000. RIPA is meant to support criminal investigation, preventing disorder, public safety, public health, and, of course, “national security.” If this extremely broad application of law seems familiar, it should: The United States’ own PATRIOT Act is remarkably similar in scope and application. Why should the United Kingdom have the best toys, after all?

This is one of the problems with being the United Kingdom’s younger sibling. We always want what Big Brother has. Unless it’s soccer. Wiretaps, though? We just can’t get enough!

The PATRIOT Act, broad as it is, doesn’t match RIPA’s incredible wiretap allowances. In 1994, the United States passed the Communications Assistance for Law Enforcement Act, which mandated that service providers give the government “technical assistance” in the use of wiretaps. RIPA goes a step further and insists that wiretap capability be implemented right into the system. If you’re a service provider and can’t set up plug-and-play wiretap capability within a short time, Johnny English comes knocking at your door to say, ” ‘Allo, guvna! I ‘ear tell you ‘aven’t put in me wiretaps yet. Blimey! We’ll jus’ ‘ave to give you a hefty fine! Ods bodkins!” Wouldn’t that be awful (the law, not the accent)? It would, and it’s just what the FBI is hoping for. CALEA is getting a rewrite that, if it passes, would give the FBI that very capability.

I understand. Older siblings always get the new toys, and it’s only natural that we want to have them as well. But why does it have to be legal toys for surveillance? Why can’t it be chocolate? The United Kingdom enjoys chocolate that’s almost twice as good as American chocolate. Literally, they get 20 percent solid cocoa in their chocolate bars, while we suffer with a measly 11 percent. Instead, we’re learning to shut off the Internet for entire families.

That’s right. In the United Kingdom, if you are just suspected of having downloaded illegally obtained material three times (it’s known as the “three strikes” law), your Internet is cut off. Not just for you, but for your entire household. Life without the Internet, let’s face it, sucks. You’re not just missing out on videos of cats falling into bathtubs. You’re missing out on communication, jobs, and being a 21st-century citizen. Maybe this is OK in the United Kingdom because you can move up north, become a farmer, and enjoy a few pints down at the pub every night. Or you can just get a new ISP, because the United Kingdom actually has a competitive market for ISPs. The United States, as an homage, has developed the so-called “copyright alert system.” It works much the same way as the U.K. law, but it provides for six “strikes” instead of three and has a limited appeals system, in which the burden of proof lies on the suspected customer. In the United States, though, the rights-holders monitor users for suspected copyright infringement on their own, without the aid of ISPs. So far, we haven’t adopted the U.K. system in which ISPs are expected to monitor traffic and dole out their three strikes at their discretion.

These are examples of more targeted surveillance of criminal activities, though. What about untargeted mass surveillance? On June 21, one of Edward Snowden’s leaks revealed that the Government Communications Headquarters, the United Kingdom’s NSA equivalent, has been engaging in a staggering amount of data collection from civilians. This development generated far less fanfare than the NSA news, perhaps because the legal framework for this data collection has existed for a very long time under RIPA, and we expect surveillance in the United Kingdom. (Or maybe Americans were just living down to the stereotype of not caring about other countries.) The NSA models follow the GCHQ’s very closely, though, right down to the oversight, or lack thereof.

Media have labeled the FISA court that regulates the NSA’s surveillance as a “rubber-stamp” court, but it’s no match for the omnipotence of the Investigatory Powers Tribunal, which manages oversight for MI5, MI6, and the GCHQ. The Investigatory Powers Tribunal is exempt from the United Kingdom’s Freedom of Information Act, so it doesn’t have to share a thing about its activities (FISA apparently does not have this luxury—yet). On top of that, members of the tribunal are appointed by the queen. The queen. The one with the crown who has jubilees and a castle and probably a court wizard. Out of 956 complaints to the Investigatory Powers Tribunal, five have been upheld. Now that’s a rubber-stamp court we can aspire to!

Or perhaps not. The future of U.S. surveillance looks very grim if we’re set on following the U.K.’s lead. Across the United Kingdom, an estimated 4.2 million CCTV cameras, some with facial-recognition capability, keep watch on nearly the entire nation. (This can lead to some Monty Python-esque high jinks.) Washington, D.C., took its first step toward strong camera surveillance in 2008, when several thousand were installed ahead of President Obama’s inauguration.

Read the entire article here.

Image: Royal coat of arms of Queen Elizabeth II of the United Kingdom, as used in England and Wales, and Scotland. Courtesy of Wikipedia.

United States of Strange

With the United States turning another year older, we are reminded to ponder some of the lesser-known components of this beautiful yet paradoxical place. All nations have their esoteric cultural wonders and benign local oddities: the British have bowler hats, kilts (courtesy of the Scots) and the Royal Family; the Italians have Vespas and governments that last, on average, eight months; the French, well, they’re just French; the Germans love fast cars and lederhosen. But for sheer variety and volume the United States probably surpasses them all in extreme absurdity.

From the Telegraph:

Run by the improbably named Genghis Cohen, Machine Gun Vegas bills itself as the ‘world’s first luxury gun lounge’. It opened last year, and claims to combine “the look and feel of an ultra-lounge with the functionality of a state of the art indoor gun range”. The team of NRA-certified on-site instructors, however, may be its most distinctive appeal. All are female, and all are ex-US military personnel.

See other images and read the entire article here.

Image courtesy of the Telegraph.

Circadian Rhythm in Vegetables

The vegetables you eat may be better for you based on how and when they are exposed to light. Just as animals adhere to circadian rhythms, research shows that some plants may generate different levels of healthy nutritional metabolites based on the light cycle as well.

From ars technica:

When you buy vegetables at the grocery store, they are usually still alive. When you lock your cabbage and carrots in the dark recess of the refrigerator vegetable drawer, they are still alive. They continue to metabolize while we wait to cook them.

Why should we care? Well, plants that are alive adjust to the conditions surrounding them. Researchers at Rice University have shown that some plants have circadian rhythms, adjusting their production of certain chemicals based on their exposure to light and dark cycles. Understanding and exploiting these rhythms could help us maximize the nutritional value of the vegetables we eat.

According to Janet Braam, a professor of biochemistry at Rice, her team’s initial research looked at how Arabidopsis, a common plant model for scientists, responded to light cycles. “It adjusts its defense hormones before the time of day when insects attack,” Braam said. Arabidopsis is in the same plant family as the cruciferous vegetables—broccoli, cabbage, and kale—so Braam and her colleagues decided to look for a similar light response in our foods.

They bought some grocery store cabbage and brought it back to the lab so they could subject the cabbage to the same tests they gave their model plant, which involved offering up living, leafy vegetables to a horde of hungry caterpillars. First, half the cabbages were exposed to a normal light and dark cycle, the same schedule as the caterpillars, while the other half were exposed to the opposite light cycle.

The caterpillars tend to feed in the late afternoon, according to Braam, so the light signals the plants to increase production of glucosinolates, a chemical that the insects don’t like. The study found that cabbages that adjusted to the normal light cycle had far less insect damage than the jet-lagged cabbages.

While it’s cool to know that cabbages are still metabolizing away and responding to light stimulus days after harvest, Braam said that this process could affect the nutritional value of the cabbage. “We eat cabbage, in part, because these glucosinolates are anti-cancer compounds,” Braam said.

Glucosinolates are only found in the cruciferous vegetable family, but the Rice team wanted to see if other vegetables demonstrated similar circadian rhythms. They tested spinach, lettuce, zucchini, blueberries, carrots, and sweet potatoes. “Luckily, our caterpillar isn’t picky,” Braam said. “It’ll eat just about anything.”

Just like with the cabbage, the caterpillars ate far less of the vegetables trained on the normal light schedule. Even the fruits and roots increased production of some kind of anti-insect compound in response to light stimulus.

Metabolites affected by circadian rhythms could include vitamins and antioxidants. The Rice team is planning follow-up research to begin exploring how the cycling phenomenon affects known nutrients and whether the magnitude of the shifts is large enough to have an impact on our diets. “We’ve uncovered some very basic stimuli, but we haven’t yet figured out how to amplify that for human nutrition,” Braam said.

Read the entire article here.

Sci-Fi Begets Cli-Fi

The world of fiction is populated with hundreds of different genres — most of which were invented by clever marketeers anxious to ensure vampire novels (teen / horror) don’t live next to classic works (literary) on real or imagined (think Amazon) bookshelves. So, it should come as no surprise to see a new category recently emerge: cli-fi.

Short for climate fiction, cli-fi novels explore the dangers of environmental degradation and apocalyptic climate change. Not light reading for your summer break at the beach. But, then again, more books in this category may get us to think often and carefully about preserving our beaches — and the rest of the planet — for our kids.

From the Guardian:

A couple of days ago Dan Bloom, a freelance news reporter based in Taiwan, wrote on the Teleread blog that his word had been stolen from him. In 2012 Bloom had “produced and packaged” a novella called Polar City Red, about climate refugees in a post-apocalyptic Alaska in the year 2075. Bloom labelled the book “cli-fi” in the press release and says he coined that term in 2007, cli-fi being short for “climate fiction”, described as a sub-genre of sci-fi. Polar City Red bombed, selling precisely 271 copies, until National Public Radio (NPR) and the Christian Science Monitor picked up on the term cli-fi last month, writing Bloom out of the story. So Bloom has blogged his reply on Teleread, saying he’s simply pleased the term is now out there – it has gone viral since the NPR piece by Scott Simon. It’s not quite as neat as that – in recent months the term has been used increasingly in literary and environmental circles – but there’s no doubt it has broken out more widely. You can search for cli-fi on Amazon, instantly bringing up a plethora of books with titles such as 2042: The Great Cataclysm, or Welcome to the Greenhouse. Twitter has been abuzz.

Whereas 10 or 20 years ago it would have been difficult to identify even a handful of books that fell under this banner, there is now a growing corpus of novels setting out to warn readers of possible environmental nightmares to come. Barbara Kingsolver’s Flight Behaviour, the story of a forest valley filled with an apparent lake of fire, is shortlisted for the 2013 Women’s prize for fiction. Meanwhile, there’s Nathaniel Rich’s Odds Against Tomorrow, set in a future New York, about a mathematician who deals in worst-case scenarios. In Liz Jensen’s 2009 eco-thriller The Rapture, summer temperatures are asphyxiating and Armageddon is near; her most recent book, The Uninvited, features uncanny warnings from a desperate future. Perhaps the most high-profile cli-fi author is Margaret Atwood, whose 2009 The Year of the Flood features survivors of a biological catastrophe also central to her 2003 novel Oryx and Crake, a book Atwood sometimes preferred to call “speculative fiction”.

Engaging with this subject in fiction increases debate about the issue; finely constructed, intricate narratives help us broaden our understanding and explore imagined futures, encouraging us to think about the kind of world we want to live in. This can often seem difficult in our 24-hour news-on-loop society where the consequences of climate change may appear to be everywhere, but intelligent discussion of it often seems to be nowhere. Also, as the crime genre can provide the dirty thrill of, say, reading about a gruesome fictional murder set on a street the reader recognises, the best cli-fi novels allow us to be briefly but intensely frightened: climate chaos is closer, more immediate, hovering over our shoulder like that murderer wielding his knife. Outside of the narrative of a novel the issue can seem fractured, incoherent, even distant. As Gregory Norminton puts it in his introduction to an anthology on the subject, Beacons: Stories for Our Not-So-Distant Future: “Global warming is a predicament, not a story. Narrative only comes in our response to that predicament.” Which is as good an argument as any for engaging with those stories.

All terms are reductive, all labels simplistic – clearly, the likes of Kingsolver, Jensen and Atwood have a much broader canvas than this one issue. And there’s an argument for saying this is simply rebranding: sci-fi writers have been engaging with the climate-change debate for longer than literary novelists – Snow by Adam Roberts comes to mind – and I do wonder whether this is a term designed for squeamish writers and critics who dislike the box labelled “science fiction”. So the term is certainly imperfect, but it’s also valuable. Unlike sci-fi, cli-fi writing comes primarily from a place of warning rather than discovery. There are no spaceships hovering in the sky; no clocks striking 13. On the contrary, many of the horrors described seem oddly familiar.

Read the entire article after the jump.

Image: Aftermath of Superstorm Sandy. Courtesy of the Independent.

Self-Assured Destruction (SAD)

The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.

Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation and unceasing human stupidity, we have reached an era that we have dubbed SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, in the era of SAD, it takes only one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.

From TomDispatch:

What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.

For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.

And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.

The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.

There have been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and first nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.

In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”

Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.

Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.

So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.

Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.

And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.

So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.

The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.

We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.

What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.

So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.

Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.

And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.

Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.

Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.

Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.

At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.

Read the entire article here.

Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.

Living Long and Prospering on Ikaria

It’s safe to suggest that most of us above a certain age — let’s say 30 — wish to stay young. It is also safer to suggest, in the absence of a solution to this first wish, that many of us wish to age gracefully and happily. Yet most of us, especially in the West, age in a less dignified manner, amid colorful medicines, lengthy tubes, and unpronounceable procedures. We are collectively living longer, but the quality of those extra years leaves much to be desired.

In a quest to understand the process of aging more thoroughly, researchers regularly descend on areas of the world known to have higher-than-average populations of healthy older people. These have become known as “Blue Zones”. One such place is a small, idyllic (there’s a clue right there) Greek island called Ikaria.

From the Guardian:

Gregoris Tsahas has smoked a packet of cigarettes every day for 70 years. High up in the hills of Ikaria, in his favourite cafe, he draws on what must be around his half-millionth fag. I tell him smoking is bad for the health and he gives me an indulgent smile, which suggests he’s heard the line before. He’s 100 years old and, aside from appendicitis, has never known a day of illness in his life.

Tsahas has short-cropped white hair, a robustly handsome face and a bone-crushing handshake. He says he drinks two glasses of red wine a day, but on closer interrogation he concedes that, like many other drinkers, he has underestimated his consumption by a couple of glasses.

The secret of a good marriage, he says, is never to return drunk to your wife. He’s been married for 60 years. “I’d like another wife,” he says. “Ideally one about 55.”

Tsahas is known at the cafe as a bit of a gossip and a joker. He goes there twice a day. It’s a 1km walk from his house over uneven, sloping terrain. That’s four hilly kilometres a day. Not many people half his age manage that far in Britain.

In Ikaria, a Greek island in the far east of the Mediterranean, about 30 miles from the Turkish coast, characters such as Gregoris Tsahas are not exceptional. With its beautiful coves, rocky cliffs, steep valleys and broken canopy of scrub and olive groves, Ikaria looks similar to any number of other Greek islands. But there is one vital difference: people here live much longer than the population on other islands and on the mainland. In fact, people here live on average 10 years longer than those in the rest of Europe and America – around one in three Ikarians lives into their 90s. Not only that, but they also have much lower rates of cancer and heart disease, suffer significantly less depression and dementia, maintain a sex life into old age and remain physically active deep into their 90s. What is the secret of Ikaria? What do its inhabitants know that the rest of us don’t?

The island is named after Icarus, the young man in Greek mythology who flew too close to the sun and plunged into the sea, according to legend, close to Ikaria. Thoughts of plunging into the sea are very much in my mind as the propeller plane from Athens comes in to land. There is a fierce wind blowing – the island is renowned for its wind – and the aircraft appears to stall as it turns to make its final descent, tipping this way and that until, at the last moment, the pilot takes off upwards and returns to Athens. Nor are there any ferries, owing to a strike. “They’re always on strike,” an Athenian back at the airport tells me.

Stranded in Athens for the night, I discover that a fellow thwarted passenger is Dan Buettner, author of a book called The Blue Zones, which details the five small areas in the world where the population outlive the American and western European average by around a decade: Okinawa in Japan, Sardinia, the Nicoya peninsula in Costa Rica, Loma Linda in California and Ikaria.

Tall and athletic, 52-year-old Buettner, who used to be a long-distance cyclist, looks a picture of well-preserved youth. He is a fellow with National Geographic magazine and became interested in longevity while researching Okinawa’s aged population. He tells me there are several other passengers on the plane who are interested in Ikaria’s exceptional demographics. “It would have been ironic, don’t you think,” he notes drily, “if a group of people looking for the secret of longevity crashed into the sea and died.”

Chatting to locals on the plane the following day, I learn that several have relations who are centenarians. One woman says her aunt is 111. The problem for demographers with such claims is that they are often very difficult to stand up. Going back to Methuselah, history is studded with exaggerations of age. In the last century, longevity became yet another battleground in the cold war. The Soviet authorities let it be known that people in the Caucasus were living deep into their hundreds. But subsequent studies have shown these claims lacked evidential foundation.

Since then, various societies and populations have reported advanced ageing, but few are able to supply convincing proof. “I don’t believe Korea or China,” Buettner says. “I don’t believe the Hunza Valley in Pakistan. None of those places has good birth certificates.”

However, Ikaria does. It has also been the subject of a number of scientific studies. Aside from the demographic surveys that Buettner helped organise, there was also the University of Athens’ Ikaria Study. One of its members, Dr Christina Chrysohoou, a cardiologist at the university’s medical school, found that the Ikarian diet featured a lot of beans and not much meat or refined sugar. The locals also feast on locally grown and wild greens, some of which contain 10 times more antioxidants than are found in red wine, as well as potatoes and goat’s milk.

Chrysohoou thinks the food is distinct from that eaten on other Greek islands with lower life expectancy. “Ikarians’ diet may have some differences from other islands’ diets,” she says. “The Ikarians drink a lot of herb tea and small quantities of coffee; daily calorie consumption is not high. Ikaria is still an isolated island, without tourists, which means that, especially in the villages in the north, where the highest longevity rates have been recorded, life is largely unaffected by the westernised way of living.”

But she also refers to research that suggests the Ikarian habit of taking afternoon naps may help extend life. One extensive study of Greek adults showed that regular napping reduced the risk of heart disease by almost 40%. What’s more, Chrysohoou’s preliminary studies revealed that 80% of Ikarian males between the ages of 65 and 100 were still having sex. And, of those, a quarter did so with “good duration” and “achievement”. “We found that most males between 65 and 88 reported sexual activity, but after the age of 90, very few continued to have sex.”

Read the entire article here.

Image: Agios Giorgis Beach, Ikaria. Courtesy of Island-Ikaria travel guide.

MondayMap: The Double Edge of Climate Change

So the changing global climate will imperil our coasts, flood low-lying lands, fuel more droughts, increase weather extremes, and generally make the planet more toasty. But a new study — for the first time — links increasing levels of CO2 to an increase in global vegetation. Perhaps this portends our eventual fate — ceding the Earth back to the plants — unless humans make some drastic behavioral changes.

From the New Scientist:

The planet is getting lusher, and we are responsible. Carbon dioxide generated by human activity is stimulating photosynthesis and causing a beneficial greening of the Earth’s surface.

For the first time, researchers claim to have shown that the increase in plant cover is due to this “CO2 fertilisation effect” rather than other causes. However, it remains unclear whether the effect can counter any negative consequences of global warming, such as the spread of deserts.

Recent satellite studies have shown that the planet is harbouring more vegetation overall, but pinning down the cause has been difficult. Factors such as higher temperatures, extra rainfall, and an increase in atmospheric CO2 – which helps plants use water more efficiently – could all be boosting vegetation.

To home in on the effect of CO2, Randall Donohue of Australia’s national research institute, the CSIRO in Canberra, monitored vegetation at the edges of deserts in Australia, southern Africa, the US Southwest, North Africa, the Middle East and central Asia. These are regions where there is ample warmth and sunlight, but only just enough rainfall for vegetation to grow, so any change in plant cover must be the result of a change in rainfall patterns or CO2 levels, or both.

If CO2 levels were constant, then the amount of vegetation per unit of rainfall ought to be constant, too. However, the team found that this figure rose by 11 per cent in these areas between 1982 and 2010, mirroring the rise in CO2 (Geophysical Research Letters, doi.org/mqx). Donohue says this lends “strong support” to the idea that CO2 fertilisation drove the greening.
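
The metric itself is simple arithmetic: divide observed green cover by rainfall and compare the ratio across the record. The numbers below are invented, chosen only so that the change lands near the reported 11 per cent.

```python
# Vegetation cover per unit of rainfall, start vs. end of the record.
# Invented illustrative numbers, not Donohue's satellite data.
cover_1982, rain_1982 = 0.18, 320.0   # fractional cover, mm of rain/year
cover_2010, rain_2010 = 0.21, 335.0

ratio_1982 = cover_1982 / rain_1982
ratio_2010 = cover_2010 / rain_2010
change_pct = (ratio_2010 / ratio_1982 - 1.0) * 100.0

# If CO2 had no effect, this ratio should have stayed roughly flat.
print(f"cover per unit of rainfall changed by {change_pct:+.1f}%")
```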

Climate change studies have predicted that many dry areas will get drier and that some deserts will expand. Donohue’s findings make this less certain.

However, the greening effect may not apply to the world’s driest regions. Beth Newingham of the University of Idaho, Moscow, recently published the result of a 10-year experiment involving a greenhouse set up in the Mojave desert of Nevada. She found “no sustained increase in biomass” when extra CO2 was pumped into the greenhouse. “You cannot assume that all these deserts respond the same,” she says. “Enough water needs to be present for the plants to respond at all.”

The extra plant growth could have knock-on effects on climate, Donohue says, by increasing rainfall, affecting river flows and changing the likelihood of wildfires. It will also absorb more CO2 from the air, potentially damping down global warming but also limiting the CO2 fertilisation effect itself.

Read the entire article here.

Image: Global vegetation mapped: Normalized Difference Vegetation Index (NDVI) from Nov. 1, 2007, to Dec. 1, 2007, during autumn in the Northern Hemisphere. This monthly average is based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The greenness values depict vegetation density; higher values (dark greens) show land areas with plenty of leafy green vegetation, such as the Amazon Rainforest. Lower values (beige to white) show areas with little or no vegetation, including sand seas and Arctic areas. Areas with moderate amounts of vegetation are pale green. Land areas with no data appear gray, and water appears blue. Courtesy of NASA.

Your Home As Eco-System

For centuries biologists, zoologists and ecologists have been mapping the wildlife that surrounds us in the great outdoors. Now a group led by microbiologist Noah Fierer at the University of Colorado Boulder is pursuing flora and fauna in one of the last unexplored eco-systems — the home. (Not for the faint of heart).

From the New York Times:

On a sunny Wednesday, with a faint haze hanging over the Rockies, Noah Fierer eyed the field site from the back of his colleague’s Ford Explorer. Two blocks east of a strip mall in Longmont, one of the world’s last underexplored ecosystems had come into view: a sandstone-colored ranch house, code-named Q. A pair of dogs barked in the backyard.

Dr. Fierer, 39, a microbiologist at the University of Colorado Boulder and self-described “natural historian of cooties,” walked across the front lawn and into the house, joining a team of researchers inside. One swabbed surfaces with sterile cotton swabs. Others logged the findings from two humming air samplers: clothing fibers, dog hair, skin flakes, particulate matter and microbial life.

Ecologists like Dr. Fierer have begun peering into an intimate, overlooked world that barely existed 100,000 years ago: the great indoors. They want to know what lives in our homes with us and how we “colonize” spaces with other species — viruses, bacteria, microbes. Homes, they’ve found, contain identifiable ecological signatures of their human inhabitants. Even dogs exert a significant influence on the tiny life-forms living on our pillows and television screens. Once ecologists have more thoroughly identified indoor species, they hope to come up with strategies to scientifically manage homes, by eliminating harmful taxa and fostering species beneficial to our health.

But the first step is simply to take a census of what’s already living with us, said Dr. Fierer; only then can scientists start making sense of their effects. “We need to know what’s out there first. If you don’t know that, you’re wandering blind in the wilderness.”

Here’s an undeniable fact: We are an indoor species. We spend close to 90 percent of our lives in drywalled caves. Yet traditionally, ecologists ventured outdoors to observe nature’s biodiversity, in the Amazon jungles, the hot springs of Yellowstone or the subglacial lakes of Antarctica. (“When you train as an ecologist, you imagine yourself tromping around in the forest,” Dr. Fierer said. “You don’t imagine yourself swabbing a toilet seat.”)

But as humdrum as a home might first appear, it is a veritable wonderland. Ecology does not stop at the front door; a home to you is also home to an incredible array of wildlife.

Besides the charismatic fauna commonly observed in North American homes — dogs, cats, the occasional freshwater fish — ants and roaches, crickets and carpet bugs, mites and millions upon millions of microbes, including hundreds of multicellular species and thousands of unicellular species, also thrive in them. The “built environment” doubles as a complex ecosystem that evolves under the selective pressure of its inhabitants, their behavior and the building materials. As microbial ecologists swab DNA from our homes, they’re creating an atlas of life much as 19th-century naturalists like Alfred Russel Wallace once logged flora and fauna on the Malay Archipelago.

Take an average kitchen. In a study published in February in the journal Environmental Microbiology, Dr. Fierer’s lab examined 82 surfaces in four Boulder kitchens. Predictable patterns emerged. Bacterial species associated with human skin, like Staphylococcaceae or Corynebacteriaceae, predominated. Evidence of soil showed up on the floor, and species associated with raw produce (Enterobacteriaceae, for example) appeared on countertops. Microbes common in moist areas — including sphingomonads, some strains infamous for their ability to survive in the most toxic sites — splashed in a kind of jungle above the faucet.

A hot spot of unrivaled biodiversity was discovered on the stove exhaust vent, probably the result of forced air and settling. The counter and refrigerator, places seemingly as disparate as temperate and alpine grasslands, shared a similar assemblage of microbial species — probably less because of temperature and more a consequence of cleaning. Dr. Fierer’s lab also found a few potential pathogens, like Campylobacter, lurking on the cupboards. There was evidence of the bacterium on a microwave panel, too, presumably a microbial “fingerprint” left by a cook handling raw chicken.

If a kitchen represents a temperate forest, few of its plants would be poison ivy. Most of the inhabitants are relatively benign. In any event, eradicating them is neither possible nor desirable. Dr. Fierer wants to make visible this intrinsic, if unseen, aspect of everyday life. “For a lot of the general public, they don’t care what’s in soil,” he said. “People care more about what’s on their pillowcase.” (Spoiler alert: The microbes living on your pillowcase are not all that different from those living on your toilet seat. Both surfaces come in regular contact with exposed skin.)

Read the entire article after the jump.

Image: Animals commonly found in the home. Courtesy of North Carolina State University.

Your State Bird

The official national bird of the United States is the Bald Eagle. For that matter, it’s also the official national animal. Thankfully it was removed from the endangered species list a mere five years ago. Aside from the bird itself, Americans love the symbolism the eagle carries — strength, speed, leadership and achievement. But do Americans know their state birds? A recent article from the bird-lovers over at Slate will refresh your memory, and also recommend a more relevant alternative for each state.

From Slate:

I drove over a bridge from Maryland into Virginia today and on the big “Welcome to Virginia” sign was an image of the state bird, the northern cardinal—with a yellow bill. I should have scoffed, but it hardly registered. Everyone knows that state birds are a big joke. There are a million cardinals, a scattering of robins, and just a general lack of thought put into the whole thing.

States should have to put more thought into their state bird than I put into picking my socks in the morning. “Ugh, state bird? I dunno, what’re the guys next to us doing? Cardinal? OK, let’s do that too. Yeah put it on all the signs. Nah, no time to research the bill color, let’s just go.” It’s the official state bird! Well, since all these jackanape states are too busy passing laws requiring everyone to own guns or whatever to consider what their state bird should be, I guess I’ll have to do it.

1. Alabama. Official state bird: yellowhammer

Right out of the gate with this thing. Yellowhammer? C’mon. I Asked Jeeves and it told me that Yellowhammer is some backwoods name for a yellow-shafted flicker. The origin story dates to the Civil War, when some Alabama troops wore yellow-trimmed uniforms. Sorry, but that’s dumb, mostly because it’s just a coincidence and has nothing to do with the actual bird. If you want a woodpecker, go for something with a little more cachet, something that’s at least a full species.

What it should be: red-cockaded woodpecker

2. Alaska. Official state bird: willow ptarmigan

Willow Ptarmigans are the dumbest-sounding birds on Earth, sorry. They sound like rejected Star Wars aliens, angrily standing outside the Mos Eisley Cantina because their IDs were rejected. Why go with these dopes, Alaska, when you’re the best state to see the most awesome falcon on Earth?

What it should be: gyrfalcon

3. Arizona. Official state bird: cactus wren

Cactus Wren is like the only boring bird in the entire state. I can’t believe it.

What it should be: red-faced warbler

4. Arkansas. Official state bird: northern mockingbird

Christ. What makes this even less funny is that there are like eight other states with mockingbird as their official bird. I’m convinced that the guy whose job it was to report to the state’s legislature on what the official bird should be forgot until the day it was due and he was in line for a breakfast sandwich at Burger King. In a panic he walked outside and selected the first bird he could find, a dirty mockingbird singing its stupid head off on top of a dumpster.

What it should be: painted bunting

5. California. Official state bird: California quail

… Or perhaps the largest, most radical bird on the continent?

What it should be: California condor

6. Colorado. Official state bird: lark bunting

I’m actually OK with this. A nice choice. But why not go with one of the birds that are (or are pretty much) endemic in your state?

What it should be: brown-capped rosy-finch or Gunnison sage-grouse

Read the entire article here.

Image: Bald Eagle, Kodiak Alaska, 2010. Courtesy of Yathin S Krishnappa / Wikipedia.

MondayMap: Global Intolerance

Following on from last week’s MondayMap post on intolerance and hatred within the United States — according to tweets on the social media site Twitter — we expand our view this week to cover the globe. This map is based on a more detailed, global research study of people’s attitudes to having neighbors of a different race.

From the Washington Post:

When two Swedish economists set out to examine whether economic freedom made people any more or less racist, they knew how they would gauge economic freedom, but they needed to find a way to measure a country’s level of racial tolerance. So they turned to something called the World Values Survey, which has been measuring global attitudes and opinions for decades.

Among the dozens of questions that World Values asks, the Swedish economists found one that, they believe, could be a pretty good indicator of tolerance for other races. The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society. (The study concluded that economic freedom had no correlation with racial tolerance, but it does appear to correlate with tolerance toward homosexuals.)

Unfortunately, the Swedish economists did not include all of the World Values Survey data in their final research paper. So I went back to the source, compiled the original data and mapped it out on the infographic above. In the bluer countries, fewer people said they would not want neighbors of a different race; in red countries, more people did.

If we treat this data as indicative of racial tolerance, then we might conclude that people in the bluer countries are the least likely to express racist attitudes, while the people in red countries are the most likely.

Update: Compare the results to this map of the world’s most and least diverse countries.

Before we dive into the data, a couple of caveats. First, it’s entirely likely that some people lied when answering this question; it would be surprising if they hadn’t. But the operative question, unanswerable, is whether people in certain countries were more or less likely to answer the question honestly. For example, while the data suggest that Swedes are more racially tolerant than Finns, it’s possible that the two groups are equally tolerant but that Finns are just more honest. The willingness to state such a preference out loud, though, might be an indicator of racial attitudes in itself. Second, the survey is not conducted every year; some of the results are very recent and some are several years old, so we’re assuming the results are static, which might not be the case.

• Anglo and Latin countries most tolerant. People in the survey were most likely to embrace a racially diverse neighbor in the United Kingdom and its Anglo former colonies (the United States, Canada, Australia and New Zealand) and in Latin America. The only real exceptions were oil-rich Venezuela, where income inequality sometimes breaks along racial lines, and the Dominican Republic, perhaps because of its adjacency to troubled Haiti. Scandinavian countries also scored high.

• India, Jordan, Bangladesh and Hong Kong by far the least tolerant. In only four of the 81 countries surveyed did more than 40 percent of respondents say they would not want a neighbor of a different race: 43.5 percent of Indians, 51.4 percent of Jordanians, 71.7 percent of Bangladeshis, and an astonishingly high 71.8 percent of Hong Kongers.
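For those who want to recreate the measure themselves, it boils down to a simple share per country. Below is a minimal sketch in Python (using pandas); the per-respondent table is entirely hypothetical, since the actual World Values Survey files are structured differently — only the method mirrors what the excerpt describes.

```python
# Sketch of the metric described above: for each country, the percentage of
# respondents who picked "people of a different race" as unwanted neighbors.
# The toy responses here are made up; only the method mirrors the article.
import pandas as pd

responses = pd.DataFrame({
    "country": ["A", "A", "A", "B", "B", "B"],
    "rejects_different_race": [True, False, False, True, True, False],
})

# The mean of a boolean column is the fraction of True values; scale to percent.
intolerance_pct = (
    responses.groupby("country")["rejects_different_race"].mean().mul(100)
)
print(intolerance_pct.sort_values(ascending=False))

# The "least tolerant" group quoted above is simply the set of countries
# whose share exceeds 40 percent.
print(intolerance_pct[intolerance_pct > 40])
```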

Read more about this map here.

MondayMap: Intolerance and Hatred

A fascinating map of tweets espousing hatred and racism across the United States. The data analysis and map were developed by researchers at Humboldt State University.

From the Guardian:

[T]he students and professors at Humboldt State University who produced this map read the entirety of the 150,000 geo-coded tweets they analysed.

Using humans rather than machines means that this research was able to avoid the basic pitfall of most semantic analysis, where a tweet stating ‘the word homo is unacceptable’ would still be classed as hate speech. The data has also been ‘normalised’, meaning that the scale accounts for the total Twitter traffic in each county, so that the final result shows the frequency of hateful words on Twitter. The only question that remains is whether the views of US Twitter users can be a reliable indication of the views of US citizens.
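That normalisation step deserves a moment, because it is what keeps the map from simply mirroring population density. A minimal sketch, with invented county names and counts:

```python
# Sketch of the normalisation described above: divide each county's count of
# hateful tweets by its total Twitter traffic. All numbers here are invented.
import pandas as pd

counties = pd.DataFrame({
    "county": ["Alpha", "Beta", "Gamma"],
    "hateful_tweets": [30, 30, 5],
    "total_tweets": [100_000, 10_000, 1_000],
})

# Raw counts would rank Alpha and Beta equally; the normalised rate does not,
# and sparsely tweeting Gamma comes out worst of all.
counties["hate_rate"] = counties["hateful_tweets"] / counties["total_tweets"]
print(counties.sort_values("hate_rate", ascending=False))
```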

See the interactive map and read the entire article here.

More CO2 is Good, Right?

Yesterday, May 10, 2013, scientists published new measurements of atmospheric carbon dioxide (CO2). For the first time in human history, average CO2 levels reached 400 parts per million (ppm). This is particularly troubling since CO2 is one of the principal heat-trapping gases in the atmosphere. The sobering milestone was recorded at the Mauna Loa Observatory in Hawaii, where monitoring has been underway since 1958.

This has many climate scientists redoubling their efforts to warn of the consequences of climate change, which is believed to be driven by human activity, specifically the generation of atmospheric CO2 in ever-increasing quantities. But not to be outdone, the venerable Wall Street Journal — seldom known for its well-reasoned scientific journalism — chimed in with an op-ed on the subject. According to the WSJ we have nothing to worry about, because increased levels of CO2 are good for certain crops, and because the Earth historically had much higher CO2 levels (albeit long before humans appeared).

Ashutosh Jogalekar over at The Curious Wavefunction dissects the WSJ article line by line:

Since we were discussing the differences between climate change “skeptics” and “deniers” (or “denialists”, whatever you want to call them) the other day this piece is timely. The Wall Street Journal is not exactly known for reasoned discussion of climate change, but this Op-Ed piece may set a new standard even for its own naysayers and skeptics. It’s a piece by William Happer and Harrison Schmitt that’s so one-sided, sparse on detail, misleading and ultimately pointless that I am wondering if it’s a spoof.

Happer and Schmitt’s thesis can be summed up in one line: More CO2 in the atmosphere is a good thing because it’s good for one particular type of crop plant. That’s basically it. No discussion of the downsides, not even a pretense of a balanced perspective. Unfortunately it’s not hard to classify their piece as a denialist article because it conforms to some of the classic features of denial: it’s entirely one-sided, it’s very short on detail, it does a poor job even with the little details that it does present, and it simply ignores the massive amount of research done on the topic. In short, it’s grossly misleading.

First of all, Happer and Schmitt simply dismiss any connection that might exist between CO2 levels and rising temperatures, in the process consigning a fair amount of basic physics and chemistry to the dustbin. There are no references and no actual discussion of why they don’t believe there’s a connection. That’s a shoddy start, to put it mildly; you would expect a legitimate skeptic to start with some actual evidence and references. Most of the article after that consists of a discussion of the differences between so-called C3 plants (like rice) and C4 plants (like corn and sugarcane). This is standard stuff found in college biochemistry textbooks, nothing revealing here. But Happer and Schmitt leverage a fundamental difference between the two – the fact that C4 plants can utilize CO2 more efficiently than C3 plants under certain conditions – into an argument for increasing CO2 levels in the atmosphere.

This of course completely ignores all the other potentially catastrophic effects that CO2 could have on agriculture, climate, biodiversity, etc. You don’t even have to be a big believer in climate change to realize that focusing on only a single effect of a parameter on a complicated system is just bad science. Happer and Schmitt’s argument is akin to the argument that everyone should get themselves addicted to meth because one of meth’s effects is euphoria. So ramping up meth consumption will make everyone feel happier, right?

But even if you consider that extremely narrowly defined effect of CO2 on C3 and C4 plants, there’s still a problem. What’s interesting is that the argument has been countered by Matt Ridley in the pages of this very publication:

But it is not quite that simple. Surprisingly, the C4 strategy first became common in the repeated ice ages that began about four million years ago. This was because the ice ages were a very dry time in the tropics and carbon-dioxide levels were very low—about half today’s levels. C4 plants are better at scavenging carbon dioxide (the source of carbon for sugars) from the air and waste much less water doing so. In each glacial cold spell, forests gave way to seasonal grasslands on a huge scale. Only about 4% of plant species use C4, but nearly half of all grasses do, and grasses are among the newest kids on the ecological block.

So whereas rising temperatures benefit C4, rising carbon-dioxide levels do not. In fact, C3 plants get a greater boost from high carbon dioxide levels than C4. Nearly 500 separate experiments confirm that if carbon-dioxide levels roughly double from preindustrial levels, rice and wheat yields will be on average 36% and 33% higher, while corn yields will increase by only 24%.

So no, the situation is more subtle than the authors think. In fact I am surprised that, given that C4 plants actually do grow better at higher temperatures, Happer and Schmitt missed an opportunity for making the case for a warmer planet. In any case, there’s a big difference between improving yields of C4 plants under controlled greenhouse conditions and expecting these yields to improve without affecting other components of the ecosystem by doing a giant planetary experiment.
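To make the quoted yield figures concrete, here is a back-of-the-envelope sketch. The three percentages come straight from the excerpt; the 100-tonne baseline harvest is arbitrary, and the ppm figures (preindustrial CO2 was roughly 280 ppm, so a doubling means roughly 560 ppm) are only for context.

```python
# Restating the excerpt's numbers: average yield gains if CO2 roughly doubles
# from its preindustrial level (~280 ppm to ~560 ppm). The percentages are
# from the quoted experiments; the 100-tonne baseline is purely illustrative.
yield_boost = {
    "rice (C3)": 0.36,
    "wheat (C3)": 0.33,
    "corn (C4)": 0.24,
}

baseline_tonnes = 100
for crop, boost in yield_boost.items():
    print(f"{crop}: {baseline_tonnes * (1 + boost):.0f} tonnes")
# The C3 crops gain more than the C4 crop, which undercuts the op-ed's
# C4-centric case for pumping more CO2 into the atmosphere.
```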

Read the entire article after the jump.

Image courtesy of Sierra Club.


Your Weekly Groceries

Photographer Peter Menzel traveled to more than 20 countries to compile his culinary atlas Hungry Planet. But this is no ordinary cookbook or trove of local delicacies: the book is a visual catalog of families’ average weekly grocery shopping around the world.

It is both enlightening and sobering to see the nutritional inventory of a Western family juxtaposed with that of a sub-Saharan African family. It puts into perspective the United States’ internal debate over the 1 percent versus the 99 percent: those of us lucky enough to have been born in one of the world’s richer nations, even if we are part of the 99 percent, are still truly among the haves rather than the have-nots.

For more on Menzel’s book jump over to Amazon.

The Melander family from Bargteheide, Germany, who spend around £320 [$480] on a week’s worth of food.


The Aboubakar family from Darfur, Sudan, in the Breidjing refugee camp in Chad. Their weekly food, which feeds six people, costs 79p [$1.19].


The Revis family from Raleigh, North Carolina. Their weekly shopping costs £219 [$328.50].


The Namgay family from Shingkhey, Bhutan, with a week’s worth of food that costs them around £3.20 [$4.80].

Images courtesy of Peter Menzel / Barcroft Media.