Category Archives: Environs

Anti-Eco-Friendly Consumption

It should come as no surprise that those who deny the science of climate change and humanity's impact on the environment would also shy away from purchasing products and services that are friendly to the environment.

A recent study shows how political persuasion sways light bulb purchases: conservatives are more likely to choose incandescent bulbs, while moderates and liberals lean towards more eco-friendly bulbs.

Joe Barton, U.S. Representative from Texas, sums up the issue of light bulb choice quite neatly: “… it is about personal freedom”. All the while our children shake their heads in disbelief.

Presumably many climate change skeptics prefer to purchase items that are harmful to the environment and also to humans just to make a political statement. This might include continuing to purchase products containing dangerous levels of unpronounceable acronyms and questionable chemicals: rBGH (recombinant Bovine Growth Hormone) in milk, BPA (Bisphenol A) in plastic utensils and bottles, KBrO3 (Potassium Bromate) in highly processed flour, BHA (Butylated Hydroxyanisole) food preservative, Azodicarbonamide in dough.

Freedom truly does come at a cost.

From the Guardian:

Eco-friendly labels on energy-saving bulbs are a turn-off for conservative shoppers, a new study has found.

The findings, published this week in the Proceedings of the National Academy of Sciences, suggest that it could be counterproductive to advertise the environmental benefits of efficient bulbs in the US. This could make it even more difficult for America to adopt energy-saving technologies as a solution to climate change.

Consumers took their ideological beliefs with them when they went shopping, and conservatives switched off when they saw labels reading “protect the environment”, the researchers said.

The study looked at the choices of 210 consumers, about two-thirds of them women. All were briefed on the benefits of compact fluorescent (CFL) bulbs over old-fashioned incandescents.

When both bulbs were priced the same, shoppers across the political spectrum were uniformly inclined to choose CFL bulbs over incandescents, even those with environmental labels, the study found.

But when the fluorescent bulb cost more – $1.50 instead of $0.50 for an incandescent – the conservatives who reached for the CFL bulb chose the one without the eco-friendly label.

“The more moderate and conservative participants preferred to bear a long-term financial cost to avoid purchasing an item associated with valuing environmental protections,” the study said.
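As a quick aside, that “long-term financial cost” is easy to make concrete. Below is a minimal Python sketch comparing the lifetime cost of the two bulbs; only the $0.50 and $1.50 purchase prices come from the study, while the wattages, rated lifetimes and electricity price are illustrative assumptions of ours.

```python
# Back-of-envelope lifetime cost: incandescent vs. CFL bulb.
# Purchase prices ($0.50 and $1.50) come from the study quoted above; the
# wattages, rated lifetimes and electricity price are illustrative assumptions.

HOURS = 8_000            # assumed comparison period, roughly one CFL lifetime
PRICE_PER_KWH = 0.12     # assumed electricity price in USD

bulbs = {
    # name: (watts, purchase price in USD, rated life in hours)
    "incandescent": (60, 0.50, 1_000),
    "CFL": (14, 1.50, 8_000),
}

for name, (watts, price, life) in bulbs.items():
    replacements = HOURS / life                      # bulbs used over the period
    energy_cost = watts / 1000 * HOURS * PRICE_PER_KWH
    total = replacements * price + energy_cost
    print(f"{name}: ~${total:.2f} over {HOURS:,} hours "
          f"({replacements:.0f} bulb(s), ${energy_cost:.2f} of electricity)")
```

Under those assumptions the dearer CFL works out roughly four times cheaper over its life, which is the saving the more conservative shoppers in the study were apparently willing to forgo.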

The findings suggest the extreme political polarisation over environment and climate change had now expanded to energy-saving devices – which were once supported by right and left because of their money-saving potential.

“The research demonstrates how promoting the environment can negatively affect adoption of energy efficiency in the United States because of the political polarisation surrounding environmental issues,” the researchers said.

Earlier this year Harvard academic Theda Skocpol produced a paper tracking how climate change and the environment became a defining issue for conservatives, and for Republican elected officials.

Conservative activists elevated opposition to the science behind climate change, and to action on climate change, to core beliefs, Skocpol wrote.

There was even a special place for incandescent bulbs. Republicans in Congress two years ago fought hard to repeal a law phasing out incandescent bulbs – even over the objections of manufacturers who had already switched their product lines to the new energy-saving technology.

Republicans at the time cast the battle of the bulb as an issue of liberty. “This is about more than just energy consumption. It is about personal freedom,” said Joe Barton, the Texas Republican behind the effort to keep the outdated bulbs burning.

Read the entire article following the jump.

Image courtesy of Housecraft.

Science and Art of the Brain

Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.

From the New York Times:

This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.

Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.

This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.

Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.

The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.

As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.

Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.

The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.

This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.

Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.

Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?

Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”

Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.

This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.

Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.

Read the entire article following the jump.

Cheap Hydrogen

Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with methane from natural gas. A small volume of hydrogen, less than five percent annually, is also made through the process of electrolysis — passing an electric current through water.

This new method of production appears to be less costly, less dangerous and also more environmentally sound.

From the Independent:

Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.

The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.

Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element on the planet.

Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.

In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.

Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.

Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”

Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.

But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.
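To make the decoupling a little more concrete, here is a schematic of the idea in our own notation, with B standing in for a generic electron-coupled proton buffer; it is a simplified sketch, not the specific chemistry reported in Nature Chemistry.

```latex
% Schematic only: B denotes a generic electron-coupled proton buffer.
% Phase 1 (current flowing): oxygen is evolved at the anode while the buffer,
% rather than hydrogen gas, soaks up the protons and electrons at the cathode.
\begin{align*}
\text{Anode:}   \quad & 2\,\mathrm{H_2O} \rightarrow \mathrm{O_2} + 4\,\mathrm{H^+} + 4\,e^- \\
\text{Cathode:} \quad & \mathrm{B} + \mathrm{H^+} + e^- \rightarrow \mathrm{HB} \\
% Phase 2 (later, when convenient): the loaded buffer releases hydrogen
% and is regenerated for the next cycle.
\text{Release:} \quad & 2\,\mathrm{HB} \rightarrow 2\,\mathrm{B} + \mathrm{H_2}
\end{align*}
```

The net result is still water split into hydrogen and oxygen; the buffer simply lets the two gases appear at different times, which is the safety and efficiency point made above.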

Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.

Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”

Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”

Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”

Read the entire article following the jump.

Dark Lightning

It’s fascinating how a seemingly well-understood phenomenon, such as lightning, can still yield enormous surprises. Researchers have found that visible flashes of lightning can also be accompanied by non-visible, and more harmful, radiation such as X-rays and gamma rays.

From the Washington Post:

A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.

But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.

Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.

Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.

The only way to determine whether an airplane had been struck by dark lightning, Dwyer says, “would be to use a radiation detector. Right in the middle of [a flash], a very brief bluish-purple glow around the plane might be perceptible. Inside an aircraft, a passenger would probably not be able to feel or hear much of anything, but the radiation dose could be significant.”

However, because there’s only about one dark lightning occurrence for every thousand visible flashes and because pilots take great pains to avoid thunderstorms, Dwyer says, the risk of injury is quite limited. No one knows for sure if anyone has ever been hit by dark lightning.

About 25 million visible thunderbolts hit the United States every year, killing about 30 people and many farm animals, says John Jensenius, a lightning safety specialist with the National Weather Service in Gray, Maine. Worldwide, thunderstorms produce about a billion or so lightning bolts annually.

Read the entire article after the jump.

Image: Lightning in Foshan, China. Courtesy of Telegraph.

Farmscrapers

No, the drawing is not a construction from the mind of sci-fi illustrator extraordinaire Michael Whelan. This is reality. Or, to be more precise, an architectural rendering of buildings to come — in China, of course.

From the Independent:

A French architecture firm has unveiled their new ambitious ‘farmscraper’ project – six towering structures which promise to change the way that we think about green living.

Vincent Callebaut Architects’ innovative Asian Cairns was planned specifically for Chinese city Shenzhen in response to the growing population, increasing CO2 emissions and urban development.

The structures will consist of a series of pebble-shaped levels – each connected by a central spinal column – which will contain residential areas, offices, and leisure spaces.

Sustainability is key to the innovative project – wind turbines will cover the roof of each tower, water recycling systems will be in place to recycle waste water, and solar panels will be installed on the buildings, providing renewable energy. The structures will also have gardens on the exterior, further adding to the project’s green credentials.

Vincent Callebaut, the Belgian architect behind the firm, is well-known for his ambitious, eco-friendly projects, winning many awards over the years.

His self-sufficient amphibious city Lilypad – ‘a floating ecopolis for climate refugees’ – is perhaps his most famous design. The model has been proposed as a long-term solution to rising water levels, and successfully meets the four challenges of climate, biodiversity, water, and health, that the OECD laid out in 2008.

Vincent Callebaut Architects said: “It is a prototype to build a green, dense, smart city connected by technology and eco-designed from biotechnologies.”

Read the entire article and see more illustrations after the jump.

Image: “Farmscrapers” take eco-friendly architecture to dizzying heights in China. Courtesy of Vincent Callebaut Architects / Independent.

The Richest Person in the Solar System

[tube]Bs6rCxU_IHY[/tube]

Forget Warren Buffett, Bill Gates and Carlos Slim or the Russian oligarchs and the emirs of the Persian Gulf. These guys are merely multi-billionaires. Their fortunes — combined — account for less than half of 1 percent of the net worth of Dennis Hope, the world’s first trillionaire. In fact, you could describe Dennis as the solar system’s first trillionaire, with an estimated wealth of $100 trillion.

So, why have you never heard of Dennis Hope, trillionaire? Where does he invest his money? And, how did he amass this jaw-dropping uber-fortune? The answer to the first question is that he lives a relatively ordinary and quiet life in Nevada. The answer to the second question is: property. The answer to the third, and most fascinating question: well, he owns most of the Moon. He also owns the majority of the planets Mars, Venus and Mercury, and 90 or so other celestial plots. You too could become an interplanetary property investor for the starting and very modest sum of $19.99. Please write your check to… Dennis Hope.

The New York Times has a recent story and documentary on Mr. Hope, here.

[div class=attrib]From Discover:[end-div]

Dennis Hope, self-proclaimed Head Cheese of the Lunar Embassy, will promise you the moon. Or at least a piece of it. Since 1980, Hope has raked in over $9 million selling acres of lunar real estate for $19.99 a pop. So far, 4.25 million people have purchased a piece of the moon, including celebrities like Barbara Walters, George Lucas, Ronald Reagan, and even the first President Bush. Hope says he exploited a loophole in the 1967 United Nations Outer Space Treaty, which prohibits nations from owning the moon.

Because the law says nothing about individual holders, he says, his claim—which he sent to the United Nations—has some clout. “It was unowned land,” he says. “For private property claims, 197 countries at one time or another had a basis by which private citizens could make claims on land and not make payment. There are no standardized rules.”

Hope is right that the rules are somewhat murky—both Japan and the United States have plans for moon colonies—and lunar property ownership might be a powder keg waiting to spark. But Ram Jakhu, law professor at the Institute of Air and Space Law at McGill University in Montreal, says that Hope’s claims aren’t likely to hold much weight. Nor, for that matter, would any nation’s. “I don’t see a loophole,” Jakhu says. “The moon is a common property of the international community, so individuals and states cannot own it. That’s very clear in the U.N. treaty. Individuals’ rights cannot prevail over the rights and obligations of a state.”

Jakhu, a director of the International Institute for Space Law, believes that entrepreneurs like Hope have misread the treaty and that the 1967 legislation came about to block property claims in outer space. Historically, “the ownership of private property has been a major cause of war,” he says. “No one owns the moon. No one can own any property in outer space.”

Hope refuses to be discouraged. And he’s focusing on expansion. “I own about 95 different planetary bodies,” he says. “The total amount of property I currently own is about 7 trillion acres. The value of that property is about $100 trillion. And that doesn’t even include mineral rights.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the New York Times.[end-div]

MondayMap: New Jersey Under Water

We love maps here at theDiagonal. So much so that we’ve begun a new feature: MondayMap. As the name suggests, we plan to feature fascinating new maps on Mondays. For our readers who prefer their plots served up on a Saturday, sorry. Usually we like to highlight maps that cause us to look at our world differently or provide a degree of welcome amusement, such as the wonderful trove of maps over at Strange Maps curated by Frank Jacobs.

However, this first MondayMap is a little different and serious. It’s an interactive map that shows the impact of estimated sea level rise on the streets of New Jersey. Obviously, such a tool would be a great boon for emergency services and urban planners. For the rest of us, whether we live in New Jersey or not, maps like this one — of extreme weather events and projections — are likely to become much more common over the coming decades. Kudos to researchers at Rutgers University for developing the NJ Flood Mapper.

[div class=attrib]From the Wall Street Journal:[end-div]

While superstorm Sandy revealed the Northeast’s vulnerability, a new map by New Jersey scientists suggests how rising seas could make future storms even worse.

The map shows ocean waters surging more than a mile into communities along Raritan Bay, engulfing nearly all of New Jersey’s barrier islands and covering northern sections of the New Jersey Turnpike and land surrounding the Port Newark Container Terminal.

Such damage could occur under a scenario in which sea levels rise 6 feet—or a 3-foot rise in tandem with a powerful coastal storm, according to the map produced by Rutgers University researchers.

The satellite-based tool, one of the first comprehensive, state-specific maps of its kind, uses a Google-maps-style interface that allows viewers to zoom into street-level detail.

“We are not trying to unduly frighten people,” said Rick Lathrop, director of the Grant F. Walton Center for Remote Sensing and Spatial Analysis at Rutgers, who led the map’s development. “This is providing people a look at where our vulnerability is.”

Still, the implications of the Rutgers project unnerve residents of Surf City, on Long Beach Island, where the map shows water pouring over nearly all of the barrier island’s six municipalities with a 6-foot increase in sea levels.

“The water is going to come over the island and there will be no island,” said Barbara Epstein, a 73-year-old resident of nearby Barnegat Light, who added that she is considering moving after 12 years there. “The storms are worsening.”

To be sure, not everyone agrees that climate change will make sea-level rise more pronounced.

Politically, climate change remains an issue of debate. New York Gov. Andrew Cuomo has said Sandy showed the need to address the issue, while New Jersey Gov. Chris Christie has declined to comment on whether Sandy was linked to climate change.

Scientists have gone ahead and started to map sea-level-rise scenarios in New Jersey, New York City and flood-prone communities along the Gulf of Mexico to help guide local development and planning.

Sea levels have risen by 1.3 feet near Atlantic City and 0.9 feet by Battery Park between 1911 and 2006, according to data from the National Oceanic and Atmospheric Administration.

A serious storm could add at least another 3 feet, with historic storm surges—Sandy-scale—registering at 9 feet. So when planning for future coastal flooding, 6 feet or higher isn’t far-fetched when combining sea-level rise with high tides and storm surges, Mr. Lathrop said.

NOAA estimated in December that increasing ocean temperatures could cause sea levels to rise by 1.6 feet in 100 years, and by 3.9 feet if considering some level of Arctic ice-sheet melt.

Such an increase amounts to 0.16 inches per year, but the eventual impact could mean that a small storm could “do the same damage that Sandy did,” said Peter Howd, co-author of a 2012 U.S. Geological Survey report that found the rate of sea level rise had increased in the northeast.
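As a quick sanity check on those figures, here is a minimal Python sketch; the tide-gauge readings and NOAA projections are the ones quoted above, while the arithmetic, and the choice of a 3-foot surge to pair with the projections, is ours.

```python
# Sanity check on the sea-level figures quoted above.
# Inputs are taken from the excerpt; the arithmetic is back-of-envelope only.

FEET_TO_INCHES = 12.0
years = 2006 - 1911          # observation period for the NOAA tide-gauge data

observed_rise_ft = {"Atlantic City": 1.3, "Battery Park": 0.9}
for site, rise_ft in observed_rise_ft.items():
    rate = rise_ft * FEET_TO_INCHES / years
    print(f"{site}: {rise_ft} ft over {years} years ~ {rate:.2f} in/yr")

# Century-scale NOAA projections cited above, combined with a serious storm.
projections_ft = {"warming oceans only": 1.6, "with some ice-sheet melt": 3.9}
surge_ft = 3.0               # "a serious storm could add at least another 3 feet"
for label, rise_ft in projections_ft.items():
    print(f"{label}: {rise_ft} + {surge_ft} ft surge = {rise_ft + surge_ft:.1f} ft of flooding")
```

The Atlantic City record works out to roughly 0.16 inches per year, matching the rate quoted above, and the higher projection plus even a modest surge comfortably reaches the 6-foot scenario the Rutgers map visualizes.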

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: NJ Flood Mapper. Courtesy of Grant F. Walton Center for Remote Sensing and Spatial Analysis (CRSSA), Rutgers University, in partnership with the Jacques Cousteau National Estuarine Research Reserve (JCNERR), and in collaboration with the NOAA Coastal Services Center (CSC).[end-div]

Engineering Your Food Addiction

Fast food, snack foods and all manner of processed foods are a multi-billion-dollar global industry. So, it’s no surprise that companies collectively spend hundreds of millions of dollars each year perfecting the perfect bite. Importantly, part of this perfection (for the businesses) is to ensure that you keep coming back for more.

By all accounts the “Cheeto” is as close to processed-food-addiction heaven as we can get — so far. It has just the right amount of salt (too much) and fat (too much), crunchiness, and something known as vanishing caloric density (it melts in the mouth at the optimum rate). Aesthetically sad, but scientifically true.

[div class=attrib]From the New York Times:[end-div]

On the evening of April 8, 1999, a long line of Town Cars and taxis pulled up to the Minneapolis headquarters of Pillsbury and discharged 11 men who controlled America’s largest food companies. Nestlé was in attendance, as were Kraft and Nabisco, General Mills and Procter & Gamble, Coca-Cola and Mars. Rivals any other day, the C.E.O.’s and company presidents had come together for a rare, private meeting. On the agenda was one item: the emerging obesity epidemic and how to deal with it. While the atmosphere was cordial, the men assembled were hardly friends. Their stature was defined by their skill in fighting one another for what they called “stomach share” — the amount of digestive space that any one company’s brand can grab from the competition.

James Behnke, a 55-year-old executive at Pillsbury, greeted the men as they arrived. He was anxious but also hopeful about the plan that he and a few other food-company executives had devised to engage the C.E.O.’s on America’s growing weight problem. “We were very concerned, and rightfully so, that obesity was becoming a major issue,” Behnke recalled. “People were starting to talk about sugar taxes, and there was a lot of pressure on food companies.” Getting the company chiefs in the same room to talk about anything, much less a sensitive issue like this, was a tricky business, so Behnke and his fellow organizers had scripted the meeting carefully, honing the message to its barest essentials. “C.E.O.’s in the food industry are typically not technical guys, and they’re uncomfortable going to meetings where technical people talk in technical terms about technical things,” Behnke said. “They don’t want to be embarrassed. They don’t want to make commitments. They want to maintain their aloofness and autonomy.”

A chemist by training with a doctoral degree in food science, Behnke became Pillsbury’s chief technical officer in 1979 and was instrumental in creating a long line of hit products, including microwaveable popcorn. He deeply admired Pillsbury but in recent years had grown troubled by pictures of obese children suffering from diabetes and the earliest signs of hypertension and heart disease. In the months leading up to the C.E.O. meeting, he was engaged in conversation with a group of food-science experts who were painting an increasingly grim picture of the public’s ability to cope with the industry’s formulations — from the body’s fragile controls on overeating to the hidden power of some processed foods to make people feel hungrier still. It was time, he and a handful of others felt, to warn the C.E.O.’s that their companies may have gone too far in creating and marketing products that posed the greatest health concerns.

 

The discussion took place in Pillsbury’s auditorium. The first speaker was a vice president of Kraft named Michael Mudd. “I very much appreciate this opportunity to talk to you about childhood obesity and the growing challenge it presents for us all,” Mudd began. “Let me say right at the start, this is not an easy subject. There are no easy answers — for what the public health community must do to bring this problem under control or for what the industry should do as others seek to hold it accountable for what has happened. But this much is clear: For those of us who’ve looked hard at this issue, whether they’re public health professionals or staff specialists in your own companies, we feel sure that the one thing we shouldn’t do is nothing.”

As he spoke, Mudd clicked through a deck of slides — 114 in all — projected on a large screen behind him. The figures were staggering. More than half of American adults were now considered overweight, with nearly one-quarter of the adult population — 40 million people — clinically defined as obese. Among children, the rates had more than doubled since 1980, and the number of kids considered obese had shot past 12 million. (This was still only 1999; the nation’s obesity rates would climb much higher.) Food manufacturers were now being blamed for the problem from all sides — academia, the Centers for Disease Control and Prevention, the American Heart Association and the American Cancer Society. The secretary of agriculture, over whom the industry had long held sway, had recently called obesity a “national epidemic.”

Mudd then did the unthinkable. He drew a connection to the last thing in the world the C.E.O.’s wanted linked to their products: cigarettes. First came a quote from a Yale University professor of psychology and public health, Kelly Brownell, who was an especially vocal proponent of the view that the processed-food industry should be seen as a public health menace: “As a culture, we’ve become upset by the tobacco companies advertising to children, but we sit idly by while the food companies do the very same thing. And we could make a claim that the toll taken on the public health by a poor diet rivals that taken by tobacco.”

“If anyone in the food industry ever doubted there was a slippery slope out there,” Mudd said, “I imagine they are beginning to experience a distinct sliding sensation right about now.”

Mudd then presented the plan he and others had devised to address the obesity problem. Merely getting the executives to acknowledge some culpability was an important first step, he knew, so his plan would start off with a small but crucial move: the industry should use the expertise of scientists — its own and others — to gain a deeper understanding of what was driving Americans to overeat. Once this was achieved, the effort could unfold on several fronts. To be sure, there would be no getting around the role that packaged foods and drinks play in overconsumption. They would have to pull back on their use of salt, sugar and fat, perhaps by imposing industrywide limits. But it wasn’t just a matter of these three ingredients; the schemes they used to advertise and market their products were critical, too. Mudd proposed creating a “code to guide the nutritional aspects of food marketing, especially to children.”

“We are saying that the industry should make a sincere effort to be part of the solution,” Mudd concluded. “And that by doing so, we can help to defuse the criticism that’s building against us.”

What happened next was not written down. But according to three participants, when Mudd stopped talking, the one C.E.O. whose recent exploits in the grocery store had awed the rest of the industry stood up to speak. His name was Stephen Sanger, and he was also the person — as head of General Mills — who had the most to lose when it came to dealing with obesity. Under his leadership, General Mills had overtaken not just the cereal aisle but other sections of the grocery store. The company’s Yoplait brand had transformed traditional unsweetened breakfast yogurt into a veritable dessert. It now had twice as much sugar per serving as General Mills’ marshmallow cereal Lucky Charms. And yet, because of yogurt’s well-tended image as a wholesome snack, sales of Yoplait were soaring, with annual revenue topping $500 million. Emboldened by the success, the company’s development wing pushed even harder, inventing a Yoplait variation that came in a squeezable tube — perfect for kids. They called it Go-Gurt and rolled it out nationally in the weeks before the C.E.O. meeting. (By year’s end, it would hit $100 million in sales.)

According to the sources I spoke with, Sanger began by reminding the group that consumers were “fickle.” (Sanger declined to be interviewed.) Sometimes they worried about sugar, other times fat. General Mills, he said, acted responsibly to both the public and shareholders by offering products to satisfy dieters and other concerned shoppers, from low sugar to added whole grains. But most often, he said, people bought what they liked, and they liked what tasted good. “Don’t talk to me about nutrition,” he reportedly said, taking on the voice of the typical consumer. “Talk to me about taste, and if this stuff tastes better, don’t run around trying to sell stuff that doesn’t taste good.”

To react to the critics, Sanger said, would jeopardize the sanctity of the recipes that had made his products so successful. General Mills would not pull back. He would push his people onward, and he urged his peers to do the same. Sanger’s response effectively ended the meeting.

“What can I say?” James Behnke told me years later. “It didn’t work. These guys weren’t as receptive as we thought they would be.” Behnke chose his words deliberately. He wanted to be fair. “Sanger was trying to say, ‘Look, we’re not going to screw around with the company jewels here and change the formulations because a bunch of guys in white coats are worried about obesity.’ ”

The meeting was remarkable, first, for the insider admissions of guilt. But I was also struck by how prescient the organizers of the sit-down had been. Today, one in three adults is considered clinically obese, along with one in five kids, and 24 million Americans are afflicted by type 2 diabetes, often caused by poor diet, with another 79 million people having pre-diabetes. Even gout, a painful form of arthritis once known as “the rich man’s disease” for its associations with gluttony, now afflicts eight million Americans.

The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Cheeto puffs. Courtesy of tumblr.[end-div]

Geoengineering As a Solution to Climate Change

Experimental physicist David Keith has a plan: dump hundreds of thousands of tons of atomized sulfuric acid into the upper atmosphere; watch the acid particles reflect additional sunlight; wait for global temperature to drop. Many of Keith’s peers think this geoengineering scheme is crazy, not least because of the possible unknown and unmeasured side-effects, but this hasn’t stopped the healthy debate. One thing is becoming increasingly clear — humans need to take collective action.

[div class=attrib]From Technology Review:[end-div]

Here is the plan. Customize several Gulfstream business jets with military engines and with equipment to produce and disperse fine droplets of sulfuric acid. Fly the jets up around 20 kilometers—significantly higher than the cruising altitude for a commercial jetliner but still well within their range. At that altitude in the tropics, the aircraft are in the lower stratosphere. The planes spray the sulfuric acid, carefully controlling the rate of its release. The sulfur combines with water vapor to form sulfate aerosols, fine particles less than a micrometer in diameter. These get swept upward by natural wind patterns and are dispersed over the globe, including the poles. Once spread across the stratosphere, the aerosols will reflect about 1 percent of the sunlight hitting Earth back into space. Increasing what scientists call the planet’s albedo, or reflective power, will partially offset the warming effects caused by rising levels of greenhouse gases.

The author of this so-called geoengineering scheme, David Keith, doesn’t want to implement it anytime soon, if ever. Much more research is needed to determine whether injecting sulfur into the stratosphere would have dangerous consequences such as disrupting precipitation patterns or further eating away the ozone layer that protects us from damaging ultraviolet radiation. Even thornier, in some ways, are the ethical and governance issues that surround geoengineering—questions about who should be allowed to do what and when. Still, Keith, a professor of applied physics at Harvard University and a leading expert on energy technology, has done enough analysis to suspect it could be a cheap and easy way to head off some of the worst effects of climate change.

According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft.

One of the startling things about Keith’s proposal is just how little sulfur would be required. A few grams of it in the stratosphere will offset the warming caused by a ton of carbon dioxide, according to his estimate. And even the amount that would be needed by 2070 is dwarfed by the roughly 50 million metric tons of sulfur emitted by the burning of fossil fuels every year. Most of that pollution stays in the lower atmosphere, and the sulfur molecules are washed out in a matter of days. In contrast, sulfate particles remain in the stratosphere for a few years, making them more effective at reflecting sunlight.
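The arithmetic behind those figures is easy to replay. Here is a minimal Python sketch deriving the implied cost per ton and payload per aircraft from the numbers quoted above; the 3 grams used for the sulfur-to-CO2 ratio is simply our reading of “a few grams”, not Keith’s exact estimate.

```python
# Back-of-envelope arithmetic on the geoengineering figures quoted above.
# All inputs come from the excerpt; the derived values are rough estimates only.

scenarios = {
    # year: (metric tons of sulfuric acid per year, aircraft, annual cost in USD)
    2020: (25_000, None, None),       # fleet size and cost not quoted for 2020
    2040: (250_000, 11, 700e6),
    2070: (1_000_000, 100, None),     # "a bit more than a million tons"
}

for year, (tons, jets, cost) in scenarios.items():
    line = f"{year}: {tons:,} t/yr"
    if jets:
        line += f", ~{tons / jets:,.0f} t per aircraft per year"
    if cost:
        line += f", ~${cost / tons:,.0f} per metric ton delivered"
    print(line)

# "A few grams of sulfur offsets the warming caused by a ton of CO2":
# taking 3 grams, that is roughly 3 parts per million by mass.
grams_sulfur_per_ton_co2 = 3
print(f"offset ratio ~ {grams_sulfur_per_ton_co2 / 1e6:.0e} g of sulfur per g of CO2")
```

On those quoted numbers, the 2040 program works out to roughly $2,800 per delivered ton and a little over 20,000 tons per aircraft per year, which squares with Keith's suspicion that the scheme could be comparatively cheap.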

The idea of using sulfate aerosols to offset climate warming is not new. Crude versions of the concept have been around at least since a Russian climate scientist named Mikhail Budyko proposed the idea in the mid-1970s, and more refined descriptions of how it might work have been discussed for decades. These days the idea of using sulfur particles to counteract warming—often known as solar radiation management, or SRM—is the subject of hundreds of papers in academic journals by scientists who use computer models to try to predict its consequences.

But Keith, who has published on geoengineering since the early 1990s, has emerged as a leading figure in the field because of his aggressive public advocacy for more research on the technology—and his willingness to talk unflinchingly about how it might work. Add to that his impeccable academic credentials—last year Harvard lured him away from the University of Calgary with a joint appointment in the school of engineering and the Kennedy School of Government—and Keith is one of the world’s most influential voices on solar geoengineering. He is one of the few who have done detailed engineering studies and logistical calculations on just how SRM might be carried out. And if he and his collaborator James Anderson, a prominent atmospheric chemist at Harvard, gain public funding, they plan to conduct some of the first field experiments to assess the risks of the technique.

Leaning forward from the edge of his chair in a small, sparse Harvard office on an unusually warm day this winter, he explains his urgency. Whether or not greenhouse-gas emissions are cut sharply—and there is little evidence that such reductions are coming—”there is a realistic chance that [solar geoengineering] technologies could actually reduce climate risk significantly, and we would be negligent if we didn’t look at that,” he says. “I’m not saying it will work, and I’m not saying we should do it.” But “it would be reckless not to begin serious research on it,” he adds. “The sooner we find out whether it works or not, the better.”

The overriding reason why Keith and other scientists are exploring solar geoengineering is simple and well documented, though often overlooked: the warming caused by atmospheric carbon dioxide buildup is for all practical purposes irreversible, because the climate change is directly related to the total cumulative emissions. Even if we halt carbon dioxide emissions entirely, the elevated concentrations of the gas in the atmosphere will persist for decades. And according to recent studies, the warming itself will continue largely unabated for at least 1,000 years. If we find in, say, 2030 or 2040 that climate change has become intolerable, cutting emissions alone won’t solve the problem.

“That’s the key insight,” says Keith. While he strongly supports cutting carbon dioxide emissions as rapidly as possible, he says that if the climate “dice” roll against us, that won’t be enough: “The only thing that we think might actually help [reverse the warming] in our lifetime is in fact geoengineering.”

[div class=attrib]Read the entire article following the jump.[end-div]

From Sea to Shining Sea – By Rail

Now that air travel has become well and truly commoditized, and for most of us, a nightmare, it’s time, again, to revisit the romance of rail. After all, the elitist romance of air travel passed away about 40-50 years ago. Now all we are left with is parking trauma at the airport; endless lines at check-in, security, the gate and while boarding and disembarking; inane airport announcements and beeping golf carts; coughing, tweeting passengers crammed shoulder to shoulder in far too small seats; poor quality air and poor quality service in the cabin. It’s even dangerous to open the shade and look out of the aircraft window for fear of waking a cranky neighbor or, more calamitously still, washing out the in-seat displays showing the latest reality TV videos.

Some of you, surely, still pine for a quiet and calming ride across the country taking in the local sights at a more leisurely pace. Alfred Twu, who helped define the 2008 high-speed rail proposal for California, would have us zooming across the entire United States in trains, again. So, it may not be a leisurely ride — think more like 200-300 miles per hour — but it may well bring us closer to what we truly miss when suspended at 30,000 ft. We can’t wait.
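For a sense of scale, here is a minimal Python sketch of coast-to-coast travel times at those sorts of speeds; the 2,800-mile New York to Los Angeles distance and the average speeds are rough assumptions of ours, not figures from Twu's map.

```python
# Rough coast-to-coast travel times at high-speed-rail averages.
# The ~2,800-mile New York to Los Angeles distance and the speeds below are
# illustrative assumptions, not figures from any actual rail proposal.

coast_to_coast_miles = 2_800

for avg_speed_mph in (150, 200, 300):
    hours = coast_to_coast_miles / avg_speed_mph
    print(f"average {avg_speed_mph} mph: about {hours:.1f} hours coast to coast")
```

Even at the optimistic end that is a nine-hour ride, which is why the overnight sleepers mentioned later in the piece matter as much as raw top speed.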

[div class=attrib]From the Guardian:[end-div]

I created this US High Speed Rail Map as a composite of several proposed maps from 2009, when government agencies and advocacy groups were talking big about rebuilding America’s train system.

Having worked on getting California’s high speed rail approved in the 2008 elections, I’ve long sung the economic and environmental benefits of fast trains.

This latest map comes more from the heart. It speaks more to bridging regional and urban-rural divides than about reducing airport congestion or even creating jobs, although it would likely do that as well.

Instead of detailing construction phases and service speeds, I took a little artistic license and chose colors and linked lines to celebrate America’s many distinct but interwoven regional cultures.

The response to my map this week went above and beyond my wildest expectations, sparking vigorous political discussion between thousands of Americans ranging from off-color jokes about rival cities to poignant reflections on how this kind of rail network could change long-distance relationships and the lives of faraway family members.

Commenters from New York and Nebraska talked about “wanting to ride the red line”. Journalists from Chattanooga, Tennessee (population 167,000) asked to reprint the map because they were excited to be on the map. Hundreds more shouted “this should have been built yesterday”.

It’s clear that high speed rail is more than just a way to save energy or extend economic development to smaller cities.

More than mere steel wheels on tracks, high speed rail shrinks space and brings farflung families back together. It keeps couples in touch when distant career or educational opportunities beckon. It calls to adventure and travel. It is duct tape and string to reconnect politically divided regions. Its colorful threads weave new American Dreams.

That said, while trains still live large in the popular imagination, decades of limited service have left some blind spots in the collective consciousness. I’ll address a few here:

Myth: High speed rail is just for big city people.
Fact: Unlike airplanes or buses which must make detours to drop off passengers at intermediate points, trains glide into and out of stations with little delay, pausing for under a minute to unload passengers from multiple doors. Trains can, have, and continue to effectively serve small towns and suburbs, whereas bus service increasingly bypasses them.

I do hear the complaint: “But it doesn’t stop in my town!” In the words of one commenter, “the train doesn’t need to stop on your front porch.” Local transit, rental cars, taxis, biking, and walking provide access to and from stations.

Myth: High speed rail is only useful for short distances.
Fact: Express trains that skip stops allow lines to serve many intermediate cities while still providing some fast end-to-end service. Overnight sleepers with lie-flat beds where one boards around dinner and arrives after breakfast have been successful in the US before and are in use on China’s newest 2,300 km high-speed line.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: U.S. High Speed Rail System proposal. Alfred Twu created this map to showcase what could be possible.[end-div]

Beware North Korea, Google is Watching You

This week Google refreshed its maps of North Korea. What was previously a blank canvas with only the country’s capital — Pyongyang — visible now boasts roads, hotels, monuments and even some North Korean internment camps. While this is not the first detailed map of the secretive state, it is an important milestone in Google’s quest to map us all.

[div class=attrib]From the Washington Post:[end-div]

Until Tuesday, North Korea appeared on Google Maps as a near-total white space — no roads, no train lines, no parks and no restaurants. The only thing labeled was the capital city, Pyongyang.

This all changed when Google, on Tuesday, rolled out a detailed map of one of the world’s most secretive states. The new map labels everything from Pyongyang’s subway stops to the country’s several city-sized gulags, as well as its monuments, hotels, hospitals and department stores.

According to a Google blog post, the maps were created by a group of volunteer “citizen cartographers,” through an interface known as Google Map Maker. That program — much like Wikipedia — allows users to submit their own data, which is then fact-checked by other users, and sometimes altered many times over. Similar processes were used in other once-unmapped countries like Afghanistan and Burma.

In the case of North Korea, those volunteers worked from outside of the country, beginning from 2009. They used information that was already public, compiling details from existing analog maps, satellite images, or other Web-based materials. Much of the information was already available on the Internet, said Hwang Min-woo, 28, a volunteer mapmaker from Seoul who worked for two years on the project.

North Korea was the last country virtually unmapped by Google, but other — even more detailed — maps of the North existed before this. Most notable is a map created by Curtis Melvin, who runs the North Korea Economy Watch blog and spent years identifying thousands of landmarks in the North: tombs, textile factories, film studios, even rumored spy training locations. Melvin’s map is available as a downloadable Google Earth file.

Google’s map is important, though, because it is so readily accessible. The map is unlikely to have an immediate influence in the North, where Internet use is restricted to all but a handful of elites. But it could prove beneficial for outsider analysts and scholars, providing an easy-to-access record about North Korea’s provinces, roads, landmarks, as well as hints about its many unseen horrors.

[div class=attrib]Read the entire article and check out more maps after the jump.[end-div]

So, You Want to Be a Brit?

The United Kingdom government has just published its updated 180-page handbook for new residents. So, those seeking to become subjects of Her Majesty will need to brush up on more than Admiral Nelson, Churchill, Spitfires, Chaucer and the Black Death. Now, if you are one of the approximately 150,000 new residents each year, you may well have to learn about Morecambe and Wise, Roald Dahl, and Monty Python. Nudge-nudge, wink-wink!

[div class=attrib]From the Telegraph:[end-div]

It has been described as “essential reading” for migrants and takes readers on a whirlwind historical tour of Britain from Stone Age hunter-gatherers to Morecambe and Wise, skipping lightly through the Black Death and Tudor England.

The latest Home Office citizenship handbook, Life in the United Kingdom: A Guide for New Residents, has scrapped sections on claiming benefits, written under the Labour government in 2007, for a triumphalist vision of events and people that helped make Britain a “great place to live”.

The Home Office said it had stripped out “mundane information” about water meters, how to find train timetables, and using the internet.

The guide’s 180 pages, filled with pictures of the Queen, Spitfires and Churchill, are a primer for citizenship tests taken by around 150,000 migrants a year.

Comedies such as Monty Python and The Morecambe and Wise Show are highlighted as examples of British people’s “unique sense of humour and satire”, while Olympic athletes including Jessica Ennis and Sir Chris Hoy are included for the first time.

Previously, historical information was included in the handbook but was not tested. Now the book features sections on Roman, Anglo-Saxon and Viking Britain to give migrants an “understanding of how modern Britain has evolved”.

They can equally expect to be quizzed on the children’s author Roald Dahl, the Harrier jump jet and the Turing machine – a theoretical device proposed by Alan Turing and seen as a precursor to the modern computer.

The handbook also refers to the works of William Shakespeare, Geoffrey Chaucer and Jane Austen alongside Coronation Street. Meanwhile, Christmas pudding, the Last Night of the Proms and cricket matches are described as typical “indulgences”.

The handbook goes on sale today and forms the basis of the 45-minute exam in which migrants must gain marks of 75 per cent to pass.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Group shot of the Monty Python crew in 1969. Courtesy of Wikipedia.[end-div]

Las Vegas, Tianducheng and Paris: Cultural Borrowing

These three locations, in Nevada, in China (near Hangzhou) and in France, have something in common. People the world over travel to these three places to see what they share. But only one of them has the original. In this case, we’re talking about the Eiffel Tower.

Now, this architectural grand theft is subject to a lengthy debate — the merits of mimicry, on a vast scale. There is even a fascinating coffee-table-sized book dedicated to this growing trend: Original Copies: Architectural Mimicry in Contemporary China, by Bianca Bosker.

Interestingly, the copycat trend only seems worrisome if those doing the copying are in a powerful and growing nation, and the copying is done on a national scale, perhaps for some form of cultural assimilation. After all, we don’t hear similar cries when developers put up a copy of Venice in Las Vegas — that’s just for entertainment, we are told.

Yet haven’t civilizations borrowed, and stolen, ideas both good and bad throughout the ages? The answer of course is an unequivocal yes. Humans are avaricious collectors of memes that work — it’s more efficient to borrow than to invent. The Greeks borrowed from the Egyptians; the Romans borrowed from the Greeks; the Turks borrowed from the Romans; the Arabs borrowed from the Turks; the Spanish from the Arabs, the French from the Spanish, the British from the French, and so on. Of course what seems to be causing a more recent stir is that China is doing the borrowing, and on such a rapid and grand scale — the nation is copying not just buildings (and most other products) but entire urban landscapes. However, this is one way that empires emerge and evolve. In this case, China’s acquisitive impulses could, perhaps, be tempered if most nations of the world borrowed less from the Chinese — money that is. But that’s another story.

[div class=attrib]From the Atlantic:[end-div]

The latest and most famous case of Chinese architectural mimicry doesn’t look much like its predecessors. On December 28, German news weekly Der Spiegel reported that the Wangjing Soho, Zaha Hadid’s soaring new office and retail development under construction in Beijing, is being replicated, wall for wall and window for window, in Chongqing, a city in central China.

To most outside observers, this bold and quickly commissioned counterfeit represents a familiar form of piracy. In fashion, technology, and architecture, great ideas trickle down, often against the wishes of their progenitors. But in China, architectural copies don’t usually ape the latest designs.

In the vast space between Beijing and Chongqing lies a whole world of Chinese architectural simulacra that quietly aspire to a different ideal. In suburbs around China's booming cities, developers build replicas of towns like Hallstatt, Austria and Dorchester, England. Individual homes and offices, too, are designed to look like Versailles or the Chrysler Building. The most popular facsimile in China is the White House. The fastest-urbanizing country in history isn't scanning design magazines for inspiration; it's watching movies.

At Beijing's Palais de Fortune, two hundred chateaus sit behind gold-tipped fences. At Chengdu's British Town, pitched roofs and cast-iron street lamps dot the streets. At Shanghai's Thames Town, a Gothic cathedral has become a tourist attraction in itself. Other developments have names like "Top Aristocrat" (Beijing), "the Garden of Monet" (Shanghai), and "Galaxy Dante" (Shenzhen).

Architects and critics within and beyond China have treated these derivative designs with scorn, as shameless kitsch or simply trash. Others cite China’s larger knock-off culture, from handbags to housing, as evidence of the innovation gap between China and the United States. For a larger audience on the Internet, they are merely a punchline, another example of China’s endlessly entertaining wackiness.

In short, the majority of Chinese architectural imitation, oozing with historical romanticism, is not taken seriously.

But perhaps it ought to be.

In Original Copies: Architectural Mimicry in Contemporary China, the first detailed book on the subject, Bianca Bosker argues that the significance of these constructions has been unfairly discounted. Bosker, a senior technology editor at the Huffington Post, has been visiting copycat Chinese villages for some six years, and in her view, these distorted impressions of the West offer a glance at the hopes, dreams and contradictions of China’s middle class.

“Clearly there’s an acknowledgement that there’s something great about Paris,” says Bosker. “But it’s also: ‘We can do it ourselves.'”

Armed with firsthand observation, field research, interviews, and a solid historical background, Bosker’s book is an attempt to change the way we think about Chinese duplitecture. “We’re seeing the Chinese dream in action,” she says. “It has to do with this ability to take control of your life. There’s now this plethora of options to choose from.” That is something new in China, as is the role that private enterprise is taking in molding built environments that will respond to people’s fantasies.

While the experts scoff, the people who build and inhabit these places are quite proud of them. As the saying goes, “The way to live best is to eat Chinese food, drive an American car, and live in a British house. That’s the ideal life.” The Chinese middle class is living in Orange County, Beijing, the same way you listen to reggae music or lounge in Danish furniture.

In practice, though, the depth and scale of this phenomenon has few parallels. No one knows how many facsimile communities there are in China, but the number is increasing every day. “Every time I go looking for more,” Bosker says, “I find more.”

How many are there?

“At least hundreds.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Tianducheng, 13th arrondissement, Paris in China. Courtesy of Bianca Bosker/University of Hawaii Press.[end-div]

Light From Gravity

Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here's the problem: replace damaging and expensive kerosene lamps in Africa with a less harmful and cheaper alternative. And here's the solution:

[tube]1dd9NIlhvlI[/tube]

[div class=attrib]From ars technica:[end-div]

A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.

Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.

The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their money. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.

Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both more simple and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.

It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2000rpm.
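To get a feel for the energy budget being described, here is a rough back-of-the-envelope sketch in Python. The bag mass is our own illustrative assumption (the article gives only the roughly one-millimeter-per-second descent rate and the run times), so treat the result as order-of-magnitude only.

```python
# Rough energy budget for a gravity-powered lamp (illustrative figures only).
# Assumption NOT from the article: bag mass of ~12 kg. The drop height is
# derived from the quoted ~1 mm/s descent over the ~30-minute ambient run.

g = 9.81              # gravitational acceleration, m/s^2
mass = 12.0           # kg -- assumed bag weight
descent_rate = 0.001  # m/s, "a millimeter per second" per the article
run_time = 30 * 60    # s, the quoted half-hour of ambient light

drop = descent_rate * run_time   # ~1.8 m of travel
energy = mass * g * drop         # potential energy released, in joules
mech_power = energy / run_time   # average mechanical power, in watts

print(f"drop: {drop:.1f} m, energy: {energy:.0f} J, power: {mech_power:.2f} W")
# -> roughly 0.1 W of mechanical power; even after dynamo and LED losses,
#    that is enough for a very low-power LED, which squares with "ambient
#    light" rather than bright task lighting.
```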

[div class=attrib]Read more about GravityLight after the jump.[end-div]

[div class=attrib]Video courtesy of GravityLight.[end-div]

Map as Illusion

We love maps here at theDiagonal. We also love ideas that challenge the status quo. And this latest Strange Map, courtesy of Frank Jacobs over at Big Think, does both. What we appreciate about this cartographic masterpiece is that it challenges our visual perception and, more importantly, our assumed hemispheric worldview.

[div class=attrib]Read more of this article after the jump.[end-div]

National Geographic Hits 125

Chances are you have some ancient National Geographic magazines hidden in a box in the attic, or you know someone who does. Either way, it's time to see what you may have been missing all these years. National Geographic celebrates 125 years in 2013, and what better way to celebrate than to look back through some of its glorious photographic archives.

[div class=attrib]See more classic images after the jump.[end-div]

[div class=attrib]Image: 1964, Tanzania: a touching moment between the primatologist and National Geographic grantee Jane Goodall and a young chimpanzee called Flint at Tanzania’s Gombe Stream reserve. Courtesy of Guardian / National Geographic.[end-div]

Climate Change Report

No pithy headline. The latest U.S. National Climate Assessment makes sobering news. The full 1,146 page report is available for download here.

Over the next 30 years (and beyond), it warns of projected sea-level rises along the Eastern Seaboard of the United States, warmer temperatures across much of the nation, and generally warmer and more acidic oceans. More worrying still are the less direct consequences of climate change: increased threats to human health due to severe weather such as storms, drought and wildfires; more vulnerable infrastructure in regions subject to increasingly volatile weather; and rising threats to regional stability and national security due to a less reliable national and global water supply.

[div class=attrib]From Scientific American:[end-div]

The consequences of climate change are now hitting the United States on several fronts, including health, infrastructure, water supply, agriculture and especially more frequent severe weather, a congressionally mandated study has concluded.

A draft of the U.S. National Climate Assessment, released on Friday, said observable change to the climate in the past half-century “is due primarily to human activities, predominantly the burning of fossil fuel,” and that no areas of the United States were immune to change.

“Corn producers in Iowa, oyster growers in Washington State, and maple syrup producers in Vermont have observed changes in their local climate that are outside of their experience,” the report said.

Months after Superstorm Sandy hurtled into the U.S. East Coast, causing billions of dollars in damage, the report concluded that severe weather was the new normal.

“Certain types of weather events have become more frequent and/or intense, including heat waves, heavy downpours, and, in some regions, floods and droughts,” the report said, days after scientists at the National Oceanic and Atmospheric Administration declared 2012 the hottest year ever in the United States.

Some environmentalists looked for the report to energize climate efforts by the White House or Congress, although many Republican lawmakers are wary of declaring a definitive link between human activity and evidence of a changing climate.

The U.S. Congress has been mostly silent on climate change since efforts to pass “cap-and-trade” legislation collapsed in the Senate in mid-2010.

The advisory committee behind the report was established by the U.S. Department of Commerce to integrate federal research on environmental change and its implications for society. It made two earlier assessments, in 2000 and 2009.

Thirteen departments and agencies, from the Agriculture Department to NASA, are part of the committee, which also includes academics, businesses, nonprofits and others.

‘A WARNING TO ALL OF US’

The report noted that average U.S. temperatures have risen by about 1.5 degrees F (0.83 degrees C) since 1895, when reliable national record-keeping began, and that more than 80 percent of that increase has occurred in the past three decades.

With heat-trapping gases already in the atmosphere, temperatures could rise by a further 2 to 4 degrees F (1.1 to 2.2 degrees C) in most parts of the country over the next few decades, the report said.

[div class=attrib]Read the entire article following the jump.[end-div]

Plagiarism is the Sincerest Form of Capitalism

Plagiarism is a fine art in China. But it's also very big business. The nation knocks off everything, from Hollywood and Bollywood movies to software, electronics, appliances, drugs, and military equipment. Now it has moved on to copying architectural plans.

[div class=attrib]From the Telegraph:[end-div]

China is famous for its copy-cat architecture: you can find replicas of everything from the Eiffel Tower and the White House to an Austrian village across its vast land. But now they have gone one step further: recreating a building that hasn’t even been finished yet. A building designed by the Iraqi-British architect Dame Zaha Hadid for Beijing has been copied by a developer in Chongqing, south-west China, and now the two projects are racing to be completed first.

Dame Zaha, whose Wangjing Soho complex consists of three pebble-like constructions and will house an office and retail complex, unveiled her designs in August 2011 and hopes to complete the project next year.

Meanwhile, a remarkably similar project called Meiquan 22nd Century is being constructed in Chongqing, one that experts (and anyone with eyes, really) deem a rip-off. The developers of the Soho complex are concerned that the copy is being built at a much faster rate than their own project.

“It is possible that the Chongqing pirates got hold of some digital files or renderings of the project,” Satoshi Ohashi, project director at Zaha Hadid Architects, told Der Spiegel online. “[From these] you could work out a similar building if you are technically very capable, but this would only be a rough simulation of the architecture.”

So where does the law stand? Reporting on the intriguing case, China Intellectual Property magazine commented, “Up to now, there is no special law in China which has specific provisions on IP rights related to architecture.” They added that if it went to court, the likely outcome would be payment of compensation to Dame Zaha’s firm, rather than the defendant being forced to pull the building down. However, Dame Zaha seems somewhat unfazed about the structure, simply remarking that if the finished building contains a certain amount of innovation then “that could be quite exciting”. One of the world’s most celebrated architects, Dame Zaha – who recently designed the Aquatics Centre for the London Olympics – has 11 current projects in China. She is quite the star over there: 15,000 fans flocked to see her give a talk at the unveiling of the designs for the complex.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Wangjing Soho Architecture. Courtesy of Zaha Hadid Architects.[end-div]

The Future of the Grid

Two common complaints dog the sustainable energy movement: first, energy generated from the sun and wind is not always available; second, renewable energy is too costly. A new study debunks these notions, showing that cost-effective renewable energy could meet our needs 99.9 percent of the time by 2030.

[div class=attrib]From ars technica:[end-div]

You’ve probably heard the argument: wind and solar power are well and good, but what about when the wind doesn’t blow and the sun doesn’t shine? But it’s always windy and sunny somewhere. Given a sufficient distribution of energy resources and a large enough network of electrically conducting tubes, plus a bit of storage, these problems can be overcome—technologically, at least.

But is it cost-effective to do so? A new study from the University of Delaware finds that renewable energy sources can, with the help of storage, power a large regional grid for up to 99.9 percent of the time using current technology. By 2030, the cost of doing so will hit parity with current methods. Further, if you can live with renewables meeting your energy needs for only 90 percent of the time, the economics become positively compelling.

“These results break the conventional wisdom that renewable energy is too unreliable and expensive,” said study co-author Willett Kempton, a professor at the University of Delaware’s School of Marine Science and Policy. “The key is to get the right combination of electricity sources and storage—which we did by an exhaustive search—and to calculate costs correctly.”

By exhaustive, Kempton is referring to the 28 billion combinations of inland and offshore wind and photovoltaic solar sources combined with centralized hydrogen, centralized batteries, and grid-integrated vehicles analyzed in the study. The researchers deliberately overlooked constant renewable sources of energy such as geothermal and hydro power on the grounds that they are less widely available geographically.

These technologies were applied to a real-world test case: that of the PJM Interconnection regional grid, which covers parts of states from New Jersey to Indiana, and south to North Carolina. The model used hourly consumption data from the years 1999 to 2002; during that time, the grid had a generational capacity of 72GW catering to an average demand of 31.5GW. Taking in 13 states, either whole or in part, the PJM Interconnection constitutes one fifth of the USA’s grid. “Large” is no overstatement, even before considering more recent expansions that don’t apply to the dataset used.

The researchers constructed a computer model using standard solar and wind analysis tools. They then fed in hourly weather data from the region for the whole four-year period—35,040 hours worth. The goal was to find the minimum cost at which the energy demand could be met entirely by renewables for a given proportion of the time, based on the following game plan:

  1. When there’s enough renewable energy direct from source to meet demand, use it. Store any surplus.
  2. When there is not enough renewable energy direct from source, meet the shortfall with the stored energy.
  3. When there is not enough renewable energy direct from source, and the stored energy reserves are insufficient to bridge the shortfall, top up the remaining few percent of the demand with fossil fuels.
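A minimal sketch of that dispatch priority in Python. The function and the storage model (a single lossless reservoir, no transmission constraints) are our own simplifications for illustration, not the study's actual model, which also accounts for storage losses and costs.

```python
def dispatch_hour(demand, renewable, storage, storage_capacity):
    """One hour of the three-step priority order described above (sketch only).

    demand and renewable are energy for the hour (e.g. MWh); storage is the
    current state of charge. Returns (new_storage, fossil_used). Round-trip
    losses and grid constraints are ignored in this simplified version.
    """
    if renewable >= demand:
        # Step 1: renewables cover the load; store any surplus, up to capacity.
        surplus = renewable - demand
        return min(storage + surplus, storage_capacity), 0.0

    shortfall = demand - renewable
    if storage >= shortfall:
        # Step 2: draw down stored energy to bridge the gap.
        return storage - shortfall, 0.0

    # Step 3: storage exhausted; fossil fuel tops up the remainder.
    return 0.0, shortfall - storage
```

Looping such hourly steps over the 35,040 hours of weather data, for each candidate generation-and-storage mix, is essentially how a cost-minimizing search like the one described can be organized.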

Perhaps unsurprisingly, the precise mix required depends upon exactly how much time you want renewables to meet the full load. Much more surprising is the amount of excess renewable infrastructure the model proposes as the most economic. To achieve a 90-percent target, the renewable infrastructure should be capable of generating 180 percent of the load. To meet demand 99.9 percent of the time, that rises to 290 percent.

“So much excess generation of renewables is a new idea, but it is not problematic or inefficient, any more than it is problematic to build a thermal power plant requiring fuel input at 250 percent of the electrical output, as we do today,” the study argues.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bangui Windfarm, Ilocos Norte, Philippines. Courtesy of Wikipedia.[end-div]

Places to Visit Before World’s End

In case you missed all the apocalyptic hoopla, the world is supposed to end today. Now, if you're reading this, you obviously still have a little time, since the Mayans apparently did not specify a precise hour for the prophesied end. So, we highly recommend that you visit one or more of these beautiful places, immediately. Of course, if we're all still here tomorrow, you will have some extra time to take in these breathtaking sights before the next planned doomsday.

[div class=attrib]Check out the top 100 places according to the Telegraph after the jump.[end-div]

[div class=attrib]Image: Lapland for the northern lights. Courtesy of ALAMY / Telegraph.[end-div]

Climate Change: Not in My Neighborhood

It's no surprise that in our daily lives we seek information that reinforces our perceptions, opinions and beliefs about the world around us. It's also the case that if we do not believe in a particular position, we will overlook evidence in our immediate surroundings that challenges that disbelief — climate change is no different.

[div class=attrib]From ars technica:[end-div]

We all know it’s hard to change someone’s mind. In an ideal, rational world, a person’s opinion about some topic would be based on several pieces of evidence. If you were to supply that person with several pieces of stronger evidence that point in another direction, you might expect them to accept the new information and agree with you.

However, this is not that world, and rarely do we find ourselves in a debate with Star Trek’s Spock. There are a great many reasons that we behave differently. One is the way we rate incoming information for trustworthiness and importance. Once we form an opinion, we rate information that confirms our opinion more highly than information that challenges it. This is one form of “motivated reasoning.” We like to think we’re right, and so we are motivated to come to the conclusion that the facts are still on our side.

Publicly contentious issues often put a spotlight on these processes—issues like climate change, for example. In a recent paper published in Nature Climate Change, researchers from George Mason and Yale explore how motivated reasoning influences whether people believe they have personally experienced the effects of climate change.

When it comes to communicating the science of global warming, a common strategy is to focus on the concrete here-and-now rather than the abstract and distant future. The former is easier for people to relate to and connect with. Glazed eyes are the standard response to complicated graphs of projected sea level rise, with ranges of uncertainty and several scenarios of future emissions. Show somebody that their favorite ice fishing spot is iced over for several fewer weeks each winter than it was in the late 1800s, though, and you might have their attention.

Public polls show that acceptance of a warming climate correlates with agreement that one has personally experienced its effects. That could be affirmation that personal experience is a powerful force for the acceptance of climate science. Obviously, there’s another possibility—that those who accept that the climate is warming are more likely to believe they’ve experienced the effects themselves, whereas those who deny that warming is taking place are unlikely to see evidence of it in daily life. That’s, at least partly, motivated reasoning at work. (And of course, this cuts both ways. Individuals who agree that the Earth is warming may erroneously interpret unrelated events as evidence of that fact.)

The survey used for this study was unique in that the same people were polled twice, two and a half years apart, to see how their views changed over time. For the group as a whole, there was evidence for both possibilities—experience affected acceptance, and acceptance predicted statements about experience.

Fortunately, the details were a bit more interesting than that. When you categorize individuals by engagement—essentially how confident and knowledgeable they feel about the facts of the issue—differences are revealed. For the highly-engaged groups (on both sides), opinions about whether climate is warming appeared to drive reports of personal experience. That is, motivated reasoning was prevalent. On the other hand, experience really did change opinions for the less-engaged group, and motivated reasoning took a back seat.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of: New York Times / Steen Ulrik Johannessen / Agence France-Presse — Getty Images.[end-div]

National Emotions Mapped

Are Canadians as a people more emotional than Brazilians? Are Brits as emotional as Mexicans? While generalizing and mapping a nation’s emotionality is dubious at best, this map is nonetheless fascinating.

[div class=attrib]From the Washington Post:[end-div]

Since 2009, the Gallup polling firm has surveyed people in 150 countries and territories on, among other things, their daily emotional experience. Their survey asks five questions, meant to gauge whether the respondent felt significant positive or negative emotions the day prior to the survey. The more times that people answer “yes” to questions such as “Did you smile or laugh a lot yesterday?”, the more emotional they’re deemed to be.
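Here is a minimal sketch of how an index like this might be tallied, assuming (as the description above suggests) that each respondent answers yes or no to the five questions and a country's score is the average share of "yes" answers. The toy data below are hypothetical, not Gallup's.

```python
# Hypothetical survey responses: one list of five yes/no answers per respondent.
respondents = [
    [True, True, False, True, False],    # 3 of 5 "yes"
    [False, False, False, True, False],  # 1 of 5 "yes"
    [True, True, True, True, True],      # 5 of 5 "yes"
]

def emotionality_score(responses):
    """Average percentage of 'yes' answers across all respondents."""
    per_person = [sum(r) / len(r) for r in responses]
    return 100 * sum(per_person) / len(per_person)

print(f"{emotionality_score(respondents):.0f}%")  # -> 60% for this toy sample
```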

Gallup has tallied up the average “yes” responses from respondents in almost every country on Earth. The results, which I’ve mapped out above, are as fascinating as they are indecipherable. The color-coded key in the map indicates the average percentage of people who answered “yes.” Dark purple countries are the most emotional, yellow the least. Here are a few takeaways.

Singapore is the least emotional country in the world. "Singaporeans recognize they have a problem," Bloomberg Businessweek writes of the country's "emotional deficit," citing a culture in which schools "discourage students from thinking of themselves as individuals." They also point to low work satisfaction, competitiveness, and the urban experience: "Staying emotionally neutral could be a way of coping with the stress of urban life in a place where 82 percent of the population lives in government-built housing."

The Philippines is the world’s most emotional country. It’s not even close; the heavily Catholic, Southeast Asian nation, a former colony of Spain and the U.S., scores well above second-ranked El Salvador.

Post-Soviet countries are consistently among the most stoic. Other than Singapore (and, for some reason, Madagascar and Nepal), the least emotional countries in the world are all former members of the Soviet Union. They are also the greatest consumers of cigarettes and alcohol. This could be what you call a chicken-or-egg problem: if the two trends are related, which one came first? Europe appears almost like a gradient here, with emotions increasing as you move west.

People in the Americas are just exuberant. Every nation on the North and South American continents ranked highly on the survey. The United States and Canada are both among the 15 most emotional countries in the world, along with ten Latin American nations. The only countries in the top 15 from outside the Americas, other than the Philippines, are the Arab nations of Oman and Bahrain, both of which rank very highly.

[div class=attrib]Read the entire article following the jump.[end-div]

Testosterone and the Moon

While the United States' military makes no comment, a number of corroborated reports suggest that the country had a plan to drop an atomic bomb on the moon during the height of the Cold War. Apparently, a Hiroshima-like explosion on our satellite would have been seen as a "show of force" by the Soviets. The sheer absurdity of this Dr. Strangelove story makes it all the more real.

[div class=attrib]From the Independent:[end-div]

US military chiefs, keen to intimidate Russia during the Cold War, plotted to blow up the moon with a nuclear bomb, according to project documents kept secret for nearly 45 years.

The army chiefs allegedly developed a top-secret project called, ‘A Study of Lunar Research Flights’ – or ‘Project A119’, in the hope that their Soviet rivals would be intimidated by a display of America’s Cold War muscle.

According to The Sun newspaper, the military bosses developed a classified plan to launch a nuclear weapon 238,000 miles to the moon, where it would be detonated upon impact.

The planners reportedly opted for an atom bomb, rather than a hydrogen bomb, because the latter would be too heavy for the missile.

Physicist Leonard Reiffel, who says he was involved in the project, claims the hope was that the flash from the bomb would intimidate the Russians following their successful launching of the Sputnik satellite in October 1957.

The planning of the explosion reportedly included calculations by astronomer Carl Sagan, who was then a young graduate.

Documents reportedly show the plan was abandoned because of fears it would have an adverse effect on Earth should the explosion fail.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of NASA.[end-div]

Pluralistic Ignorance

Why study the science of climate change when you can study the complexities of climate change deniers themselves? That was the question that led several groups of independent researchers to examine why some people cling to mistaken beliefs and hold inaccurate views of the public consensus.

[div class=attrib]From ars technica:[end-div]

By just about every measure, the vast majority of scientists in general—and climate scientists in particular—have been convinced by the evidence that human activities are altering the climate. However, in several countries, a significant portion of the public has concluded that this consensus doesn’t exist. That has prompted a variety of studies aimed at understanding the large disconnect between scientists and the public, with results pointing the finger at everything from the economy to the weather. Other studies have noted societal influences on acceptance, including ideology and cultural identity.

Those studies have generally focused on the US population, but the public acceptance of climate change is fairly similar in Australia. There, a new study has looked at how societal tendencies can play a role in maintaining mistaken beliefs. The authors of the study have found evidence that two well-known behaviors—the “false consensus” and “pluralistic ignorance”—are helping to shape public opinion in Australia.

False consensus is the tendency of people to think that everyone else shares their opinions. This can arise from the fact that we tend to socialize with people who share our opinions, but the authors note that the effect is even stronger “when we hold opinions or beliefs that are unpopular, unpalatable, or that we are uncertain about.” In other words, our social habits tend to reinforce the belief that we’re part of a majority, and we have a tendency to cling to the sense that we’re not alone in our beliefs.

Pluralistic ignorance is similar, but it’s not focused on our own beliefs. Instead, sometimes the majority of people come to believe that most people think a certain way, even though the majority opinion actually resides elsewhere.

As it turns out, the authors found evidence of both these effects. They performed two identical surveys of over 5,000 Australians, done a year apart; about 1,350 people took the survey both times, which let the researchers track how opinions evolve. Participants were asked to describe their own opinion on climate change, with categories including “don’t know,” “not happening,” “a natural occurrence,” and “human-induced.” After voicing their own opinion, people were asked to estimate what percentage of the population would fall into each of these categories.

In aggregate, over 90 percent of those surveyed accepted that climate change was occurring (a rate much higher than we see in the US), with just over half accepting that humans were driving the change. Only about five percent felt it wasn’t happening, and even fewer said they didn’t know. The numbers changed only slightly between the two polls.

The false consensus effect became obvious when the researchers looked at what these people thought everyone else believed: every single group assumed that its own opinion represented the plurality view of the population. This was most dramatic among those who don't think that the climate is changing; even though they represent far less than 10 percent of the population, they believed that over 40 percent of Australians shared their views. Those who profess ignorance also believed they had lots of company, estimating that their view was shared by a quarter of the populace.

Among those who took the survey twice, the effect became even more pronounced. In the year between the surveys, the respondents went from estimating that 30 percent of the population agreed with them to thinking that 45 percent did. And, in general, this group was the least likely to change its opinion between the two surveys.

But there was also evidence of pluralistic ignorance. Every single group grossly overestimated the number of people who were unsure about climate change or convinced it wasn’t occurring. Even those who were convinced that humans were changing the climate put 20 percent of Australians into each of these two groups.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Flood victims. Courtesy of NRDC.[end-div]

From Finely Textured Beef to Soylent Pink

Blame corporate euphemisms and branding for the obfuscation of everyday things. More sinister yet is the constant re-working of names for our increasingly processed foodstuffs. Only last year, as several influential health studies pointed towards the detrimental health effects of high fructose corn syrup (HFCS), did the food industry act, though not by removing copious amounts of the addictive additive from many processed foods. Rather, the industry attempted to re-brand HFCS as "corn sugar". And now, on to the battle over "soylent pink", also known as "pink slime".

[div class=attrib]From Slate:[end-div]

What do you call a mash of beef trimmings that have been chopped and then spun in a centrifuge to remove the fatty bits and gristle? According to the government and to the company that invented the process, you call it lean finely textured beef. But to the natural-food crusaders who would have the stuff removed from the nation’s hamburgers and tacos, the protein-rich product goes by another, more disturbing name: Pink slime.

The story of this activist rebranding—from lean finely textured beef to pink slime—reveals just how much these labels matter. It was the latter phrase that, for example, birthed the great ground-beef scare of 2012. In early March, journalists at both the Daily and at ABC began reporting on a burger panic: Lax rules from the U.S. Department of Agriculture allowed producers to fill their ground-beef packs with a slimy, noxious byproduct—a mush the reporters called unsanitary and without much value as a food. Coverage linked back to a New York Times story from 2009 in which the words pink slime had appeared in public for the first time in a quote from an email written by a USDA microbiologist who was frustrated at a decision to leave the additive off labels for ground meat.

The slimy terror spread in the weeks that followed. Less than a month after ABC’s initial reports, almost a quarter million people had signed a petition to get pink slime out of public school cafeterias. Supermarket chains stopped selling burger meat that contained it—all because of a shift from four matter-of-fact words to two visceral ones.

And now that rebranding has become the basis for a 263-page lawsuit. Last month, Beef Products Inc., the first and principal producer of lean/pink/textured/slimy beef, filed a defamation claim against ABC (along with that microbiologist and a former USDA inspector) in a South Dakota court. The company says the network carried out a malicious and dishonest campaign to discredit its ground-beef additive and that this work had grievous consequences. When ABC began its coverage, Beef Products Inc. was selling 5 million pounds of slime/beef/whatever every week. Then three of its four plants were forced to close, and production dropped to 1.6 million pounds. A weekly profit of $2.3 million had turned into a $583,000 weekly loss.

At Reuters, Steven Brill argued that the suit has merit. I won’t try to comment on its legal viability, but the details of the claim do provide some useful background about how we name our processed foods, in both industry and the media. It turns out the paste now known within the business as lean finely textured beef descends from an older, less purified version of the same. Producers have long tried to salvage the trimmings from a cattle carcass by cleaning off the fat and the bacteria that often congregate on these leftover parts. At best they could achieve a not-so-lean class of meat called partially defatted chopped beef, which USDA deemed too low in quality to be a part of hamburger or ground meat.

By the late 1980s, though, Eldon Roth of Beef Products Inc. had worked out a way to make those trimmings a bit more wholesome. He’d found a way, using centrifuges, to separate the fat more fully. In 1991, USDA approved his product as fat reduced beef and signed off on its use in hamburgers. JoAnn Smith, a government official and former president of the National Cattlemen’s Association, signed off on this “euphemistic designation,” writes Marion Nestle in Food Politics. (Beef Products, Inc. maintains that this decision “was not motivated by any official’s so-called ‘links to the beef industry.’ “) So 20 years ago, the trimmings had already been reformulated and rebranded once.

But the government still said that fat reduced beef could not be used in packages marked “ground beef.” (The government distinction between hamburger and ground beef is that the former can contain added fat, while the latter can’t.) So Beef Products Inc. pressed its case, and in 1993 it convinced the USDA to approve the mash for wider use, with a new and better name: lean finely textured beef. A few years later, Roth started killing the microbes on his trimmings with ammonia gas and got approval to do that, too. With government permission, the company went on to sell several billion pounds of the stuff in the next two decades.

In the meantime, other meat processors started making something similar but using slightly different names. AFA Foods (which filed for bankruptcy in April after the recent ground-beef scandal broke), has referred to its products as boneless lean beef trimmings, a more generic term. Cargill, which decontaminates its meat with citric acid in place of ammonia gas, calls its mash of trimmings finely textured beef.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Industrial ground beef. Courtesy of Wikipedia.[end-div]

GigaBytes and TeraWatts

Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.

Yet while the business models of these internet service providers remain ethereal, one segment of the business ecosystem is salivating at the staggering demand for electrical power: the electricity companies and utilities.

[div class=attrib]From the New York Times:[end-div]

Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.

The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.

Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.

That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.

They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.

A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.

Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.

To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.

Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.

“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the AP / Thanassis Stavrakis.[end-div]

A Link Between BPA and Obesity

You have probably heard of BPA. It's a compound used in the manufacture of many plastics, especially hard, polycarbonate plastics. Interestingly, it has hormone-like characteristics, mimicking estrogen. As a result, BPA crops up in many studies showing adverse health effects. As a precaution, the U.S. Food and Drug Administration (FDA) several years ago banned the use of BPA in products aimed at young children, such as baby bottles. But the evidence remains inconsistent, so BPA is still found in many products today. Now comes another study linking BPA to obesity.

[div class=attrib]From Smithsonian:[end-div]

Since the 1960s, manufacturers have widely used the chemical bisphenol-A (BPA) in plastics and food packaging. Only recently, though, have scientists begun thoroughly looking into how the compound might affect human health—and what they’ve found has been a cause for concern.

Starting in 2006, a series of studies, mostly in mice, indicated that the chemical might act as an endocrine disruptor (by mimicking the hormone estrogen), cause problems during development and potentially affect the reproductive system, reducing fertility. After a 2010 Food and Drug Administration report warned that the compound could pose an especially hazardous risk for fetuses, infants and young children, BPA-free water bottles and food containers started flying off the shelves. In July, the FDA banned the use of BPA in baby bottles and sippy cups, but the chemical is still present in aluminum cans, containers of baby formula and other packaging materials.

Now comes another piece of data on a potential risk from BPA but in an area of health in which it has largely been overlooked: obesity. A study by researchers from New York University, published today in the Journal of the American Medical Association, looked at a sample of nearly 3,000 children and teens across the country and found a “significant” link between the amount of BPA in their urine and the prevalence of obesity.

“This is the first association of an environmental chemical in childhood obesity in a large, nationally representative sample,” said lead investigator Leonardo Trasande, who studies the role of environmental factors in childhood disease at NYU. “We note the recent FDA ban of BPA in baby bottles and sippy cups, yet our findings raise questions about exposure to BPA in consumer products used by older children.”

The researchers pulled data from the 2003 to 2008 National Health and Nutrition Examination Surveys, and after controlling for differences in ethnicity, age, caregiver education, income level, sex, caloric intake, television viewing habits and other factors, they found that children and adolescents with the highest levels of BPA in their urine had a 2.6 times greater chance of being obese than those with the lowest levels. Overall, 22.3 percent of those in the quartile with the highest levels of BPA were obese, compared with just 10.3 percent of those in the quartile with the lowest levels of BPA.
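As a rough check on how the "2.6 times greater chance" relates to those raw percentages, here is the unadjusted odds ratio computed from the quoted prevalences. This is only a sketch: the study's published figure is adjusted for the covariates listed above, so an exact match is not expected.

```python
# Unadjusted odds ratio from the quoted obesity prevalences (sketch only;
# the study's 2.6 estimate is covariate-adjusted, so a small gap is expected).
p_high = 0.223   # obesity prevalence in the highest-BPA quartile
p_low = 0.103    # obesity prevalence in the lowest-BPA quartile

odds_high = p_high / (1 - p_high)
odds_low = p_low / (1 - p_low)
odds_ratio = odds_high / odds_low

print(f"unadjusted odds ratio: {odds_ratio:.2f}")  # -> about 2.5
```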

The vast majority of BPA in our bodies comes from ingestion of contaminated food and water. The compound is often used as an internal barrier in food packaging, so that the product we eat or drink does not come into direct contact with a metal can or plastic container. When heated or washed, though, plastics containing BPA can break down and release the chemical into the food or liquid they hold. As a result, roughly 93 percent of the U.S. population has detectable levels of BPA in their urine.

The researchers point specifically to the continuing presence of BPA in aluminum cans as a major problem. “Most people agree the majority of BPA exposure in the United States comes from aluminum cans,” Trasande said. “Removing it from aluminum cans is probably one of the best ways we can limit exposure. There are alternatives that manufacturers can use to line aluminum cans.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bisphenol A. Courtesy of Wikipedia.[end-div]

An Answer is Blowing in the Wind

Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from several million wind turbines.

[div class=attrib]From Ars Technica:[end-div]

Is there not enough wind blowing across the planet to satiate our demands for electricity? If there is, would harnessing that much of it begin to actually affect the climate?

Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.

The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?

In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.

The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.

The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobsen at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?

For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
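For a sense of what that limit means for a single machine, here is the standard formula for power in the wind with the Betz coefficient applied. The rotor diameter, wind speed, and practical power coefficient below are illustrative assumptions of ours, not figures from either study.

```python
import math

# Power carried by wind through a rotor disc: P = 0.5 * rho * A * v^3.
# The Betz limit caps the extractable fraction at 16/27, about 59.3 percent.
rho = 1.225        # air density at sea level, kg/m^3
diameter = 100.0   # m   -- assumed rotor diameter (roughly a multi-megawatt turbine)
v = 10.0           # m/s -- assumed steady wind speed

area = math.pi * (diameter / 2) ** 2
p_wind = 0.5 * rho * area * v ** 3   # power in the undisturbed wind, watts
p_betz = (16 / 27) * p_wind          # theoretical maximum any turbine can extract
p_real = 0.45 * p_wind               # with a typical real-world power coefficient

print(f"wind: {p_wind / 1e6:.1f} MW, Betz limit: {p_betz / 1e6:.1f} MW, "
      f"realistic: {p_real / 1e6:.1f} MW")
# -> roughly 4.8 MW in the wind, ~2.9 MW at the Betz limit, ~2.2 MW in practice
```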

Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding any turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobsen and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.

Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue improper placement will lead to incorrect reductions in wind speed.

Jacobsen and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are placed only on land and in shallow offshore locations, the saturation point is 80 terawatts, for 1,500 terawatts of installed capacity.

For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.

These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the numbers of turbines required aren’t plausible—300 million five-megawatt turbines in the smallest case (land plus shallow offshore).

[div class=attrib]Read the entire article after the jump.[end-div]

Air Conditioning in a Warming World

[div class=attrib]From the New York Times:[end-div]

THE blackouts that left hundreds of millions of Indians sweltering in the dark last month underscored the status of air-conditioning as one of the world’s most vexing environmental quandaries.

Fact 1: Nearly all of the world’s booming cities are in the tropics and will be home to an estimated one billion new consumers by 2025. As temperatures rise, they — and we — will use more air-conditioning.

Fact 2: Air-conditioners draw copious electricity, and deliver a double whammy in terms of climate change, since both the electricity they use and the coolants they contain result in planet-warming emissions.

Fact 3: Scientific studies increasingly show that health and productivity rise significantly if indoor temperature is cooled in hot weather. So cooling is not just about comfort.

Sum up these facts and it’s hard to escape: Today’s humans probably need air-conditioning if they want to thrive and prosper. Yet if all those new city dwellers use air-conditioning the way Americans do, life could be one stuttering series of massive blackouts, accompanied by disastrous planet-warming emissions.

We can’t live with air-conditioning, but we can’t live without it.

“It is true that air-conditioning made the economy happen for Singapore and is doing so for other emerging economies,” said Pawel Wargocki, an expert on indoor air quality at the International Center for Indoor Environment and Energy at the Technical University of Denmark. “On the other hand, it poses a huge threat to global climate and energy use. The current pace is very dangerous.”

Projections of air-conditioning use are daunting. In 2007, only 11 percent of households in Brazil and 2 percent in India had air-conditioning, compared with 87 percent in the United States, which has a more temperate climate, said Michael Sivak, a research professor in energy at the University of Michigan. “There is huge latent demand,” Mr. Sivak said. “Current energy demand does not yet reflect what will happen when these countries have more money and more people can afford air-conditioning.” He has estimated that, based on its climate and the size of the population, the cooling needs of Mumbai alone could be about a quarter of those of the entire United States, which he calls “one scary statistic.”

It is easy to decry the problem but far harder to know what to do, especially in a warming world where people in the United States are using our existing air-conditioners more often. The number of cooling degree days — a measure of how often cooling is needed — was 17 percent above normal in the United States in 2010, according to the Environmental Protection Agency, leading to “an increase in electricity demand.” This July was the hottest ever in the United States.

Likewise, the blackouts in India were almost certainly related to the rising use of air-conditioning and cooling, experts say, even if the immediate culprit was a grid that did not properly balance supply and demand.

The late arrival of this year’s monsoons, which normally put an end to India’s hottest season, may have devastated the incomes of farmers who needed the rain. But it “put smiles on the faces of those who sell white goods — like air-conditioners and refrigerators — because it meant lots more sales,” said Rajendra Shende, chairman of the Terre Policy Center in Pune, India.

“Cooling is the craze in India — everyone loves cool temperatures and getting to cool temperatures as quickly as possible,” Mr. Shende said. He said that cooling has become such a cultural priority that rather than advertise a car’s acceleration, salesmen in India now emphasize how fast its air-conditioner can cool.

Scientists are scrambling to invent more efficient air-conditioners and better coolant gases to minimize electricity use and emissions. But so far the improvements have been dwarfed by humanity’s rising demands.

And recent efforts to curb the use of air-conditioning, by fiat or persuasion, have produced sobering lessons.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Parkland Air Conditioning.[end-div]

When to Eat Your Fruit and Veg

It's time to jettison the $1.99 hyper-burger and super-sized fries and try some real fruits and vegetables. You know — the kind of product that comes directly from the soil. But when is the best time to suck on a juicy peach or chomp on some crispy radicchio?

A great chart, below, summarizes which fruits and vegetables are generally in season for the Northern Hemisphere.

[div class=attrib]Infographic courtesy of Visual News, designed by Column Five.[end-div]