- Cellphone Only Lanes>
You’ve seen the high-occupancy vehicle lane on select highways. You’ve seen pedestrian-only zones. You’ve seen cycle-friendly zones. Now, it’s time for the slow-walking lane — for pedestrians using smartphones! Perhaps we’ll eventually see separate lanes for tourists with tablets, smartwatch users and, of course, a completely separate zone for texting t(w)eens.
From the Independent:
The Chinese city of Chongqing claims to have introduced the world’s first ‘slow-walking lane’ for smartphone users.
No more will the most efficient of pedestrians be forced to stare frustratedly at the occiput of their meandering counterparts.
Two 100-ft lanes have been painted on to a pavement in the city, with one side reserved for those wanting to stare into their handheld device and the other exclusively for those who can presumably spare five minutes without checking their latest Weibo update.
However, according to the Telegraph, officials in Chongqing only introduced the signage to make the point that “it is best not to play with your phone while walking”.
Read the entire story here.
Image: City of Chongqing. Courtesy of the Independent.
- National Extinction Coming Soon>
Based on declining fertility rates in some Asian nations, a new study predicts complete national extinctions in the not-too-distant future.
From the Telegraph:
South Koreans will be ‘extinct’ by 2750 if nothing is done to halt the nation’s falling fertility rate, according to a study by The National Assembly Research Service in Seoul.
The fertility rate declined to a new low of 1.19 children per woman in 2013, the study showed, well below the fertility rate required to sustain South Korea’s current population of 50 million people, the Chosun Ilbo reported.
In a simulation, the NARS study suggests that the population will shrink to 40 million in 2056 and 10 million in 2136. The last South Korean, the report indicates, will die in 2750, making it the first national group in the world to become extinct.
The simulation is a worst-case scenario and does not consider possible changes in immigration policy, for example.
The study, carried out at the request of Yang Seung-jo, a member of the opposition New Politics Alliance for Democracy, underlines the challenges facing a number of nations in the Asia-Pacific region.
Japan, Taiwan, Singapore and increasingly China are all experiencing growing financial pressures caused by rising healthcare costs and pension payments for an elderly population.
The problem is particularly serious in South Korea, where more than 38 per cent of the population is predicted to be of retirement age by 2050, according to the National Statistics Office. The equivalent figure in Japan is an estimated 39.6 per cent by 2050.
According to a 2012 study conducted by Tohoku University, Japan will go extinct in about one thousand years, with the last Japanese child born in 3011.
David Coleman, a population expert at Oxford University, has previously warned that South Korea’s fertility rate is so low that it threatens the existence of the nation.
The NARS study suggests that the southern Korean port city of Busan is most at risk, largely because of a sharp decline in the number of young and middle-aged residents, and that the last person will be born in the city in 2413.
Read the entire article here.
- MondayMap: Drought Mapping>
The NYT has a fascinating and detailed article, bursting with charts and statistics, that shows the pervasive grip of the drought in the United States. The desert Southwest and West continue to be parched and scorching. This is not a pretty picture for farmers, and increasingly for those (sub-)urban dwellers who rely upon a fragile and dwindling water supply.
From the NYT:

Droughts appear to be intensifying over much of the West and Southwest as a result of global warming. Over the past decade, droughts in some regions have rivaled the epic dry spells of the 1930s and 1950s. About 34 percent of the contiguous United States was in at least a moderate drought as of July 22.

Things have been particularly bad in California, where state officials have approved drastic measures to reduce water consumption. California farmers, without water from reservoirs in the Central Valley, are left to choose which of their crops to water. Parts of Texas, Oklahoma and surrounding states are also suffering from drought conditions.

The relationship between the climate and droughts is complicated. Parts of the country are becoming wetter: East of the Mississippi, rainfall has been rising. But global warming also appears to be causing moisture to evaporate faster in places that were already dry. Researchers believe drought conditions in these places are likely to intensify in coming years.

There has been little relief for some places since the summer of 2012. At the recent peak this May, about 40 percent of the country was abnormally dry or in at least a moderate drought.
Read the entire story and see the statistics for yourself here.
Image courtesy of Drought Monitor / NYT.
- Climate Change Denial: English Only>
It’s official. Native English-speakers are more likely to be in denial over climate change than non-English speakers. In fact, many who do not see a human hand in our planet’s environmental and climatic troubles are located in the United States, Britain, Australia and Canada. Enough said, in English.
Now, the Guardian would have you believe that media monopolist — Rupert Murdoch — is behind the climate change skeptics and deniers. After all, he is well known for his views on climate and his empire controls large swathes of the media that most English-speaking people consume. However, it’s probably a little more complicated.
From the Guardian:
Here in the United States, we fret a lot about global warming denial. Not only is it a dangerous delusion, it’s an incredibly prevalent one. Depending on your survey instrument of choice, we regularly learn that substantial minorities of Americans deny, or are sceptical of, the science of climate change.
The global picture, however, is quite different. For instance, recently the UK-based market research firm Ipsos MORI released its “Global Trends 2014” report, which included a number of survey questions on the environment asked across 20 countries. (h/t Leo Hickman). And when it came to climate change, the result was very telling.
Note that these results are not perfectly comparable across countries, because the data were gathered online, and Ipsos MORI cautions that for developing countries like India and China, “the results should be viewed as representative of a more affluent and ‘connected’ population.”
Nonetheless, some pretty significant patterns are apparent. Perhaps most notably: Not only is the United States clearly the worst in its climate denial, but Great Britain and Australia are second and third worst, respectively. Canada, meanwhile, is the seventh worst.
What do these four nations have in common? They all speak the language of Shakespeare.
Why would that be? After all, presumably there is nothing about English, in and of itself, that predisposes you to climate change denial. Words and phrases like “doubt,” “natural causes,” “climate models,” and other sceptic mots are readily available in other languages. So what’s the real cause?
One possible answer is that it’s all about the political ideologies prevalent in these four countries.
“I do not find these results surprising,” says Riley Dunlap, a sociologist at Oklahoma State University who has extensively studied the climate denial movement. “It’s the countries where neo-liberalism is most hegemonic and with strong neo-liberal regimes (both in power and lurking on the sidelines to retake power) that have bred the most active denial campaigns—US, UK, Australia and now Canada. And the messages employed by these campaigns filter via the media and political elites to the public, especially the ideologically receptive portions.” (Neoliberalism is an economic philosophy centered on the importance of free markets and broadly opposed to big government interventions.)
Indeed, the English language media in three of these four countries are linked together by a single individual: Rupert Murdoch. An apparent climate sceptic or lukewarmer, Murdoch is the chairman of News Corp and 21st Century Fox. (You can watch him express his climate views here.) Some of the media outlets subsumed by the two conglomerates that he heads are responsible for quite a lot of English language climate scepticism and denial.
In the US, Fox News and the Wall Street Journal lead the way; research shows that Fox watching increases distrust of climate scientists. (You can also catch Fox News in Canada.) In Australia, a recent study found that slightly under a third of climate-related articles in 10 top Australian newspapers “did not accept” the scientific consensus on climate change, and that News Corp papers — the Australian, the Herald Sun, and the Daily Telegraph — were particular hotbeds of scepticism. “The Australian represents climate science as matter of opinion or debate rather than as a field for inquiry and investigation like all scientific fields,” noted the study.
And then there’s the UK. A 2010 academic study found that while News Corp outlets in this country from 1997 to 2007 did not produce as much strident climate scepticism as did their counterparts in the US and Australia, “the Sun newspaper offered a place for scornful sceptics on its opinion pages as did The Times and Sunday Times to a lesser extent.” (There are also other outlets in the UK, such as the Daily Mail, that feature plenty of scepticism but aren’t owned by News Corp.)
Thus, while there may not be anything inherent to the English language that impels climate denial, the fact that English language media are such a major source of that denial may in effect create a language barrier.
And media aren’t the only reason that denialist arguments are more readily available in the English language. There’s also the Anglophone nations’ concentration of climate “sceptic” think tanks, which provide the arguments and rationalisations necessary to feed this anti-science position.
According to a study in the journal Climatic Change earlier this year, the US is home to 91 different organisations (think tanks, advocacy groups, and trade associations) that collectively comprise a “climate change counter-movement.” The annual funding of these organisations, collectively, is “just over $900 million.” That is a truly massive amount of English-speaking climate “sceptic” activity, and while the study was limited to the US, it is hard to imagine that anything comparable exists in non-English speaking countries.
Read the entire article here.
- The 1970s Tube>
London’s heavily used urban jewel — the Tube — has long been a great venue for people-watching. The 1970s were no exception, as this collection of photographs from Bob Mazzer shows.
See more images here.
- Dinosaurs of Retail>
Shopping malls in the United States were in their prime in the 1970s and ’80s. Many had positioned themselves as a bright, clean, utopian alternative to inner-city blight and decay. A quarter of a century on, while the mega-malls may be thriving, their numerous smaller suburban brethren are seeing lower sales. As internet shopping and retailing pervades all reaches of our society, many midsize malls are decaying or shutting down completely. Documentary photographer Seph Lawless captures this fascinating transition in a new book: Black Friday: The Collapse of the American Shopping Mall.
From the Guardian:
It is hard to believe there has ever been any life in this place. Shattered glass crunches under Seph Lawless’s feet as he strides through its dreary corridors. Overhead lights attached to ripped-out electrical wires hang suspended in the stale air and fading wallpaper peels off the walls like dead skin.
Lawless sidesteps debris as he passes from plot to plot in this retail graveyard called Rolling Acres Mall in Akron, Ohio. The shopping centre closed in 2008, and its largest retailers, which had tried to make it as standalone stores, emptied out by the end of last year. When Lawless stops to overlook a two-storey opening near the mall’s once-bustling core, only an occasional drop of water, dribbling through missing ceiling tiles, breaks the silence.
“You came, you shopped, you dressed nice – you went to the mall. That’s what people did,” says Lawless, a pseudonymous photographer who grew up in a suburb of nearby Cleveland. “It was very consumer-driven and kind of had an ugly side, but there was something beautiful about it. There was something there.”
Gazing down at the motionless escalators, dead plants and empty benches below, he adds: “It’s still beautiful, though. It’s almost like ancient ruins.”
Dying shopping malls are speckled across the United States, often in middle-class suburbs wrestling with socioeconomic shifts. Some, like Rolling Acres, have already succumbed. Estimates on the share that might close or be repurposed in coming decades range from 15 to 50%. Americans are returning downtown; online shopping is taking a 6% bite out of brick-and-mortar sales; and to many iPhone-clutching, city-dwelling and frequently jobless young people, the culture that spawned satire like Mallrats seems increasingly dated, even cartoonish.
According to longtime retail consultant Howard Davidowitz, numerous midmarket malls, many of them born during the country’s suburban explosion after the second world war, could very well share Rolling Acres’ fate. “They’re going, going, gone,” Davidowitz says. “They’re trying to change; they’re trying to get different kinds of anchors, discount stores … [But] what’s going on is the customers don’t have the fucking money. That’s it. This isn’t rocket science.”
Shopping culture follows housing culture. Sprawling malls were therefore a natural product of the postwar era, as Americans with cars and fat wallets sprawled to the suburbs. They were thrown up at a furious pace as shoppers fled cities, peaking at a few hundred per year at one point in the 1980s, according to Paco Underhill, an environmental psychologist and author of Call of the Mall: The Geography of Shopping. Though construction has since tapered off, developers left a mall overstock in their wake.
Currently, the US contains around 1,500 of the expansive “malls” of suburban consumer lore. Most share a handful of bland features. Brick exoskeletons usually contain two storeys of inward-facing stores separated by tile walkways. Food courts serve mediocre pizza. Parking lots are big enough to easily misplace a car. And to anchor them economically, malls typically depend on department stores: huge vendors offering a variety of products across interconnected sections.
For mid-century Americans, these gleaming marketplaces provided an almost utopian alternative to the urban commercial district, an artificial downtown with less crime and fewer vermin. As Joan Didion wrote in 1979, malls became “cities in which no one lives but everyone consumes”. Peppered throughout disconnected suburbs, they were a place to see and be seen, something shoppers have craved since the days of the Greek agora. And they quickly matured into a self-contained ecosystem, with their own species – mall rats, mall cops, mall walkers – and an annual feeding frenzy known as Black Friday.
“Local governments had never dealt with this sort of development and were basically bamboozled [by developers],” Underhill says of the mall planning process. “In contrast to Europe, where shopping malls are much more a product of public-private negotiation and funding, here in the US most were built under what I call ‘cowboy conditions’.”
Shopping centres in Europe might contain grocery stores or childcare centres, while those in Japan are often built around mass transit. But the suburban American variety is hard to get to and sells “apparel and gifts and damn little else”, Underhill says.
Nearly 700 shopping centres are “super-regional” megamalls, retail leviathans usually of at least 1 million square feet and upward of 80 stores. Megamalls typically outperform their 800 slightly smaller, “regional” counterparts, though size and financial health don’t overlap entirely. It’s clearer, however, that luxury malls in affluent areas are increasingly forcing the others to fight for scraps. Strip malls – up to a few dozen tenants conveniently lined along a major traffic artery – are retail’s bottom feeders and so well-suited to the new environment. But midmarket shopping centres have begun dying off alongside the middle class that once supported them. Regional malls have suffered at least three straight years of declining profit per square foot, according to the International Council of Shopping Centres (ICSC).
Read the entire story here.
Image: Mall of America. Courtesy of Wikipedia.
- Thwaites Glacier>
Over the coming years the words “Thwaites Glacier” will become known to many people, especially those who make their home near the world’s oceans. The thawing of Antarctic ice and the accelerating melting of its glaciers — of which Thwaites is a prime example — pose an increasing threat not just to our coasts, but to us all.
Thwaites is one of six mega-glaciers that drain into West Antarctica’s Amundsen Sea. If all were to melt completely, as they are continuing to do, global sea level is projected to rise an average of 4½ feet. Astonishingly, this catastrophe in the making has passed a tipping point — climatologists and glaciologists now tend to agree that the melting is irreversible and accelerating.
From ars technica:
Today, researchers at UC Irvine and the Jet Propulsion Laboratory have announced results indicating that glaciers across a large area of West Antarctica have been destabilized and that there is little that will stop their continuing retreat. These glaciers are all that stand between the ocean and a massive basin of ice that sits below sea level. Should the sea invade this basin, we’d be committed to several meters of sea level rise.
Even in the short term, the new findings should increase our estimates for sea level rise by the end of the century, the scientists suggest. But the ongoing process of retreat and destabilization will mean that the area will contribute to rising oceans for centuries.
The press conference announcing these results is ongoing. We will have a significant update on this story later today.
UPDATE (2:05pm CDT):
The glaciers in question are in West Antarctica, and drain into the Amundsen Sea. On the coastal side, the ends of the glacier are actually floating on ocean water. Closer to the coast, there’s what’s called a “grounding line,” where the weight of the ice above sea level pushes the bottom of the glacier down against the sea bed. From there on, back to the interior of Antarctica, all of the ice is directly in contact with the Earth.
That’s a rather significant fact, given that, just behind a range of coastal hills, all of the ice is sitting in a huge basin that’s significantly below sea level. In total, the basin contains enough ice to raise sea levels approximately four meters, largely because the ice piled in there rises significantly above sea level.
Because of this configuration, the grounding line of the glaciers that drain this basin act as a protective barrier, keeping the sea back from the base of the deeper basin. Once ocean waters start infiltrating the base of a glacier, the glacier melts, flows faster, and thins. This lessens the weight holding the glacier down, ultimately causing it to float, which hastens its break up. Since the entire basin is below sea level (in some areas by over a kilometer), water entering the basin via any of the glaciers could destabilize the entire thing.
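The scale of these numbers can be sanity-checked with the standard oceanographic conversion: roughly 362 gigatonnes of melted land ice raise global mean sea level by about one millimetre, because a gigatonne of meltwater occupies about one cubic kilometre spread over the ocean surface. A minimal sketch of that arithmetic (the ocean surface area is the only assumed constant; the figures here are illustrative, not from the studies above):

```python
# Convert a mass of melted land ice (gigatonnes) into global mean
# sea-level rise (millimetres). Assumes a global ocean surface area
# of ~3.618e8 km^2, the commonly used figure, and that 1 Gt of
# meltwater occupies ~1 km^3.
OCEAN_AREA_KM2 = 3.618e8

def ice_loss_to_sea_level_mm(gigatonnes: float) -> float:
    """Sea-level rise (mm) from a given mass of melted land ice (Gt)."""
    volume_km3 = gigatonnes            # 1 Gt of water == 1 km^3
    rise_km = volume_km3 / OCEAN_AREA_KM2
    return rise_km * 1e6               # km -> mm

# ~362 Gt of ice loss corresponds to ~1 mm of global sea-level rise.
print(round(ice_loss_to_sea_level_mm(362), 2))
```

By this conversion, the roughly four metres of potential rise stored in the basin corresponds to on the order of 1.45 million gigatonnes of ice.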
Thus, understanding the dynamics of the grounding lines is critical. Today’s announcements have been driven by two publications. One of them models the behavior of one of these glaciers, and shows that it has likely reached a point where it will be prone to a sudden retreat sometime in the next few centuries. The second examines every glacier draining this basin, and shows that all but one of them are currently losing contact with their grounding lines.
The data come from two decades’ worth of observations by the ESA’s Earth Remote Sensing satellites. Their radar performs two key functions: it peers through the ice to get a sense of the terrain that lies buried under the ice near the grounding line, and, through interferometry, it tracks the dynamics of the ice sheet’s flow in the area, as well as its thinning and the location of the grounding line itself. The study tracks a number of glaciers that all drain into the region: Pine Island, Thwaites, Haynes, and Smith/Kohler.
As we’ve covered previously, the Pine Island Glacier came ungrounded in the second half of the past decade, retreating up to 31km in the process. Although this was the one that made headlines, all the glaciers in the area are in retreat. Thwaites saw areas retreat up to 14km over the course of the study, Haynes retracted by 10km, and the Smith/Kohler glaciers retreated by 35km.
The retreating was accompanied by thinning of the glaciers, as ice that had been held back above sea level in the interior spread forward and thinned out. This contributed to sea level rise, and the speakers at the press conference agreed that the new data shows that the recently released IPCC estimates for sea level rise are out of date; even by the end of this century, the continuation of this process will significantly increase the rate of sea level rise we can expect.
The real problem, however, comes later. Glaciers can establish new grounding lines if there’s a feature in the terrain, such as a hill that rises above sea level, that provides a new anchoring point. The authors see none: “Upstream of the 2011 grounding line positions, we find no major bed obstacle that would prevent the glaciers from further retreat and draw down the entire basin.” In fact, several of the existing grounding lines are close to points where the terrain begins to slope downward into the basin.
For some of the glaciers, the problems are already starting. At Pine Island, the bottom of the glacier is now sitting on terrain that’s 400 meters deeper than where the end rested in 1992, and there are no major hills between there and the basin. As far as the Smith/Kohler glaciers, the grounding line is 800 meters deeper and “its ice shelf pinning points are vanishing.”
As a result, the authors concluded that these glaciers are essentially destabilized—unless something changes radically, they’re destined for retreat into the indefinite future. But what will the trajectory of that retreat look like? In this case, the data doesn’t directly help. It needs to be fed into a model that projects the current melting into the future. Conveniently, a different set of scientists has already done this modeling.
The work focuses on the Thwaites glacier, which appears to be the most stable: there are 60-80km between the existing terminus and the deep basin, and two or three ridges within that distance that would allow the formation of new grounding lines.
The authors simulated the behavior of Thwaites using a number of different melting rates. These ranged from a low that approximated the behavior typical in the early 90s, to a high rate of melt that is similar to what was observed in recent years. Every single one of these situations saw the Thwaites retreat into the deep basin within the next 1,000 years. In the higher melt scenarios—the ones most reflective of current conditions—this typically took only a few centuries.
The other worrisome behavior is that there appeared to be a tipping point. In every simulation that saw an extensive retreat, rates of melting shifted from under 80 gigatonnes of ice per year to 150 gigatonnes or more, all within the span of a couple of decades. In the latter conditions, this glacier alone contributed half a centimeter to sea level rise—every year.
Read the entire article here.
Image: Thwaites Glacier, Antarctica, 2012. Courtesy of NASA Earth Observatory.
- Water From Air>
Ideas and innovations that solve a particular human hardship are worthy of reward and recognition. When the idea is also ingenious and simple it should be celebrated. Take the invention of industrial designers Arturo Vittori and Andreas Vogler. Fashioned from plant stalks and nylon mesh, their 30-foot-tall WarkaWater towers soak up moisture from the air for later collection — often up to 25 gallons of drinking water a day. When almost a quarter of the world’s population has poor access to daily potable water, this remarkable invention serves a genuine need.
In some parts of Ethiopia, finding potable water is a six-hour journey.
People in the region spend 40 billion hours a year trying to find and collect water, says a group called the Water Project. And even when they find it, the water is often not safe, collected from ponds or lakes teeming with infectious bacteria, contaminated with animal waste or other harmful substances.
The water scarcity issue—which affects nearly 1 billion people in Africa alone—has drawn the attention of big-name philanthropists like actor and Water.org co-founder Matt Damon and Microsoft co-founder Bill Gates, who, through their respective nonprofits, have poured millions of dollars into research and solutions, coming up with things like a system that converts toilet water to drinking water and a “Re-invent the Toilet Challenge,” among others.
Critics, however, have their doubts about integrating such complex technologies in remote villages that don’t even have access to a local repairman. Costs and maintenance could render many of these ideas impractical.
“If the many failed development projects of the past 60 years have taught us anything,” wrote one critic, Toilets for People founder Jason Kasshe, in a New York Times editorial, ”it’s that complicated, imported solutions do not work.”
Other low-tech inventions, like this life straw, aren’t as complicated, but still rely on users to find a water source.
It was this dilemma—supplying drinking water in a way that’s both practical and convenient—that served as the impetus for a new product called Warka Water, an inexpensive, easily-assembled structure that extracts gallons of fresh water from the air.
The invention from Arturo Vittori, an industrial designer, and his colleague Andreas Vogler doesn’t involve complicated gadgetry or feats of engineering, but instead relies on basic elements like shape and material and the ways in which they work together.
At first glance, the 30-foot-tall, vase-shaped towers, named after a fig tree native to Ethiopia, have the look and feel of a showy art installation. But every detail, from carefully-placed curves to unique materials, has a functional purpose.
The rigid outer housing of each tower is comprised of lightweight and elastic juncus stalks, woven in a pattern that offers stability in the face of strong wind gusts while still allowing air to flow through. A mesh net made of nylon or polypropylene, which calls to mind a large Chinese lantern, hangs inside, collecting droplets of dew that form along the surface. As cold air condenses, the droplets roll down into a container at the bottom of the tower. The water in the container then passes through a tube that functions as a faucet, carrying the water to those waiting on the ground.
Using mesh to facilitate clean drinking water isn’t an entirely new concept. A few years back, an MIT student designed a fog-harvesting device with the material. But Vittori’s invention yields more water, at a lower cost, than some other concepts that came before it.
“[In Ethiopia], public infrastructures do not exist and building [something like] a well is not easy,” Vittori says of the country. ”To find water, you need to drill in the ground very deep, often as much as 1,600 feet. So it’s technically difficult and expensive. Moreover, pumps need electricity to run as well as access to spare parts in case the pump breaks down.”
So how would Warka Water’s low-tech design hold up in remote sub-Saharan villages? Internal field tests have shown that one Warka Water tower can supply more than 25 gallons of water throughout the course of a day, Vittori claims. He says because the most important factor in collecting condensation is the difference in temperature between nightfall and daybreak, the towers are proving successful even in the desert, where temperatures, in that time, can differ as much as 50 degrees Fahrenheit.
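The physics the towers exploit is ordinary dew formation: when night air cools to its dew point, vapour condenses on the mesh. A hedged sketch using the Magnus approximation for the dew point — the coefficients are the standard textbook ones and the example inputs are illustrative, not figures from the Warka team:

```python
import math

# Magnus-formula approximation of the dew point (deg C) given air
# temperature (deg C) and relative humidity (%). A and B are the
# widely used Magnus coefficients, valid roughly from -45 to 60 C.
A, B = 17.62, 243.12

def dew_point_c(temp_c: float, rel_humidity_pct: float) -> float:
    """Temperature at which air of the given state begins to condense."""
    gamma = math.log(rel_humidity_pct / 100.0) + A * temp_c / (B + temp_c)
    return B * gamma / (A - gamma)

# Illustrative desert-like conditions: day air at 35 C and 20% relative
# humidity still holds vapour that condenses once mesh surfaces cool
# to the dew point overnight.
print(round(dew_point_c(35.0, 20.0), 1))
```

The large day-night temperature swing Vittori describes matters because it lets passive surfaces fall below that dew point without any powered cooling.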
The structures, made from biodegradable materials, are easy to clean and can be erected without mechanical tools in less than a week. Plus, he says, “once locals have the necessary know-how, they will be able to teach other villages and communities to build the Warka.”
In all, it costs about $500 to set up a tower—less than a quarter of the cost of something like the Gates toilet, which costs about $2,200 to install and more to maintain. If the tower is mass produced, the price would be even lower, Vittori says. His team hopes to install two Warka Towers in Ethiopia by next year and is currently searching for investors who may be interested in scaling the water harvesting technology across the region.
Read the entire article here.
Image: WarkaWater Tower. Courtesy of Andreas Vogler and Arturo Vittori, WARKAWATER PROJECT / www.architectureandvision.com.
- It's Happening Now>
There is one thing wrong with the dystopian future painted by climate change science — it’s not in our future; it’s happening now.
From the New York Times:
Climate change is already having sweeping effects on every continent and throughout the world’s oceans, scientists reported on Monday, and they warned that the problem was likely to grow substantially worse unless greenhouse emissions are brought under control.
The report by the Intergovernmental Panel on Climate Change, a United Nations group that periodically summarizes climate science, concluded that ice caps are melting, sea ice in the Arctic is collapsing, water supplies are coming under stress, heat waves and heavy rains are intensifying, coral reefs are dying, and fish and many other creatures are migrating toward the poles or in some cases going extinct.
The oceans are rising at a pace that threatens coastal communities and are becoming more acidic as they absorb some of the carbon dioxide given off by cars and power plants, which is killing some creatures or stunting their growth, the report found.
Organic matter frozen in Arctic soils since before civilization began is now melting, allowing it to decay into greenhouse gases that will cause further warming, the scientists said. And the worst is yet to come, the scientists said in the second of three reports that are expected to carry considerable weight next year as nations try to agree on a new global climate treaty.
In particular, the report emphasized that the world’s food supply is at considerable risk — a threat that could have serious consequences for the poorest nations.
“Nobody on this planet is going to be untouched by the impacts of climate change,” Rajendra K. Pachauri, chairman of the intergovernmental panel, said at a news conference here on Monday presenting the report.
The report was among the most sobering yet issued by the scientific panel. The group, along with Al Gore, was awarded the Nobel Peace Prize in 2007 for its efforts to clarify the risks of climate change. The report is the final work of several hundred authors; details from the drafts of this and of the last report in the series, which will be released in Berlin in April, leaked in the last few months.
The report attempts to project how the effects will alter human society in coming decades. While the impact of global warming may actually be moderated by factors like economic or technological change, the report found, the disruptions are nonetheless likely to be profound. That will be especially so if emissions are allowed to continue at a runaway pace, the report said.
It cited the risk of death or injury on a wide scale, probable damage to public health, displacement of people and potential mass migrations.
“Throughout the 21st century, climate-change impacts are projected to slow down economic growth, make poverty reduction more difficult, further erode food security, and prolong existing and create new poverty traps, the latter particularly in urban areas and emerging hot spots of hunger,” the report declared.
The report also cited the possibility of violent conflict over land, water or other resources, to which climate change might contribute indirectly “by exacerbating well-established drivers of these conflicts such as poverty and economic shocks.”
The scientists emphasized that climate change is not just a problem of the distant future, but is happening now.
Studies have found that parts of the Mediterranean region are drying out because of climate change, and some experts believe that droughts there have contributed to political destabilization in the Middle East and North Africa.
In much of the American West, mountain snowpack is declining, threatening water supplies for the region, the scientists said in the report. And the snow that does fall is melting earlier in the year, which means there is less melt water to ease the parched summers. In Alaska, the collapse of sea ice is allowing huge waves to strike the coast, causing erosion so rapid that it is already forcing entire communities to relocate.
“Now we are at the point where there is so much information, so much evidence, that we can no longer plead ignorance,” Michel Jarraud, secretary general of the World Meteorological Organization, said at the news conference.
The report was quickly welcomed in Washington, where President Obama is trying to use his executive power under the Clean Air Act and other laws to impose significant new limits on the country’s greenhouse emissions. He faces determined opposition in Congress.
“There are those who say we can’t afford to act,” Secretary of State John Kerry said in a statement. “But waiting is truly unaffordable. The costs of inaction are catastrophic.”
Amid all the risks the experts cited, they did find a bright spot. Since the intergovernmental panel issued its last big report in 2007, it has found growing evidence that governments and businesses around the world are making extensive plans to adapt to climate disruptions, even as some conservatives in the United States and a small number of scientists continue to deny that a problem exists.
“I think that dealing effectively with climate change is just going to be something that great nations do,” said Christopher B. Field, co-chairman of the working group that wrote the report and an earth scientist at the Carnegie Institution for Science in Stanford, Calif. Talk of adaptation to global warming was once avoided in some quarters, on the ground that it would distract from the need to cut emissions. But the past few years have seen a shift in thinking, including research from scientists and economists who argue that both strategies must be pursued at once.
Read the entire article here.
Image: Greenland ice melt. Courtesy of Christine Zenino / Smithsonian.
- Tales From the Office: I Hate My Job>
It is no coincidence that I post this article on a Monday. After all, it's the most loathsome day of the week according to most people this side of the galaxy, all because of the very human invention known as work.
Some present-day Bartlebys (of Melville's "Bartleby, the Scrivener") are taking up arms and rising up against the man. A few human gears in the vast corporate machine are no longer content to suck up to the boss or accept every demand from the corner office — take the recent case of a Manhattan court stenographer.
From the Guardian:
If you want a vision of the future, imagine a wage slave typing: “I hate my job. I hate my job. I hate my job,” on a keyboard, for ever. That’s what a Manhattan court typist is accused of doing, having been fired from his post two years ago, after jeopardising upwards of 30 trials, according to the New York Post. Many of the court transcripts were “complete gibberish” as the stenographer was allegedly suffering the effects of alcohol abuse, but the one that has caught public attention contains the phrase “I hate my job” over and over again. Officials are reportedly struggling to mitigate the damage, and the typist now says he’s in recovery, but it’s worth considering how long it took the court officials to realise he hadn’t been taking proper notes at all.
You can’t help but feel a small pang of joy at part of the story, though. Surely everyone, at some point, has longed, but perhaps not dared, to do the same. In a dreary Coventry bedsit in 2007, I read Herman Melville’s Bartleby the Scrivener, the tale of a new employee who calmly refuses to do anything he is paid to do, to the complete bafflement of his boss, and found myself thinking in wonder: “This is the greatest story I have ever read.” No wonder it still resonates. Who hasn’t sat in their office, and felt like saying to their bosses: “I would prefer not to,” when asked to stuff envelopes or run to the post office?
For some bizarre reason, it’s still taboo to admit that most jobs are unspeakably dull. On application forms, it’s anathema to write: “Reason for leaving last job: hated it”, and “Reason for applying for this post: I like money.” The fact that so many people gleefully shared this story shows that many of us, deep down, harbour a suspicion that our jobs aren’t necessarily what we want to be doing for the rest of our lives. A lot of us aren’t always happy and fulfilled at work, and aren’t always completely productive.
Dreaming of turning to our boss and saying: “I would prefer not to,” or spending an afternoon typing “I hate my job. I hate my job. I hate my job” into Microsoft Word seems like a worthy way of spending the time. And, as with the court typist, maybe people wouldn’t even notice. In one of my workplaces, before a round of redundancies, on my last day my manager piled yet more work on to my desk and said yet again that she was far too busy to do her invoices. With nothing to lose, I pointed out that she had a large plate glass window behind her, so for the entire length of my temp job, I’d been able to see that she spent most of the day playing Spider Solitaire.
Howard Beale’s rant in Network, as caricaturish as it is cathartic, strikes a nerve too: there’s something endlessly satisfying in fantasising about pushing your computer over, throwing your chair through the window and telling your most hated colleagues what you’ve always thought about them. But instead we keep it bottled up, go to the pub and grind our teeth. Still, here’s to the modern-day Bartlebys.
Read the entire article here.
Image: Office cubicles. Courtesy of Nomorecubes.
- Dump Arial. Garamond is Cheaper and Less Dull>
Not only is the Arial font dreadfully sleep-inducing — most corporate PowerPoint presentations live and breathe Arial — it’s also expensive. Print a document suffused with Arial and its variants and it will cost you more in ink. So, jettison Arial for sleeker typefaces like Century Gothic or Garamond; besides, they’re prettier too!
A fascinating piece of research by an 8th-grader (14 years old) from Pittsburgh shows that the U.S. government could save around $400 million per year by moving away from Arial to a thinner, less ink-hungry typeface. Interestingly, researchers have also found that readers tend to retain more from documents set in more esoteric fonts than in simple typefaces such as Arial and Helvetica.
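The arithmetic behind those headline numbers is simple linear scaling. Here is a back-of-the-envelope sketch — the function and variable names are our own, and the assumption that ink cost scales linearly with page coverage is a simplification, not part of the study:

```python
# Back-of-the-envelope sketch of the ink-savings extrapolation.
# Figures ($21,000, $370m, 24%) come from the article; the linear-cost
# assumption and the names below are ours, for illustration only.

INK_REDUCTION = 0.24  # reported ink saving from switching to Garamond

def annual_savings(ink_spend, reduction=INK_REDUCTION):
    """Estimated annual savings, assuming ink cost scales linearly."""
    return ink_spend * reduction

# School district: a 24% reduction reportedly saves ~$21,000/year,
# which implies an annual ink spend of about $21,000 / 0.24.
district_ink_spend = 21_000 / INK_REDUCTION
print(f"Implied district ink budget: ${district_ink_spend:,.0f}")

# Federal + state governments: ~$370m/year in reported savings at the
# same 24% reduction implies roughly $1.5bn spent on ink overall.
implied_gov_spend = 370e6 / INK_REDUCTION
print(f"Implied government ink budget: ${implied_gov_spend:,.0f}")
```

Running the numbers backward like this is also a useful sanity check on the reporting: the extrapolation only holds if government documents use ink at roughly the same rate per dollar as a school district's handouts.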
From the Guardian:
In what can only be described as an impressive piece of research, a Pittsburgh schoolboy has calculated that the US state and federal governments could save getting on for $400m (£240m) a year by changing the typeface they use for printed documents.
Shocked by the number of teachers’ handouts he was getting at his new school, 14-year-old Suvir Mirchandani – having established that ink represents up to 60% of the cost of a printed page and is, ounce for ounce, twice as expensive as Chanel No 5 – embarked on a cost-benefit analysis of a range of different typefaces, CNN reports.
He discovered that by switching to Garamond, whose thin, elegant strokes were designed in the 16th century by the French publisher Claude Garamond, his school district could reduce its ink consumption by 24%, saving as much as $21,000 annually. On that basis, he extrapolated, the federal and state governments could save $370m (£222m) between them.
But should they? For starters, as the government politely pointed out, the real savings these days are in stopping printing altogether. Also, a 2010 study by the University of Wisconsin-Green Bay estimated it could save $10,000 a year by switching from Arial to Century Gothic, which uses 30% less ink – but also found that because the latter is wider, some documents that fitted on a single page in Arial would now run to two, and so use more paper.
Font choice can affect more than just the bottom line. A 2010 Princeton University study found readers consistently retained more information from material displayed in so-called disfluent or ugly fonts (Monotype Corsiva, Haettenschweiler) than in simple, more readable fonts (Helvetica, Arial).
Read the entire article here.
Image: Arial Monotype font example. Courtesy of Wikipedia.
- Love Weighs Heavily>
Paris is generally held to be one of the most romantic cities in the world. However, an increasing number of Parisian officials have had enough of love. Specifically, they’re concerned that the “love lock” craze that has covered many of Paris’ iconic bridges in padlocks may become a structural problem, as well as an eyesore (to some).
But, the French of all people should know better — love cannot be denied; it’s likely that banning locks from bridges may just move everlasting love elsewhere. Now, wouldn’t the Eiffel Tower look awesome festooned in several million padlocks?
From the Guardian:
With Paris’s bridges groaning under the weight of an estimated 700,000 padlocks scrawled with lovers’ names, campaigners say it’s time to end the love locks ‘madness’.
For some they are a symbol of everlasting love. For others they are a rusting eyesore. But now the “love locks” – padlocks engraved with the names of lovers – that line the rails of Paris’s bridges may have met their match, as a campaign takes off to have them banned.
The No Love Locks campaign, which includes a petition that currently has over 1,700 signatures, was launched in February by two Americans living in Paris who were shocked at the extent of the trend across the city. The idea is that by attaching a lock to a public place and throwing away the key, the love it represents will become unbreakable. However, with an estimated 700,000 padlocks now attached to locations across the French capital, the weight could be putting the structural integrity of the city’s architecture at risk.
Originally affecting the Pont des Arts and Pont de l’Archevêché, the padlocks can now be found on almost all of the bridges across the Seine, as well as many of the smaller footbridges that span the canals in the 10th arrondissement. On the most popular bridges the guard rails now consist of a solid wall of metal. In a testament to the popularity of the act, even Google Maps now denotes the Pont de l’Archevêché as “Lovelock bridge”.
“It’s so out of control,” says Lisa Anselmo, who co-founded the campaign with fellow expat and writer Lisa Taylor Huff. “People are climbing up lampposts to clip locks on, hanging over the bridge to put them on the other side of the rail, risking their lives to attach one. It’s a kind of mania. It’s not about romance any more – it’s just about saying ‘I did it.’”
While the reaction to the campaign from many people has been one of surprise, indifference or anger (“We’ve been getting some hate mail over it, people calling us bitter old ladies,” says Anselmo), many have been supportive. Signatories on the petition – which include many Parisians – cite the “dégradation publique” caused by the locks. The mayor of the 6th arrondissement, Jean-Pierre Lecoq, also supports their concerns, describing the love locks as “madness”.
“Since this walkway overlooks the Seine, and there are a lot of tourist boats that pass under it, any relatively heavy object falling from a certain height could cause a passenger an injury, or even a fatal blow,” he told RTL radio last August.
And, according to Anselmo, it’s not simply an aesthetic concern: “This isn’t just two Americans butting their noses in and saying this isn’t pretty,” she says. “The weight of the locks presents a safety issue. The Pont des Arts is just a little footbridge and is now holding 93 metric tonnes from the locks; regularly the grill work collapses. The city replaces it and two weeks later it fills up again. Sadly a ban seems to be the only way.”
The city council, evidently aware of the locks’ popularity with tourists, has so far resisted taking action, although concerns about the damage they cause to the architecture have been raised in the past and the authorities are said to keep a regular check on the pressure being placed on the bridges’ structure.
Information on the official website for Paris, while acknowledging the positive idea behind the locks, is less than enthusiastic about the reality of them, highlighting the damage they do and even encouraging tourists to send a digital “e-love lock” instead. It states: “If the tradition continues to grow in popularity and causes too much damage to the city’s monuments, solutions will be considered in a bid to address the problem.” Thankfully, they claim they will do this “without breaking the hearts of those who have sealed their undying love for each other to the Parisian bridges”.
It is not just Paris, however, where love locks can be found. Since the early noughties the trend has taken off globally with shrines visible in cities around the world, much to the bemusement of authorities who have been struggling to keep them at bay. In 2012 Dublin city council removed all the love locks on Ha’penny bridge, while threats to remove the padlocks on Hohenzollern bridge in Cologne were retracted after a public outcry.
Indeed, for those whose tokens of affection are in jeopardy, the idea of a ban is less than welcome. Ben Lifton attached a love lock in Paris last February when visiting the city with his boyfriend. “We didn’t plan to do it,” he says. “But there was a guy conveniently selling locks and permanent markers next to it, and so for a few euros we thought, ‘why not’. It’s a nice way to deposit something somewhere, and know (or at least hope) that it will be there if, and when, you ever return.”
He finds the prospect of a ban, “a bit sad”. He said: “Clearly there are some people who have gone through a messy break up recently on the Paris council, and they have a vendetta against happy couples.”
Adam Driver, who has also affixed a love lock in Paris, agrees: “The bridge in Paris, near the Notre Dame cathedral, is almost entirely covered with locks of all shapes, sizes and colour,” he says. “You can hardly see the bridge underneath them all. I think the bridge looks great. It is a real thing-to-do in Paris. It’s iconic, and it would be a shame to lose all of those locks, which hold so many memories for people.”
Read the entire story here.
Image: “Love locks” on the Pont-des-Artes, Paris. Courtesy of Huffington Post.
- Gephyrophobes Not Welcome>
A gephyrophobic person is said to have a fear of crossing bridges. So, we’d strongly recommend avoiding the structures on this list of some of the world’s scariest bridges. For those who suffer no anxiety from either bridges or heights, and who crave endless vistas both horizontally and vertically, this list is for you. Our favorite: the suspension bridge over the Royal Gorge in Colorado.
From the Guardian:
From rickety rope walkways to spectacular feats of engineering, we take a look at some of the world’s scariest bridges.
Until 2001, the Royal Gorge bridge in Colorado was the highest bridge in the world. Built in 1929, the 291m-high structure is now a popular tourist attraction, not least because it is situated within a theme park.
Read the entire story and see more images here.
Image: Royal Gorge, Colorado. Courtesy of Wikipedia / Hustvedt.
- A Quest For Skeuomorphic Noise>
Your Toyota Prius, or other hybrid or electric vehicle, is a good environmental citizen. It helps reduce pollution and carbon emissions and does so rather efficiently. You and other eco-conscious owners should be proud.
But wait, not so fast. Your electric car may have a low carbon footprint, but it is a silent killer in waiting. It may be efficient, but it is far too quiet, and thus something of a hazard for pedestrians, cyclists and other motorists — they don’t hear it approaching.
Cars like the Prius are, in fact, too quiet for our own safety. So, enterprising engineers are working to add artificial noise to the next generation of almost-silent cars. The irony is not lost: after years of trying to make cars quieter, engineers are now looking to make them noisier.
Perhaps the added noise could be configurable as an option for customers: a base model would sound like a Citroën 2CV, while a high-end model could sound like, well, a Ferrari or a classic Bugatti. Much better.
From Technology Review:
It was a pleasant June day in Munich, Germany. I was picked up at my hotel and driven to the country, farmland on either side of the narrow, two-lane road. Occasional walkers strode by, and every so often a bicyclist passed. We parked the car on the shoulder and joined a group of people looking up and down the road. “Okay, get ready,” I was told. “Close your eyes and listen.” I did so and about a minute later I heard a high-pitched whine, accompanied by a low humming sound: an automobile was approaching. As it came closer, I could hear tire noise. After the car had passed, I was asked my judgment of the sound. We repeated the exercise numerous times, and each time the sound was different. What was going on? We were evaluating sound designs for BMW’s new electric vehicles.
Electric cars are extremely quiet. The only sounds they make come from the tires, the air, and occasionally from the high-pitched whine of the electronics. Car lovers really like the silence. Pedestrians have mixed feelings, but blind people are greatly concerned. After all, they cross streets in traffic by relying upon the sounds of vehicles. That’s how they know when it is safe to cross. And what is true for the blind might also be true for anyone stepping onto the street while distracted. If the vehicles don’t make any sounds, they can kill. The United States National Highway Traffic Safety Administration determined that pedestrians are considerably more likely to be hit by hybrid or electric vehicles than by those with an internal-combustion engine. The greatest danger is when the hybrid or electric vehicles are moving slowly: they are almost completely silent.
Adding sound to a vehicle to warn pedestrians is not a new idea. For many years, commercial trucks and construction equipment have had to make beeping sounds when backing up. Horns are required by law, presumably so that drivers can use them to alert pedestrians and other drivers when the need arises, although they are often used as a way of venting anger and rage instead. But adding a continuous sound to a normal vehicle because it would otherwise be too quiet is a challenge.
What sound would you want? One group of blind people suggested putting some rocks into the hubcaps. I thought this was brilliant. The rocks would provide a natural set of cues, rich in meaning and easy to interpret. The car would be quiet until the wheels started to turn. Then the rocks would make natural, continuous scraping sounds at low speeds, changing to the pitter-patter of falling stones at higher speeds. The frequency of the drops would increase with the speed of the car until the rocks ended up frozen against the circumference of the rim, silent. Which is fine: the sounds are not needed for fast-moving vehicles, because then the tire noise is audible. The lack of sound when the vehicle is not moving would be a problem, however.
The marketing divisions of automobile manufacturers thought the addition of artificial sounds would be a wonderful branding opportunity, so each car brand or model should have its own unique sound that captured just the car personality the brand wished to convey. Porsche added loudspeakers to its electric car prototype to give it the same throaty growl as its gasoline-powered cars. Nissan wondered whether a hybrid automobile should sound like tweeting birds. Some manufacturers thought all cars should sound the same, with standardized noises and sound levels, making it easier for everyone to learn how to interpret them. Some blind people thought they should sound like cars—you know, gasoline engines.
Skeuomorphic is the technical term for incorporating old, familiar ideas into new technologies, even though they no longer play a functional role. Skeuomorphic designs are often comfortable for traditionalists, and indeed the history of technology shows that new technologies and materials often slavishly imitate the old for no apparent reason except that it’s what people know how to do. Early automobiles looked like horse-driven carriages without the horses (which is also why they were called horseless carriages); early plastics were designed to look like wood; folders in computer file systems often look like paper folders, complete with tabs. One way of overcoming the fear of the new is to make it look like the old. This practice is decried by design purists, but in fact, it has its benefits in easing the transition from the old to the new. It gives comfort and makes learning easier. Existing conceptual models need only be modified rather than replaced. Eventually, new forms emerge that have no relationship to the old, but the skeuomorphic designs probably helped the transition.
When it came to deciding what sounds the new silent automobiles should generate, those who wanted differentiation ruled the day, yet everyone also agreed that there had to be some standards. It should be possible to determine that the sound is coming from an automobile, to identify its location, direction, and speed. No sound would be necessary once the car was going fast enough, in part because tire noise would be sufficient. Some standardization would be required, although with a lot of leeway. International standards committees started their procedures. Various countries, unhappy with the normally glacial speed of standards agreements and under pressure from their communities, started drafting legislation. Companies scurried to develop appropriate sounds, hiring psychologists, Hollywood sound designers, and experts in psychoacoustics.
The United States National Highway Traffic Safety Administration issued a set of principles along with a detailed list of requirements, including sound levels, spectra, and other criteria. The full document runs 248 pages; it states:
This standard will ensure that blind, visually-impaired, and other pedestrians are able to detect and recognize nearby hybrid and electric vehicles by requiring that hybrid and electric vehicles emit sound that pedestrians will be able to hear in a range of ambient environments and contain acoustic signal content that pedestrians will recognize as being emitted from a vehicle. The proposed standard establishes minimum sound requirements for hybrid and electric vehicles when operating under 30 kilometers per hour (km/h) (18 mph), when the vehicle’s starting system is activated but the vehicle is stationary, and when the vehicle is operating in reverse. The agency chose a crossover speed of 30 km/h because this was the speed at which the sound levels of the hybrid and electric vehicles measured by the agency approximated the sound levels produced by similar internal combustion engine vehicles. (Department of Transportation, 2013.)
As I write this, sound designers are still experimenting. The automobile companies, lawmakers, and standards committees are still at work. Standards are not expected until 2014 or later, and then it will take considerable time for the millions of vehicles across the world to meet them. What principles should be used for the sounds of electric vehicles (including hybrids)? The sounds have to meet several criteria:
• Alerting. The sound will indicate the presence of an electric vehicle.
• Orientation. The sound will make it possible to determine where the vehicle is located, roughly how fast it is going, and whether it is moving toward or away from the listener.
• Lack of annoyance. Because these sounds will be heard frequently even in light traffic and continually in heavy traffic, they must not be annoying. Note the contrast with sirens, horns, and backup signals, all of which are intended to be aggressive warnings. Such sounds are deliberately unpleasant, but because they are infrequent and relatively short in duration, they are acceptable. The challenge for electric vehicles is to make sounds that alert and orient, not annoy.
• Standardization versus individualization. Standardization is necessary to ensure that all electric-vehicle sounds can readily be interpreted. If they vary too much, novel sounds might confuse the listener. Individualization has two functions: safety and marketing. From a safety point of view, if there were many vehicles on the street, individualization would allow them to be tracked. This is especially important at crowded intersections. From a marketing point of view, individualization can ensure that each brand of electric vehicle has its own unique characteristic, perhaps matching the quality of the sound to the brand image.
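Stripped of the acoustics, the NHTSA proposal quoted earlier reduces to a small decision rule: emit the synthetic sound whenever the vehicle is switched on and reversing, stationary, or moving forward below the 30 km/h crossover speed. A minimal sketch of that rule — the function name and parameters are our own invention, not any standard's or manufacturer's API:

```python
# Crossover speed from the proposed NHTSA standard: above this, tire
# and wind noise from the vehicle itself is judged sufficient.
CROSSOVER_KMH = 30

def sound_required(speed_kmh, system_on, in_reverse):
    """Return True when a hybrid/EV must emit its synthetic alert sound,
    per the three conditions in the quoted proposal: reversing,
    stationary-but-activated, or moving below the crossover speed."""
    if not system_on:
        return False          # vehicle off: no alert needed
    if in_reverse:
        return True           # always sound while backing up
    return speed_kmh < CROSSOVER_KMH  # covers stationary (0 km/h) too

print(sound_required(0, True, False))   # stationary, switched on -> True
print(sound_required(50, True, False))  # highway speed -> False
print(sound_required(5, True, True))    # reversing -> True
```

Note that the hard part the standard actually wrestles with — what the sound is, its spectrum and level — lives entirely outside this rule; the rule only says when some compliant sound must be playing.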
Read the entire article here.
Image: Toyota Prius III. Courtesy of Toyota / Wikipedia.
- Daddy, What Is Snow?>
Adults living at higher latitudes will remember snow falling during the cold seasons, but most will recall having seen more snow when they were younger. As climate change continues to shift our global weather patterns and increase global temperatures, our children and grandchildren may have to make do with artificially made snow or watch a historical documentary of the real thing when they reach adulthood.
Our glaciers are retreating and snowcaps are melting. The snow is disappearing. This may be a boon to local governments that can save precious dollars by discontinuing snow and ice removal activities. But for those of us who love to ski and snowboard and skate, or just throw snowballs, build snowmen with our kids or gasp in awe at an icy panorama — snow, you’ll be sorely missed.
From the NYT:
OVER the next two weeks, hundreds of millions of people will watch Americans like Ted Ligety and Mikaela Shiffrin ski for gold on the downhill alpine course. Television crews will pan across epic vistas of the rugged Caucasus Mountains, draped with brilliant white ski slopes. What viewers might not see is the 16 million cubic feet of snow that was stored under insulated blankets last year to make sure those slopes remained white, or the hundreds of snow-making guns that have been running around the clock to keep them that way.
Officials canceled two Olympic test events last February in Sochi after several days of temperatures above 60 degrees Fahrenheit and a lack of snowfall had left ski trails bare and brown in spots. That situation led the climatologist Daniel Scott, a professor of global change and tourism at the University of Waterloo in Ontario, to analyze potential venues for future Winter Games. His thought was that with a rise in the average global temperature of more than 7 degrees Fahrenheit possible by 2100, there might not be that many snowy regions left in which to hold the Games. He concluded that of the 19 cities that have hosted the Winter Olympics, as few as 10 might be cold enough by midcentury to host them again. By 2100, that number shrinks to 6.
The planet has warmed 1.4 degrees Fahrenheit since the 1800s, and as a result, snow is melting. In the last 47 years, a million square miles of spring snow cover has disappeared from the Northern Hemisphere. Europe has lost half of its Alpine glacial ice since the 1850s, and if climate change is not reined in, two-thirds of European ski resorts will be likely to close by 2100.
The same could happen in the United States, where in the Northeast, more than half of the 103 ski resorts may no longer be viable in 30 years because of warmer winters. As for the Western part of the country, it will lose an estimated 25 to 100 percent of its snowpack by 2100 if greenhouse gas emissions are not curtailed — reducing the snowpack in Park City, Utah, to zero and relegating skiing to the top quarter of Ajax Mountain in Aspen.
The facts are straightforward: The planet is getting hotter. Snow melts above 32 degrees Fahrenheit. The Alps are warming two to three times faster than the worldwide average, possibly because of global circulation patterns. Since 1970, the rate of winter warming per decade in the United States has been triple the rate of the previous 75 years, with the strongest trends in the Northern regions of the country. Nine of the 10 hottest years on record have occurred since 2000, and this winter is already looking to be one of the driest on record — with California at just 12 percent of its average snowpack in January, and the Pacific Northwest at around 50 percent.
To a skier, snowboarder or anyone who has spent time in the mountains, the idea of brown peaks in midwinter is surreal. Poets write of the grace and beauty by which snowflakes descend and transform a landscape. Powder hounds follow the 100-odd storms that track across the United States every winter, then drive for hours to float down a mountainside in the waist-deep “cold smoke” that the storms leave behind.
The snow I learned to ski on in northern Maine was more blue than white, and usually spewed from snow-making guns instead of the sky. I didn’t like skiing at first. It was cold. And uncomfortable.
Then, when I was 12, the mystical confluence of vectors that constitute a ski turn aligned, and I was hooked. I scrubbed toilets at my father’s boatyard on Mount Desert Island in high school so I could afford a ski pass and sold season passes in college at Mad River Glen in Vermont to get a free pass for myself. After graduating, I moved to Jackson Hole, Wyo., for the skiing. Four years later, Powder magazine hired me, and I’ve been an editor there ever since.
My bosses were generous enough to send me to five continents over the last 15 years, with skis in tow. I’ve skied the lightest snow on earth on the northern Japanese island of Hokkaido, where icy fronts spin off the Siberian plains and dump 10 feet of powder in a matter of days. In the high peaks of Bulgaria and Morocco, I slid through snow stained pink by grains of Saharan sand that the crystals formed around.
In Baja, Mexico, I skied a sliver of hardpack snow at 10,000 feet on Picacho del Diablo, sandwiched between the Sea of Cortez and the Pacific Ocean. A few years later, a crew of skiers and I journeyed to the whipsaw Taurus Mountains in southern Turkey to ski steep couloirs alongside caves where troglodytes lived thousands of years ago.
At every range I traveled to, I noticed a brotherhood among mountain folk: Say you’re headed into the hills, and the doors open. So it has been a surprise to see the winter sports community, as one of the first populations to witness effects of climate change in its own backyard, not reacting more vigorously and swiftly to reverse the fate we are writing for ourselves.
It’s easy to blame the big oil companies and the billions of dollars they spend on influencing the media and popular opinion. But the real reason is a lack of knowledge. I know, because I, too, was ignorant until I began researching the issue for a book on the future of snow.
I was floored by how much snow had already disappeared from the planet, not to mention how much was predicted to melt in my lifetime. The ski season in parts of British Columbia is four to five weeks shorter than it was 50 years ago, and in eastern Canada, the season is predicted to drop to less than two months by midcentury. At Lake Tahoe, spring now arrives two and a half weeks earlier, and some computer models predict that the Pacific Northwest will receive 40 to 70 percent less snow by 2050. If greenhouse gas emissions continue to rise — they grew 41 percent between 1990 and 2008 — then snowfall, winter and skiing will no longer exist as we know them by the end of the century.
The effect on the ski industry has already been significant. Between 1999 and 2010, low snowfall years cost the industry $1 billion and up to 27,000 jobs. Oregon took the biggest hit out West, with 31 percent fewer skier visits during low snow years. Next was Washington at 28 percent, Utah at 14 percent and Colorado at 7.7 percent.
Read the entire story here.
Image courtesy of USA Today.
- The Persistent Threat to California>
Historians dispute the etymology of the name California. One possible origin is a Catalan phrase that roughly translates as “hot as a lime oven”. But while this may be pure myth, there is no doubting the unfolding ecological (and human) disaster caused by incessant heat and lack of water. The severe drought across much of the state is now in its third year, and is already on track to be the worst in the last 500 years. The drought is forcing farmers and rural communities to rethink, and in some cases resettle, and it increasingly threatens suburban and urban neighborhoods as well.
From the NYT:
The punishing drought that has swept California is now threatening the state’s drinking water supply.
With no sign of rain, 17 rural communities providing water to 40,000 people are in danger of running out within 60 to 120 days. State officials said that the number was likely to rise in the months ahead after the State Water Project, the main municipal water distribution system, announced on Friday that it did not have enough water to supplement the dwindling supplies of local agencies that provide water to an additional 25 million people. It is the first time the project has turned off its spigot in its 54-year history.
State officials said they were moving to put emergency plans in place. In the worst case, they said drinking water would have to be brought by truck into parched communities and additional wells would have to be drilled to draw on groundwater. The deteriorating situation would likely mean imposing mandatory water conservation measures on homeowners and businesses, who have already been asked to voluntarily reduce their water use by 20 percent.
“Every day this drought goes on we are going to have to tighten the screws on what people are doing,” said Gov. Jerry Brown, who was governor during the last major drought here, in 1976-77.
This latest development has underscored the urgency of a drought that has already produced parched fields, starving livestock, and pockets of smog.
“We are on track for having the worst drought in 500 years,” said B. Lynn Ingram, a professor of earth and planetary sciences at the University of California, Berkeley.
Already the drought, technically in its third year, is forcing big shifts in behavior. Farmers in Nevada said they had given up on even planting, while ranchers in Northern California and New Mexico said they were being forced to sell off cattle as fields that should be four feet high with grass are a blanket of brown and stunted stalks.
Fishing and camping in much of California has been outlawed, to protect endangered salmon and guard against fires. Many people said they had already begun to cut back drastically on taking showers, washing their car and watering their lawns.
Rain and snow showers brought relief in parts of the state at the week’s end — people emerging from a movie theater in West Hollywood on Thursday evening broke into applause upon seeing rain splattering on the sidewalk — but they were nowhere near enough to make up for record-long dry stretches, officials said.
“I have experienced a really long career in this area, and my worry meter has never been this high,” said Tim Quinn, executive director of the Association of California Water Agencies, a statewide coalition. “We are talking historical drought conditions, no supplies of water in many parts of the state. My industry’s job is to try to make sure that these kind of things never happen. And they are happening.”
Officials are girding for the kind of geographical, cultural and economic battles that have long plagued a part of the country that is defined by a lack of water: between farmers and environmentalists, urban and rural users, and the northern and southern regions of this state.
“We do have a politics of finger-pointing and blame whenever there is a problem,” said Mr. Brown. “And we have a problem, so there is going to be a tendency to blame people.” President Obama called him last week to check on the drought situation and express his concern.
Tom Vilsack, secretary of the federal Agriculture Department, said in an interview that his agency’s ability to help farmers absorb the shock, with subsidies to buy food for cattle, had been undercut by the long deadlock in Congress over extending the farm bill, which finally seemed to be resolved last week.
Mr. Vilsack called the drought in California a “deep concern,” and a warning sign of trouble ahead for much of the West.
“That’s why it’s important for us to take climate change seriously,” he said. “If we don’t do the research, if we don’t have the financial assistance, if we don’t have the conservation resources, there’s very little we can do to help these farmers.”
The crisis is unfolding in ways expected and unexpected. Near Sacramento, the low level of streams has brought out prospectors, sifting for flecks of gold in slow-running waters. To the west, the heavy water demand of growers of medical marijuana — six gallons per plant per day during a 150-day period — is drawing down streams where salmon and other endangered fish species spawn.
“Every pickup truck has a water tank in the back,” said Scott Bauer, a coho salmon recovery coordinator with the California Department of Fish and Wildlife. “There is a potential to lose whole runs of fish.”
Without rain to scrub the air, pollution in the Los Angeles basin, which has declined over the past decade, has returned to dangerous levels, as evident from the brown-tinged air. Homeowners have been instructed to stop burning wood in their fireplaces.
In the San Joaquin Valley, federal limits for particulate matter were breached for most of December and January. Schools used flags to signal when children should play indoors.
“One of the concerns is that as concentrations get higher, it affects not only the people who are most susceptible, but healthy people as well,” said Karen Magliano, assistant chief of the air quality planning division of the state’s Air Resources Board.
The impact has been particularly severe on farmers and ranchers. “I have friends with the ground torn out, all ready to go,” said Darrell Pursel, who farms just south of Yerington, Nev. “But what are you going to plant? At this moment, it looks like we’re not going to have any water. Unless we get a lot of rain, I know I won’t be planting anything.”
The University of California Cooperative Extension held a drought survival session last week in Browns Valley, about 60 miles north of Sacramento, drawing hundreds of ranchers in person and online. “We have people coming from six or seven hours away,” said Jeffrey James, who ran the session.
Dan Macon, 46, a rancher in Auburn, Calif., said the situation was “as bad as I have ever experienced. Most of our range lands are essentially out of feed.”
With each parched sunrise, a sense of alarm is rising amid signs that this is a drought that comes along only every few centuries. Sacramento had gone 52 days without rain, and Albuquerque had gone 42 days without rain or snow as of Saturday.
The snowpack in the Sierra Nevada, which supplies much of California with water during the dry season, was at just 12 percent of normal last week, reflecting the lack of rain or snow in December and January.
Read the entire article here.
Image: Dry riverbed, Kern River in Bakersfield, California. Courtesy of David McNew/Getty Images / New York Times.
- If You Can Only Visit One Place...>
Travel editors at the New York Times have compiled their annual globe-spanning list of places to visit. As eclectic as ever, the list includes the hinterlands of Iceland, a cultural tour of Indianapolis, the unspoilt (for the moment) beaches of Uruguay, a trip down the Mekong River, and a pub crawl across the hills and dales of Yorkshire. All fascinating. Our favorites are Aspen during the off-season, a resurgent Athens, the highlands of Scotland and the beautiful Seychelles.
Read the entire article and see all the glorious images here.
Image: Hikers pause in the Loch Lomond area. Courtesy of Paul Tomkins/Scottish Viewpoint, New York Times.
- MondayMap: The Bear Necessities>
A linguistic map of Europe shows how the word “bear” is both similar and different across the continent.
From the Washington Post:
The Cold War taught us to think of Europe in terms of East-versus-West, but this map shows that it’s more complicated than that. Most Europeans speak Romance languages (orange countries), Germanic (pink) or Slavic (green), though there are some interesting exceptions.
Map courtesy of the Washington Post.
- Selfies That Celebrate the Environment>
Not every selfie has to be about me or you, the smartphone-carrier. Sometimes a selfie can focus on something else, something bigger than ourselves. Sometimes the self in the selfie becomes just a meaningless dot on a broader, deeper, richer landscape. We need to see more selfies like those of Canadian photographer Paul Zizka — his are indeed selfies worth sharing and celebrating.
See more of Paul Zizka’s stunning images here.
Image: The northern lights at Lake Minnewanka, Banff National Park. Photograph: Paul Zizka Photography/Caters News Agency.
- MondayMap: Subjectivity In Cartography>
Maps tell us wonderful visual stories on many different levels. They provide an anchor for our exploration of the world; they plot our wars and adventures; they describe our ecosystems and our religious affiliations. Maps help us visualize our food supply, our weather patterns and our trade routes. Maps help us learn about the travels of our ancestors, and they help us navigate our dream vacations and our retirement destinations.
And, of course, in the hands of politicians maps can become tools to inflame nationalism and instigate territorial disputes. A great example of the latter is the map of the Kashmir region in the foothills of the Himalayas. It is a breathtakingly beautiful part of our planet: sparkling rivers, pristine lakes and soaring peaks. Yet it is also a land ravaged by the warring claims of India and Pakistan, each of which holds pieces as its own. As a result, there is no one map of Kashmir; there are several, depending upon your political viewpoint and national affiliation.
Once again, Frank Jacobs over at Strange Maps has found a map-based gem, this time leading us through the cartographic turmoil that is Kashmir.
From Strange Maps:
The conflict over Kashmir is decades old, frozen in time, and by now forgotten by most outsiders. Few non-subcontinentals could tell you more about it than this: Kashmir is disputed between India and Pakistan, who each occupy parts of it.
The standard map of the region isn’t very helpful either. Careful not to choose sides, it will show an overabundant mess of boundaries, complicated further by the already difficult terrain: high in the Western Himalayas, Kashmir is a maze of high-altitude peaks, interspersed with fertile valleys. And to top it all off, a third power – China – occupies part of the disputed lands, although that presence is disputed only by India, not by Pakistan.
How did things get so messy? A thumbnail sketch of the conflict:
For British India, the joy of independence in 1947 coincided with the trauma of Partition. In theory, majority-Muslim areas became Pakistan, while regions with a Hindu majority went on to form India. But in each of the nominally independent princely states, the decision rested with the local maharajah. The sovereign of Kashmir, a Sikh ruling a mainly Muslim people, at first tried to go it alone, but called in Indian help to ward off Pakistani incursions.
The assistance came at a price – Kashmir acceded to India, which Pakistan refused to accept. The First Indo-Pakistani War ended in 1949 with the de facto division of Kashmir along a cease-fire line also known as the LoC (Line of Control). India has since reinforced this border with landmines and an electrified fence, with the aim of keeping out terrorists.
But this ‘Berlin Wall of the East’ does not cover the entire distance between the Radcliffe Line and the Chinese border. The Siachen Glacier forms the last, deadliest piece of the puzzle. The 1972 agreement that ended the Third Indo-Pakistani War neglected to extend demarcation of the LoC across the glacier, as it was deemed too inhospitable to be of interest. Yet in 1984, India occupied the area and Pakistan moved to counter, leading to the world’s highest ever battles, fought at 20,000 feet (6,000 m) altitude; most of the over 2,000 casualties in the low-intensity conflict, which was one of the causes of the Fourth Indo-Pakistani War (a.k.a. the Kargil War) in 1999, have died from frostbite or avalanches.
Siachen is the ultimate and most absurd consequence of the geopolitical wrangling over Kashmir. The only reason either side maintains military outposts in the area is the fact that the other side does too. The intransigent overlapping of the Indian and Pakistani claims results, among many other things, in a map, brimming with an overabundance of topographical and political markers.
Could that discouragingly intricate map be a contributing factor to the obscurity of the conflict? If so, then this cartographic double-act will refocus global attention – perhaps bringing a solution closer. Which may be more crucial to world peace than you may think. Shootings across the LoC claim soldiers’ and civilians’ lives on a monthly basis. Each of those incidents could lead to a Fifth Indo-Pakistani War. Which would only be the second time two nuclear powers have engaged in direct military conflict.
Brilliant in its simplicity, and beautiful in its duplicity, the idea behind the two maps below is to isolate each side’s position in the Kashmir conflict on a separate canvas, instead of overlapping them on a single one. By unscrambling both points of view but still presenting them side by side on maps of similar scale and size, the divergences are clarified, yet remain comparable.
Read the entire story and see more maps here.
Image: Two maps of Kashmir. Separated into two maps, the competing claims for Kashmir become a lot clearer. Courtesy of Frank Jacobs, Strange Maps.
- Sea Levels Just Keep Rising, Really>
The rise in the global sea level is not a disputable fact, as some would still have you believe. The sea level is rising, and it is rising ever faster. It is a fact backed by evidence. Period. It has been established through continuous, independent and corroborated scientific studies conducted in many nations, across all continents, by thousands of scientists.
And as the oceans rise, communities that touch the water face increasing threats. A growing number of areas now have to plan and prepare for more frequent and more prolonged tidal erosion and storm surges. Worse still, a growing number of communities have to confront the prospect of complete resettlement in the face of prolonged and irreversible flooding. Today it may be some of the low-lying areas of Norfolk, Virginia, or a remote Pacific island; tomorrow it may be all of downtown Miami and much of the Eastern Seaboard of the US.
From the New York Times:
The little white shack at the water’s edge in Lower Manhattan is unobtrusive — so much so that the tourists strolling the promenade at Battery Park the other day did not give it a second glance.
Up close, though, the roof of the shed behind a Coast Guard building bristled with antennas and other gear. Though not much bigger than a closet, this facility is helping scientists confront one of the great environmental mysteries of the age.
The equipment inside is linked to probes in the water that keep track of the ebb and flow of the tides in New York Harbor, its readings beamed up to a satellite every six minutes.
While the gear today is of the latest type, some kind of tide gauge has been operating at the Battery since the 1850s, by a government office originally founded by Thomas Jefferson. That long data record has become invaluable to scientists grappling with this question: How much has the ocean already risen, and how much more will it go up?
Scientists have spent decades examining all the factors that can influence the rise of the seas, and their research is finally leading to answers. And the more the scientists learn, the more they perceive an enormous risk for the United States.
Much of the population and economy of the country is concentrated on the East Coast, which the accumulating scientific evidence suggests will be a global hot spot for a rising sea level over the coming century.
The detective work has required scientists to grapple with the influence of ancient ice sheets, the meaning of islands that are sinking in Chesapeake Bay, and even the effect of a giant meteor that slammed into the earth.
The work starts with the tides. Because of their importance to navigation, they have been measured for the better part of two centuries. While the record is not perfect, scientists say it leaves no doubt that the world’s oceans are rising. The best calculation suggests that from 1880 to 2009, the global average sea level rose a little over eight inches.
That may not sound like much, but scientists say even the smallest increase causes the seawater to eat away more aggressively at the shoreline in calm weather, and leads to higher tidal surges during storms. The sea-level rise of decades past thus explains why coastal towns nearly everywhere are having to spend billions of dollars fighting erosion.
The evidence suggests that the sea-level rise has probably accelerated, to about a foot a century, and scientists think it will accelerate still more with the continued emission of large amounts of greenhouse gases into the air. The gases heat the planet and cause land ice to melt into the sea.
The official stance of the world’s climate scientists is that the global sea level could rise as much as three feet by the end of this century, if emissions continue at a rapid pace. But some scientific evidence supports even higher numbers, five feet and beyond in the worst case.
Scientists say the East Coast will be hit harder for many reasons, but among the most important is that even as the seawater rises, the land in this part of the world is sinking. And that goes back to the last ice age, which peaked some 20,000 years ago.
As a massive ice sheet, more than a mile thick, grew over what are now Canada and the northern reaches of the United States, the weight of it depressed the crust of the earth. Areas away from the ice sheet bulged upward in response, as though somebody had stepped on one edge of a balloon, causing the other side to pop up. Now that the ice sheet has melted, the ground that was directly beneath it is rising, and the peripheral bulge is falling.
Some degree of sinking is going on all the way from southern Maine to northern Florida, and it manifests itself as an apparent rising of the sea.
The sinking is fastest in the Chesapeake Bay region. Whole island communities that contained hundreds of residents in the 19th century have already disappeared. Holland Island, where the population peaked at nearly 400 people around 1910, had stores, a school, a baseball team and scores of homes. But as the water rose and the island eroded, the community had to be abandoned.
Eventually just a single, sturdy Victorian house, built in 1888, stood on a remaining spit of land, seeming at high tide to rise from the waters of the bay itself. A few years ago, a Washington Post reporter, David A. Fahrenthold, chronicled its collapse.
Aside from this general sinking of land up and down the East Coast, some places sit on soft sediments that tend to compress over time, so the localized land subsidence can be even worse than the regional trend. Much of the New Jersey coast is like that. The sea-level record from the Battery has been particularly valuable in sorting out this factor, because the tide gauge there is attached to bedrock and the record is thus immune to sediment compression.
Read the entire article here.
Image: The last house on Holland Island in Chesapeake Bay, which once had a population of almost 400, finally toppled in October 2010. Courtesy of Astrid Riecken for The Washington Post.
- Art From the Tube>
The tube in question here is not one containing an artist’s oils or acrylics. Nor is it the Google-owned YouTube site. Rather, the tube is The Tube: London’s metropolitan subway system, also known as the Underground. The paintings are part of an exhibit honoring the 150th anniversary of the opening of the (mostly) subterranean marvel.
From the Telegraph:
Artist Ewing Paddock has spent three years painting people travelling on the London Underground. The Tube is the place to observe Londoners in all their glorious diversity and Ewing wanted to try to capture some of that in the paintings and also the slightly secret voyeurism that most of us indulge in when watching, and wondering about, our fellow travellers under ground.
See all the wonderful paintings here.
Image: Adam, Eve. An old, old story, deep underground, by Ewing Paddock. Courtesy of the Telegraph.
- MondayMap: Best of the Worst>
Today’s map is not for the faint of heart, but it is fascinating nonetheless. It tells us that if you are a resident of West Virginia you are more likely to die from a heart attack, whereas if you’re from Alabama you’ll die from a stroke; in Kentucky, well, cancer will get you first, but in Georgia you are more likely to contract the flu.
Utah seems to have the highest predilection for porn, while Rhode Islanders love their illicit drugs, Coloradans prefer only cocaine and residents of New Mexico have a penchant for alcohol. On the educational front, Maine tops the list with the lowest SAT scores, but Texas has the lowest high school graduation rates.
The map is based on a wide collection of published statistics, along with a few less objective measures, as in the case of North Dakota (ugliest residents).
Find more details about the map here.
Map courtesy of Jeff Wysaski over at his blog Pleated Jeans.
- SkyCycle>
Famed architect Norman Foster has a brilliant and restless mind. So, he’s not content to stop imagining, even with some of the world’s most innovative and recognizable architectural designs to his credit — 30 St. Mary Axe (London’s “gherkin” or pickle skyscraper), Hearst Tower, and the Millau Viaduct.
Foster is also an avid cyclist, which has led him to re-imagine the lowly bicycle lane as a loftier construct: the SkyCycle, some 220 kilometers of raised bicycle lanes suspended above London, running mostly above railway lines. What a gorgeous idea.
From the Guardian:
Gliding through the air on a bike might so far be confined to the fantasy realms of singing nannies and aliens in baskets, but riding over rooftops could one day form part of your regular commute to work, if Norman Foster has his way.
Unveiled this week, in an appropriately light-headed vision for the holiday season, SkyCycle proposes a network of elevated bike paths hoisted aloft above railway lines, allowing you to zip through town blissfully liberated from the roads.
The project, which has the backing of Network Rail and Transport for London, would see over 220km of car-free routes installed above London’s suburban rail network, suspended on pylons above the tracks and accessed at over 200 entrance points. At up to 15 metres wide, each of the ten routes would accommodate 12,000 cyclists per hour and improve journey times by up to 29 minutes, according to the designers.
Lord Foster, who says that cycling is one of his great passions, describes the plan as “a lateral approach to finding space in a congested city.”
“By using the corridors above the suburban railways,” he said, “we could create a world-class network of safe, car-free cycle routes that are ideally located for commuters.”
Developed by landscape practice Exterior Architecture, with Foster and Partners and Space Syntax, the proposed network would cover a catchment area of six million people, half of whom live and work within 10 minutes of an entrance. But its ambitions stretch beyond London alone.
“The dream is that you could wake up in Paris and cycle to the Gare du Nord,” says Sam Martin of Exterior Architecture. “Then get the train to Stratford, and cycle straight into central London in minutes, without worrying about trucks and buses.”
Developed over the last two years, the initial idea came from the student project of one of Martin’s employees, Oli Clark, who proposed a network of elevated cycle routes weaving in and around Battersea power station. “It was a hobby in the office for a while,” says Martin. “Then we arranged a meeting at City Hall with the deputy mayor of transport – and bumped into Boris in the lift.”
Bumping into Boris has been the fateful beginning for some of the mayor’s other adventures in novelty infrastructure, including Anish Kapoor’s Orbit tower, apparently forged in a chance meeting with Lakshmi Mittal in the cloakrooms at Davos. Other encounters have resulted in cycle “superhighways” (which many blame for the recent increase in accidents) and a £60 million cable car that doesn’t really go anywhere. But could SkyCycle be different?
“It’s about having an eye on the future,” says Martin. “If London keeps growing and spreading itself out, with people forced to commute increasingly longer distances, then in 20 years it’s just going to be a ghetto for people in suits. After rail fare increases this week, a greater percentage of people’s income is being taken up with transport. There has to be another way to allow everyone access to the centre, and stop this doughnut effect.”
After meeting with Network Rail last year, the design team has focused on a 6.5km trial route from Stratford to Liverpool Street Station, following the path of the overground line, a stretch they estimate would cost around £220 million. Working with Roger Ridsdill-Smith, Foster’s head of structural engineering, responsible for the Millennium Bridge, they have developed what Martin describes as “a system akin to a tunnel-boring machine, but happening above ground”.
“It’s no different to the electrification of the lines west of Paddington,” he says. “It would involve a series of pylons installed along the outside edge of the tracks, from which a deck would project out. Trains could still run while the cycle decks were being installed.”
As for access, the proposal would see the installation of vertical hydraulic platforms next to existing railway stations, as well as ramps that took advantage of the raised topography around viaducts and cuttings. “It wouldn’t be completely seamless in terms of the cycling experience,” Martin admits. “But it could be a place for Boris Bike docking stations, to avoid people having to get their own equipment up there.” He says the structure could also be a source of energy creation, supporting solar panels and rain water collection.
The rail network has long been seen as a key to opening up cycle networks, given the amount of available land alongside rail lines, but no proposal has yet suggested launching cyclists into the air.
Read the entire article here.
Image: How the proposed SkyCycle tracks could look. Courtesy of Foster and Partners / Guardian.
- Asimov Fifty Years On>
In 1964, Isaac Asimov wrote an essay for the New York Times entitled “Visit to the World’s Fair of 2014”. The essay was a free-wheeling opinion of things to come, viewed through the lens of New York’s World’s Fair of 1964. It shows that even a grand master of science fiction cannot predict the future: he got some things quite right and other things rather wrong. Some examples are below.
That said, what has captured recent attention is Asimov’s thinking on the complex and evolving relationship between humans and technology, and the challenges of environmental stewardship in an increasingly over-populated and resource-starved world.
So, while Asimov was certainly not a teller of fortunes, he had many insights that many, even today, still lack.
Read the entire Isaac Asimov essay here.
What Asimov got right:
“Communications will become sight-sound and you will see as well as hear the person you telephone.”
“As for television, wall screens will have replaced the ordinary set…”
“Large solar-power stations will also be in operation in a number of desert and semi-desert areas…”
“Windows… will be polarized to block out the harsh sunlight. The degree of opacity of the glass may even be made to alter automatically in accordance with the intensity of the light falling upon it.”
What Asimov got wrong:
“The appliances of 2014 will have no electric cords, of course, for they will be powered by long- lived batteries running on radioisotopes.”
“…cars will be capable of crossing water on their jets…”
“For short-range travel, moving sidewalks (with benches on either side, standing room in the center) will be making their appearance in downtown sections.”
From the Atlantic:
In August of 1964, just more than 50 years ago, author Isaac Asimov wrote a piece in The New York Times, pegged to that summer’s World Fair.
In the essay, Asimov imagines what the World Fair would be like in 2014—his future, our present.
His notions were strange and wonderful (and conservative, as Matt Novak writes in a great run-down), in the way that dreams of the future from the point of view of the American mid-century tend to be. There will be electroluminescent walls for our windowless homes, levitating cars for our transportation, 3D cube televisions that will permit viewers to watch dance performances from all angles, and “Algae Bars” that taste like turkey and steak (“but,” he adds, “there will be considerable psychological resistance to such an innovation”).
He got some things wrong and some things right, as is common for those who engage in the sport of prediction-making. Keeping score is of little interest to me. What is of interest: what Asimov understood about the entangled relationships among humans, technological development, and the planet—and the implications of those ideas for us today, knowing what we know now.
Asimov begins by suggesting that in the coming decades, the gulf between humans and “nature” will expand, driven by technological development. “One thought that occurs to me,” he writes, “is that men will continue to withdraw from nature in order to create an environment that will suit them better.”
It is in this context that Asimov sees the future shining bright: underground, suburban houses, “free from the vicissitudes of weather, with air cleaned and light controlled, should be fairly common.” Windows, he says, “need be no more than an archaic touch,” with programmed, alterable, “scenery.” We will build our own world, an improvement on the natural one we found ourselves in for so long. Separation from nature, Asimov implies, will keep humans safe—safe from the irregularities of the natural world, and the bombs of the human one, a concern he just barely hints at, but that was deeply felt at the time.
But Asimov knows too that humans cannot survive on technology alone. Eight years before astronauts’ Blue Marble image of Earth would reshape how humans thought about the planet, Asimov sees that humans need a healthy Earth, and he worries that an exploding human population (6.5 billion, he accurately extrapolated) will wear down our resources, creating massive inequality.
Although technology will still keep up with population through 2014, it will be only through a supreme effort and with but partial success. Not all the world’s population will enjoy the gadgety world of the future to the full. A larger portion than today will be deprived and although they may be better off, materially, than today, they will be further behind when compared with the advanced portions of the world. They will have moved backward, relatively.
This troubled him, but the real problems lay yet further in the future, as “unchecked” population growth pushed urban sprawl to every corner of the planet, creating a “World-Manhattan” by 2450. But, he exclaimed, “society will collapse long before that!” Humans would have to stop reproducing so quickly to avert this catastrophe, he believed, and he predicted that by 2014 we would have decided that lowering the birth rate was a policy priority.
Asimov rightly saw the central role of the planet’s environmental health to a society: No matter how technologically developed humanity becomes, there is no escaping our fundamental reliance on Earth (at least not until we seriously leave Earth, that is). But in 1964 the environmental specters that haunt us today—climate change and impending mass extinctions—were only just beginning to gain notice. Asimov could not have imagined the particulars of this special blend of planetary destruction we are now brewing—and he was overly optimistic about our propensity to take action to protect an imperiled planet.
Read the entire article here.
Image: Driverless cars as imagined in 1957. Courtesy of America’s Independent Electric Light and Power Companies/Paleofuture.
- A Cry For Attention>
If Mother Earth could post a handful of selfies to awaken us all to the damage, destruction and devastation wrought by its so-called intelligent inhabitants, these would be the images. Peter Essick, National Geographic photo-essayist, gives our host a helping hand with a stunning collection of photographs in his new book, Our Beautiful, Fragile World: images of sadness and loss.
See more of Essick’s photographs here.
From ars technica:
The first song, The Ballad of Bill Hubbard, on Roger Waters’ album Amused to Death begins with an anecdote. It is the story of a wounded soldier asking to be abandoned to die on the battlefield. Told in a matter-of-fact tone by the aged voice of the soldier who abandoned him, it creates a strong counterpoint to the emotion that underlies the story. It evokes sepia-toned images of pain and loss.
Matter-of-fact storytelling makes Peter Essick’s book, Our Beautiful, Fragile World, an emotional snapshot of environmental tragedies in progress. Essick is a photojournalist for National Geographic who has spent the last 25 years documenting man’s devastating impact on the environment. In this respect, Essick has the advantage over Waters in that the visual imagery linked to each story leaves nothing to chance.
Essick has put about a hundred of his most evocative images in a coffee table book. The images range over the world in location. We go from the wilds of Alaska, the Antarctic, and Torres Del Paine National Park in Chile, to the everyday in a Home Depot parking lot in Baltimore and a picnic on the banks of the Patuxent River.
The storytelling complements the imagery very well. Indeed, Essick’s matter-of-fact voice lets the reader draw their emotional response from the photos and their relationship to the story. The strongest are often the most mundane. The tragedy of incomplete and unsuccessful cleanup efforts in Chesapeake Bay is made all the more poignant by the image of recreational users enjoying the bay while adding further damage. This is the second theme of the book: even environmental damage can be made to look stunningly beautiful. The infinity room at Idaho Nuclear Engineering and Environmental Laboratory dazzles the eye, while one can’t help but stare in wonder at the splendid desolation created by mining the Canadian Oil Sands.
Despite the beauty, though, the overriding tone is one of sadness. Sadness for what we have lost, what we are losing, and what will soon be lost. In some sense, these images are about documenting what we have thrown away. This is a sepia-toned book, even though the images are not. I consider myself to be environmentally aware. I have made efforts to reduce my carbon footprint; I don’t own a car; we have reduced the amount of meat in our diet; we read food labels to try to purchase from sustainable sources. Yet, this book makes me realise how much more we have to do, while my own life tells me how hard that actually is.
This book is really a cry for attention. It brings into stark relief the hidden consequences of modern life. Our appetite for energy, for plastics, for food, and for metals is, without doubt, causing huge damage to the Earth. Some of it is local: hard rock mining leaving water not just undrinkable but too acidic to touch and land nearby unusable. Other problems are global: carbon emissions and climate change. Even amidst the evidence of this devastation, Essick remains sympathetic to the people caught in the story; that hard rock mining is done by people who are to be treated with dignity. This aspect of Essick’s approach gives his book a humanity that a simple environmental-warrior story would lack.
In only one place does Essick’s matter-of-fact approach break down. The story of climate change is deeply troubling, and he lets his pessimism and anger leak through. Although these feelings are not discussed directly, Essick—and, indeed, many of us—are deeply frustrated by the lack of political will. Although the climate vignettes are too short to capture the issues, the failure of our society to act is laid out in plain sight.
The images are, without exception, stunning, and Essick has done about as well as is possible given the format. And, therein lies my only real complaint about the book. I don’t really get on with coffee table books. As you may have guessed from my effusiveness above, I love the photography. The central theme of the book is strong and compelling. The imagery, combined with the vignettes, are individually evocative. But, as with all coffee table books, the individual stories lack a certain… something. A good short story is evocative and complete, while still telling a complex story. The vignettes in coffee table books, however, are more like extended captions. What I want instead is a good short story.
Read the entire story here.
Image: Fertilizer: it helps more than just the plants grow. Unfortunately, all that is green is not good for you. The myth that because farmers use the land they are environmentally conscious is just that: a myth. Courtesy of Peter Essick, from his book Our Beautiful, Fragile World.
- The AbFab Garden Bridge>
Should it come to fruition, London’s answer to Lower Manhattan’s High Line promises to be a delightful walker’s paradise and another visitor magnet. The Garden Bridge is a new pedestrian walkway across the River Thames designed with nature in mind, and planted throughout with trees, shrubs and wildflowers. Interestingly, the idea for the design came from British national treasure, actress Joanna Lumley.
British designer Thomas Heatherwick has a knack for reinventing iconic designs. See, for example, his modern take on a midcentury double-decker bus or his 2012 Olympic cauldron, made of 204 copper petals representing participating nations. Heatherwick is also known for whimsical inventions like his 2004 rolling bridge, which curls up on itself to let boats pass beneath it.
The latest proposal from Heatherwick, the man whom his mentor Terence Conran branded a modern-day Leonardo da Vinci, is a nature-inspired walkway across the Thames: The Garden Bridge.
Oddly, the idea for the design came from Absolutely Fabulous actress Joanna Lumley, who approached Heatherwick years ago. The bridge would be a new structure across the river intended to help improve pedestrian life by connecting North and South London with a planted garden path landscaped by U.K. designer and horticulturalist Dan Pearson. It would be filled with indigenous river edge trees, shrubs, and wildflowers and include benches and walkways of varying widths to create both intimate and more expansive spaces along the walkway. If built, the bridge would be an obvious crowdpleaser as a public green space, lookout point, and tourist destination. In London it would be a rare new jewel in the crown of a city already famed for its gardens.
Why is the idea of a slow garden path through a bustling urban landscape so appealing? Perhaps it’s because, like a vertical garden, such greenways inject our concrete metropolises with a stylized dose of the natural world we destroyed to build them. (Even if the inevitable crowds might detract from the imagined experience.)
Or perhaps it has something to do with biologist Edward O. Wilson’s biophilia hypothesis, described in Charles Montgomery’s new book Happy City as the notion that “humans are hardwired to find particular scenes of nature calming and restorative.” Montgomery also discusses a theory by psychologists Stephen and Rachel Kaplan that explains how negotiating busy city streets demands draining “voluntary attention,” whereas “involuntary attention, the kind we give to nature, is effortless, like a daydream or a song washing through your brain. You might not even realize you are paying attention and yet you may be restored and transformed by the act.”
Is this London’s answer to NYC’s High Line, itself inspired by Paris’ Promenade plantée? (Although those projects were built on the ruins of abandoned railway tracks, the parallels are clear.) Earlier this week, the Financial Times noted that the initiative has been “seen by many as the capital’s answer to New York’s much-praised High Line,” adding that “the project appealed to the rivalry between New York and London.”
While the proposed Garden Bridge has the informal support of Mayor Boris Johnson, it would be built using mostly private funding (and board trustees have rejected the idea of selling naming rights to corporate sponsors). Half of that money has already been raised, through private donations and a recent injection of cash from the government, notes The Independent, which reported on Wednesday that Transport for London, the city’s transit authority, has pledged 30 million pounds in support of the project.
Until Dec. 20, the public can visit the website of the Garden Bridge Trust that has been set up to welcome suggestions and thoughts on the plan, which if built could be open to the public in late 2017. In the meantime, this Garden Bridge video narrated by Lumley offers a sneak peek.
Read the entire article here.
Image: Thomas Heatherwick’s Garden Bridge would provide a leisurely garden path across the Thames River. Courtesy Arup.
- Journey to the Center of Consumerism>
Our collective addiction to purchasing anything, anytime may be wonderfully satisfying for a culture that collects objects and values unrestricted choice and instant gratification. However, it comes at a human cost. Not merely for those who produce our toys, clothes, electronics and furnishings in faraway, anonymous factories, but for those who get the products to our swollen mailboxes.
An intrepid journalist ventured to the very heart of the beast — an Amazon fulfillment center — to discover how the blood of internet commerce circulates; the Observer’s Carole Cadwalladr worked at Amazon’s warehouse, in Swansea, UK, for a week. We excerpt her tale below.
From the Guardian:
The first item I see in Amazon’s Swansea warehouse is a package of dog nappies. The second is a massive pink plastic dildo. The warehouse is 800,000 square feet, or, in what is Amazon’s standard unit of measurement, the size of 11 football pitches (its Dunfermline warehouse, the UK’s largest, is 14 football pitches). It is a quarter of a mile from end to end. There is space, it turns out, for an awful lot of crap.
But then there are more than 100m items on its UK website: if you can possibly imagine it, Amazon sells it. And if you can’t possibly imagine it, well, Amazon sells it too. To spend 10½ hours a day picking items off the shelves is to contemplate the darkest recesses of our consumerist desires, the wilder reaches of stuff, the things that money can buy: a One Direction charm bracelet, a dog onesie, a cat scratching post designed to look like a DJ’s record deck, a banana slicer, a fake twig. I work mostly in the outsize “non-conveyable” section, the home of diabetic dog food, and bio-organic vegetarian dog food, and obese dog food; of 52in TVs, and six-packs of water shipped in from Fiji, and oversized sex toys – the 18in double dong (regular-sized sex toys are shelved in the sortables section).
On my second day, the manager tells us that we alone have picked and packed 155,000 items in the past 24 hours. Tomorrow, 2 December – the busiest online shopping day of the year – that figure will be closer to 450,000. And this is just one of eight warehouses across the country. Amazon took 3.5m orders on a single day last year. Christmas is its Vietnam – a test of its corporate mettle and the kind of challenge that would make even the most experienced distribution supply manager break down and weep. In the past two weeks, it has taken on an extra 15,000 agency staff in Britain. And it expects to double the number of warehouses in Britain in the next three years. It expects to continue the growth that has made it one of the most powerful multinationals on the planet.
Right now, in Swansea, four shifts will be working at least a 50-hour week, hand-picking and packing each item, or, as the Daily Mail put it in an article a few weeks ago, being “Amazon’s elves” in the “21st-century Santa’s grotto”.
If Santa had a track record in paying his temporary elves the minimum wage while pushing them to the limits of the EU working time directive, and sacking them if they take three sick breaks in any three-month period, this would be an apt comparison. It is probably reasonable to assume that tax avoidance is not “constitutionally” a part of the Santa business model as Brad Stone, the author of a new book on Amazon, The Everything Store: Jeff Bezos and the Age of Amazon, tells me it is in Amazon’s case. Neither does Santa attempt to bully his competitors, as Mark Constantine, the founder of Lush cosmetics, who last week took Amazon to the high court, accuses it of doing. Santa was not called before the Commons public accounts committee and called “immoral” by MPs.
For a week, I was an Amazon elf: a temporary worker who got a job through a Swansea employment agency – though it turned out I wasn’t the only journalist who happened upon this idea. Last Monday, BBC’s Panorama aired a programme that featured secret filming from inside the same warehouse. I wonder for a moment if we have committed the ultimate media absurdity and the show’s undercover reporter, Adam Littler, has secretly filmed me while I was secretly interviewing him. He didn’t, but it’s not a coincidence that the heat is on the world’s most successful online business. Because Amazon is the future of shopping; being an Amazon “associate” in an Amazon “fulfilment centre” – take that for doublespeak, Mr Orwell – is the future of work; and Amazon’s payment of minimal tax in any jurisdiction is the future of global business. A future in which multinational corporations wield more power than governments.
But then who hasn’t absent-mindedly clicked at something in an idle moment at work, or while watching telly in your pyjamas, and, in what’s a small miracle of modern life, received a familiar brown cardboard package dropping on to your doormat a day later. Amazon is successful for a reason. It is brilliant at what it does. “It solved these huge challenges,” says Brad Stone. “It mastered the chaos of storing tens of millions of products and figuring out how to get them to people, on time, without fail, and no one else has come even close.” We didn’t just pick and pack more than 155,000 items on my first day. We picked and packed the right items and sent them to the right customers. “We didn’t miss a single order,” our section manager tells us with proper pride.
At the end of my first day, I log into my Amazon account. I’d left my mum’s house outside Cardiff at 6.45am and got in at 7.30pm and I want some Compeed blister plasters for my toes and I can’t do it before work and I can’t do it after work. My finger hovers over the “add to basket” option but, instead, I look at my Amazon history. I made my first purchase, The Rough Guide to Italy, in February 2000 and remember that I’d bought it for an article I wrote on booking a holiday on the internet. It’s so quaint reading it now. It’s from the age before broadband (I itemise my phone bill for the day and it cost me £25.10), when Google was in its infancy. It’s littered with the names of defunct websites (remember Sir Bob Geldof’s deckchair.com, anyone?). It was a frustrating task and of pretty much everything I ordered, only the book turned up on time, as requested.
But then it’s a phenomenal operation. And to work in – and I find it hard to type these words without suffering irony seizure – a “fulfilment centre” is to be a tiny cog in a massive global distribution machine. It’s an industrialised process, on a truly massive scale, made possible by new technology. The place might look like it’s been stocked at 2am by a drunk shelf-filler: a typical shelf might have a set of razor blades, a packet of condoms and a My Little Pony DVD. And yet everything is systemised, because it has to be. It’s what makes it all the more unlikely that at the heart of the operation, shuffling items from stowing to picking to packing to shipping, are those flesh-shaped, not-always-reliable, prone-to-malfunctioning things we know as people.
It’s here, where actual people rub up against the business demands of one of the most sophisticated technology companies on the planet, that things get messy. It’s a system that includes unsystemisable things like hopes and fears and plans for the future and children and lives. And in places of high unemployment and low economic opportunities, places where Amazon deliberately sites its distribution centres – it received £8.8m in grants from the Welsh government for bringing the warehouse here – despair leaks around the edges. At the interview – a form-filling, drug- and alcohol-testing, general-checking-you-can-read session at a local employment agency – we’re shown a video. The process is explained and a selection of people are interviewed. “Like you, I started as an agency worker over Christmas,” says one man in it. “But I quickly got a permanent job and then promoted and now, two years later, I’m an area manager.”
Amazon will be taking people on permanently after Christmas, we’re told, and if you work hard, you can be one of them. In the Swansea/Neath/Port Talbot area, an area still suffering the body blows of Britain’s post-industrial decline, these are powerful words, though it all starts to unravel pretty quickly. There are four agencies who have supplied staff to the warehouse, and their reps work from desks on the warehouse floor. Walking from one training session to another, I ask one of them how many permanent employees work in the warehouse but he mishears me and answers another question entirely: “Well, obviously not everyone will be taken on. Just look at the numbers. To be honest, the agencies have to say that just to get people through the door.”
It does that. It’s what the majority of people in my induction group are after. I train with Pete – not his real name – who has been unemployed for the past three years. Before that, he was a care worker. He lives at the top of the Rhondda Valley, and his partner, Susan (not her real name either), an unemployed IT repair technician, has also just started. It took them more than an hour to get to work. “We had to get the kids up at five,” he says. After a 10½-hour shift, and about another hour’s drive back, before picking up the children from his parents, they got home at 9pm. The next day, they did the same, except Susan twisted her ankle on the first shift. She phones in but she will receive a “point”. If she receives three points, she will be “released”, which is how you get sacked in modern corporatese.
Read the entire article here.
Image: Amazon distribution warehouse in Milton Keynes, UK. Courtesy of Reuters / Dylan Martinez.
- Last Call For Manners>
The Federal Aviation Administration (FAA) recently relaxed rules governing the use of electronics onboard aircraft. We can now use our growing collection of electronic gizmos during take-off and landing, not just during the cruise portion of the flight. But, during flight said gizmos still need to be set to “airplane mode”, which shuts off a device’s wireless transceiver.
However, the FCC is considering relaxing the rule even further, allowing cell phone use during flight. Thus, many flyers will soon have yet another reason to hate airlines and hate flying. We’ll be able to add loud cell phone conversations to the lengthy list of aviation pain inducers: cramped seating, fidgety kids, screaming babies, business bores, snorers, Microsoft Powerpoint, body odor, non-existent or bad food, and worst of all travelers who still can’t figure out how to buckle the seat belt.
FCC, please don’t do it!
If cellphone calling comes to airplanes, it is likely to be the last call for manners.
The prospect is still down the road a bit, and a good percentage of the population can be counted on to be polite. But etiquette experts who already are fuming over the proliferation of digital rudeness aren’t optimistic.
Jodi R.R. Smith, owner of Mannersmith Etiquette Consulting in Massachusetts, says the biggest problem is forced proximity. It is hard to be discreet when just inches separate passengers. And it isn’t possible to escape.
“If I’m on an airplane, and my seatmate starts making a phone call, there’s not a lot of places I can go,” she says.
Should the Federal Communications Commission allow cellphone calls on airplanes above 10,000 feet, and if the airlines get on board, one solution would be to create yakking and non-yakking sections of aircraft, or designate flights for either the chatty or the taciturn, as airlines used to do for smoking.
Barring such plans, there are four things you should consider before placing a phone call on an airplane, Ms. Smith says:
• Will you disturb those around you?
• Will you be ignoring companions you should be paying attention to?
• Will you be discussing confidential topics?
• Is it an emergency?
The answer to the last question needs to be “Yes,” she says, and even then, make the call brief.
“I find that the vast majority of people will get it,” she says. “It’s just the few that don’t who will make life uncomfortable for the rest of us.”
FCC Chairman Tom Wheeler said last week that there is no technical reason to maintain what has been a long-standing ban.
Airlines are approaching the issue cautiously because many customers have expressed strong feelings against cellphone use.
“I believe fistfights at 39,000 feet would become commonplace,” says Alan Smith, a frequent flier from El Dorado Hills, Calif. “I would be terrified that some very large fellow, after a few drinks, would beat up a passenger annoying him by using the phone.”
Minneapolis etiquette consultant Gretchen Ditto says cellphone use likely will become commonplace on planes since our expectations have changed about when people should be reachable.
Passengers will feel obliged to answer calls, she says. “It’s going to become more prevalent for returning phone calls, and it’s going to be more annoying to everybody.”
Electronic devices are taking over our lives, says Arden Clise, an etiquette expert in Seattle. We text during romantic dinners, answer email during meetings and shop online during Thanksgiving. Making a call on a plane is only marginally more rude.
“Are we saying that our tools are more important than the people in front of us?” she asks. Even if you don’t know your in-flight neighbor, ask yourself, “Do I want to be that annoying person,” Ms. Clise says.
If airlines decide to allow calls, punching someone’s lights out clearly wouldn’t be the best way to get some peace, says New Jersey etiquette consultant Mary Harris. But tensions often run high during flights, and fights could happen.
If someone is bothering you with a phone call, Ms. Harris advises asking politely for the person to end the conversation.
If that doesn’t work, you’re stuck.
In-flight cellphone calls have been possible in Europe for several years. But U.K. etiquette expert William Hanson says they haven’t caught on.
If you need to make a call, he advises leaving your seat for the area near the lavatory or door. If it is night and the lights are dimmed, “you should not make a call at your seat,” he says.
Calls used to be possible on U.S. flights using Airfone units installed on the planes, but the technology never became popular. When people made calls, they were usually brief, in part because they cost $2 a minute, says Tony Lent, a telecommunication consultant in Detroit who worked on Airfone products in the 1980s.
The situation might be different today. “People were much more prudent about using their mobile phones,” Mr. Lent says. “Nowadays, those social mores are gone.”
Several years ago, when the government considered lifting its cellphone ban, U.S. Rep. Tom Petri co-sponsored the Halting Airplane Noise to Give Us Peace Act of 2008. The bill would have allowed texting and other data applications but banned voice calls. He was motivated by “a sense of courtesy,” he says. The bill was never brought to a vote.
Mr. Petri says he will try again if the FCC allows calls this time around. What if his bill doesn’t pass? “I suppose you can get earplugs,” he says.
Read the entire article here.
Image: Smartphone user. Courtesy of CNN / Money.
- Two-Thirds From a Mere Ninety>
Two-thirds is the overall proportion of man-made carbon emissions released into the atmosphere since the dawn of the industrial age. Ninety is the number of companies responsible for the two-thirds.
The leader in global fossil fuel emissions is Chevron Texaco, which accounts for a staggering 3.5 percent (since 1750). Other leading emitters include Exxon Mobil, BP, Royal Dutch Shell, Saudi Aramco, and Gazprom. See an interactive graphic of the top polluters — companies and nations — here.
From the Guardian:
The climate crisis of the 21st century has been caused largely by just 90 companies, which between them produced nearly two-thirds of the greenhouse gas emissions generated since the dawning of the industrial age, new research suggests.
The companies range from investor-owned firms – household names such as Chevron, Exxon and BP – to state-owned and government-run firms.
The analysis, which has been published in the journal Climatic Change and was welcomed by the former vice-president Al Gore as a “crucial step forward”, found that the vast majority of the firms were in the business of producing oil, gas or coal.
“There are thousands of oil, gas and coal producers in the world,” climate researcher and author Richard Heede at the Climate Accountability Institute in Colorado said. “But the decision makers, the CEOs, or the ministers of coal and oil if you narrow it down to just one person, they could all fit on a Greyhound bus or two.”
Half of the estimated emissions were produced just in the past 25 years – well past the date when governments and corporations became aware that rising greenhouse gas emissions from the burning of coal and oil were causing dangerous climate change.
Many of the same companies are also sitting on substantial reserves of fossil fuel which – if they are burned – puts the world at even greater risk of dangerous climate change.
Climate change experts said the data set was the most ambitious effort so far to hold individual carbon producers, rather than governments, to account.
The United Nations climate change panel, the IPCC, warned in September that at current rates the world stood within 30 years of exhausting its “carbon budget” – the amount of carbon dioxide it could emit without going into the danger zone above 2C warming. The former US vice-president and environmental champion, Al Gore, said the new carbon accounting could re-set the debate about allocating blame for the climate crisis.
Leaders meeting in Warsaw for the UN climate talks this week clashed repeatedly over which countries bore the burden for solving the climate crisis – historic emitters such as America or Europe or the rising economies of India and China.
Gore in his comments said the analysis underlined that it should not fall to governments alone to act on climate change.
“This study is a crucial step forward in our understanding of the evolution of the climate crisis. The public and private sectors alike must do what is necessary to stop global warming,” Gore told the Guardian. “Those who are historically responsible for polluting our atmosphere have a clear obligation to be part of the solution.”
Between them, the 90 companies on the list of top emitters produced 63% of the cumulative global emissions of industrial carbon dioxide and methane between 1751 and 2010, amounting to about 914 gigatonnes of CO2 emissions, according to the research. All but seven of the 90 were energy companies producing oil, gas and coal. The remaining seven were cement manufacturers.
The list of 90 companies included 50 investor-owned firms – mainly oil companies with widely recognised names such as Chevron, Exxon, BP, and Royal Dutch Shell and coal producers such as British Coal Corp, Peabody Energy and BHP Billiton.
Some 31 of the companies that made the list were state-owned companies such as Saudi Arabia’s Saudi Aramco, Russia’s Gazprom and Norway’s Statoil.
Nine were government run industries, producing mainly coal in countries such as China, the former Soviet Union, North Korea and Poland, the host of this week’s talks.
Experts familiar with Heede’s research and the politics of climate change said they hoped the analysis could help break the deadlock in international climate talks.
“It seemed like maybe this could break the logjam,” said Naomi Oreskes, professor of the history of science at Harvard. “There are all kinds of countries that have produced a tremendous amount of historical emissions that we do not normally talk about. We do not normally talk about Mexico or Poland or Venezuela. So then it’s not just rich v poor, it is also producers v consumers, and resource rich v resource poor.”
Michael Mann, the climate scientist, said he hoped the list would bring greater scrutiny to oil and coal companies’ deployment of their remaining reserves. “What I think could be a game changer here is the potential for clearly fingerprinting the sources of those future emissions,” he said. “It increases the accountability for fossil fuel burning. You can’t burn fossil fuels without the rest of the world knowing about it.”
Others were less optimistic that a more comprehensive accounting of the sources of greenhouse gas emissions would make it easier to achieve the emissions reductions needed to avoid catastrophic climate change.
John Ashton, who served as UK’s chief climate change negotiator for six years, suggested that the findings reaffirmed the central role of fossil fuel producing entities in the economy.
“The challenge we face is to move in the space of not much more than a generation from a carbon-intensive energy system to a carbon-neutral energy system. If we don’t do that we stand no chance of keeping climate change within the 2C threshold,” Ashton said.
“By highlighting the way in which a relatively small number of large companies are at the heart of the current carbon-intensive growth model, this report highlights that fundamental challenge.”
Meanwhile, Oreskes, who has written extensively about corporate-funded climate denial, noted that several of the top companies on the list had funded the climate denial movement.
“For me one of the most interesting things to think about was the overlap of large scale producers and the funding of disinformation campaigns, and how that has delayed action,” she said.
The data represents eight years of exhaustive research into carbon emissions over time, as well as the ownership history of the major emitters.
The companies’ operations spanned the globe, with company headquarters in 43 different countries. “These entities extract resources from every oil, natural gas and coal province in the world, and process the fuels into marketable products that are sold to consumers in every nation on Earth,” Heede writes in the paper.
The largest of the investor-owned companies were responsible for an outsized share of emissions. Nearly 30% of emissions were produced just by the top 20 companies, the research found.
Read the entire article here.
Image: Strip coal mine. Courtesy of Wikipedia.
- Let the Sunshine In>
An ingeniously simple and elegant idea brings sunshine to a small town in Norway.
From the Guardian:
On the market square in Rjukan stands a statue of the town’s founder, a noted Norwegian engineer and industrialist called Sam Eyde, sporting a particularly fine moustache. One hand thrust in trouser pocket, the other grasping a tightly rolled drawing, the great man stares northwards across the square at an almost sheer mountainside in front of him.
Behind him, to the south, rises the equally sheer 1,800-metre peak known as Gaustatoppen. Between the mountains, strung out along the narrow Vestfjord valley, lies the small but once mighty town that Eyde built in the early years of the last century, to house the workers for his factories.
He was plainly a smart guy, Eyde. He harnessed the power of the 100-metre Rjukanfossen waterfall to generate hydro-electricity in what was, at the time, the world’s biggest power plant. He pioneered new technologies – one of which bears his name – to produce saltpetre by oxidising nitrogen from air, and made industrial quantities of hydrogen by water electrolysis.
But there was one thing he couldn’t do: change the elevation of the sun. Deep in its east-west valley, surrounded by high mountains, Rjukan and its 3,400 inhabitants are in shadow for half the year. During the day, from late September to mid-March, the town, three hours’ north-west of Oslo, is not dark (well, it is almost, in December and January, but then so is most of Norway), but it’s certainly not bright either. A bit … flat. A bit subdued, a bit muted, a bit mono.
Since last week, however, Eyde’s statue has gazed out upon a sight that even the eminent engineer might have found startling. High on the mountain opposite, 450 metres above the town, three large, solar-powered, computer-controlled mirrors steadily track the movement of the sun across the sky, reflecting its rays down on to the square and bathing it in bright sunlight. Rjukan – or at least, a small but vital part of Rjukan – is no longer stuck where the sun don’t shine.
“It’s the sun!” grins Ingrid Sparbo, disbelievingly, lifting her face to the light and closing her eyes against the glare. A retired secretary, Sparbo has lived all her life in Rjukan and says people “do sort of get used to the shade. You end up not thinking about it, really. But this … This is so warming. Not just physically, but mentally. It’s mentally warming.”
Two young mothers wheel their children into the square, turn, and briefly bask: a quick hit. On a freezing day, an elderly couple sit wide-eyed on one of the half-dozen newly installed benches, smiling at the warmth on their faces. Children beam. Lots of people take photographs. A shop assistant, Silje Johansen, says it’s “awesome. Just awesome.”
Pushing his child’s buggy, electrical engineer Eivind Toreid is more cautious. “It’s a funny thing,” he says. “Not real sunlight, but very like it. Like a spotlight. I’ll go if I’m free and in town, yes. Especially in autumn and in the weeks before the sun comes back. Those are the worst: you look just a short way up the mountainside and the sun is right there, so close you can almost touch it. But not here.”
Pensioners Valborg and Eigil Lima have driven from Stavanger – five long hours on the road – specially to see it. Heidi Fieldheim, who lives in Oslo now but spent six years in Rjukan with her husband, a local man, says she heard all about it on the radio. “But it’s far more than I expected,” she says. “This will bring much happiness.”
Across the road in the Nyetider cafe, sporting – by happy coincidence – a particularly fine set of mutton chops, sits the man responsible for this unexpected access to happiness. Martin Andersen is a 40-year-old artist and lifeguard at the municipal baths who, after spells in Berlin, Paris, Mali and Oslo, pitched up in Rjukan in the summer of 2001.
The first inkling of an artwork Andersen dubbed the Solspeil, or sun mirror, came to him as the month of September began to fade: “Every day, we would take our young child for a walk in the buggy,” he says, “and every day I realised we were having to go a little further down the valley to find the sun.” By 28 September, Andersen realised, the sun completely disappears from Rjukan’s market square. The occasion of its annual reappearance, lighting up the bridge across the river by the old fire station, is a date indelibly engraved in the minds of all Rjukan residents: 12 March.
And throughout the seemingly endless intervening months, Andersen says: “We’d look up and see blue sky above, and the sun high on the mountain slopes, but the only way we could get to it was to go out of town. The brighter the day, the darker it was down here. And it’s sad, a town that people have to leave in order to feel the sun.”
A hundred years ago, Eyde had already grasped the gravity of the problem. Researching his own plan, Andersen discovered that, as early as 1913, Eyde was considering a suggestion by one of his factory workers for a system of mountain-top mirrors to redirect sunlight into the valley below.
The industrialist eventually abandoned the plan for want of adequate technology, but soon afterwards his company, Norsk Hydro, paid for the construction of a cable car to carry the long-suffering townsfolk, for a modest sum, nearly 500m higher up the mountain and into the sunlight. (Built in 1928, the Krossobanen is still running, incidentally; £10 for the return trip. The view is majestic and the coffee at the top excellent. A brass plaque in the ticket office declares the facility a gift from the company “to the people of Rjukan, because for six months of the year, the sun does not shine in the bottom of the valley”.)
Andersen unearthed a partially covered sports stadium in Arizona that was successfully using small mirrors to keep its grass growing. He learned that in the Middle East and other sun-baked regions of the world, vast banks of hi-tech tracking mirrors called heliostats concentrate sufficient reflected sunlight to heat steam turbines and drive whole power plants. He persuaded the town hall to come up with the cash to allow him to develop his project further. He contacted an expert in the field, Jonny Nersveen, who did the maths and told him it could probably work. He visited Viganella, an Italian village that installed a similar sun mirror in 2006.
And 12 years after he first dreamed of his Solspeil, a German company specialising in so-called CSP – concentrated solar power – helicoptered in the three 17 sq m glass mirrors that now stand high above the market square in Rjukan. “It took,” he says, “a bit longer than we’d imagined.” First, the municipality wasn’t used to dealing with this kind of project: “There’s no rubber stamp for a sun mirror.” But Andersen also wanted to be sure it was right – that Rjukan’s sun mirror would do what it was intended to do.
Viganella’s single polished steel mirror, he says, lights a much larger area, but with a far weaker, more diffuse light. “I wanted a smaller, concentrated patch of sunlight: a special sunlit spot in the middle of town where people could come for a quick five minutes in the sun.” The result, you would have to say, is pretty much exactly that: bordered on one side by the library and town hall, and on the other by the tourist office, the 600 sq m of Rjukan’s market square, to be comprehensively remodelled next year in celebration, now bathes in a focused beam of bright sunlight fully 80-90% as intense as the original.
Their efforts monitored by webcams up on the mountain and down in the square, their movement dictated by computer in a Bavarian town outside Munich, the heliostats generate the solar power they need to gradually tilt and rotate, following the sun on its brief winter dash across the sky.
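The steering problem described above reduces to simple reflection geometry: a flat mirror reflects light about its surface normal, so to send sunlight onto the square the mirror's normal must bisect the direction to the sun and the direction to the target. A minimal sketch of that calculation (the coordinate frame and function names are illustrative, not Rjukan's actual control code):

```python
import math

def _unit(v):
    """Scale a vector to unit length."""
    m = math.sqrt(sum(c * c for c in v))
    return tuple(c / m for c in v)

def heliostat_normal(to_sun, to_target):
    """Unit normal a flat mirror must hold so that light arriving
    from to_sun reflects toward to_target: the normal is the
    bisector of the two unit direction vectors."""
    s, t = _unit(to_sun), _unit(to_target)
    return _unit(tuple(a + b for a, b in zip(s, t)))

# Toy example: low winter sun to the south and slightly up,
# target square below the mirror and to the north.
print(heliostat_normal((0.0, -1.0, 0.2), (0.0, 1.0, -0.9)))
```

As the sun moves, the controller recomputes `to_sun` and re-solves for the normal, which is why the mirrors only need to tilt and rotate slowly through the day.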
It really works. Even the objectors – and there were, in town, plenty of them; petitions and letter-writing campaigns and a Facebook page organised against what a large number of locals saw initially as a vanity project and, above all, a criminal waste of money – now seem largely won over.
Read the entire article here.
Image: Light reflected by the mirrors of Rjukan, Norway. Courtesy of David Levene / Guardian.
- The Coming Energy Crash>
By some accounts the financial crash that began in 2008 is a mere economic hiccup compared with the next big economic (and environmental) disaster — the fossil fuel crisis accompanied by risk denial syndrome.
From the New Scientist:
FIVE years ago the world was in the grip of a financial crisis that is still reverberating around the globe. Much of the blame for that can be attributed to weaknesses in human psychology: we have a collective tendency to be blind to the kind of risks that can crash economies and imperil civilisations.
Today, our risk blindness is threatening an even bigger crisis. In my book The Energy of Nations, I argue that the energy industry’s leaders are guilty of a risk blindness that, unless action is taken, will lead to a global crash – and not just because of the climate change they fuel.
Let me begin by explaining where I come from. I used to be a creature of the oil and gas industry. As a geologist on the faculty at Imperial College London, I was funded by BP, Shell and others, and worked on oil and gas in shale deposits, among other things. But I became worried about society’s overdependency on fossil fuels, and acted on my concerns.
In 1989, I quit Imperial College to become a climate campaigner. A decade later I set up a solar energy business. In 2000 I co-founded a private equity fund investing in renewables.
In these capacities, I have watched captains of the energy and financial industries at work – frequently close to, often behind closed doors – as the financial crisis has played out and the oil price continued its inexorable rise. I have concluded that too many people across the top levels of business and government have found ways to close their eyes and ears to systemic risk-taking. Denial, I believe, has become institutionalised.
As a result of their complacency we face four great risks. The first and biggest is no surprise: climate change. We have way more unburned conventional fossil fuel than is needed to wreck the climate. Yet much of the energy industry is discovering and developing unconventional deposits – shale gas and tar sands, for example – to pile onto the fire, while simultaneously abandoning solar power just as it begins to look promising. It has been vaguely terrifying to watch how CEOs of the big energy companies square that circle.
Second, we risk creating a carbon bubble in the capital markets. If policymakers are to achieve their goal of limiting global warming to 2 °C, 60 to 80 per cent of proved reserves of fossil fuels will have to remain in the ground unburned. If so, the value of oil and gas companies would crash and a lot of people would lose a lot of money.
I am chairman of Carbon Tracker, a financial think tank that aims to draw attention to that risk. Encouragingly, some financial institutions have begun withdrawing investment in fossil fuels after reading our warnings. The latest report from the Intergovernmental Panel on Climate Change (IPCC) should spread appreciation of how crazy it is to have energy markets that are allowed to account for assets as though climate policymaking doesn’t exist.
Third, we risk being surprised by the boom in shale gas production. That, too, may prove to be a bubble, maybe even a Ponzi scheme. Production from individual shale wells declines rapidly, and large amounts of capital have to be borrowed to drill replacements. This will surprise many people who make judgement calls based on the received wisdom that limits to shale drilling are few. But I am not alone in these concerns.
Even if the US shale gas drilling isn’t a bubble, it remains unprofitable overall and environmental downsides are emerging seemingly by the week. According to the Texas Commission on Environmental Quality, whole towns in Texas are now running out of water, having sold their aquifers for fracking. I doubt that this is a boom that is going to appeal to the rest of the world; many others agree.
Fourth, we court disaster with assumptions about oil depletion. Most of us believe the industry mantra that there will be adequate flows of just-about-affordable oil for decades to come. I am in a minority who don’t. Crude oil production peaked in 2005, and oil fields are depleting at more than 6 per cent per year, according to the International Energy Agency. The much-hyped 2 million barrels a day of new US production capacity from shale needs to be put in context: we live in a world that consumes 90 million barrels a day.
It is because of the sheer prevalence of risk blindness, overlain with the pervasiveness of oil dependency in modern economies, that I conclude system collapse is probably inevitable within a few years.
Mine is a minority position, but it would be wise to remember how few whistleblowers there were in the run-up to the financial crash, and how they were vilified in the same way “peakists” – believers in premature peak oil – are today.
Read the entire article here.
Image: power plant. Courtesy of Think Progress.
- Millionaires are So Yesterday>
Not far from London’s beautiful Hampstead Heath lies The Bishops Avenue. From the 1930s until the mid-1970s this mile-long street became the archetypal symbol for new wealth; the nouveau riche millionaires made this the most sought after — and well-known — address for residential property in the nation (of course “old money” still preferred its stately mansions and castles). But since then, The Bishops Avenue has changed, with many properties now in the hands of billionaires, hedge fund investors and oil rich plutocrats.
From the Telegraph:
You can tell when a property is out of your price bracket if the estate agent’s particulars come not on a sheet of A4 but are presented in a 50-page hardback coffee-table book, with a separate section for the staff quarters.
Other giveaway signs, in case you were in any doubt, are the fact the lift is leather-lined, there are 62 internal CCTV cameras, a private cinema, an indoor swimming pool, sauna, steam room, and a series of dressing rooms – “for both summer and winter”, the estate agent informs me – which are larger than many central London flats.
But then any property on The Bishops Avenue in north London is out of most people’s price bracket – such as number 62, otherwise known as Jersey House, which is on the market for £38 million. I am being shown around by Grant Alexson, from Knight Frank estate agents, both of us in our socks to ensure that we do not grubby the miles of carpets or marble floors in the bathrooms (all of which have televisions set into the walls).
My hopes of picking up a knock-down bargain had been raised after the news this week that one property on The Bishops Avenue, Dryades, had been repossessed. The owners, the family of the former Pakistan privatisation minister Waqar Ahmed Khan, were unable to settle a row with their lender, Deutsche Bank.
It is not the only property in the hands of the receivers on this mile-long stretch. One was tied up in a Lehman Brothers property portfolio and remains boarded up. Meanwhile, the Saudi royal family, which bought 10 properties during the First Gulf War as boltholes in case Saddam Hussein invaded, has offloaded the entire package for a reported £80 million in recent weeks. And the most expensive property on the market, Heath Hall, had £35 million knocked off the asking price (taking it down to a mere £65 million).
This has all thrown the spotlight once again on this strange road, which has been nicknamed “Millionaires’ Row” since the 1930s – when a million meant something. Now, it is called “Billionaires’ Row”. It was designed, from its earliest days, to be home to the very wealthy. One of the first inhabitants was George Sainsbury, son of the supermarket founder; another was William Lyle, who used his sugar fortune to build a vast mansion in the Arts and Crafts style. Stars such as Gracie Fields also lived here.
But between the wars, the road became the butt of Music Hall comedians who joked about it being full of “des-reses” for the nouveaux riches such as Billy Butlin. Evelyn Waugh, the master of social nuance, made sure his swaggering newspaper baron Lord Copper of Scoop resided here. It was the 1970s, however, that saw the road vault from being home to millionaires to a pleasure ground for international plutocrats, who used their shipping or oil wealth to snap up properties, knock them down and build monstrous mansions in “Hollywood Tudor” style. Worse were the pastiches of Classical temples, the most notorious of which was built by the Turkish industrialist Halis Toprak, who decided the bath big enough to fit 20 people was not enough of a statement. So he slapped “Toprak Mansion” on the portico (causing locals to dub it “Top Whack Mansion”). It was sold a couple of years ago to the Kazakhstani billionairess Horelma Peramam, who renamed it Royal Mansion.
Perhaps the most famous of recent inhabitants was Lakshmi Mittal, the steel magnate, and for a long time Britain’s richest man. But he sold Summer Palace, for £38 million in 2011 to move to the much grander Kensington Palace Gardens, in the heart of London. The cast list became even more varied with the arrival of Salman Rushdie who hid behind bullet-proof glass and tycoon Asil Nadir, whose address is now HM Belmarsh Prison.
Of course, you can be hard-pressed to discover who owns these properties or how much anyone paid. These are not run-of-the-mill transactions between families moving home. Official Land Registry records reveal a complex web of deals between offshore companies. Miss Peramam holds Royal Mansion in the name of Hartwood Resources Company, registered in the British Virgin Islands, and the records suggest she paid closer to £40 million than the £50 million reported.
Alexson says the complexity of the deals is not just about avoiding stamp duty (which is now at 7 per cent for properties over £2 million). “Discretion first, tax second,” he argues. “Look, some of the Middle Eastern families own £500 billion. Stamp duty is not an issue for them.” Still, new tax rules this year, which increase stamp duty to 15 per cent if the property is bought through an offshore vehicle, have had an effect, according to Alexson, who says that the last five houses he sold have been bought by an individual, not a company.
But there is little sign of these individuals on the road itself. Walking down the main stretch of the Avenue from the beautiful Hampstead Heath to the booming A1, which bisects the road, more than 10 of these 39 houses are either boarded up or in a state of severe disrepair. Behind the high gates and walls, moss and weeds climb over the balustrades. Many others are clearly uninhabited, except for a crew of builders and a security guard. (Barnet council defends all the building work it has sanctioned, with Alexson pointing out that the new developments are invariably rectifying the worst atrocities of the 1980s.)
Read the entire article here.
Image: Toprak Mansion (now known as Royal Mansion), The Bishops Avenue. Courtesy of Daily Mail.
- Hotels of the Future>
See more designs here.
Image: The Heart hotel, designed by Arina Agieieva and Dmitry Zhuikov, is a proposed design for a New York hotel. The project aims to draw local residents and hotel visitors closer together by embedding the hotel into city life; bedrooms are found in the converted offices that flank the core of the structure – its heart – and leisure facilities are available for the use of everyone. Courtesy of Telegraph.
- Mid-21st Century Climate>
From ars technica:
If there was one overarching point that the fifth Intergovernmental Panel on Climate Change report took pains to stress, it was that the degree of change in the global climate system since the mid-1950s is unusual in scope. Depending on what exactly you measure, the planet hasn’t seen conditions like these for decades to millennia. But that conclusion leaves us with a question: when exactly can we expect the climate to look radically new, with features that have no historical precedent?
The answer, according to a modeling study published in this week’s issue of Nature, is “very soon”—as soon as 2047 under a “business-as-usual” emission scenario and only 22 years later under a reduced emissions scenario. Tropical countries will likely be the first to enter this new age of climatic erraticness and could experience extreme temperatures monthly after 2050. This, the authors argue, underscores the need for robust efforts targeted not only at protecting those vulnerable countries but also the rich biodiversity that they harbor.
Developing an index, one model at a time
Before attempting to peer into the future, the authors, led by the University of Hawaii’s Camilo Mora, first had to ensure that they could accurately replicate the recent past. To do so, they pooled together the predictive capabilities of 39 different models, using near-surface air temperature as their indicator of choice.
For each model, they established the bounds of natural climate variability as the minimum and maximum values attained between 1860 and 2005. Simultaneously crunching the outputs from all of these models proved to be the right decision, as Mora and his colleagues consistently found that a multi-model average best fit the real data.
Next, they turned to two widely used emission scenarios, or Representative Concentration Pathways (RCP) as they’re known in modeling vernacular, to predict the arrival of different climates over a period extending from 2006 to 2100. The first scenario, RCP45, assumes a concerted mitigation initiative and anticipates CO2 concentrations of up to 538 ppm by 2100 (up from the current 393 ppm). The second, RCP85, is the trusty “business-as-usual” scenario that anticipates concentrations of up to 936 ppm by the same year.
Timing the new normals
While testing the sensitivity of their index, Mora and his colleagues concluded that the length of the reference period—the number of years between 1860 and 2005 used as a basis for establishing the limits of historical climate variability—had no effect on the ultimate outcome. A longer period would include more instances of temperature extremes, both low and high, so you would expect that it would yield a broader range of limits. That would mean that any projections of extreme future events might not seem so extreme by comparison.
In practice, it didn’t matter whether the authors used 20 years or 140 years as the length of their reference period. What did matter, they found, was the number of consecutive years where the climate was out of historical bounds. This makes intuitive sense: if you consider fewer consecutive years, the departure from “normal” will come sooner.
Rather than pick one arbitrary number of consecutive years versus another, the authors simply used all of the possible values from each of the 39 models. That accounts for the relatively large standard deviations in the estimated starting dates of exceptional climates—18 years for the RCP45 scenario and 14 years for the RCP85 scenario. That means that the first clear climate shift could occur as early as 2033 or as late as 2087.
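The index described in the preceding paragraphs, historical bounds taken from a reference period, then the first run of consecutive out-of-bounds years, can be sketched in a few lines. The function and the synthetic series below are illustrative toys, not the Mora et al. code:

```python
def departure_year(temps, ref_end, n_consecutive):
    """First year of the first run of n_consecutive years whose values
    all fall outside the min/max reached up to ref_end (a toy version
    of the climate-departure index described above)."""
    years = sorted(temps)
    ref = [temps[y] for y in years if y <= ref_end]
    lo, hi = min(ref), max(ref)
    run = 0
    for y in (y for y in years if y > ref_end):
        run = run + 1 if (temps[y] < lo or temps[y] > hi) else 0
        if run == n_consecutive:
            return y - n_consecutive + 1  # first year of the run
    return None  # never departs within the series

# Synthetic series: bounded history, then a steady warming trend.
series = {y: 14.0 for y in range(1860, 2101)}
series[1900], series[1950] = 14.5, 13.5        # historical extremes
for y in range(2006, 2101):
    series[y] = 14.0 + 0.125 * (y - 2005)      # post-2005 rise
print(departure_year(series, ref_end=2005, n_consecutive=20))
```

With these toy numbers the series first clears its historical maximum in 2010 and never drops back, so the departure year is 2010 regardless of whether 5 or 20 consecutive years are demanded; in the real study, varying that consecutive-years choice across models is what produces the spread of estimated dates.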
Though temperature served as the main proxy for climate in their study, the authors also analyzed four other variables for the atmosphere and two for the ocean. These included evaporation, transpiration, sensible heat flux (the conductive transfer of heat from the planet’s surface to the atmosphere) and precipitation, as well as sea surface temperature and surface pH in the ocean.
Replacing temperature with, or considering it alongside, any of the other four variables for atmosphere did not change the timing of climate departures. This is because temperature is the most sensitive variable and therefore also the earliest to exceed the normal bounds of historical variability.
When examining the ocean through the prism of sea surface temperature, the researchers determined that it would reach its tipping point by 2051 or 2072 under the RCP85 and RCP45 scenarios, respectively. However, when they considered both sea surface temperature and surface pH together, the estimated tipping point was moved all the way up to this decade.
Seawater pH has an extremely narrow range of historical variability, and it moved out of this range 5 years ago, which caused the year of the climate departure to jump forward several decades. This may be an extreme case, but it serves as a stark reminder that the ocean is already on the edge of uncharted territory.
Read the entire article here.
Image courtesy of Salon.
- Nuclear Near Miss>
Just over 50 years ago the United States Air Force came within a hair’s breadth of destroying much of the southeastern part of the country. While on a routine flight along the eastern seaboard of the United States, a malfunctioning B-52 bomber accidentally dropped two 4-megaton hydrogen bombs over Goldsboro, North Carolina on 23 January 1961.
Had either one of these bombs exploded — with a force over 200 times that of the bomb dropped over Hiroshima — the effects would have been calamitous.
From the Guardian:
A secret document, published in declassified form for the first time by the Guardian today, reveals that the US Air Force came dramatically close to detonating an atom bomb over North Carolina that would have been 260 times more powerful than the device that devastated Hiroshima.
The document, obtained by the investigative journalist Eric Schlosser under the Freedom of Information Act, gives the first conclusive evidence that the US was narrowly spared a disaster of monumental proportions when two Mark 39 hydrogen bombs were accidentally dropped over Goldsboro, North Carolina on 23 January 1961. The bombs fell to earth after a B-52 bomber broke up in mid-air, and one of the devices behaved precisely as a nuclear weapon was designed to behave in warfare: its parachute opened, its trigger mechanisms engaged, and only one low-voltage switch prevented untold carnage.
Each bomb carried a payload of 4 megatons – the equivalent of 4 million tons of TNT explosive. Had the device detonated, lethal fallout could have been deposited over Washington, Baltimore, Philadelphia and as far north as New York city – putting millions of lives at risk.
Though there has been persistent speculation about how narrow the Goldsboro escape was, the US government has repeatedly publicly denied that its nuclear arsenal has ever put Americans’ lives in jeopardy through safety flaws. But in the newly-published document, a senior engineer in the Sandia national laboratories responsible for the mechanical safety of nuclear weapons concludes that “one simple, dynamo-technology, low voltage switch stood between the United States and a major catastrophe”.
Writing eight years after the accident, Parker F Jones found that the bombs that dropped over North Carolina, just three days after John F Kennedy made his inaugural address as president, were inadequate in their safety controls and that the final switch that prevented disaster could easily have been shorted by an electrical jolt, leading to a nuclear burst. “It would have been bad news – in spades,” he wrote.
Jones dryly entitled his secret report “Goldsboro Revisited or: How I learned to Mistrust the H-Bomb” – a quip on Stanley Kubrick’s 1964 satirical film about nuclear holocaust, Dr Strangelove or: How I Learned to Stop Worrying and Love the Bomb.
The accident happened when a B-52 bomber got into trouble, having embarked from Seymour Johnson Air Force base in Goldsboro for a routine flight along the East Coast. As it went into a tailspin, the hydrogen bombs it was carrying became separated. One fell into a field near Faro, North Carolina, its parachute draped in the branches of a tree; the other plummeted into a meadow off Big Daddy’s Road.
Read the entire article here.
Image: Nuclear weapon test Romeo (yield 11 Mt) on Bikini Atoll. The test was part of the Operation Castle. Romeo was the first nuclear test conducted on a barge. The barge was located in the Bravo crater. Courtesy of Wikipedia.
- Post-Apocalyptic Transportation>
What better way to get around your post-apocalyptic neighborhood after the end-of-times than on a trusty bicycle. Let’s face it, a simple human-powered, ride-share bike is likely to fare much better in a dystopian landscape than a gas-guzzling truck, an electric vehicle or even a fuel-efficient moped.
There’s something post-apocalyptic about Citi Bike, the bike-sharing program that debuted a few months ago in parts of New York City. Or perhaps better terms would be “pre-post-apocalyptic” and “pre-dystopian.” Because these bikes basically are designed for the end of the world.
Bike-sharing programs have arisen around the world—from Washington, D.C., to Hangzhou, China. The New York bikes are almost disturbingly durable: Human-powered, solar-charged, and with aluminum frames so sturdy that during stress testing the bike broke the testing equipment. Sure, riding one through Midtown Manhattan is like entering a speedboat race on a manatee. And yes, they’re geared so that it feels like you’re at a very goofy spinning class when riding up Second Avenue. But if you think post-apocalyptically, that gear ratio means a very efficient bike for carrying heavy loads. With the help of the local blacksmith, as long as he’s not too busy making helmets for fight-to-the-death cage matches, you could find a way to attach a hitch to the back of a Citi Bike and that could carry, say, a laser cannon, or a seat for the local warlord. Now all that’s left to do is to attach a hipster to one of the bikes, perhaps with an iron neck collar. Voila! The Citi Bike has become the Escalade SUV in the cannibal culture that arises after peak oil.
A Citi Bike would have made perfect sense in Cormac McCarthy’s The Road. Imagine:
The man put the boy on the handlebars of the bicycle.
It had once been blue. Streaks of cerulean remained in the spectral lines of dulled gray aluminum. It was heavy and his leg ached as he pedaled.
Who used to ride this bicycle, Papa?
No one person rode this bicycle. The bike was shared by everyone who could pay.
Did the people who shared the bikes carry the fire?
They thought they carried the fire.
Turns out the lack of bikes in end-of-the-world narratives has been identified as a cultural issue of significance; there’s even a TV Tropes page called “No Bikes in the Apocalypse” that takes this sort of story to task for forgetting that bicycles would work just fine if there wasn’t any gasoline.
The real problem is that your typical grizzled mutant-killing protagonist wearing a bandolier and carrying a shotgun would look ridiculous huffing up a hill on an 18-speed Trek. And think about where movies are made. In Hollywood, bikes are for immature losers, like Steve Carell’s character in The 40-Year-Old Virgin. Heroes don’t downshift. In a good Hollywood post-apocalypse, if the hero doesn’t have a jacked-up Dodge Charger with guns mounted on top, he (it’s always a he) trudges through the ashes, cradling his gun—or, if they haven’t all been eaten, he scores himself a horse.
There is one notable recent exception (not including Premium Rush, which wasn’t post-apocalyptic, and which no one saw): In the film version of World War Z, Brad Pitt and company have to sneak from a bunker to an airplane on creaky old bicycles. (The zombies are sensitive to noise.) It was obvious from the moment you first saw these old janky bikes that they were going to cause trouble. Future citizens facing a zombie pandemic should note that Citi Bikes tend to whirr, not rattle, so they would be perfect for slow, quiet travel through stumbling sleepy zombies.
The NYC bike-share program is also very much for-profit (it’s owned by Alta Bike Share, a global company that builds out programs like this) and super-mega-ultra-branded by Citicorp, down to the i in Citi Bike. It almost feels like they’re tempting fate, because nothing satisfies the consumer of science fiction like the failed optimism of a logotype creeping out from under dangling scraps of fabric and glue (see this Onion AV Club article on brands in post-apocalyptic films). It’s magic when brands poke through under a pile of bones.
Why is this? Well, from a narrative-efficiency viewpoint it’s a pretty elegant way to set up your world. Marketing is so relentlessly positive, the smiles so big, that the sight of a skeleton wearing Lululemon or holding an iPhone does a lot of your expository work for you. Which, back in reality, is one of the things that makes branded stadiums slightly disturbing. The Coliseum got its name from the colossal statue of Nero that adjoined it. (The statue was given a number of different heads over the years, depending on who was in power.) As the Venerable Bede wrote in the eighth century: “as long as the Colossus stands, so shall Rome; when the Colossus falls, Rome shall fall; when Rome falls, so falls the world.” The stadium is still there, the statue is gone, and today photos of the Coliseum, and its cheesy fake gladiators posing with tourists, serve as a global shorthand for “empires eventually fall.” (If you want to know who’s in charge of a culture, look at what they name their stadiums.) Citi Bike thus also seems particularly well-suited for a sort of Hunger Games-style future: 1) the economy crashes utterly; 2) poor, hungry people compete in hyperviolent Citi Bike chariot races at Madison Square Garden, now renamed Velodrome 17.
A trundling Citi Bike would make sense in just about any post-apocalyptic or dystopian book or movie. In the post-humanity 1949 George R. Stewart classic Earth Abides, about a Berkeley student who survives a plague, the bikes would have been very practical as people rebuilt society across generations, especially after electricity stopped working. And Walter M. Miller Jr.’s legendary 1960 A Canticle for Leibowitz, about monks rebuilding the world after “the Flame Deluge,” could easily have featured monks pedaling around the empty desert after that deluge. Riding a Citi Bike (likely renamed something like “urbem vehentem”) would probably have been a tremendous, abbot-level privilege, and the repair manual would have been an illuminated manuscript. It’s gotten so that when I ride a Citi Bike I invariably end up thinking of all the buildings with their windows shattered, gray snow falling on people trudging in rags on their way to the rat market to buy a nice rat for Thanksgiving.
You have to wonder if “sharing” could survive. Probably not. I mean, at some level working headlights are more liability than asset, especially if you’re worried about being eaten. But the charging stations? As reliable sources of a steady flow of electricity, it’s pretty easy to imagine local chieftains taking those over, and lines of desperate people waiting to charge their cracked mobile devices so that they can look one last time at pictures of the people they lost, trading whatever of value they still possess for one last hour with their smartphones. It will be like the blackout, but forever.
If you prefer a nice total-surveillance dystopia to an absolute apocalypse, Citi Bikes are eminently trackable—they have a GPS-driven beacon installed in case they need to be retrieved. Three million rides have been made, three million swipes of the little Citi Bike keyfob that is used to keep track of who has which bike for how long. The service knows who you are and where you ride, and data visualizations show where people are traveling.
It’s only a matter of time, then, before a 24-style TV show gives us a bike-riding serial killer being tracked around New York City, clusters of incognito cops waiting by the docking station for their target to dock his bike with a ca-chunk. But that sort of government surveillance is almost passé in the age of the NSA.
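That surveillance angle is easy to picture in code. As a purely hypothetical sketch (the field names and station labels below are invented, not Citi Bike’s actual data format), grouping dock-to-dock swipe records by key fob is all it would take to reconstruct any rider’s movements:

```python
from collections import defaultdict

def trips_by_fob(events):
    """Group (fob_id, start_station, end_station, minutes) ride records
    by key fob, reconstructing each rider's dock-to-dock movements."""
    trips = defaultdict(list)
    for fob_id, start, end, minutes in events:
        trips[fob_id].append((start, end, minutes))
    return dict(trips)

# Invented sample records, purely for illustration.
rides = [
    ("fob-001", "W 41 St & 8 Ave", "Broadway & W 24 St", 14),
    ("fob-002", "Pearl St & Anchorage Pl", "Old Fulton St", 6),
    ("fob-001", "Broadway & W 24 St", "E 7 St & Avenue A", 11),
]

print(trips_by_fob(rides)["fob-001"])
```

Anyone holding three million such records can replay a city’s worth of journeys; the TV-show stakeout practically writes itself.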
Read the entire article here.
Image: Citi Bike, New York City. Courtesy of Velojoy.
- The Rim Fire>
One of the largest wildfires in California history — the Rim Fire — threatens some of the most spectacular vistas in the U.S. Yet, as it reshapes part of Yosemite Valley and its surroundings, it is forcing another reshaping: a fundamental rethinking of the wildland urban interface (WUI) and of the role of human activity in catalyzing natural processes.
For nearly two weeks, the nation has been transfixed by wildfire spreading through Yosemite National Park, threatening to pollute San Francisco’s water supply and destroy some of America’s most cherished landscapes. As terrible as the Rim Fire seems, though, the question of its long-term effects, and whether in some ways it could actually be ecologically beneficial, is a complicated one.
Some parts of Yosemite may be radically altered, entering entirely new ecological states. Yet others may be restored to historical conditions that prevailed for thousands of years from the last Ice Age’s end until the 19th century, when short-sighted fire management disrupted natural fire cycles and transformed the landscape.
In certain areas, “you could absolutely consider it a rebooting, getting the system back to the way it used to be,” said fire ecologist Andrea Thode of Northern Arizona University. “But where there’s a high-severity fire in a system that wasn’t used to having high-severity fires, you’re creating a new system.”
The Rim Fire now covers 300 square miles, making it the largest fire in Yosemite’s recent history and the sixth-largest in California’s. It’s also the latest in a series of exceptionally large fires that over the last several years have burned across the western and southwestern United States.
Fire is a natural, inevitable phenomenon, and one to which western North American ecologies are well-adapted, and even require to sustain themselves. The new fires, though, fueled by drought, a warming climate and forest mismanagement — in particular the buildup of small trees and shrubs caused by decades of fire suppression — may reach sizes and intensities too severe for existing ecosystems to withstand.
The Rim Fire may offer some of both patterns. At high elevations, where vegetation is dominated by shrubs and short-needled conifers that produce a dense, slow-to-burn mat of ground cover, fires historically occurred every few hundred years, and they were often intense, reaching the crowns of trees. In such areas, the current fire will fit the usual cycle, said Thode.
Decades- and centuries-old seeds, which have remained dormant in the ground awaiting a suitable moment, will be cracked open by the heat, explained Thode. Exposed to moisture, they’ll begin to germinate and start a process of vegetative succession that results again in forests.
At middle elevations, where most of the Rim Fire is currently concentrated, a different fire dynamic prevails. Those forests are dominated by long-needled conifers that produce a fluffy, fast-burning ground cover. Left undisturbed, fires occur regularly.
“Up until the middle of the 20th century, the forests of that area would burn very frequently. Fires would go through them every five to 12 years,” said Carl Skinner, a U.S. Forest Service ecologist who specializes in relationships between fire and vegetation in northern California. “Because the fires burned as frequently as they did, it kept fuels from accumulating.”
A desire to protect houses, commercial timber and conservation lands by extinguishing these small, frequent fires changed the dynamic. Without fire, dead wood accumulated and small trees grew, creating a forest that’s both exceptionally flammable and structurally suited for transferring flames from ground to tree-crown level, at which point small burns can become infernos.
Though since the 1970s some fires have been allowed to burn naturally in the western parts of Yosemite, that’s not the case where the Rim Fire now burns, said Skinner. An open question, then, is just how big and hot it will burn.
Where the fire is extremely intense, incinerating soil seed banks and root structures from which new trees would quickly sprout, the forest won’t come back, said Skinner. Those areas will become dominated by dense, fast-growing shrubs that burn naturally every few years, killing young trees and creating a sort of ecological lock-in.
If the fire burns at lower intensities, though, it could result in a sort of ecological recalibration, said Skinner. In his work with fellow U.S. Forest Service ecologist Eric Knapp at the Stanislaus-Tuolumne Experimental Forest, Skinner has found that Yosemite’s contemporary, fire-suppressed forests are actually far more homogeneous and less diverse than a century ago.
The fire could “move the forests in a trajectory that’s more like the historical,” said Skinner, both reducing the likelihood of large future fires and generating a mosaic of habitats that contain richer plant and animal communities.
“It may well be that, across a large landscape, certain plants and animals are adapted to having a certain amount of young forest recovering after disturbances,” said forest ecologist Dan Binkley of Colorado State University. “If we’ve had a century of fires, the landscape might not have enough of this.”
Read the entire article here.
Image: Rim Fire, August 2013. Courtesy of Earth Observatory, NASA.
- Ethical Meat and Idiotic Media>
Lab-grown meat is now possible, but it is not yet available on an industrial scale to satisfy the human desire for burgers, steak and ribs. While this does represent a breakthrough, it’s likely to be a while before the last cow or chicken or pig is slaughtered. Of course, the mainstream media picked up this important event and immediately labeled it with captivating headlines featuring the word “frankenburger”. Perhaps a well-intentioned lab will someday come up with an intelligent form of media organization.
From the New York Times (dot earth):
I first explored livestock-free approaches to keeping meat on menus in 2008 in a piece titled “Can People Have Meat and a Planet, Too?”
It’s been increasingly clear since then that there are both environmental and — obviously — ethical advantages to using technology to sustain omnivory on a crowding planet. This presumes humans will not all soon shift to a purely vegetarian lifestyle, even though there are signs of what you might call “peak meat” (consumption, that is) in prosperous societies (Mark Bittman wrote a nice piece on this). Given dietary trends as various cultures rise out of poverty, I would say it’s a safe bet meat will remain a favored food for decades to come.
Now non-farmed meat is back in the headlines, with a patty of in-vitro beef – widely dubbed a “frankenburger” — fried and served in London earlier today.
The beef was grown in a lab by a pioneer in this arena — Mark Post of Maastricht University in the Netherlands. My colleague Henry Fountain has reported the details in a fascinating news article. Here’s an excerpt followed by my thoughts on next steps in what I see as an important area of research and development:
According to the three people who ate it, the burger was dry and a bit lacking in flavor. One taster, Josh Schonwald, a Chicago-based author of a book on the future of food [link], said “the bite feels like a conventional hamburger” but that the meat tasted “like an animal-protein cake.”
But taste and texture were largely beside the point: The event, arranged by a public relations firm and broadcast live on the Web, was meant to make a case that so-called in-vitro, or cultured, meat deserves additional financing and research…
Dr. Post, one of a handful of scientists working in the field, said there was still much research to be done and that it would probably take 10 years or more before cultured meat was commercially viable. Reducing costs is one major issue — he estimated that if production could be scaled up, cultured beef made as this one burger was made would cost more than $30 a pound.
The two-year project to make the one burger, plus extra tissue for testing, cost $325,000. On Monday it was revealed that Sergey Brin, one of the founders of Google, paid for the project. Dr. Post said Mr. Brin got involved because “he basically shares the same concerns about the sustainability of meat production and animal welfare.”
The enormous potential environmental benefits of shifting meat production, where feasible, from farms to factories were estimated in “Environmental Impacts of Cultured Meat Production,” a 2011 study in Environmental Science and Technology.
Read the entire article here.
Image: Professor Mark Post holds the world’s first lab-grown hamburger. Courtesy of Reuters/David Parry / The Atlantic.
- A Smarter Smart Grid>
If you live somewhere rather toasty you know how painful your electricity bills can be during the summer months. So, wouldn’t it be good to have a system automatically find you the cheapest electricity when you need it most? Welcome to the artificially intelligent smarter smart grid.
From the New Scientist:
An era is coming in which artificially intelligent systems can manage your energy consumption to save you money and make the electricity grid even smarter.
If you’re tired of keeping track of how much you’re paying for energy, try letting artificial intelligence do it for you. Several start-up companies aim to help people cut costs, flex their muscles as consumers to promote green energy, and usher in a more efficient energy grid – all by unleashing smart software on everyday electricity usage.
Several states in the US have deregulated energy markets, in which customers can choose between several energy providers competing for their business. But the different tariff plans, limited-time promotional rates and other products on offer can be confusing to the average consumer.
A new company called Lumator aims to cut through the morass and save consumers money in the process. Their software system, designed by researchers at Carnegie Mellon University in Pittsburgh, Pennsylvania, asks new customers to enter their energy preferences – how they want their energy generated, and the prices they are willing to pay. The software also gathers any available metering measurements, in addition to data on how the customer responds to emails about opportunities to switch energy provider.
A machine-learning system digests that information and scans the market for the most suitable electricity supply deal. As it becomes familiar with the customer’s habits it is programmed to automatically switch energy plans as the best deals become available, without interrupting supply.
“This ensures that customers aren’t taken advantage of by low introductory prices that drift upward over time, expecting customer inertia to prevent them from switching again as needed,” says Lumator’s founder and CEO Prashant Reddy.
The goal is not only to save customers time and money – Lumator claims it can save people between $10 and $30 a month on their bills – but also to help introduce more renewable energy into the grid. Reddy says power companies have little idea whether or not their consumers want to get their energy from renewables. But by keeping customer preferences on file and automatically switching to a new service when those preferences are met, Reddy hopes renewable energy suppliers will see the demand more clearly.
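The switching logic Reddy describes can be sketched in a few lines. This is a hypothetical simplification, not Lumator’s actual system (the plan fields, preference names and rates below are all invented): judge each offer by its post-introductory rate, so a teaser price that drifts upward can’t win, and switch only when the steady-state rate beats the current plan.

```python
def best_plan(offers, prefs, current_rate):
    """Pick the cheapest eligible plan, judged by its post-introductory
    rate (cents/kWh) so low teaser prices can't win on month one."""
    eligible = [
        o for o in offers
        if o["renewable_pct"] >= prefs["min_renewable_pct"]
        and o["post_intro_rate"] <= prefs["max_rate"]
    ]
    if not eligible:
        return None
    best = min(eligible, key=lambda o: o["post_intro_rate"])
    # Only switch if the steady-state rate actually beats the current plan.
    return best if best["post_intro_rate"] < current_rate else None

# Invented offers, purely for illustration.
offers = [
    {"name": "TeaserCo", "renewable_pct": 0, "post_intro_rate": 14.2},
    {"name": "GreenGrid", "renewable_pct": 100, "post_intro_rate": 11.8},
    {"name": "WindWorks", "renewable_pct": 100, "post_intro_rate": 12.5},
]
prefs = {"min_renewable_pct": 50, "max_rate": 13.0}
choice = best_plan(offers, prefs, current_rate=12.9)
```

A real system would also learn the customer’s habits and re-run this scan as new offers appear; the point here is only that evaluating plans on their long-run rate is what defeats the low-introductory-price trap.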
A firm called Nest, based in Palo Alto, California, has another way to save people money. It makes Wi-Fi-enabled thermostats that integrate machine learning to understand users’ habits. Energy companies in southern California and Texas offer deals to customers if they allow Nest to make small adjustments to their thermostats when the supplier needs to reduce customer demand.
“The utility company gives us a call and says they’re going to need help tomorrow as they’re expecting a heavy load,” says Matt Rogers, one of Nest’s founders. “We provide about 5 megawatts of load shift, but each home has a personalised demand response. The entire programme is based on data collected by Nest.”
Rogers says that about 5000 Nest users have opted in to such load-balancing programmes.
Read the entire article here.
Image courtesy of Treehugger.
- En Vie: Bio-Fabrication Expo>
En Vie, French for “alive,” is an exposition like no other. It’s a fantastical place defined through a rich collaboration of material scientists, biologists, architects, designers and engineers. The premise of En Vie is quite elegant — put these disparate minds together and ask them to imagine what the future will look like. And it’s a quite magical world; a world where biological fabrication replaces traditional mechanical and chemical fabrication. Here shoes grow from plants, furniture from fungi and bees construct vases. The En Vie exhibit is open at the Space Foundation EDF in Paris, France until September 1.
From ars technica:
The natural world has, over millions of years, evolved countless ways to ensure its survival. The industrial revolution, in contrast, has given us just a couple hundred years to play catch-up using technology. And while we’ve been busily degrading the Earth since that revolution, nature continues to outdo us in the engineering of materials that are stronger, tougher, and multipurpose.
Take steel for example. According to the World Steel Association, for every ton produced, 1.8 tons of carbon dioxide is emitted into the atmosphere. In total in 2010, the iron and steel industries, combined, were responsible for 6.7 percent of total global CO2 emissions. Then there’s the humble spider, which produces silk that is—weight for weight—stronger than steel. Webs spun by Darwin’s bark spider in Madagascar, meanwhile, are 10 times tougher than steel and more durable than Kevlar, the synthetic fiber used in bulletproof vests. Material scientists savvy to this have ensured biomimicry is now high on the agenda at research institutions, and an exhibit currently on at the Space Foundation EDF in Paris is doing its best to popularize the notion that we should not just be salvaging the natural world but also learning from it.
En Vie (Alive), curated by Reader and Deputy Director of the Textile Futures Research Center at Central Saint Martins College Carole Collet, is an exposition for what happens when material scientists, architects, biologists, and engineers come together with designers to ask what the future will look like. According to them, it will be a world where plants grow our products, biological fabrication replaces traditional manufacturing, and genetically reprogrammed bacteria build new materials, energy, or even medicine.
It’s a fantastical place where plants are magnetic, a vase is built by 60,000 bees, furniture is made from fungi, and shoes from cellulose. You can print algae onto rice paper, then eat it or encourage gourds to grow in the shape of plastic components found in things like torches or radios (you’ll have to wait a few months for the finished product, though). These are not fanciful designs but real products, grown or fashioned with nature’s direct help.
In other parts of the exhibit, biology is the inspiration and shows what might be. Eskin, for instance, provides visitors with a simulation of how a building’s exterior could mimic and learn from the human body in keeping it warm and cool.
Alive shows that, speculative or otherwise, design has a real role to play in bringing different research fields together, which will be essential if there’s any hope of propelling the field into mass commercialization.
“More than any other point in history, advances in science and engineering are making it feasible to mimic natural processes in the laboratory, which makes it a very exciting time,” Craig Vierra, Professor and Assistant Chair, Biological Sciences at University of the Pacific, tells Wired.co.uk. In his California lab, Vierra has for the past few years been growing spider silk proteins from bacteria in order to engineer fibers that are close, if not quite ready, to give steel a run for its money. The technique involves purifying the spider silk proteins away from the bacteria proteins before concentrating these using a freeze-dryer in order to render them into powder form. A solvent is then added, and the material is spun into fiber using wet spinning techniques and stretched to three times its original length.
“Although the mechanical properties of the synthetic spider fibers haven’t quite reached those of natural fibers, research scientists are rapidly approaching this level of performance. Our laboratory has been working on improving the composition of the spinning dope and spinning parameters of the fibers to enhance their performance.”
Vierra is a firm believer that nature will save us.
“Mother Nature has provided us with some of the most outstanding biomaterials that can be used for a plethora of applications in the textile industry. In addition to these, modern technological advances will also allow us to create new biocomposite materials that rely on the fundamentals of natural processes, elevating the numbers and types of materials that are available. But, more importantly, we can generate eco-friendly materials.
“As the population size increases, the availability of natural resources will become more scarce and limiting for humans. It will force society to develop new methods and strategies to produce larger quantities of materials at a faster pace to meet the demands of the world. We simply must find more cost-efficient methods to manufacture materials that are non-toxic for the environment. Many of the materials being synthesized today are very dangerous after they degrade and enter the environment, which is severely impacting the wildlife and disrupting the ecology of the animals on the planet.”
According to Vierra, the fact that funding in the field has become extremely competitive over the past ten years is proof of the quality of research today. “The majority of scientists are expected to justify how their research has a direct, immediate tie to applications in society in order to receive funding.”
We really have no alternative but to continue down this route, he argues. Without advances in material science, we will continue to produce “inferior materials” and damage the environment. “Ultimately, this will affect the way humans live and operate in society.”
We’re agreed that the field is a vital and rapidly growing one. But what value, if any, can a design-led project bring to the table, aside from highlighting the related issues? Vierra has assessed a handful of the incredible designs on display at Alive for us to see which he thinks could become a future biomanufacturing reality.
Read the entire article here.
Image: Radiant Soil, En Vie Exposition. Courtesy of Philip Beesley, En Vie / Wired.
- Only Three Feet>
Three feet. Three feet is nothing you say. Three feet is less than the difference between the shallow and deep ends of most swimming pools. Well, when the three feet is the mean ocean level rise it becomes a little more significant. And, when that three feet is the rise predicted to happen within the next 87 years, by 2100, it’s, well, how do you say, catastrophic.
A rise like that and you can kiss goodbye to your retirement home in Miami, and for that matter, kiss goodbye to much of southern Florida, and many coastal communities around the world.
From the New York Times:
An international team of scientists has found with near certainty that human activity is the cause of most of the temperature increases of recent decades, and warns that sea levels could rise by more than three feet by the end of the century if emissions continue at a runaway pace.
The scientists, whose findings are reported in a summary of the next big United Nations climate report, largely dismiss a recent slowdown in the pace of warming, which is often cited by climate change contrarians, as probably related to short-term factors. The report emphasizes that the basic facts giving rise to global alarm about future climate change are more established than ever, and it reiterates that the consequences of runaway emissions are likely to be profound.
“It is extremely likely that human influence on climate caused more than half of the observed increase in global average surface temperature from 1951 to 2010,” the draft report says. “There is high confidence that this has warmed the ocean, melted snow and ice, raised global mean sea level, and changed some climate extremes in the second half of the 20th century.”
The “extremely likely” language is stronger than in the last major United Nations report, published in 2007, and it means the authors of the draft document are now 95 percent to 100 percent confident that human activity is the primary influence on planetary warming. In the 2007 report, they said they were 90 percent to 100 percent certain on that issue.
On another closely watched issue, however, the authors retreated slightly from their 2007 position.
On the question of how much the planet could warm if carbon dioxide levels in the atmosphere doubled, the previous report had largely ruled out any number below 3.6 degrees Fahrenheit. The new draft says the rise could be as low as 2.7 degrees, essentially restoring a scientific consensus that prevailed from 1979 to 2007.
Most scientists see only an outside chance that the warming will be as low as either of those numbers, with the published evidence suggesting that an increase above 5 degrees Fahrenheit is likely if carbon dioxide doubles.
The new document is not final and will not become so until an intensive, closed-door negotiating session among scientists and government leaders in Stockholm in late September. But if the past is any guide, most of the core findings of the document will survive that final review.
The document was leaked over the weekend after it was sent to a large group of people who had signed up to review it. It was first reported on in detail by the Reuters news agency, and The New York Times obtained a copy independently to verify its contents.
It was prepared by the Intergovernmental Panel on Climate Change, a large, international group of scientists appointed by the United Nations. The group does no original research, but instead periodically assesses and summarizes the published scientific literature on climate change.
“The text is likely to change in response to comments from governments received in recent weeks and will also be considered by governments and scientists at a four-day approval session at the end of September,” the panel’s spokesman, Jonathan Lynn, said in a statement Monday. “It is therefore premature and could be misleading to attempt to draw conclusions from it.”
The intergovernmental panel won the Nobel Peace Prize along with Al Gore in 2007 for seeking to educate the world’s citizens about the risks of global warming. But it has also become a political target for climate contrarians, who helped identify several minor errors in the last big report from 2007. This time, the group adopted rigorous procedures in hopes of preventing such mistakes.
On sea level, one of the biggest single worries about climate change, the new report goes well beyond the one from 2007, which largely sidestepped the question of how much the ocean could rise this century.
The new report lays out several scenarios. In the most optimistic, the world’s governments would prove far more successful at getting emissions under control than they have been in the recent past, helping to limit the total warming.
In that circumstance, sea level could be expected to rise as little as 10 inches by the end of the century, the report found. That is a bit more than the eight-inch rise in the 20th century, which proved manageable even though it caused severe erosion along the world’s shorelines.
Read the entire article here.
Image courtesy of the Telegraph.
- Community Supported Art>
No, it’s not another network cop show. CSA began life as community supported agriculture — neighbors buying fresh produce from collectives of local growers and farmers. Now, CSA has grown to include art — community supported art — exposing neighbors to local color and creativity.
From the New York Times:
For years, Barbara Johnstone, a professor of linguistics at Carnegie Mellon University here, bought shares in a C.S.A. — a community-supported agriculture program — and picked up her occasional bags of tubers or tomatoes or whatever the member farms were harvesting.
Her farm shares eventually lapsed. (“Too much kale,” she said.) But on a recent summer evening, she showed up at a C.S.A. pickup location downtown and walked out carrying a brown paper bag filled with a completely different kind of produce. It was no good for eating, but it was just as homegrown and sustainable as what she used to get: contemporary art, fresh out of local studios.
“It’s kind of like Christmas in the middle of July,” said Ms. Johnstone, who had just gone through her bag to see what her $350 share had bought. The answer was a Surrealistic aluminum sculpture (of a pig’s jawbone, by William Kofmehl III), a print (a deadpan image appropriated from a lawn-care book, by Kim Beck) and a ceramic piece (partly about slavery, by Alexi Morrissey).
Without even having to change the abbreviation, the C.S.A. idea has fully made the leap from agriculture to art. After the first program started four years ago in Minnesota, demonstrating that the concept worked just as well for art lovers as for locavores, community-supported art programs are popping up all over the country: in Pittsburgh, now in its first year; Miami; Brooklyn; Lincoln, Neb.; Fargo, N.D.
The goal, borrowed from the world of small farms, is a deeper-than-commerce connection between people who make things and people who buy them. The art programs are designed to be self-supporting: Money from shares is used to pay the artists, who are usually chosen by a jury, to produce a small work in an edition of 50 or however many shares have been sold. The shareholders are often taking a leap of faith. They don’t know in advance what the artists will make and find out only at the pickup events, which are as much about getting to know the artists as collecting the fruits of their shares.
The C.S.A.’s have flourished in larger cities as a kind of organic alternative to the dominance of the commercial gallery system and in smaller places as a way to make up for the dearth of galleries, as a means of helping emerging artists and attracting people who are interested in art but feel they have neither the means nor the connections to collect it.
“A lot of our people who bought shares have virtually no real experience with contemporary art,” said Dayna Del Val, executive director of the Arts Partnership in Fargo, which began a C.S.A. last year, selling 50 shares at $300 each for pieces from nine local artists. “They’re going to a big-box store and buying prints of Monet’s ‘Water Lilies,’ if they have anything.”
Read the entire article here.
Image courtesy of Daily Camera.
- Citizens and Satellites: SkyTruth>
Daily we are reminded how much our world has changed and how it continues to transform. Technology certainly aids those who seek to profit from Earth’s resources as they drill, cut, dig, and blast. Some use it wisely, while others leave our fragile home covered in scars of pollution and exploitation — often unseen.
For those who care passionately about the planet, satellite surveillance has become an essential tool — in powerful yet unexpected ways.
From the Washington Post:
Somewhere in the South Pacific, thousands of miles from the nearest landfall, there is a fishing ship. Let’s say you’re on it. Go onto the open deck, scream, jump around naked, fire a machine gun into the air — who will ever know? You are about as far from anyone as it is possible to be.
But you know what you should do? You should look up and wave.
Because 438 miles above you, moving at 17,000 miles per hour, a polar-orbiting satellite is taking your photograph. A man named John Amos is looking at you. He knows the name and size of your ship, how fast you’re moving and, perhaps, if you’re dangling a line in the water, what type of fish you’re catching.
Sheesh, you’re thinking, Amos must be some sort of highly placed international official in maritime law. … Nah.
He’s a 50-year-old geologist who heads a tiny nonprofit called SkyTruth in tiny Shepherdstown, W.Va., year-round population, 805.
Amos is looking at these ships to monitor illegal fishing in Chilean waters. He’s doing it from a quiet, shaded street, populated mostly with old houses, where the main noises are (a) birds and (b) the occasional passing car. His office, in a one-story building, shares a toilet with a knitting shop.
With a couple of clicks on the keyboard, Amos switches his view from the South Pacific to Tioga County, Pa., where SkyTruth is cataloguing, with a God’s-eye view, the number and size of fracking operations. Then it’s over to Appalachia for a 40-year history of what mountaintop-removal mining has wrought, all through aerial and satellite imagery, 59 counties covering four states.
“You can track anything in the world from anywhere in the world,” Amos is saying, a smile coming into his voice. “That’s the real revolution.”
Amos is, by many accounts, reshaping the postmodern environmental movement. He is among the first, if not the only, scientist to take the staggering array of satellite data that have accumulated over 40 years, turn it into maps with overlays of radar or aerial flyovers, then fan it out to environmental agencies, conservation nonprofit groups and grass-roots activists. This arms the little guys with the best data they’ve ever had to challenge oil, gas, mining and fishing corporations over how they’re changing the planet.
His satellite analysis of the gulf oil spill in 2010, posted on SkyTruth’s Web site, almost single-handedly forced BP and the U.S. government to acknowledge that the spill was far worse than either was saying.
He was the first to document how many Appalachian mountains have been decapitated in mining operations (about 500) because no state or government organization had ever bothered to find out, and no one else had, either. His work was used in the Environmental Protection Agency’s rare decision to block a major new mine in West Virginia, a decision still working its way through the courts.
“John’s work is absolutely cutting-edge,” says Kert Davies, research director of Greenpeace. “No one else in the nonprofit world is watching the horizon, looking for how to use satellite imagery and innovative new technology.”
“I can’t think of anyone else who’s doing what John is,” says Peter Aengst, regional director for the Wilderness Society’s Northern Rockies office.
Amos’s complex maps “visualize what can’t be seen with the human eye — the big-picture, long-term impact of environment damage,” says Linda Baker, executive director of the Upper Green River Alliance, an activist group in Wyoming that has used his work to illustrate the growth of oil drilling.
This distribution of satellite imagery is part of a vast, unparalleled democratization of humanity’s view of the world, an event not unlike cartography in the age of Magellan, the unknowable globe suddenly brought small.
Read the entire article here.
Image: Detail from a September 2012 satellite image of natural gas drilling infrastructure on public lands near Pinedale, Wyoming. Courtesy of SkyTruth.
- Earth as the New Venus>
New research models show just how precarious our planet’s climate really is. Runaway greenhouse warming would make the predicted two- to six-foot rise in average sea levels over the next 50 to 100 years seem like a puddle at the local splash pool.
From ars technica:
With the explosion of exoplanet discoveries, researchers have begun to seriously revisit what it takes to make a planet habitable, defined as being able to support liquid water. At a basic level, the amount of light a planet receives sets its temperature. But real worlds aren’t actually basic—they have atmospheres, reflect some of that light back into space, and experience various feedbacks that affect the temperature.
Attempts to incorporate all those complexities into models of other planets have produced some unexpected results. Some even suggest that Earth teeters on the edge of experiencing a runaway greenhouse, one that would see its oceans boil off. The fact that large areas of the planet are covered in ice may make that conclusion seem a bit absurd, but a second paper looks at the problem from a somewhat different angle—and comes to the same conclusion. If it weren’t for clouds and our nitrogen-rich atmosphere, the Earth might be an uninhabitable hell right now.
The new work focuses on a very simple model of an atmosphere: a linear column of nothing but water vapor. This clearly doesn’t capture the complex dynamics of weather and the different amounts of light that reach the poles, but it does include things like the amount of light scattered back out into space and the greenhouse impact of the water vapor. These sorts of calculations are simple enough that they were first done decades ago, but the authors note that this particular problem hadn’t been revisited in 25 years. Our knowledge of how water vapor absorbs both visible and infrared light has improved over that time.
Water vapor, like other greenhouse gasses, allows visible light to reach the surface of a planet, but it absorbs most of the infrared light that gets emitted back toward space. Only a narrow window, centered around 10 micrometer wavelengths, makes it back out to space. Once the incoming energy gets larger than the amount that can escape, the end result is a runaway greenhouse: heat evaporates more surface water, which absorbs more infrared, trapping even more heat. At some point, the atmosphere gets so filled with water vapor that light no longer even reaches the surface, instead getting absorbed by the atmosphere itself.
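The feedback loop described above (trapped heat evaporates more water, which absorbs more infrared and traps still more heat) can be sketched as a toy energy-balance iteration. The linear-gain form and all the numbers below are illustrative assumptions, not values from the study:

```python
# Toy sketch of the runaway-greenhouse feedback described above.
# The linear gain and all numbers are illustrative assumptions,
# not values from the actual study.

def equilibrate(forcing, gain, steps=10_000, blowup=1e9):
    """Iterate trapped = forcing + gain * trapped.

    gain < 1: the feedback damps out and trapped heat settles at
    forcing / (1 - gain).  gain >= 1: each round of evaporation
    traps more heat than the last, so the loop runs away and we
    return None to signal a runaway greenhouse.
    """
    trapped = 0.0
    for _ in range(steps):
        trapped = forcing + gain * trapped
        if trapped > blowup:
            return None  # runaway greenhouse
    return trapped

print(equilibrate(forcing=10.0, gain=0.5))  # settles near 20.0
print(equilibrate(forcing=10.0, gain=1.1))  # None: runaway
```

The threshold behavior is the point: once the effective gain of the water-vapor feedback reaches one, no equilibrium exists, which is what the article means by incoming energy exceeding the amount that can escape.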
The model shows that, once temperatures reach 1,800K, a second window through the water vapor opens up at about four microns, which allows additional energy to escape into space. The authors suggest that this could be used when examining exoplanets, as high emissions in this region could be taken as an indication that the planet was undergoing a runaway greenhouse.
The authors also used the model to look at what Earth would be like if it had a cloud-free, water atmosphere. The surprise was that the updated model indicated that this alternate-Earth atmosphere would absorb 30 percent more energy than previous estimates suggested. That’s enough to make a runaway greenhouse atmosphere stable at the Earth’s distance from the Sun.
So, why is the Earth so relatively temperate? The authors added a few additional factors to their model to find out. Additional greenhouse gasses like carbon dioxide and methane made runaway heating more likely, while nitrogen scattered enough light to make it less likely. The net result is that, under an Earth-like atmosphere composition, our planet should experience a runaway greenhouse. (In fact, greenhouse gasses can lower the barrier between a temperate climate and a runaway greenhouse, although only at concentrations much higher than we’ll reach even if we burn all the fossil fuels available.) But we know it hasn’t. “A runaway greenhouse has manifestly not occurred on post-Hadean Earth,” the authors note. “It would have sterilized Earth (there is observer bias).”
So, what’s keeping us cool? The authors suggest two things. The first is that our atmosphere isn’t uniformly saturated with water; some areas are less humid and allow more heat to radiate out into space. The other factor is the existence of clouds. Depending on their properties, clouds can either insulate or reflect sunlight back into space. On balance, however, it appears they are key to keeping our planet’s climate moderate.
But clouds won’t help us out indefinitely. Long before the Sun expands and swallows the Earth, the amount of light it emits will rise enough to make a runaway greenhouse more likely. The authors estimate that, with an all-water atmosphere, we’ve got about 1.5 billion years until the Earth is sterilized by skyrocketing temperatures. If other greenhouse gasses are present, then that day will come even sooner.
The authors don’t expect that this will be the last word on exoplanet conditions—in fact, they revisited waterlogged atmospheres in the hopes of stimulating greater discussion of them. But the key to understanding exoplanets will ultimately involve adapting the planetary atmospheric models we’ve built to understand the Earth’s climate. With full, three-dimensional circulation of the atmosphere, these models can provide a far more complete picture of the conditions that could prevail under a variety of circumstances. Right now, they’re specialized to model the Earth, but work is underway to change that.
Read the entire article here.
Image: Venus shrouded in perennial clouds of carbon dioxide, sulfur dioxide and sulfuric acid, as seen by the Messenger probe, 2004. Courtesy of Wikipedia.
- MondayMap: Feeding the Mississippi>
The system of streams and tributaries that feeds the great Mississippi River is a complex, interconnected web covering more than a third of the continental United States. A new mapping tool puts it all in one intricate chart.
A new online tool released by the Department of the Interior this week allows users to select any major stream and trace it up to its sources or down to its watershed. The above map, exported from the tool, highlights all the major tributaries that feed into the Mississippi River, illustrating the river’s huge catchment area of approximately 1.15 million square miles, or 37 percent of the land area of the continental U.S. Use the tool to see where the streams around you are getting their water (and pollution).
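Tracing a selected stream up to its sources, as the tool does, is at heart a traversal of a tributary graph. Here is a minimal sketch over an invented, drastically simplified network (not actual hydrographic data):

```python
# Upstream trace over an invented, simplified tributary network.
# Each river maps to the streams that flow directly into it.
flows_into = {
    "Mississippi": ["Missouri", "Ohio", "Arkansas"],
    "Missouri": ["Platte", "Yellowstone"],
    "Ohio": ["Tennessee", "Wabash"],
    "Arkansas": [],
    "Platte": [], "Yellowstone": [], "Tennessee": [], "Wabash": [],
}

def trace_upstream(river, network):
    """Return every stream that ultimately feeds `river` (depth-first)."""
    feeders = []
    for tributary in network.get(river, []):
        feeders.append(tributary)
        feeders.extend(trace_upstream(tributary, network))
    return feeders

print(sorted(trace_upstream("Mississippi", flows_into)))
# ['Arkansas', 'Missouri', 'Ohio', 'Platte', 'Tennessee', 'Wabash', 'Yellowstone']
```

Tracing downstream to a watershed outlet is the same walk over the reversed graph.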
See a larger version of the map here.
Image: Map of the Mississippi river system. Courtesy of Nationalatlas.gov.
- Helping the Honeybees>
Agricultural biotechnology giant Monsanto is joining efforts to help the honeybee. Honeybees the world over have been suffering from a widespread and catastrophic condition often referred to as colony collapse disorder.
From Technology Review:
Beekeepers are desperately battling colony collapse disorder, a complex condition that has been killing bees in large swaths and could ultimately have a massive effect on people, since honeybees pollinate a significant portion of the food that humans consume.
A new weapon in that fight could be RNA molecules that kill a troublesome parasite by disrupting the way its genes are expressed. Monsanto and others are developing the molecules as a means to kill the parasite, a mite that feeds on honeybees.
The killer molecule, if it proves to be efficient and passes regulatory hurdles, would offer welcome respite. Bee colonies have been dying in alarming numbers for several years, and many factors are contributing to this decline. But while beekeepers struggle with malnutrition, pesticides, viruses, and other issues in their bee stocks, one problem that seems to be universal is the Varroa mite, an arachnid that feeds on the blood of developing bee larvae.
“Hives can survive the onslaught of a lot of these insults, but with Varroa, they can’t last,” says Alan Bowman, a University of Aberdeen molecular biologist in Scotland, who is studying gene silencing as a means to control the pest.
The Varroa mite debilitates colonies by hampering the growth of young bees and increasing the lethality of the viruses that it spreads. “Bees can quite happily survive with these viruses, but now, in the presence of Varroa, these viruses become lethal,” says Bowman. Once a hive is infested with Varroa, it will die within two to four years unless a beekeeper takes active steps to control it, he says.
One of the weapons beekeepers can use is a pesticide that kills mites, but “there’s always the concern that mites will become resistant to the very few mitocides that are available,” says Tom Rinderer, who leads research on honeybee genetics at the U.S. Department of Agriculture Research Service in Baton Rouge, Louisiana. And new pesticides to kill mites are not easy to come by, in part because mites and bees are found in neighboring branches of the animal tree. “Pesticides are really difficult for chemical companies to develop because of the relatively close relationship between the Varroa and the bee,” says Bowman.
RNA interference could be a more targeted and effective way to combat the mites. It is a natural process in plants and animals that normally defends against viruses and potentially dangerous bits of DNA that move within genomes. Based upon their nucleotide sequence, interfering RNAs signal the destruction of the specific gene products, thus providing a species-specific self-destruct signal. In recent years, biologists have begun to explore this process as a possible means to turn off unwanted genes in humans (see “Gene-Silencing Technique Targets Scarring”) and to control pests in agricultural plants (see “Crops that Shut Down Pests’ Genes”). Using the technology to control pests in agricultural animals would be a new application.
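The species-specificity described above comes down to base-pairing: an interfering RNA flags only transcripts that contain its complementary sequence. Here is a toy sketch with invented sequences (real siRNA design also weighs sequence length, thermodynamics, and off-target effects):

```python
# Toy sketch of sequence-specific gene silencing. A transcript is
# flagged for destruction only if the interfering RNA base-pairs
# with it. All sequences below are invented for illustration.

def complement(seq):
    """Reverse complement of an RNA sequence."""
    pairs = {"A": "U", "U": "A", "G": "C", "C": "G"}
    return "".join(pairs[base] for base in reversed(seq))

def is_silenced(transcript, sirna):
    """A transcript is silenced if it contains the sequence the
    interfering RNA is complementary to."""
    return complement(sirna) in transcript

mite_gene = "AUGGCUUACGGAUCCGUAAGC"   # invented mite transcript
bee_gene = "AUGGCAAUCGGGUUACCAAGC"    # invented bee transcript
sirna = complement("UACGGAUCCGU")     # designed against the mite gene

print(is_silenced(mite_gene, sirna))  # True: mite sequence matched
print(is_silenced(bee_gene, sirna))   # False: bee transcript differs
```

This is the sense in which the treatment can be harmless to larvae yet lethal to the mite: the self-destruct signal only fires where the target sequence is present.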
In 2011 Monsanto, the maker of herbicides and genetically engineered seeds, bought an Israeli company called Beeologics, which had developed an RNA interference technology that can be fed to bees through sugar water. The idea is that when a nurse bee spits this sugar water into each cell of a honeycomb where a queen bee has laid an egg, the resulting larvae will consume the RNA interference treatment. With the right sequence in the interfering RNA, the treatment will be harmless to the larvae, but when a mite feeds on it, the pest will ingest its own self-destruct signal.
The RNA interference technology would not be carried from generation to generation. “It’s a transient effect; it’s not a genetically modified organism,” says Bowman.
Monsanto says it has identified a few self-destruct triggers to explore by looking at genes that are fundamental to the biology of the mite. “Something in reproduction or egg laying or even just basic housekeeping genes can be a good target provided they have enough difference from the honeybee sequence,” says Greg Heck, a researcher at Monsanto.
Read the entire article here.
Image: Honeybee, Apis mellifera. Courtesy of Wikipedia.
- MondayMap: U.S. Interstate Highway System>
It’s summer, which means lots of people driving every-which-way for family vacations.
So, this is a good time to refresh you with the map of the arteries that distribute lifeblood across the United States — the U.S. Interstate Highway System. The network of highways stretching around 46,800 miles from coast to coast is sometimes referred to as the Eisenhower Interstate System. President Eisenhower signed the Federal-Aid Highway Act on June 29, 1956, making the current system possible.
Thus the father of the Interstate System is also responsible for the never-ending choruses of: “are we there yet?”, “how much further?”, “I need to go to the bathroom”, and “can we stop at the next Starbucks (from the adults) / McDonalds (from the kids)?”.
Get a full-size map here.
Map courtesy of WikiCommons.
- Surveillance, British Style>
While the revelations about the National Security Agency (NSA) snooping on private communications of U.S. citizens are extremely troubling, the situation could be much worse. Cast a sympathetic thought to Her Majesty’s subjects in the United Kingdom of Great Britain and Northern Ireland, where almost everyone eavesdrops on everyone else. While the island nation of 60 million covers roughly the same area as Michigan, it is swathed in over 4 million CCTV (closed-circuit television) surveillance cameras.
We adore the English here in the States. They’re just so precious! They call traffic circles “roundabouts,” prostitutes “prozzies,” and they have a queen. They’re ever so polite and carry themselves with such admirable poise. We love their accents so much, we use them in historical films to give them a bit more gravitas. (Just watch The Last Temptation of Christ to see what happens when we don’t: Judas doesn’t sound very intimidating with a Brooklyn accent.)
What’s not so cute is the surveillance society they’ve built—but the U.S. government seems pretty enamored with it.
The United Kingdom is home to an intense surveillance system. Most of the legal framework for this comes from the Regulation of Investigatory Powers Act, which dates all the way back to the year 2000. RIPA is meant to support criminal investigation, preventing disorder, public safety, public health, and, of course, “national security.” If this extremely broad application of law seems familiar, it should: The United States’ own PATRIOT Act is remarkably similar in scope and application. Why should the United Kingdom have the best toys, after all?
This is one of the problems with being the United Kingdom’s younger sibling. We always want what Big Brother has. Unless it’s soccer. Wiretaps, though? We just can’t get enough!
The PATRIOT Act, broad as it is, doesn’t match RIPA’s incredible wiretap allowances. In 1994, the United States passed the Communications Assistance for Law Enforcement Act, which mandated that service providers give the government “technical assistance” in the use of wiretaps. RIPA goes a step further and insists that wiretap capability be implemented right into the system. If you’re a service provider and can’t set up plug-and-play wiretap capability within a short time, Johnny English comes knocking at your door to say, ” ‘Allo, guvna! I ‘ear tell you ‘aven’t put in me wiretaps yet. Blimey! We’ll jus’ ‘ave to give you a hefty fine! Ods bodkins!” Wouldn’t that be awful (the law, not the accent)? It would, and it’s just what the FBI is hoping for. CALEA is getting a rewrite that, if it passes, would give the FBI that very capability.
I understand. Older siblings always get the new toys, and it’s only natural that we want to have them as well. But why does it have to be legal toys for surveillance? Why can’t it be chocolate? The United Kingdom enjoys chocolate that’s almost twice as good as American chocolate. Literally, they get 20 percent solid cocoa in their chocolate bars, while we suffer with a measly 11 percent. Instead, we’re learning to shut off the Internet for entire families.
That’s right. In the United Kingdom, if you are just suspected of having downloaded illegally obtained material three times (it’s known as the “three strikes” law), your Internet is cut off. Not just for you, but for your entire household. Life without the Internet, let’s face it, sucks. You’re not just missing out on videos of cats falling into bathtubs. You’re missing out on communication, jobs, and being a 21st-century citizen. Maybe this is OK in the United Kingdom because you can move up north, become a farmer, and enjoy a few pints down at the pub every night. Or you can just get a new ISP, because the United Kingdom actually has a competitive market for ISPs. The United States, as an homage, has developed the so-called “copyright alert system.” It works much the same way as the U.K. law, but it provides for six “strikes” instead of three and has a limited appeals system, in which the burden of proof lies on the suspected customer. In the United States, though, the rights-holders monitor users for suspected copyright infringement on their own, without the aid of ISPs. So far, we haven’t adopted the U.K. system in which ISPs are expected to monitor traffic and dole out their three strikes at their discretion.
These are examples of more targeted surveillance of criminal activities, though. What about untargeted mass surveillance? On June 21, one of Edward Snowden’s leaks revealed that the Government Communications Headquarters, the United Kingdom’s NSA equivalent, has been engaging in a staggering amount of data collection from civilians. This development generated far less fanfare than the NSA news, perhaps because the legal framework for this data collection has existed for a very long time under RIPA, and we expect surveillance in the United Kingdom. (Or maybe Americans were just living down to the stereotype of not caring about other countries.) The NSA models follow the GCHQ’s very closely, though, right down to the oversight, or lack thereof.
Media have labeled the FISA court that regulates the NSA’s surveillance as a “rubber-stamp” court, but it’s no match for the omnipotence of the Investigatory Powers Tribunal, which manages oversight for MI5, MI6, and the GCHQ. The Investigatory Powers Tribunal is exempt from the United Kingdom’s Freedom of Information Act, so it doesn’t have to share a thing about its activities (FISA apparently does not have this luxury—yet). On top of that, members of the tribunal are appointed by the queen. The queen. The one with the crown who has jubilees and a castle and probably a court wizard. Out of 956 complaints to the Investigatory Powers Tribunal, five have been upheld. Now that’s a rubber-stamp court we can aspire to!
Or perhaps not. The future of U.S. surveillance looks very grim if we’re set on following the U.K.’s lead. Across the United Kingdom, an estimated 4.2 million CCTV cameras, some with facial-recognition capability, keep watch on nearly the entire nation. (This can lead to some Monty Python-esque high jinks.) Washington, D.C., took its first step toward strong camera surveillance in 2008, when several thousand were installed ahead of President Obama’s inauguration.
Read the entire article here.
Image: Royal coat of arms of Queen Elizabeth II of the United Kingdom, as used in England and Wales, and Scotland. Courtesy of Wikipedia.
- United States of Strange>
With the United States turning another year older, it’s worth pondering some of the lesser-known components of this beautiful yet paradoxical place. All nations have their esoteric cultural wonders and benign local oddities: the British have bowler hats, the Royal Family and (courtesy of the Scots) kilts; Italians have Vespas and governments that last, on average, eight months; the French, well, they’re just French; the Germans love fast cars and lederhosen. But for sheer variety and volume of the absurd, the United States probably surpasses them all.
From the Telegraph:
Run by the improbably named Genghis Cohen, Machine Gun Vegas bills itself as the ‘world’s first luxury gun lounge’. It opened last year, and claims to combine “the look and feel of an ultra-lounge with the functionality of a state of the art indoor gun range”. The team of NRA-certified on-site instructors, however, may be its most distinctive appeal. All are female, and all are ex-US military personnel.
See other images and read the entire article here.
Image courtesy of the Telegraph.
- Circadian Rhythm in Vegetables>
The vegetables you eat may be better for you based on how and when they are exposed to light. Just as animals adhere to circadian rhythms, research shows that some plants may generate different levels of healthy nutritional metabolites based on the light cycle as well.
From ars technica:
When you buy vegetables at the grocery store, they are usually still alive. When you lock your cabbage and carrots in the dark recess of the refrigerator vegetable drawer, they are still alive. They continue to metabolize while we wait to cook them.
Why should we care? Well, plants that are alive adjust to the conditions surrounding them. Researchers at Rice University have shown that some plants have circadian rhythms, adjusting their production of certain chemicals based on their exposure to light and dark cycles. Understanding and exploiting these rhythms could help us maximize the nutritional value of the vegetables we eat.
According to Janet Braam, a professor of biochemistry at Rice, her team’s initial research looked at how Arabidopsis, a common plant model for scientists, responded to light cycles. “It adjusts its defense hormones before the time of day when insects attack,” Braam said. Arabidopsis is in the same plant family as the cruciferous vegetables—broccoli, cabbage, and kale—so Braam and her colleagues decided to look for a similar light response in our foods.
They bought some grocery store cabbage and brought it back to the lab so they could subject the cabbage to the same tests they gave their model plant, which involved offering up living, leafy vegetables to a horde of hungry caterpillars. First, half the cabbages were exposed to a normal light and dark cycle, the same schedule as the caterpillars, while the other half were exposed to the opposite light cycle.
The caterpillars tend to feed in the late afternoon, according to Braam, so the light signals the plants to increase production of glucosinolates, a chemical that the insects don’t like. The study found that cabbages that adjusted to the normal light cycle had far less insect damage than the jet-lagged cabbages.
While it’s cool to know that cabbages are still metabolizing away and responding to light stimulus days after harvest, Braam said that this process could affect the nutritional value of the cabbage. “We eat cabbage, in part, because these glucosinolates are anti-cancer compounds,” Braam said.
Glucosinolates are only found in the cruciferous vegetable family, but the Rice team wanted to see if other vegetables demonstrated similar circadian rhythms. They tested spinach, lettuce, zucchini, blueberries, carrots, and sweet potatoes. “Luckily, our caterpillar isn’t picky,” Braam said. “It’ll eat just about anything.”
Just like with the cabbage, the caterpillars ate far less of the vegetables trained on the normal light schedule. Even the fruits and roots increased production of some kind of anti-insect compound in response to light stimulus.
Metabolites affected by circadian rhythms could include vitamins and antioxidants. The Rice team is planning follow-up research to begin exploring how the cycling phenomenon affects known nutrients and if the magnitude of the shifts are large enough to have an impact on our diets. “We’ve uncovered some very basic stimuli, but we haven’t yet figured out how to amplify that for human nutrition,” Braam said.
Read the entire article here.
- Sci-Fi Begets Cli-Fi>
The world of fiction is populated with hundreds of different genres — most of which were invented by clever marketeers anxious to ensure vampire novels (teen / horror) don’t live next to classic works (literary) on real or imagined (think Amazon) book shelves. So, it should come as no surprise to see a new category recently emerge: cli-fi.
Short for climate fiction, cli-fi novels explore the dangers of environmental degradation and apocalyptic climate change. Not light reading for your summer break at the beach. But, then again, more books in this category may get us to think often and carefully about preserving our beaches — and the rest of the planet — for our kids.
From the Guardian:
A couple of days ago Dan Bloom, a freelance news reporter based in Taiwan, wrote on the Teleread blog that his word had been stolen from him. In 2012 Bloom had “produced and packaged” a novella called Polar City Red, about climate refugees in a post-apocalyptic Alaska in the year 2075. Bloom labelled the book “cli-fi” in the press release and says he coined that term in 2007, cli-fi being short for “climate fiction”, described as a sub-genre of sci-fi. Polar City Red bombed, selling precisely 271 copies, until National Public Radio (NPR) and the Christian Science Monitor picked up on the term cli-fi last month, writing Bloom out of the story. So Bloom has blogged his reply on Teleread, saying he’s simply pleased the term is now out there – it has gone viral since the NPR piece by Scott Simon. It’s not quite as neat as that – in recent months the term has been used increasingly in literary and environmental circles – but there’s no doubt it has broken out more widely. You can search for cli-fi on Amazon, instantly bringing up a plethora of books with titles such as 2042: The Great Cataclysm, or Welcome to the Greenhouse. Twitter has been abuzz.
Whereas 10 or 20 years ago it would have been difficult to identify even a handful of books that fell under this banner, there is now a growing corpus of novels setting out to warn readers of possible environmental nightmares to come. Barbara Kingsolver’s Flight Behaviour, the story of a forest valley filled with an apparent lake of fire, is shortlisted for the 2013 Women’s prize for fiction. Meanwhile, there’s Nathaniel Rich’s Odds Against Tomorrow, set in a future New York, about a mathematician who deals in worst-case scenarios. In Liz Jensen’s 2009 eco-thriller The Rapture, summer temperatures are asphyxiating and Armageddon is near; her most recent book, The Uninvited, features uncanny warnings from a desperate future. Perhaps the most high-profile cli-fi author is Margaret Atwood, whose 2009 The Year of the Flood features survivors of a biological catastrophe also central to her 2003 novel Oryx and Crake, a book Atwood sometimes preferred to call “speculative fiction”.
Engaging with this subject in fiction increases debate about the issue; finely constructed, intricate narratives help us broaden our understanding and explore imagined futures, encouraging us to think about the kind of world we want to live in. This can often seem difficult in our 24-hour news-on-loop society where the consequences of climate change may appear to be everywhere, but intelligent discussion of it often seems to be nowhere. Also, as the crime genre can provide the dirty thrill of, say, reading about a gruesome fictional murder set on a street the reader recognises, the best cli-fi novels allow us to be briefly but intensely frightened: climate chaos is closer, more immediate, hovering over our shoulder like that murderer wielding his knife. Outside of the narrative of a novel the issue can seem fractured, incoherent, even distant. As Gregory Norminton puts it in his introduction to an anthology on the subject, Beacons: Stories for Our Not-So-Distant Future: “Global warming is a predicament, not a story. Narrative only comes in our response to that predicament.” Which is as good an argument as any for engaging with those stories.
All terms are reductive, all labels simplistic – clearly, the likes of Kingsolver, Jensen and Atwood have a much broader canvas than this one issue. And there’s an argument for saying this is simply rebranding: sci-fi writers have been engaging with the climate-change debate for longer than literary novelists – Snow by Adam Roberts comes to mind – and I do wonder whether this is a term designed for squeamish writers and critics who dislike the box labelled “science fiction”. So the term is certainly imperfect, but it’s also valuable. Unlike sci-fi, cli-fi writing comes primarily from a place of warning rather than discovery. There are no spaceships hovering in the sky; no clocks striking 13. On the contrary, many of the horrors described seem oddly familiar.
Read the entire article after the jump.
Image: Aftermath of Superstorm Sandy. Courtesy of the Independent.
- Self-Assured Destruction (SAD)>
The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.
Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation and unceasing human stupidity, we have reached an era that we have dubbed SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, as a sign of our so-called progress — in the era of SAD — it takes only one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.
What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.
For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.
And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.
The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.
There have been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and first nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.
In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”
Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.
Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.
So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.
Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.
And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.
So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.
The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.
We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.
What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.
So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.
Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.
And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.
Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.
Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.
Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.
At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.
Read the entire article here.
Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.
- Living Long and Prospering on Ikaria>
It’s safe to suggest that most of us above a certain age — let’s say 30 — wish to stay young. It is also safe to suggest, in the absence of a solution to that first wish, that many of us wish to age gracefully and happily. Yet most of us, especially in the West, age in a less dignified manner, amid colorful medicines, lengthy tubes, and unpronounceable procedures. We are collectively living longer, but the quality of those extra years leaves much to be desired.
In a quest to understand the process of aging more thoroughly, researchers regularly descend on areas the world over that are known to have higher-than-average populations of healthy older people. These have become known as “Blue Zones”. One such place is a small, idyllic (there’s a clue right there) Greek island called Ikaria.
From the Guardian:
Gregoris Tsahas has smoked a packet of cigarettes every day for 70 years. High up in the hills of Ikaria, in his favourite cafe, he draws on what must be around his half-millionth fag. I tell him smoking is bad for the health and he gives me an indulgent smile, which suggests he’s heard the line before. He’s 100 years old and, aside from appendicitis, has never known a day of illness in his life.
Tsahas has short-cropped white hair, a robustly handsome face and a bone-crushing handshake. He says he drinks two glasses of red wine a day, but on closer interrogation he concedes that, like many other drinkers, he has underestimated his consumption by a couple of glasses.
The secret of a good marriage, he says, is never to return drunk to your wife. He’s been married for 60 years. “I’d like another wife,” he says. “Ideally one about 55.”
Tsahas is known at the cafe as a bit of a gossip and a joker. He goes there twice a day. It’s a 1km walk from his house over uneven, sloping terrain. That’s four hilly kilometres a day. Not many people half his age manage that far in Britain.
In Ikaria, a Greek island in the far east of the Mediterranean, about 30 miles from the Turkish coast, characters such as Gregoris Tsahas are not exceptional. With its beautiful coves, rocky cliffs, steep valleys and broken canopy of scrub and olive groves, Ikaria looks similar to any number of other Greek islands. But there is one vital difference: people here live much longer than the population on other islands and on the mainland. In fact, people here live on average 10 years longer than those in the rest of Europe and America – around one in three Ikarians lives into their 90s. Not only that, but they also have much lower rates of cancer and heart disease, suffer significantly less depression and dementia, maintain a sex life into old age and remain physically active deep into their 90s. What is the secret of Ikaria? What do its inhabitants know that the rest of us don’t?
The island is named after Icarus, the young man in Greek mythology who flew too close to the sun and plunged into the sea, according to legend, close to Ikaria. Thoughts of plunging into the sea are very much in my mind as the propeller plane from Athens comes in to land. There is a fierce wind blowing – the island is renowned for its wind – and the aircraft appears to stall as it turns to make its final descent, tipping this way and that until, at the last moment, the pilot takes off upwards and returns to Athens. Nor are there any ferries, owing to a strike. “They’re always on strike,” an Athenian back at the airport tells me.
Stranded in Athens for the night, I discover that a fellow thwarted passenger is Dan Buettner, author of a book called The Blue Zones, which details the five small areas in the world where the population outlive the American and western European average by around a decade: Okinawa in Japan, Sardinia, the Nicoya peninsula in Costa Rica, Loma Linda in California and Ikaria.
Tall and athletic, 52-year-old Buettner, who used to be a long-distance cyclist, looks a picture of well-preserved youth. He is a fellow with National Geographic magazine and became interested in longevity while researching Okinawa’s aged population. He tells me there are several other passengers on the plane who are interested in Ikaria’s exceptional demographics. “It would have been ironic, don’t you think,” he notes drily, “if a group of people looking for the secret of longevity crashed into the sea and died.”
Chatting to locals on the plane the following day, I learn that several have relations who are centenarians. One woman says her aunt is 111. The problem for demographers with such claims is that they are often very difficult to stand up. Going back to Methuselah, history is studded with exaggerations of age. In the last century, longevity became yet another battleground in the cold war. The Soviet authorities let it be known that people in the Caucasus were living deep into their hundreds. But subsequent studies have shown these claims lacked evidential foundation.
Since then, various societies and populations have reported advanced ageing, but few are able to supply convincing proof. “I don’t believe Korea or China,” Buettner says. “I don’t believe the Hunza Valley in Pakistan. None of those places has good birth certificates.”
However, Ikaria does. It has also been the subject of a number of scientific studies. Aside from the demographic surveys that Buettner helped organise, there was also the University of Athens’ Ikaria Study. One of its members, Dr Christina Chrysohoou, a cardiologist at the university’s medical school, found that the Ikarian diet featured a lot of beans and not much meat or refined sugar. The locals also feast on locally grown and wild greens, some of which contain 10 times more antioxidants than are found in red wine, as well as potatoes and goat’s milk.
Chrysohoou thinks the food is distinct from that eaten on other Greek islands with lower life expectancy. “Ikarians’ diet may have some differences from other islands’ diets,” she says. “The Ikarians drink a lot of herb tea and small quantities of coffee; daily calorie consumption is not high. Ikaria is still an isolated island, without tourists, which means that, especially in the villages in the north, where the highest longevity rates have been recorded, life is largely unaffected by the westernised way of living.”
But she also refers to research that suggests the Ikarian habit of taking afternoon naps may help extend life. One extensive study of Greek adults showed that regular napping reduced the risk of heart disease by almost 40%. What’s more, Chrysohoou’s preliminary studies revealed that 80% of Ikarian males between the ages of 65 and 100 were still having sex. And, of those, a quarter did so with “good duration” and “achievement”. “We found that most males between 65 and 88 reported sexual activity, but after the age of 90, very few continued to have sex.”
Read the entire article here.
Image: Agios Giorgis Beach, Ikaria. Courtesy of Island-Ikaria travel guide.
- MondayMap: The Double Edge of Climate Change>
So the changing global climate will imperil our coasts, flood low-lying lands, fuel more droughts, increase weather extremes, and generally make the planet more toasty. But, a new study — for the first time — links increasing levels of CO2 to an increase in global vegetation. Perhaps this portends our eventual fate — ceding the Earth back to the plants — unless humans make some drastic behavioral changes.
From the New Scientist:
The planet is getting lusher, and we are responsible. Carbon dioxide generated by human activity is stimulating photosynthesis and causing a beneficial greening of the Earth’s surface.
For the first time, researchers claim to have shown that the increase in plant cover is due to this “CO2 fertilisation effect” rather than other causes. However, it remains unclear whether the effect can counter any negative consequences of global warming, such as the spread of deserts.
Recent satellite studies have shown that the planet is harbouring more vegetation overall, but pinning down the cause has been difficult. Factors such as higher temperatures, extra rainfall, and an increase in atmospheric CO2 – which helps plants use water more efficiently – could all be boosting vegetation.
To home in on the effect of CO2, Randall Donohue of Australia’s national research institute, the CSIRO in Canberra, monitored vegetation at the edges of deserts in Australia, southern Africa, the US Southwest, North Africa, the Middle East and central Asia. These are regions where there is ample warmth and sunlight, but only just enough rainfall for vegetation to grow, so any change in plant cover must be the result of a change in rainfall patterns or CO2 levels, or both.
If CO2 levels were constant, then the amount of vegetation per unit of rainfall ought to be constant, too. However, the team found that this figure rose by 11 per cent in these areas between 1982 and 2010, mirroring the rise in CO2 (Geophysical Research Letters, doi.org/mqx). Donohue says this lends “strong support” to the idea that CO2 fertilisation drove the greening.
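The metric described above is simple arithmetic: vegetation cover divided by rainfall, compared across decades. As a minimal sketch (the cover and rainfall figures below are invented for illustration; the actual study used satellite vegetation indices for 1982–2010), the reasoning looks like this:

```python
# Illustrative sketch of the "vegetation per unit rainfall" metric
# described above. All numbers are hypothetical, chosen only to
# reproduce an ~11% rise like the one the study reports.

def greenness_per_rainfall(vegetation_cover, rainfall_mm):
    """Vegetation cover (as a fraction) per millimetre of rainfall."""
    return vegetation_cover / rainfall_mm

# Hypothetical desert-margin observations, decades apart,
# with rainfall deliberately held constant.
v_1982 = greenness_per_rainfall(vegetation_cover=0.180, rainfall_mm=250.0)
v_2010 = greenness_per_rainfall(vegetation_cover=0.200, rainfall_mm=250.0)

# If rainfall is unchanged but the ratio rises, some other driver
# must be at work -- in the study's argument, CO2 fertilisation.
percent_change = 100.0 * (v_2010 - v_1982) / v_1982
print(f"change in greenness per unit rainfall: {percent_change:.1f}%")
```

The point of normalising by rainfall is to rule out the most obvious alternative explanation (wetter years) before attributing the greening to CO2.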
Climate change studies have predicted that many dry areas will get drier and that some deserts will expand. Donohue’s findings make this less certain.
However, the greening effect may not apply to the world’s driest regions. Beth Newingham of the University of Idaho, Moscow, recently published the result of a 10-year experiment involving a greenhouse set up in the Mojave desert of Nevada. She found “no sustained increase in biomass” when extra CO2 was pumped into the greenhouse. “You cannot assume that all these deserts respond the same,” she says. “Enough water needs to be present for the plants to respond at all.”
The extra plant growth could have knock-on effects on climate, Donohue says, by increasing rainfall, affecting river flows and changing the likelihood of wildfires. It will also absorb more CO2 from the air, potentially damping down global warming but also limiting the CO2 fertilisation effect itself.
Read the entire article here.
Image: Global vegetation mapped: Normalized Difference Vegetation Index (NDVI) from Nov. 1, 2007, to Dec. 1, 2007, during autumn in the Northern Hemisphere. This monthly average is based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The greenness values depict vegetation density; higher values (dark greens) show land areas with plenty of leafy green vegetation, such as the Amazon Rainforest. Lower values (beige to white) show areas with little or no vegetation, including sand seas and Arctic areas. Areas with moderate amounts of vegetation are pale green. Land areas with no data appear gray, and water appears blue. Courtesy of NASA.
- Your Home As Eco-System>
For centuries biologists, zoologists and ecologists have been mapping the wildlife that surrounds us in the great outdoors. Now a group led by microbiologist Noah Fierer at the University of Colorado Boulder is pursuing flora and fauna in one of the last unexplored eco-systems — the home. (Not for the faint of heart).
From the New York Times:
On a sunny Wednesday, with a faint haze hanging over the Rockies, Noah Fierer eyed the field site from the back of his colleague’s Ford Explorer. Two blocks east of a strip mall in Longmont, one of the world’s last underexplored ecosystems had come into view: a sandstone-colored ranch house, code-named Q. A pair of dogs barked in the backyard.
Dr. Fierer, 39, a microbiologist at the University of Colorado Boulder and self-described “natural historian of cooties,” walked across the front lawn and into the house, joining a team of researchers inside. One swabbed surfaces with sterile cotton swabs. Others logged the findings from two humming air samplers: clothing fibers, dog hair, skin flakes, particulate matter and microbial life.
Ecologists like Dr. Fierer have begun peering into an intimate, overlooked world that barely existed 100,000 years ago: the great indoors. They want to know what lives in our homes with us and how we “colonize” spaces with other species — viruses, bacteria, microbes. Homes, they’ve found, contain identifiable ecological signatures of their human inhabitants. Even dogs exert a significant influence on the tiny life-forms living on our pillows and television screens. Once ecologists have more thoroughly identified indoor species, they hope to come up with strategies to scientifically manage homes, by eliminating harmful taxa and fostering species beneficial to our health.
But the first step is simply to take a census of what’s already living with us, said Dr. Fierer; only then can scientists start making sense of their effects. “We need to know what’s out there first. If you don’t know that, you’re wandering blind in the wilderness.”
Here’s an undeniable fact: We are an indoor species. We spend close to 90 percent of our lives in drywalled caves. Yet traditionally, ecologists ventured outdoors to observe nature’s biodiversity, in the Amazon jungles, the hot springs of Yellowstone or the subglacial lakes of Antarctica. (“When you train as an ecologist, you imagine yourself tromping around in the forest,” Dr. Fierer said. “You don’t imagine yourself swabbing a toilet seat.”)
But as humdrum as a home might first appear, it is a veritable wonderland. Ecology does not stop at the front door; a home to you is also home to an incredible array of wildlife.
Besides the charismatic fauna commonly observed in North American homes — dogs, cats, the occasional freshwater fish — ants and roaches, crickets and carpet bugs, mites and millions upon millions of microbes, including hundreds of multicellular species and thousands of unicellular species, also thrive in them. The “built environment” doubles as a complex ecosystem that evolves under the selective pressure of its inhabitants, their behavior and the building materials. As microbial ecologists swab DNA from our homes, they’re creating an atlas of life much as 19th-century naturalists like Alfred Russel Wallace once logged flora and fauna on the Malay Archipelago.
Take an average kitchen. In a study published in February in the journal Environmental Microbiology, Dr. Fierer’s lab examined 82 surfaces in four Boulder kitchens. Predictable patterns emerged. Bacterial species associated with human skin, like Staphylococcaceae or Corynebacteriaceae, predominated. Evidence of soil showed up on the floor, and species associated with raw produce (Enterobacteriaceae, for example) appeared on countertops. Microbes common in moist areas — including sphingomonads, some strains infamous for their ability to survive in the most toxic sites — splashed in a kind of jungle above the faucet.
A hot spot of unrivaled biodiversity was discovered on the stove exhaust vent, probably the result of forced air and settling. The counter and refrigerator, places seemingly as disparate as temperate and alpine grasslands, shared a similar assemblage of microbial species — probably less because of temperature and more a consequence of cleaning. Dr. Fierer’s lab also found a few potential pathogens, like Campylobacter, lurking on the cupboards. There was evidence of the bacterium on a microwave panel, too, presumably a microbial “fingerprint” left by a cook handling raw chicken.
If a kitchen represents a temperate forest, few of its plants would be poison ivy. Most of the inhabitants are relatively benign. In any event, eradicating them is neither possible nor desirable. Dr. Fierer wants to make visible this intrinsic, if unseen, aspect of everyday life. “For a lot of the general public, they don’t care what’s in soil,” he said. “People care more about what’s on their pillowcase.” (Spoiler alert: The microbes living on your pillowcase are not all that different from those living on your toilet seat. Both surfaces come in regular contact with exposed skin.)
Read the entire article after the jump.
Image: Animals commonly found in the home. Courtesy of North Carolina State University.
- Your State Bird>
The official national bird of the United States is the Bald Eagle. For that matter, it’s also the official national animal. Thankfully, it was removed from the endangered species list a mere five years ago. Aside from the bird itself, Americans love the symbolism that the eagle implies — strength, speed, leadership and achievement. But do Americans know their state bird? A recent article from the bird lovers over at Slate will refresh your memory, and also recommend a more relevant alternative.
I drove over a bridge from Maryland into Virginia today and on the big “Welcome to Virginia” sign was an image of the state bird, the northern cardinal—with a yellow bill. I should have scoffed, but it hardly registered. Everyone knows that state birds are a big joke. There are a million cardinals, a scattering of robins, and just a general lack of thought put into the whole thing.
States should have to put more thought into their state bird than I put into picking my socks in the morning. “Ugh, state bird? I dunno, what’re the guys next to us doing? Cardinal? OK, let’s do that too. Yeah put it on all the signs. Nah, no time to research the bill color, let’s just go.” It’s the official state bird! Well, since all these jackanape states are too busy passing laws requiring everyone to own guns or whatever to consider what their state bird should be, I guess I’ll have to do it.
1. Alabama. Official state bird: yellowhammer
Right out of the gate with this thing. Yellowhammer? C’mon. I Asked Jeeves and it told me that Yellowhammer is some backwoods name for a yellow-shafted flicker. The origin story dates to the Civil War, when some Alabama troops wore yellow-trimmed uniforms. Sorry, but that’s dumb, mostly because it’s just a coincidence and has nothing to do with the actual bird. If you want a woodpecker, go for something with a little more cachet, something that’s at least a full species.
What it should be: red-cockaded woodpecker
2. Alaska. Official state bird: willow ptarmigan
Willow Ptarmigans are the dumbest-sounding birds on Earth, sorry. They sound like rejected Star Wars aliens, angrily standing outside the Mos Eisley Cantina because their IDs were rejected. Why go with these dopes, Alaska, when you’re the best state to see the most awesome falcon on Earth?
What it should be: gyrfalcon
3. Arizona. Official state bird: cactus wren
Cactus Wren is like the only boring bird in the entire state. I can’t believe it.
What it should be: red-faced warbler
4. Arkansas. Official state bird: northern mockingbird
Christ. What makes this even less funny is that there are like eight other states with mockingbird as their official bird. I’m convinced that the guy whose job it was to report to the state’s legislature on what the official bird should be forgot until the day it was due and he was in line for a breakfast sandwich at Burger King. In a panic he walked outside and selected the first bird he could find, a dirty mockingbird singing its stupid head off on top of a dumpster.
What it should be: painted bunting
5. California. Official state bird: California quail
… Or perhaps the largest, most radical bird on the continent?
What it should be: California condor
6. Colorado. Official state bird: lark bunting
Read the entire article here.
Image: Bald Eagle, Kodiak Alaska, 2010. Courtesy of Yathin S Krishnappa / Wikipedia.
- MondayMap: Global Intolerance>
Following on from last week’s MondayMap post on intolerance and hatred within the United States — according to tweets on the social media site Twitter — we expand our view this week to cover the globe. This map is based on a more detailed, global research study of people’s attitudes toward having neighbors of a different race.
From the Washington Post:
When two Swedish economists set out to examine whether economic freedom made people any more or less racist, they knew how they would gauge economic freedom, but they needed to find a way to measure a country’s level of racial tolerance. So they turned to something called the World Values Survey, which has been measuring global attitudes and opinions for decades.
Among the dozens of questions that World Values asks, the Swedish economists found one that, they believe, could be a pretty good indicator of tolerance for other races. The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society. (The study concluded that economic freedom had no correlation with racial tolerance, but it does appear to correlate with tolerance toward homosexuals.)
Unfortunately, the Swedish economists did not include all of the World Values Survey data in their final research paper. So I went back to the source, compiled the original data and mapped it out on the infographic above. In the bluer countries, fewer people said they would not want neighbors of a different race; in red countries, more people did.
If we treat this data as indicative of racial tolerance, then we might conclude that people in the bluer countries are the least likely to express racist attitudes, while the people in red countries are the most likely.
Update: Compare the results to this map of the world’s most and least diverse countries.
Before we dive into the data, a couple of caveats. First, it’s entirely likely that some people lied when answering this question; it would be surprising if they hadn’t. But the operative question, unanswerable, is whether people in certain countries were more or less likely to answer the question honestly. For example, while the data suggest that Swedes are more racially tolerant than Finns, it’s possible that the two groups are equally tolerant but that Finns are just more honest. The willingness to state such a preference out loud, though, might be an indicator of racial attitudes in itself. Second, the survey is not conducted every year; some of the results are very recent and some are several years old, so we’re assuming the results are static, which might not be the case.
• Anglo and Latin countries most tolerant. People in the survey were most likely to embrace a racially diverse neighbor in the United Kingdom and its Anglo former colonies (the United States, Canada, Australia and New Zealand) and in Latin America. The only real exceptions were oil-rich Venezuela, where income inequality sometimes breaks along racial lines, and the Dominican Republic, perhaps because of its adjacency to troubled Haiti. Scandinavian countries also scored high.
• India, Jordan, Bangladesh and Hong Kong by far the least tolerant. In only four of the 81 surveyed countries did more than 40 percent of respondents say they would not want a neighbor of a different race: 43.5 percent of Indians, 51.4 percent of Jordanians, and an astonishingly high 71.7 percent of Bangladeshis and 71.8 percent of Hong Kongers.
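The per-country aggregation described above is simple to reproduce; here is a minimal sketch, using made-up respondents rather than actual World Values Survey records:

```python
# Sketch of the aggregation step: the share of respondents in each country
# who picked "people of a different race" as unwanted neighbors.
# The sample data below is hypothetical, for illustration only.
from collections import defaultdict

def intolerance_share(responses):
    """responses: iterable of (country, picked_different_race) pairs."""
    totals = defaultdict(int)
    picks = defaultdict(int)
    for country, picked in responses:
        totals[country] += 1
        if picked:
            picks[country] += 1
    return {c: picks[c] / totals[c] for c in totals}

sample = [("A", True), ("A", False), ("A", False),
          ("B", True), ("B", True), ("B", False), ("B", True)]
shares = intolerance_share(sample)  # {"A": 0.333..., "B": 0.75}
# Higher shares map to red on the infographic, lower shares to blue.
```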
Read more about this map here.
- 1920s London in Moving Color>
A recently unearthed celluloid (yes, celluloid) film of London in 1927 shows the capital in bustling, colorful splendor. The film was shot by Claude Friese-Greene, a pioneer of colour film in the UK.
Film courtesy of the Claude Friese-Greene archives / Telegraph.
- MondayMap: Intolerance and Hatred>
A fascinating map of tweets espousing hatred and racism across the United States. The data analysis and map were developed by researchers at Humboldt State University.
From the Guardian:
[T]he students and professors at Humboldt State University who produced this map read the entirety of the 150,000 geo-coded tweets they analysed.
Using humans rather than machines means that this research was able to avoid the basic pitfall of most semantic analysis, where a tweet stating ‘the word homo is unacceptable’ would still be classed as hate speech. The data has also been ‘normalised’, meaning that the scale accounts for the total Twitter traffic in each county, so that the final result shows the frequency of hateful words on Twitter. The only question that remains is whether the views of US Twitter users are a reliable indication of the views of US citizens.
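The ‘normalisation’ the Guardian describes amounts to dividing each county’s hand-classified hateful tweet count by that county’s total tweet volume. A minimal sketch, with invented county figures:

```python
# Sketch of the normalisation step: hateful tweets per county divided by
# that county's total tweet traffic, so dense urban counties aren't
# over-weighted. The counts below are invented for illustration.
def hate_rate(hateful_counts, total_counts):
    """Both arguments map county -> tweet count; returns county -> frequency."""
    return {county: hateful_counts.get(county, 0) / total
            for county, total in total_counts.items() if total > 0}

rates = hate_rate({"Urban County": 30, "Rural County": 5},
                  {"Urban County": 150_000, "Rural County": 2_000})
# The rural county has fewer hateful tweets in absolute terms but a far
# higher *frequency* -- and frequency is what the map actually shades.
```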
See the interactive map and read the entire article here.
- More CO2 is Good, Right?>
Yesterday, May 10, 2013, scientists published new measurements of atmospheric carbon dioxide (CO2). For the first time in human history, CO2 levels reached an average of 400 parts per million (ppm). This is particularly troubling since CO2 has long been known as a potent and long-lived heat-trapping component of the atmosphere. The sobering milestone was recorded at the Mauna Loa Observatory in Hawaii — monitoring has been underway at the site since the late 1950s.
This has many climate scientists redoubling their efforts to warn of the consequences of climate change, which is believed to be driven by human activity, specifically the generation of atmospheric CO2 in ever increasing quantities. But not to be outdone, the venerable Wall Street Journal — seldom known for its well-reasoned scientific journalism — chimed in with an op-ed on the subject. According to the WSJ we have nothing to worry about, because increased levels of CO2 are good for certain crops and because the Earth historically had much higher levels of CO2 (though long before humans existed).
Ashutosh Jogalekar over at The Curious Wavefunction dissects the WSJ article line by line:
Since we were discussing the differences between climate change “skeptics” and “deniers” (or “denialists”, whatever you want to call them) the other day this piece is timely. The Wall Street Journal is not exactly known for reasoned discussion of climate change, but this Op-Ed piece may set a new standard even for its own naysayers and skeptics. It’s a piece by William Happer and Harrison Schmitt that’s so one-sided, sparse on detail, misleading and ultimately pointless that I am wondering if it’s a spoof.
Happer and Schmitt’s thesis can be summed up in one line: More CO2 in the atmosphere is a good thing because it’s good for one particular type of crop plant. That’s basically it. No discussion of the downsides, not even a pretense of a balanced perspective. Unfortunately it’s not hard to classify their piece as a denialist article because it conforms to some of the classic features of denial; it’s entirely one sided, it’s very short on detail, it does a poor job even with the little details that it does present and it simply ignores the massive amount of research done on the topic. In short it’s grossly misleading.
First of all Happer and Schmitt simply dismiss any connection that might exist between CO2 levels and rising temperatures, in the process consigning a fair amount of basic physics and chemistry to the dustbin. There are no references and no actual discussion of why they don’t believe there’s a connection. That’s a shoddy start to put it mildly; you would expect a legitimate skeptic to start with some actual evidence and references. Most of the article after that consists of a discussion of the differences between so-called C3 plants (like rice) and C4 plants (like corn and sugarcane). This is standard stuff found in college biochemistry textbooks, nothing revealing here. But Happer and Schmitt leverage a fundamental difference between the two – the fact that C4 plants can utilize CO2 more efficiently than C3 plants under certain conditions – into an argument for increasing CO2 levels in the atmosphere.
This of course completely ignores all the other potentially catastrophic effects that CO2 could have on agriculture, climate, biodiversity etc. You don’t even have to be a big believer in climate change to realize that focusing on only a single effect of a parameter on a complicated system is just bad science. Happer and Schmitt’s argument is akin to the argument that everyone should get themselves addicted to meth because one of meth’s effects is euphoria. So ramping up meth consumption will make everyone feel happier, right?
But even if you consider that extremely narrowly defined effect of CO2 on C3 and C4 plants, there’s still a problem. What’s interesting is that the argument has been countered by Matt Ridley in the pages of this very publication:
But it is not quite that simple. Surprisingly, the C4 strategy first became common in the repeated ice ages that began about four million years ago. This was because the ice ages were a very dry time in the tropics and carbon-dioxide levels were very low—about half today’s levels. C4 plants are better at scavenging carbon dioxide (the source of carbon for sugars) from the air and waste much less water doing so. In each glacial cold spell, forests gave way to seasonal grasslands on a huge scale. Only about 4% of plant species use C4, but nearly half of all grasses do, and grasses are among the newest kids on the ecological block.
So whereas rising temperatures benefit C4, rising carbon-dioxide levels do not. In fact, C3 plants get a greater boost from high carbon dioxide levels than C4. Nearly 500 separate experiments confirm that if carbon-dioxide levels roughly double from preindustrial levels, rice and wheat yields will be on average 36% and 33% higher, while corn yields will increase by only 24%.
So no, the situation is more subtle than the authors think. In fact I am surprised that, given that C4 plants actually do grow better at higher temperatures, Happer and Schmitt missed an opportunity for making the case for a warmer planet. In any case, there’s a big difference between improving yields of C4 plants under controlled greenhouse conditions and expecting these yields to improve without affecting other components of the ecosystem by doing a giant planetary experiment.
Read the entire article after the jump.
Image courtesy of Sierra Club.
- Your Weekly Groceries>
Photographer Peter Menzel traveled to over 20 countries to compile his culinary atlas Hungry Planet. But this is no ordinary cookbook or trove of local delicacies. The book is a visual catalog of a family’s average weekly grocery shopping.
It is both enlightening and sobering to see the nutritional inventory of a Western family juxtaposed with that of a sub-Saharan African family. It puts into perspective the internal debate within the United States of the 1 percent versus the 99 percent. Those of us lucky enough to have been born in one of the world’s richer nations, even though we may be part of the 99 percent, are still truly in the group of haves rather than the have-nots.
For more on Menzel’s book jump over to Amazon.
The Melander family from Bargteheide, Germany, who spend around £320 [$480] on a week’s worth of food.
Images courtesy of Peter Menzel / Barcroft Media.
- Anti-Eco-Friendly Consumption>
It should come as no surprise that those who deny the science of climate change and humanity’s impact on the environment would also shy away from purchasing products and services that are friendly to the environment.
A recent study shows how political persuasion sways light bulb purchases: conservatives are more likely to choose incandescent bulbs, while moderates and liberals lean towards more eco-friendly bulbs.
Joe Barton, U.S. Representative from Texas, sums up the issue of light bulb choice quite neatly, “… it is about personal freedom”. All the while our children shake their heads in disbelief.
Presumably many climate change skeptics prefer to purchase items that are harmful to the environment, and even to themselves, just to make a political statement. This might include continuing to buy products containing questionable chemicals hiding behind unpronounceable acronyms: rBGH (recombinant Bovine Growth Hormone) in milk, BPA (Bisphenol A) in plastic utensils and bottles, KBrO3 (Potassium Bromate) in highly processed flour, BHA (Butylated Hydroxyanisole) as a food preservative, and Azodicarbonamide in dough.
Freedom truly does come at a cost.
From the Guardian:
Eco-friendly labels on energy-saving bulbs are a turn-off for conservative shoppers, a new study has found.
The findings, published this week in the Proceedings of the National Academy of Sciences, suggest that it could be counterproductive to advertise the environmental benefits of efficient bulbs in the US. This could make it even more difficult for America to adopt energy-saving technologies as a solution to climate change.
Consumers took their ideological beliefs with them when they went shopping, and conservatives switched off when they saw labels reading “protect the environment”, the researchers said.
The study looked at the choices of 210 consumers, about two-thirds of them women. All were briefed on the benefits of compact fluorescent (CFL) bulbs over old-fashioned incandescents.
When both bulbs were priced the same, shoppers across the political spectrum were uniformly inclined to choose CFL bulbs over incandescents, even those with environmental labels, the study found.
But when the fluorescent bulb cost more – $1.50 instead of $0.50 for an incandescent – the conservatives who reached for the CFL bulb chose the one without the eco-friendly label.
“The more moderate and conservative participants preferred to bear a long-term financial cost to avoid purchasing an item associated with valuing environmental protections,” the study said.
The findings suggest the extreme political polarisation over environment and climate change had now expanded to energy-saving devices — which were once supported by right and left because of their money-saving potential.
“The research demonstrates how promoting the environment can negatively affect adoption of energy efficiency in the United States because of the political polarisation surrounding environmental issues,” the researchers said.
Earlier this year Harvard academic Theda Skocpol produced a paper tracking how climate change and the environment became a defining issue for conservatives, and for Republican-elected officials.
Conservative activists elevated opposition to the science behind climate change, and to action on climate change, to core beliefs, Skocpol wrote.
There was even a special place for incandescent bulbs. Republicans in Congress two years ago fought hard to repeal a law phasing out incandescent bulbs – even over the objections of manufacturers who had already switched their product lines to the new energy-saving technology.
Republicans at the time cast the battle of the bulb as an issue of liberty. “This is about more than just energy consumption. It is about personal freedom,” said Joe Barton, the Texas Republican behind the effort to keep the outdated bulbs burning.
Read the entire article following the jump.
Image courtesy of Housecraft.
- Science and Art of the Brain>
Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.
From the New York Times:
This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.
Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.
This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.
Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.
The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.
As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.
Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.
The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.
This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.
Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.
Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?
Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”
Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.
This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.
Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.
Read the entire article following the jump.
- Cheap Hydrogen>
Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with the methane in natural gas. A small fraction — less than five percent of annual production — is also made through electrolysis, passing an electric current through water.
This new method of production appears to be less costly, less dangerous and also more environmentally sound.
From the Independent:
Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.
The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.
Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element on the planet.
Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.
In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.
Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.
Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”
Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.
But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.
Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.
Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”
Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”
Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”
Read the entire article following the jump.
- Dark Lightning>
It’s fascinating how a seemingly well-understood phenomenon, such as lightning, can still yield enormous surprises. Researchers have found that visible flashes of lightning can be accompanied by invisible, and more harmful, radiation such as X-rays and gamma rays.
From the Washington Post:
A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.
But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.
Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.
Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.
The only way to determine whether an airplane had been struck by dark lightning, Dwyer says, “would be to use a radiation detector. Right in the middle of [a flash], a very brief bluish-purple glow around the plane might be perceptible. Inside an aircraft, a passenger would probably not be able to feel or hear much of anything, but the radiation dose could be significant.”
However, because there’s only about one dark lightning occurrence for every thousand visible flashes and because pilots take great pains to avoid thunderstorms, Dwyer says, the risk of injury is quite limited. No one knows for sure if anyone has ever been hit by dark lightning.
About 25 million visible thunderbolts hit the United States every year, killing about 30 people and many farm animals, says John Jensenius, a lightning safety specialist with the National Weather Service in Gray, Maine. Worldwide, thunderstorms produce about a billion or so lightning bolts annually.
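Taking the article’s figures at face value — roughly 25 million visible bolts per year over the US, and Dwyer’s rough one-in-a-thousand ratio of dark to visible lightning — a back-of-envelope estimate follows:

```python
# Back-of-envelope arithmetic using the figures quoted above; the 1-in-1000
# dark-to-visible ratio is an approximation, not a measured constant.
US_VISIBLE_BOLTS_PER_YEAR = 25_000_000
DARK_PER_VISIBLE = 1 / 1_000

dark_events_per_year = US_VISIBLE_BOLTS_PER_YEAR * DARK_PER_VISIBLE
# About 25,000 dark lightning events per year over the US alone -- rare
# relative to visible lightning, but not negligible in absolute terms.
```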
Read the entire article after the jump.
Image: Lightning in Foshan, China. Courtesy of Telegraph.
- Farmscrapers in China>
No, the drawing is not a construction from the mind of sci-fi illustrator extraordinaire Michael Whelan. This is reality. Or, to be more precise, an architectural rendering of buildings to come — in China, of course.
From the Independent:
A French architecture firm has unveiled their new ambitious ‘farmscraper’ project – six towering structures which promise to change the way that we think about green living.
Vincent Callebaut Architects’ innovative Asian Cairns was planned specifically for Chinese city Shenzhen in response to the growing population, increasing CO2 emissions and urban development.
The structures will consist of a series of pebble-shaped levels – each connected by a central spinal column – which will contain residential areas, offices, and leisure spaces.
Sustainability is key to the innovative project – wind turbines will cover the roof of each tower, water recycling systems will be in place to recycle waste water, and solar panels will be installed on the buildings, providing renewable energy. The structures will also have gardens on the exterior, further adding to the project’s green credentials.
Vincent Callebaut, the Belgian architect behind the firm, is well-known for his ambitious, eco-friendly projects, winning many awards over the years.
His self-sufficient amphibious city Lilypad – ‘a floating ecopolis for climate refugees’ – is perhaps his most famous design. The model has been proposed as a long-term solution to rising water levels, and successfully meets the four challenges of climate, biodiversity, water, and health, that the OECD laid out in 2008.
Vincent Callebaut Architects said: “It is a prototype to build a green, dense, smart city connected by technology and eco-designed from biotechnologies.”
Read the entire article and see more illustrations after the jump.
Image: “Farmscrapers” take eco-friendly architecture to dizzying heights in China. Courtesy of Vincent Callebaut Architects / Independent.
- The Richest Person in the Solar System>
Forget Warren Buffett, Bill Gates and Carlos Slim, or the Russian oligarchs and the emirs of the Persian Gulf. These guys are merely multi-billionaires. Their fortunes — combined — account for less than half of 1 percent of the net worth of Dennis Hope, the world’s first trillionaire. In fact, you could describe Dennis as the solar system’s first trillionaire, with an estimated wealth of $100 trillion.
So, why have you never heard of Dennis Hope, trillionaire? Where does he invest his money? And, how did he amass this jaw-dropping uber-fortune? The answer to the first question is that he lives a relatively ordinary and quiet life in Nevada. The answer to the second question is: property. The answer to the third, and most fascinating question: well, he owns most of the Moon. He also owns the majority of the planets Mars, Venus and Mercury, and 90 or so other celestial plots. You too could become an interplanetary property investor for the starting and very modest sum of $19.99. Please write your check to… Dennis Hope.
The New York Times has a recent story and documentary on Mr. Hope, here.
From Discover:
Dennis Hope, self-proclaimed Head Cheese of the Lunar Embassy, will promise you the moon. Or at least a piece of it. Since 1980, Hope has raked in over $9 million selling acres of lunar real estate for $19.99 a pop. So far, 4.25 million people have purchased a piece of the moon, including celebrities like Barbara Walters, George Lucas, Ronald Reagan, and even the first President Bush. Hope says he exploited a loophole in the 1967 United Nations Outer Space Treaty, which prohibits nations from owning the moon.
Because the law says nothing about individual holders, he says, his claim—which he sent to the United Nations—has some clout. “It was unowned land,” he says. “For private property claims, 197 countries at one time or another had a basis by which private citizens could make claims on land and not make payment. There are no standardized rules.”
Hope is right that the rules are somewhat murky—both Japan and the United States have plans for moon colonies—and lunar property ownership might be a powder keg waiting to spark. But Ram Jakhu, law professor at the Institute of Air and Space Law at McGill University in Montreal, says that Hope’s claims aren’t likely to hold much weight. Nor, for that matter, would any nation’s. “I don’t see a loophole,” Jakhu says. “The moon is a common property of the international community, so individuals and states cannot own it. That’s very clear in the U.N. treaty. Individuals’ rights cannot prevail over the rights and obligations of a state.”
Jakhu, a director of the International Institute for Space Law, believes that entrepreneurs like Hope have misread the treaty and that the 1967 legislation came about to block property claims in outer space. Historically, “the ownership of private property has been a major cause of war,” he says. “No one owns the moon. No one can own any property in outer space.”
Hope refuses to be discouraged. And he’s focusing on expansion. “I own about 95 different planetary bodies,” he says. “The total amount of property I currently own is about 7 trillion acres. The value of that property is about $100 trillion. And that doesn’t even include mineral rights.”
Video courtesy of the New York Times.
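As a quick sanity check on Hope’s own quoted figures (simple arithmetic; the valuation is his, not an appraisal):

```python
# Implied per-acre value from Hope's quoted numbers above.
TOTAL_ACRES = 7_000_000_000_000          # "about 7 trillion acres"
CLAIMED_VALUE_USD = 100_000_000_000_000  # "about $100 trillion"

per_acre = CLAIMED_VALUE_USD / TOTAL_ACRES
# Roughly $14.29 per acre -- interestingly, below the $19.99 he charges
# per plot at retail.
```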
- MondayMap: New Jersey Under Water>
We love maps here at theDiagonal. So much so that we’ve begun a new feature: MondayMap. As the name suggests, we plan to feature fascinating new maps on Mondays. For our readers who prefer their plots served up on a Saturday, sorry. Usually we like to highlight maps that cause us to look at our world differently or provide a degree of welcome amusement, such as the wonderful trove of maps over at Strange Maps curated by Frank Jacobs.
However, this first MondayMap is a little different and serious. It’s an interactive map that shows the impact of estimated sea level rise on the streets of New Jersey. Obviously, such a tool would be a great boon for emergency services and urban planners. For the rest of us, whether we live in New Jersey or not, maps like this one — of extreme weather events and projections — are likely to become much more common over the coming decades. Kudos to researchers at Rutgers University for developing the NJ Flood Mapper.
From the Wall Street Journal:
While superstorm Sandy revealed the Northeast’s vulnerability, a new map by New Jersey scientists suggests how rising seas could make future storms even worse.
The map shows ocean waters surging more than a mile into communities along Raritan Bay, engulfing nearly all of New Jersey’s barrier islands and covering northern sections of the New Jersey Turnpike and land surrounding the Port Newark Container Terminal.
Such damage could occur under a scenario in which sea levels rise 6 feet—or a 3-foot rise in tandem with a powerful coastal storm, according to the map produced by Rutgers University researchers.
The satellite-based tool, one of the first comprehensive, state-specific maps of its kind, uses a Google-maps-style interface that allows viewers to zoom into street-level detail.
“We are not trying to unduly frighten people,” said Rick Lathrop, director of the Grant F. Walton Center for Remote Sensing and Spatial Analysis at Rutgers, who led the map’s development. “This is providing people a look at where our vulnerability is.”
Still, the implications of the Rutgers project unnerve residents of Surf City, on Long Beach Island, where the map shows water pouring over nearly all of the barrier island’s six municipalities with a 6-foot increase in sea levels.
“The water is going to come over the island and there will be no island,” said Barbara Epstein, a 73-year-old resident of nearby Barnegat Light, who added that she is considering moving after 12 years there. “The storms are worsening.”
To be sure, not everyone agrees that climate change will make sea-level rise more pronounced.
Politically, climate change remains an issue of debate. New York Gov. Andrew Cuomo has said Sandy showed the need to address the issue, while New Jersey Gov. Chris Christie has declined to comment on whether Sandy was linked to climate change.
Scientists have gone ahead and started to map sea-level-rise scenarios in New Jersey, New York City and flood-prone communities along the Gulf of Mexico to help guide local development and planning.
Sea levels rose by 1.3 feet near Atlantic City and 0.9 feet at Battery Park between 1911 and 2006, according to data from the National Oceanic and Atmospheric Administration.
A serious storm could add at least another 3 feet, with historic storm surges—Sandy-scale—registering at 9 feet. So when planning for future coastal flooding, 6 feet or higher isn’t far-fetched when combining sea-level rise with high tides and storm surges, Mr. Lathrop said.
NOAA estimated in December that increasing ocean temperatures could cause sea levels to rise by 1.6 feet in 100 years, and by 3.9 feet if considering some level of Arctic ice-sheet melt.
Such an increase amounts to 0.16 inches per year, but the eventual impact could mean that a small storm could “do the same damage that Sandy did,” said Peter Howd, co-author of a 2012 U.S. Geological Survey report that found the rate of sea level rise had increased in the northeast.
Image: NJ Flood Mapper. Courtesy of Grant F. Walton Center for Remote Sensing and Spatial Analysis (CRSSA), Rutgers University, in partnership with the Jacques Cousteau National Estuarine Research Reserve (JCNERR), and in collaboration with the NOAA Coastal Services Center (CSC).
- Engineering Your Food Addiction >
Fast food, snack foods and all manner of processed foods make up a multi-billion-dollar global industry. So, it’s no surprise that companies collectively spend hundreds of millions of dollars each year to engineer the perfect bite. Importantly, part of this perfection (for the businesses) is ensuring that you keep coming back for more.
By all accounts the Cheeto is as close to processed-food-addiction heaven as we can get — so far. It has just the right amounts of salt (too much) and fat (too much), the right crunchiness, and something known as vanishing caloric density (it melts in the mouth at the optimum rate). Aesthetically sad, but scientifically true.
From the New York Times:
On the evening of April 8, 1999, a long line of Town Cars and taxis pulled up to the Minneapolis headquarters of Pillsbury and discharged 11 men who controlled America’s largest food companies. Nestlé was in attendance, as were Kraft and Nabisco, General Mills and Procter & Gamble, Coca-Cola and Mars. Rivals any other day, the C.E.O.’s and company presidents had come together for a rare, private meeting. On the agenda was one item: the emerging obesity epidemic and how to deal with it. While the atmosphere was cordial, the men assembled were hardly friends. Their stature was defined by their skill in fighting one another for what they called “stomach share” — the amount of digestive space that any one company’s brand can grab from the competition.
James Behnke, a 55-year-old executive at Pillsbury, greeted the men as they arrived. He was anxious but also hopeful about the plan that he and a few other food-company executives had devised to engage the C.E.O.’s on America’s growing weight problem. “We were very concerned, and rightfully so, that obesity was becoming a major issue,” Behnke recalled. “People were starting to talk about sugar taxes, and there was a lot of pressure on food companies.” Getting the company chiefs in the same room to talk about anything, much less a sensitive issue like this, was a tricky business, so Behnke and his fellow organizers had scripted the meeting carefully, honing the message to its barest essentials. “C.E.O.’s in the food industry are typically not technical guys, and they’re uncomfortable going to meetings where technical people talk in technical terms about technical things,” Behnke said. “They don’t want to be embarrassed. They don’t want to make commitments. They want to maintain their aloofness and autonomy.”
A chemist by training with a doctoral degree in food science, Behnke became Pillsbury’s chief technical officer in 1979 and was instrumental in creating a long line of hit products, including microwaveable popcorn. He deeply admired Pillsbury but in recent years had grown troubled by pictures of obese children suffering from diabetes and the earliest signs of hypertension and heart disease. In the months leading up to the C.E.O. meeting, he was engaged in conversation with a group of food-science experts who were painting an increasingly grim picture of the public’s ability to cope with the industry’s formulations — from the body’s fragile controls on overeating to the hidden power of some processed foods to make people feel hungrier still. It was time, he and a handful of others felt, to warn the C.E.O.’s that their companies may have gone too far in creating and marketing products that posed the greatest health concerns.
The discussion took place in Pillsbury’s auditorium. The first speaker was a vice president of Kraft named Michael Mudd. “I very much appreciate this opportunity to talk to you about childhood obesity and the growing challenge it presents for us all,” Mudd began. “Let me say right at the start, this is not an easy subject. There are no easy answers — for what the public health community must do to bring this problem under control or for what the industry should do as others seek to hold it accountable for what has happened. But this much is clear: For those of us who’ve looked hard at this issue, whether they’re public health professionals or staff specialists in your own companies, we feel sure that the one thing we shouldn’t do is nothing.”
As he spoke, Mudd clicked through a deck of slides — 114 in all — projected on a large screen behind him. The figures were staggering. More than half of American adults were now considered overweight, with nearly one-quarter of the adult population — 40 million people — clinically defined as obese. Among children, the rates had more than doubled since 1980, and the number of kids considered obese had shot past 12 million. (This was still only 1999; the nation’s obesity rates would climb much higher.) Food manufacturers were now being blamed for the problem from all sides — academia, the Centers for Disease Control and Prevention, the American Heart Association and the American Cancer Society. The secretary of agriculture, over whom the industry had long held sway, had recently called obesity a “national epidemic.”
Mudd then did the unthinkable. He drew a connection to the last thing in the world the C.E.O.’s wanted linked to their products: cigarettes. First came a quote from a Yale University professor of psychology and public health, Kelly Brownell, who was an especially vocal proponent of the view that the processed-food industry should be seen as a public health menace: “As a culture, we’ve become upset by the tobacco companies advertising to children, but we sit idly by while the food companies do the very same thing. And we could make a claim that the toll taken on the public health by a poor diet rivals that taken by tobacco.”
“If anyone in the food industry ever doubted there was a slippery slope out there,” Mudd said, “I imagine they are beginning to experience a distinct sliding sensation right about now.”
Mudd then presented the plan he and others had devised to address the obesity problem. Merely getting the executives to acknowledge some culpability was an important first step, he knew, so his plan would start off with a small but crucial move: the industry should use the expertise of scientists — its own and others — to gain a deeper understanding of what was driving Americans to overeat. Once this was achieved, the effort could unfold on several fronts. To be sure, there would be no getting around the role that packaged foods and drinks play in overconsumption. They would have to pull back on their use of salt, sugar and fat, perhaps by imposing industrywide limits. But it wasn’t just a matter of these three ingredients; the schemes they used to advertise and market their products were critical, too. Mudd proposed creating a “code to guide the nutritional aspects of food marketing, especially to children.”
“We are saying that the industry should make a sincere effort to be part of the solution,” Mudd concluded. “And that by doing so, we can help to defuse the criticism that’s building against us.”
What happened next was not written down. But according to three participants, when Mudd stopped talking, the one C.E.O. whose recent exploits in the grocery store had awed the rest of the industry stood up to speak. His name was Stephen Sanger, and he was also the person — as head of General Mills — who had the most to lose when it came to dealing with obesity. Under his leadership, General Mills had overtaken not just the cereal aisle but other sections of the grocery store. The company’s Yoplait brand had transformed traditional unsweetened breakfast yogurt into a veritable dessert. It now had twice as much sugar per serving as General Mills’ marshmallow cereal Lucky Charms. And yet, because of yogurt’s well-tended image as a wholesome snack, sales of Yoplait were soaring, with annual revenue topping $500 million. Emboldened by the success, the company’s development wing pushed even harder, inventing a Yoplait variation that came in a squeezable tube — perfect for kids. They called it Go-Gurt and rolled it out nationally in the weeks before the C.E.O. meeting. (By year’s end, it would hit $100 million in sales.)
According to the sources I spoke with, Sanger began by reminding the group that consumers were “fickle.” (Sanger declined to be interviewed.) Sometimes they worried about sugar, other times fat. General Mills, he said, acted responsibly to both the public and shareholders by offering products to satisfy dieters and other concerned shoppers, from low sugar to added whole grains. But most often, he said, people bought what they liked, and they liked what tasted good. “Don’t talk to me about nutrition,” he reportedly said, taking on the voice of the typical consumer. “Talk to me about taste, and if this stuff tastes better, don’t run around trying to sell stuff that doesn’t taste good.”
To react to the critics, Sanger said, would jeopardize the sanctity of the recipes that had made his products so successful. General Mills would not pull back. He would push his people onward, and he urged his peers to do the same. Sanger’s response effectively ended the meeting.
“What can I say?” James Behnke told me years later. “It didn’t work. These guys weren’t as receptive as we thought they would be.” Behnke chose his words deliberately. He wanted to be fair. “Sanger was trying to say, ‘Look, we’re not going to screw around with the company jewels here and change the formulations because a bunch of guys in white coats are worried about obesity.’ ”
The meeting was remarkable, first, for the insider admissions of guilt. But I was also struck by how prescient the organizers of the sit-down had been. Today, one in three adults is considered clinically obese, along with one in five kids, and 24 million Americans are afflicted by type 2 diabetes, often caused by poor diet, with another 79 million people having pre-diabetes. Even gout, a painful form of arthritis once known as “the rich man’s disease” for its associations with gluttony, now afflicts eight million Americans.
The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.
Image: Cheeto puffs. Courtesy of tumblr.
- Geoengineering As a Solution to Climate Change>
Experimental physicist David Keith has a plan: dump hundreds of thousands of tons of atomized sulfuric acid into the upper atmosphere; watch the acid particles reflect additional sunlight; wait for the global temperature to drop. Many of Keith’s peers think this geoengineering scheme is crazy, not least because of its possible unknown and unmeasured side effects, but that hasn’t stopped a healthy debate. One thing is becoming increasingly clear — humans need to take collective action.
From Technology Review:
Here is the plan. Customize several Gulfstream business jets with military engines and with equipment to produce and disperse fine droplets of sulfuric acid. Fly the jets up around 20 kilometers—significantly higher than the cruising altitude for a commercial jetliner but still well within their range. At that altitude in the tropics, the aircraft are in the lower stratosphere. The planes spray the sulfuric acid, carefully controlling the rate of its release. The sulfur combines with water vapor to form sulfate aerosols, fine particles less than a micrometer in diameter. These get swept upward by natural wind patterns and are dispersed over the globe, including the poles. Once spread across the stratosphere, the aerosols will reflect about 1 percent of the sunlight hitting Earth back into space. Increasing what scientists call the planet’s albedo, or reflective power, will partially offset the warming effects caused by rising levels of greenhouse gases.
The author of this so-called geoengineering scheme, David Keith, doesn’t want to implement it anytime soon, if ever. Much more research is needed to determine whether injecting sulfur into the stratosphere would have dangerous consequences such as disrupting precipitation patterns or further eating away the ozone layer that protects us from damaging ultraviolet radiation. Even thornier, in some ways, are the ethical and governance issues that surround geoengineering—questions about who should be allowed to do what and when. Still, Keith, a professor of applied physics at Harvard University and a leading expert on energy technology, has done enough analysis to suspect it could be a cheap and easy way to head off some of the worst effects of climate change.
According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft.
One of the startling things about Keith’s proposal is just how little sulfur would be required. A few grams of it in the stratosphere will offset the warming caused by a ton of carbon dioxide, according to his estimate. And even the amount that would be needed by 2070 is dwarfed by the roughly 50 million metric tons of sulfur emitted by the burning of fossil fuels every year. Most of that pollution stays in the lower atmosphere, and the sulfur molecules are washed out in a matter of days. In contrast, sulfate particles remain in the stratosphere for a few years, making them more effective at reflecting sunlight.
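The figures quoted above invite a quick sanity check. The short Python sketch below is a hypothetical back-of-envelope illustration, not part of Keith’s analysis: it takes only the numbers stated in the excerpt (250,000 metric tons of sulfuric acid per year at an annual cost of $700 million by 2040, against roughly 50 million metric tons of sulfur already emitted annually by fossil fuels) and derives the implied cost per tonne and the injection’s share of existing sulfur pollution.

```python
# Back-of-envelope check of the 2040-scale program described above.
# All inputs are figures quoted in the article; the derived numbers
# are simple illustrative ratios.

SULFUR_2040_TONNES = 250_000        # annual stratospheric injection by 2040
ANNUAL_COST_USD = 700_000_000       # stated yearly cost of that program
FOSSIL_SULFUR_TONNES = 50_000_000   # sulfur emitted each year by fossil fuels

# Implied delivery cost per tonne of sulfuric acid.
cost_per_tonne = ANNUAL_COST_USD / SULFUR_2040_TONNES

# The injection as a fraction of existing fossil-fuel sulfur emissions.
fraction_of_fossil = SULFUR_2040_TONNES / FOSSIL_SULFUR_TONNES

print(f"${cost_per_tonne:,.0f} per tonne")          # $2,800 per tonne
print(f"{fraction_of_fossil:.1%} of fossil sulfur")  # 0.5% of fossil sulfur
```

The striking point the arithmetic makes plain is the asymmetry: the program would add only about half a percent to the sulfur humanity already emits each year, but place it where it lingers for years rather than days.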
The idea of using sulfate aerosols to offset climate warming is not new. Crude versions of the concept have been around at least since a Russian climate scientist named Mikhail Budyko proposed the idea in the mid-1970s, and more refined descriptions of how it might work have been discussed for decades. These days the idea of using sulfur particles to counteract warming—often known as solar radiation management, or SRM—is the subject of hundreds of papers in academic journals by scientists who use computer models to try to predict its consequences.
But Keith, who has published on geoengineering since the early 1990s, has emerged as a leading figure in the field because of his aggressive public advocacy for more research on the technology—and his willingness to talk unflinchingly about how it might work. Add to that his impeccable academic credentials—last year Harvard lured him away from the University of Calgary with a joint appointment in the school of engineering and the Kennedy School of Government—and Keith is one of the world’s most influential voices on solar geoengineering. He is one of the few who have done detailed engineering studies and logistical calculations on just how SRM might be carried out. And if he and his collaborator James Anderson, a prominent atmospheric chemist at Harvard, gain public funding, they plan to conduct some of the first field experiments to assess the risks of the technique.
Leaning forward from the edge of his chair in a small, sparse Harvard office on an unusually warm day this winter, he explains his urgency. Whether or not greenhouse-gas emissions are cut sharply—and there is little evidence that such reductions are coming—”there is a realistic chance that [solar geoengineering] technologies could actually reduce climate risk significantly, and we would be negligent if we didn’t look at that,” he says. “I’m not saying it will work, and I’m not saying we should do it.” But “it would be reckless not to begin serious research on it,” he adds. “The sooner we find out whether it works or not, the better.”
The overriding reason why Keith and other scientists are exploring solar geoengineering is simple and well documented, though often overlooked: the warming caused by atmospheric carbon dioxide buildup is for all practical purposes irreversible, because the climate change is directly related to the total cumulative emissions. Even if we halt carbon dioxide emissions entirely, the elevated concentrations of the gas in the atmosphere will persist for decades. And according to recent studies, the warming itself will continue largely unabated for at least 1,000 years. If we find in, say, 2030 or 2040 that climate change has become intolerable, cutting emissions alone won’t solve the problem.
“That’s the key insight,” says Keith. While he strongly supports cutting carbon dioxide emissions as rapidly as possible, he says that if the climate “dice” roll against us, that won’t be enough: “The only thing that we think might actually help [reverse the warming] in our lifetime is in fact geoengineering.”
- From Sea to Shining Sea - By Rail>
Now that air travel has become well and truly commoditized, and for most of us a nightmare, it’s time, again, to revisit the romance of rail. After all, the elitist romance of air travel passed away 40-50 years ago. Now all we are left with is parking trauma at the airport; endless lines at check-in, security, the gate, boarding and disembarking; inane airport announcements and beeping golf carts; coughing, tweeting passengers crammed shoulder to shoulder in far-too-small seats; poor-quality air and poor-quality service in the cabin. It’s even dangerous to open the shade and look out of the aircraft window, for fear of waking a cranky neighbor or, more calamitous still, washing out the in-seat displays showing the latest reality TV videos.
Some of you, surely, still pine for a quiet and calming ride across the country, taking in the local sights at a more leisurely pace. Alfred Twu, who helped define the 2008 high-speed rail proposal for California, would have us zooming across the entire United States in trains, again. So it may not be a leisurely ride — think more like 200-300 miles per hour — but it may well bring us closer to what we truly miss when suspended at 30,000 ft. We can’t wait.
From the Guardian:
I created this US High Speed Rail Map as a composite of several proposed maps from 2009, when government agencies and advocacy groups were talking big about rebuilding America’s train system.
Having worked on getting California’s high speed rail approved in the 2008 elections, I’ve long sung the economic and environmental benefits of fast trains.
This latest map comes more from the heart. It speaks more to bridging regional and urban-rural divides than about reducing airport congestion or even creating jobs, although it would likely do that as well.
Instead of detailing construction phases and service speeds, I took a little artistic license and chose colors and linked lines to celebrate America’s many distinct but interwoven regional cultures.
The response to my map this week went above and beyond my wildest expectations, sparking vigorous political discussion between thousands of Americans ranging from off-color jokes about rival cities to poignant reflections on how this kind of rail network could change long-distance relationships and the lives of faraway family members.
Commenters from New York and Nebraska talked about “wanting to ride the red line”. Journalists from Chattanooga, Tennessee (population 167,000) asked to reprint the map because they were excited to be on the map. Hundreds more shouted “this should have been built yesterday”.
It’s clear that high speed rail is more than just a way to save energy or extend economic development to smaller cities.
More than mere steel wheels on tracks, high speed rail shrinks space and brings farflung families back together. It keeps couples in touch when distant career or educational opportunities beckon. It calls to adventure and travel. It is duct tape and string to reconnect politically divided regions. Its colorful threads weave new American Dreams.
That said, while trains still live large in the popular imagination, decades of limited service have left some blind spots in the collective consciousness. I’ll address a few here:
Myth: High speed rail is just for big city people.
Fact: Unlike airplanes or buses which must make detours to drop off passengers at intermediate points, trains glide into and out of stations with little delay, pausing for under a minute to unload passengers from multiple doors. Trains can, have, and continue to effectively serve small towns and suburbs, whereas bus service increasingly bypasses them.
I do hear the complaint: “But it doesn’t stop in my town!” In the words of one commenter, “the train doesn’t need to stop on your front porch.” Local transit, rental cars, taxis, biking, and walking provide access to and from stations.
Myth: High speed rail is only useful for short distances.
Fact: Express trains that skip stops allow lines to serve many intermediate cities while still providing some fast end-to-end service. Overnight sleepers with lie-flat beds where one boards around dinner and arrives after breakfast have been successful in the US before and are in use on China’s newest 2,300km high speed line.
Image: U.S. High Speed Rail System proposal. Alfred Twu created this map to showcase what could be possible.
- Beware North Korea, Google is Watching You>
This week Google refreshed its maps of North Korea. What was previously a blank canvas with only the country’s capital — Pyongyang — visible, now boasts roads, hotels, monuments and even some North Korean internment camps. While this is not the first detailed map of the secretive state it is an important milestone in Google’s quest to map us all.
From the Washington Post:
Until Tuesday, North Korea appeared on Google Maps as a near-total white space — no roads, no train lines, no parks and no restaurants. The only thing labeled was the capital city, Pyongyang.
This all changed when Google, on Tuesday, rolled out a detailed map of one of the world’s most secretive states. The new map labels everything from Pyongyang’s subway stops to the country’s several city-sized gulags, as well as its monuments, hotels, hospitals and department stores.
According to a Google blog post, the maps were created by a group of volunteer “citizen cartographers,” through an interface known as Google Map Maker. That program — much like Wikipedia — allows users to submit their own data, which is then fact-checked by other users, and sometimes altered many times over. Similar processes were used in other once-unmapped countries like Afghanistan and Burma.
In the case of North Korea, those volunteers worked from outside of the country, beginning in 2009. They used information that was already public, compiling details from existing analog maps, satellite images, or other Web-based materials. Much of the information was already available on the Internet, said Hwang Min-woo, 28, a volunteer mapmaker from Seoul who worked for two years on the project.
North Korea was the last country virtually unmapped by Google, but other — even more detailed — maps of the North existed before this. Most notable is a map created by Curtis Melvin, who runs the North Korea Economy Watch blog and spent years identifying thousands of landmarks in the North: tombs, textile factories, film studios, even rumored spy training locations. Melvin’s map is available as a downloadable Google Earth file.
Google’s map is important, though, because it is so readily accessible. The map is unlikely to have an immediate influence in the North, where Internet use is restricted to all but a handful of elites. But it could prove beneficial for outsider analysts and scholars, providing an easy-to-access record about North Korea’s provinces, roads, landmarks, as well as hints about its many unseen horrors.
Read the entire article and check out more maps after the jump.
- So, You Want to Be a Brit?>
The United Kingdom government has just published its updated 180-page handbook for new residents. So, those seeking to become subjects of Her Majesty will need to brush up on more than Admiral Nelson, Churchill, Spitfires, Chaucer and the Black Death. Now, if you are one of the approximately 150,000 new residents each year, you may well have to learn about Morecambe and Wise, Roald Dahl, and Monty Python. Nudge-nudge, wink-wink!
From the Telegraph:
It has been described as “essential reading” for migrants and takes readers on a whirlwind historical tour of Britain from Stone Age hunter-gatherers to Morecambe and Wise, skipping lightly through the Black Death and Tudor England.
The latest Home Office citizenship handbook, Life in the United Kingdom: A Guide for New Residents, has scrapped sections on claiming benefits, written under the Labour government in 2007, for a triumphalist vision of events and people that helped make Britain a “great place to live”.
The Home Office said it had stripped out “mundane information” about water meters, how to find train timetables, and using the internet.
The guide’s 180 pages, filled with pictures of the Queen, Spitfires and Churchill, are a primer for citizenship tests taken by around 150,000 migrants a year.
Comedies such as Monty Python and The Morecambe and Wise Show are highlighted as examples of British people’s “unique sense of humour and satire”, while Olympic athletes including Jessica Ennis and Sir Chris Hoy are included for the first time.
Previously, historical information was included in the handbook but was not tested. Now the book features sections on Roman, Anglo-Saxon and Viking Britain to give migrants an “understanding of how modern Britain has evolved”.
They can equally expect to be quizzed on the children’s author Roald Dahl, the Harrier jump jet and the Turing machine – a theoretical device proposed by Alan Turing and seen as a precursor to the modern computer.
The handbook also refers to the works of William Shakespeare, Geoffrey Chaucer and Jane Austen alongside Coronation Street. Meanwhile, Christmas pudding, the Last Night of the Proms and cricket matches are described as typical “indulgences”.
The handbook goes on sale today and forms the basis of the 45-minute exam in which migrants must score 75 per cent to pass.
Image: Group shot of the Monty Python crew in 1969. Courtesy of Wikipedia.
- Las Vegas, Tianducheng and Paris: Cultural Borrowing>
These three locations in Nevada, China (near Hangzhou) and Paris, France, have something in common. People the world over travel to these three places to see what they share. But only one has the original. In this case, we’re talking about the Eiffel Tower.
Now, this architectural grand theft is subject to a lengthy debate — the merits of mimicry, on a vast scale. There is even a fascinating coffee table sized book dedicated to this growing trend: Original Copies: Architectural Mimicry in Contemporary China, by Bianca Bosker.
Interestingly, the copycat trend only seems worrisome if those doing the copying are in a powerful and growing nation, and the copying is done on a national scale, perhaps for some form of cultural assimilation. After all, we don’t hear similar cries when developers put up a copy of Venice in Las Vegas — that’s just for entertainment we are told.
Yet haven’t civilizations borrowed, and stolen, ideas both good and bad throughout the ages? The answer of course is an unequivocal yes. Humans are avaricious collectors of memes that work — it’s more efficient to borrow than to invent. The Greeks borrowed from the Egyptians; the Romans borrowed from the Greeks; the Turks borrowed from the Romans; the Arabs borrowed from the Turks; the Spanish from the Arabs, the French from the Spanish, the British from the French, and so on. Of course what seems to be causing a more recent stir is that China is doing the borrowing, and on such a rapid and grand scale — the nation is copying not just buildings (and most other products) but entire urban landscapes. However, this is one way that empires emerge and evolve. In this case, China’s acquisitive impulses could, perhaps, be tempered if most nations of the world borrowed less from the Chinese — money, that is. But that’s another story.
From the Atlantic:
The latest and most famous case of Chinese architectural mimicry doesn’t look much like its predecessors. On December 28, German news weekly Der Spiegel reported that the Wangjing Soho, Zaha Hadid’s soaring new office and retail development under construction in Beijing, is being replicated, wall for wall and window for window, in Chongqing, a city in central China.
To most outside observers, this bold and quickly commissioned counterfeit represents a familiar form of piracy. In fashion, technology, and architecture, great ideas trickle down, often against the wishes of their progenitors. But in China, architectural copies don’t usually ape the latest designs.
In the vast space between Beijing and Chongqing lies a whole world of Chinese architectural simulacra that quietly aspire to a different ideal. In suburbs around China’s booming cities, developers build replicas of towns like Halstatt, Austria and Dorchester, England. Individual homes and offices, too, are designed to look like Versailles or the Chrysler Building. The most popular facsimile in China is the White House. The fastest-urbanizing country in history isn’t scanning design magazines for inspiration; it’s watching movies.
At Beijing’s Palais de Fortune, two hundred chateaus sit behind gold-tipped fences. At Chengdu’s British Town, pitched roofs and cast-iron street lamps dot the streets. At Shanghai’s Thames Town, a Gothic cathedral has become a tourist attraction in itself. Other developments have names like “Top Aristocrat,” (Beijing), “the Garden of Monet” (Shanghai), and “Galaxy Dante,” (Shenzhen).
Architects and critics within and beyond China have treated these derivative designs with scorn, as shameless kitsch or simply trash. Others cite China’s larger knock-off culture, from handbags to housing, as evidence of the innovation gap between China and the United States. For a larger audience on the Internet, they are merely a punchline, another example of China’s endlessly entertaining wackiness.
In short, the majority of Chinese architectural imitation, oozing with historical romanticism, is not taken seriously.
But perhaps it ought to be.
In Original Copies: Architectural Mimicry in Contemporary China, the first detailed book on the subject, Bianca Bosker argues that the significance of these constructions has been unfairly discounted. Bosker, a senior technology editor at the Huffington Post, has been visiting copycat Chinese villages for some six years, and in her view, these distorted impressions of the West offer a glimpse of the hopes, dreams and contradictions of China’s middle class.
“Clearly there’s an acknowledgement that there’s something great about Paris,” says Bosker. “But it’s also: ‘We can do it ourselves.’”
Armed with firsthand observation, field research, interviews, and a solid historical background, Bosker’s book is an attempt to change the way we think about Chinese duplitecture. “We’re seeing the Chinese dream in action,” she says. “It has to do with this ability to take control of your life. There’s now this plethora of options to choose from.” That is something new in China, as is the role that private enterprise is taking in molding built environments that will respond to people’s fantasies.
While the experts scoff, the people who build and inhabit these places are quite proud of them. As the saying goes, “The way to live best is to eat Chinese food, drive an American car, and live in a British house. That’s the ideal life.” The Chinese middle class is living in Orange County, Beijing, the same way you listen to reggae music or lounge in Danish furniture.
In practice, though, the depth and scale of this phenomenon has few parallels. No one knows how many facsimile communities there are in China, but the number is increasing every day. “Every time I go looking for more,” Bosker says, “I find more.”
How many are there?
“At least hundreds.” Image: Tianducheng, 13th arrondissement, Paris in China. Courtesy of Bianca Bosker/University of Hawaii Press.
- Light From Gravity>
Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here’s the problem: replace damaging and expensive kerosene fuel lamps in Africa with a less harmful and cheaper alternative. And here’s the solution. From ars technica:
A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.
Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.
The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their income. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.
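As a rough sanity check, the figures in the excerpt can be inverted to see how much kerosene those income shares would actually buy. The $1.30-per-liter price, the under-$9 monthly income, and the 10 to 20 percent share all come from the article; the arithmetic below is ours.

```python
# How many liters of kerosene per month do the designers' figures imply
# a rural Tanzanian household can afford?
PRICE_PER_LITER = 1.30   # median rural price in USD, from the article
MONTHLY_INCOME = 9.00    # average rural income in Tanzania (USD), from the article

for share in (0.10, 0.20):  # the 10-20% income share the designers cite
    liters = share * MONTHLY_INCOME / PRICE_PER_LITER
    print(f"{share:.0%} of income buys about {liters:.2f} L of kerosene per month")
```

That works out to only about 0.7 to 1.4 liters a month, which (on common wick-lamp consumption estimates, not figures from the article) is on the order of an hour or two of light per night, which helps explain the "poverty trap" framing.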
Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both simpler and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.
It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2,000 rpm. Read more about GravityLight after the jump. Video courtesy of GravityLight.
- Map as Illusion>
We love maps here at theDiagonal. We also love ideas that challenge the status quo. And this latest Strange Map, courtesy of Frank Jacobs over at Big Think, does both. What we appreciate about his cartographic masterpiece is that it challenges our visual perception, and, more importantly, challenges our assumed hemispheric worldview. Read more of this article after the jump.
- National Geographic Hits 125>
Chances are that if you don’t have some ancient National Geographic magazines hidden in a box in your attic, then you know someone who does. If not, it’s time to see what you have been missing all these years. National Geographic celebrates 125 years in 2013, and what better way to do this than to look back through some of its glorious photographic archives. See more classic images after the jump. Image: 1964, Tanzania: a touching moment between the primatologist and National Geographic grantee Jane Goodall and a young chimpanzee called Flint at Tanzania’s Gombe Stream reserve. Courtesy of Guardian / National Geographic.
- Climate Change Report>
No pithy headline. The latest U.S. National Climate Assessment makes sobering news. The full 1,146-page report is available for download here.
Over the next 30 years (and beyond), it warns of projected sea-level rises along the Eastern Seaboard of the United States, warmer temperatures across much of the nation, and generally warmer and more acidic oceans. More worrying still are the less direct consequences of climate change: increased threats to human health due to severe weather such as storms, drought and wildfires; more vulnerable infrastructure in regions subject to increasingly volatile weather; and rising threats to regional stability and national security due to a less reliable national and global water supply. From Scientific American:
The consequences of climate change are now hitting the United States on several fronts, including health, infrastructure, water supply, agriculture and especially more frequent severe weather, a congressionally mandated study has concluded.
A draft of the U.S. National Climate Assessment, released on Friday, said observable change to the climate in the past half-century “is due primarily to human activities, predominantly the burning of fossil fuel,” and that no areas of the United States were immune to change.
“Corn producers in Iowa, oyster growers in Washington State, and maple syrup producers in Vermont have observed changes in their local climate that are outside of their experience,” the report said.
Months after Superstorm Sandy hurtled into the U.S. East Coast, causing billions of dollars in damage, the report concluded that severe weather was the new normal.
“Certain types of weather events have become more frequent and/or intense, including heat waves, heavy downpours, and, in some regions, floods and droughts,” the report said, days after scientists at the National Oceanic and Atmospheric Administration declared 2012 the hottest year ever in the United States.
Some environmentalists looked for the report to energize climate efforts by the White House or Congress, although many Republican lawmakers are wary of declaring a definitive link between human activity and evidence of a changing climate.
The U.S. Congress has been mostly silent on climate change since efforts to pass “cap-and-trade” legislation collapsed in the Senate in mid-2010.
The advisory committee behind the report was established by the U.S. Department of Commerce to integrate federal research on environmental change and its implications for society. It made two earlier assessments, in 2000 and 2009.
Thirteen departments and agencies, from the Agriculture Department to NASA, are part of the committee, which also includes academics, businesses, nonprofits and others.
‘A WARNING TO ALL OF US’
The report noted that of an increase in average U.S. temperatures of about 1.5 degrees F (0.83 degrees C) since 1895, when reliable national record-keeping began, more than 80 percent had occurred in the past three decades.
With heat-trapping gases already in the atmosphere, temperatures could rise by a further 2 to 4 degrees F (1.1 to 2.2 degrees C) in most parts of the country over the next few decades, the report said.
- Plagiarism is the Sincerest Form of Capitalism>
Plagiarism is a fine art in China. But it’s also very big business. The nation knocks off everything, from Hollywood and Bollywood movies, to software, electronics, appliances, drugs, and military equipment. Now, it’s moved on to copying architectural plans. From the Telegraph:
China is famous for its copy-cat architecture: you can find replicas of everything from the Eiffel Tower and the White House to an Austrian village across its vast land. But now they have gone one step further: recreating a building that hasn’t even been finished yet. A building designed by the Iraqi-British architect Dame Zaha Hadid for Beijing has been copied by a developer in Chongqing, south-west China, and now the two projects are racing to be completed first.
Dame Zaha, whose Wangjing Soho complex consists of three pebble-like constructions and will house an office and retail complex, unveiled her designs in August 2011 and hopes to complete the project next year.
Meanwhile, a remarkably similar project called Meiquan 22nd Century is being constructed in Chongqing, which experts (and anyone with eyes, really) deem a rip-off. The developers of the Soho complex are concerned that the other is being built at a much faster rate than their own.
“It is possible that the Chongqing pirates got hold of some digital files or renderings of the project,” Satoshi Ohashi, project director at Zaha Hadid Architects, told Der Spiegel online. “[From these] you could work out a similar building if you are technically very capable, but this would only be a rough simulation of the architecture.”
So where does the law stand? Reporting on the intriguing case, China Intellectual Property magazine commented, “Up to now, there is no special law in China which has specific provisions on IP rights related to architecture.” They added that if it went to court, the likely outcome would be payment of compensation to Dame Zaha’s firm, rather than the defendant being forced to pull the building down. However, Dame Zaha seems somewhat unfazed about the structure, simply remarking that if the finished building contains a certain amount of innovation then “that could be quite exciting”. One of the world’s most celebrated architects, Dame Zaha – who recently designed the Aquatics Centre for the London Olympics – has 11 current projects in China. She is quite the star over there: 15,000 fans flocked to see her give a talk at the unveiling of the designs for the complex. Image: Wangjing Soho Architecture. Courtesy of Zaha Hadid Architects.
- The Future of the Grid>
Two common complaints dog the sustainable energy movement: first, energy generated from the sun and wind is not always available; second, renewable energy is too costly. A new study debunks these notions, and shows that cost-effective renewable energy could power our needs 99 percent of the time by 2030. From ars technica:
You’ve probably heard the argument: wind and solar power are well and good, but what about when the wind doesn’t blow and the sun doesn’t shine? But it’s always windy and sunny somewhere. Given a sufficient distribution of energy resources and a large enough network of electrically conducting tubes, plus a bit of storage, these problems can be overcome—technologically, at least.
But is it cost-effective to do so? A new study from the University of Delaware finds that renewable energy sources can, with the help of storage, power a large regional grid for up to 99.9 percent of the time using current technology. By 2030, the cost of doing so will hit parity with current methods. Further, if you can live with renewables meeting your energy needs for only 90 percent of the time, the economics become positively compelling.
“These results break the conventional wisdom that renewable energy is too unreliable and expensive,” said study co-author Willett Kempton, a professor at the University of Delaware’s School of Marine Science and Policy. “The key is to get the right combination of electricity sources and storage—which we did by an exhaustive search—and to calculate costs correctly.”
By exhaustive, Kempton is referring to the 28 billion combinations of inland and offshore wind and photovoltaic solar sources combined with centralized hydrogen, centralized batteries, and grid-integrated vehicles analyzed in the study. The researchers deliberately overlooked constant renewable sources of energy such as geothermal and hydro power on the grounds that they are less widely available geographically.
These technologies were applied to a real-world test case: that of the PJM Interconnection regional grid, which covers parts of states from New Jersey to Indiana, and south to North Carolina. The model used hourly consumption data from the years 1999 to 2002; during that time, the grid had a generational capacity of 72GW catering to an average demand of 31.5GW. Taking in 13 states, either whole or in part, the PJM Interconnection constitutes one fifth of the USA’s grid. “Large” is no overstatement, even before considering more recent expansions that don’t apply to the dataset used.
The researchers constructed a computer model using standard solar and wind analysis tools. They then fed in hourly weather data from the region for the whole four-year period—35,040 hours worth. The goal was to find the minimum cost at which the energy demand could be met entirely by renewables for a given proportion of the time, based on the following game plan:
- When there’s enough renewable energy direct from source to meet demand, use it. Store any surplus.
- When there is not enough renewable energy direct from source, meet the shortfall with the stored energy.
- When there is not enough renewable energy direct from source, and the stored energy reserves are insufficient to bridge the shortfall, top up the remaining few percent of the demand with fossil fuels.
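The three rules above amount to a simple greedy dispatch, which can be sketched in a few lines. The priority ordering (renewables first, then storage, then fossil top-up) is from the article; the hourly numbers, storage capacity, and round-trip efficiency below are toy values of our own, not the study's parameters.

```python
def dispatch(demand, renewable, capacity, efficiency=0.75):
    """Greedy hourly dispatch: renewables first, then storage, then fossil.

    Returns the number of hours in which fossil backup was needed.
    """
    stored = 0.0
    fossil_hours = 0
    for d, r in zip(demand, renewable):
        if r >= d:
            # Rule 1: renewables cover demand; store the surplus (with losses).
            stored = min(capacity, stored + (r - d) * efficiency)
        elif stored >= d - r:
            # Rule 2: storage bridges the shortfall.
            stored -= d - r
        else:
            # Rule 3: storage is exhausted; fossil fuel tops up the rest.
            stored = 0.0
            fossil_hours += 1
    return fossil_hours

# Toy example: six hours of made-up demand and renewable output (GW).
hours_needing_fossil = dispatch(
    demand=[30, 30, 30, 30, 30, 30],
    renewable=[60, 50, 10, 10, 10, 40],
    capacity=40,
    efficiency=1.0,
)
print(hours_needing_fossil)  # → 1
```

The study's finding that the cheapest mix overbuilds renewables makes intuitive sense in this framing: the more often rule 1 fires with a large surplus, the less storage capacity and fossil backup the remaining hours require.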
Perhaps unsurprisingly, the precise mix required depends upon exactly how much time you want renewables to meet the full load. Much more surprising is the amount of excess renewable infrastructure the model proposes as the most economic. To achieve a 90-percent target, the renewable infrastructure should be capable of generating 180 percent of the load. To meet demand 99.9 percent of the time, that rises to 290 percent.
“So much excess generation of renewables is a new idea, but it is not problematic or inefficient, any more than it is problematic to build a thermal power plant requiring fuel input at 250 percent of the electrical output, as we do today,” the study argues. Image: Bangui Windfarm, Ilocos Norte, Philippines. Courtesy of
- Places to Visit Before World's End>
In case you missed all the apocalyptic hoopla, the world is supposed to end today. Now, if you’re reading this, you obviously still have a little time, since the Mayans apparently did not specify a precise time for the prophesied end. So, we highly recommend that you visit one or more of these beautiful places, immediately. Of course, if we’re all still here tomorrow, you will have some extra time to take in these breathtaking sights before the next planned doomsday. Check out the top 100 places according to the Telegraph after the jump. Image: Lapland for the northern lights. Courtesy of ALAMY / Telegraph.
- Climate change: Not in My Neighborhood>
It’s no surprise that in our daily lives we seek information that reinforces our perceptions, opinions and beliefs of the world around us. It’s also the case that when we reject a particular position, we tend to overlook any evidence in our immediate surroundings that contradicts our disbelief — climate change is no different. From ars technica:
We all know it’s hard to change someone’s mind. In an ideal, rational world, a person’s opinion about some topic would be based on several pieces of evidence. If you were to supply that person with several pieces of stronger evidence that point in another direction, you might expect them to accept the new information and agree with you.
However, this is not that world, and rarely do we find ourselves in a debate with Star Trek’s Spock. There are a great many reasons that we behave differently. One is the way we rate incoming information for trustworthiness and importance. Once we form an opinion, we rate information that confirms our opinion more highly than information that challenges it. This is one form of “motivated reasoning.” We like to think we’re right, and so we are motivated to come to the conclusion that the facts are still on our side.
Publicly contentious issues often put a spotlight on these processes—issues like climate change, for example. In a recent paper published in Nature Climate Change, researchers from George Mason and Yale explore how motivated reasoning influences whether people believe they have personally experienced the effects of climate change.
When it comes to communicating the science of global warming, a common strategy is to focus on the concrete here-and-now rather than the abstract and distant future. The former is easier for people to relate to and connect with. Glazed eyes are the standard response to complicated graphs of projected sea level rise, with ranges of uncertainty and several scenarios of future emissions. Show somebody that their favorite ice fishing spot is iced over for several fewer weeks each winter than it was in the late 1800s, though, and you might have their attention.
Public polls show that acceptance of a warming climate correlates with agreement that one has personally experienced its effects. That could be affirmation that personal experience is a powerful force for the acceptance of climate science. Obviously, there’s another possibility—that those who accept that the climate is warming are more likely to believe they’ve experienced the effects themselves, whereas those who deny that warming is taking place are unlikely to see evidence of it in daily life. That’s, at least partly, motivated reasoning at work. (And of course, this cuts both ways. Individuals who agree that the Earth is warming may erroneously interpret unrelated events as evidence of that fact.)
The survey used for this study was unique in that the same people were polled twice, two and a half years apart, to see how their views changed over time. For the group as a whole, there was evidence for both possibilities—experience affected acceptance, and acceptance predicted statements about experience.
Fortunately, the details were a bit more interesting than that. When you categorize individuals by engagement—essentially how confident and knowledgeable they feel about the facts of the issue—differences are revealed. For the highly-engaged groups (on both sides), opinions about whether climate is warming appeared to drive reports of personal experience. That is, motivated reasoning was prevalent. On the other hand, experience really did change opinions for the less-engaged group, and motivated reasoning took a back seat. Image courtesy of New York Times / Steen Ulrik Johannessen / Agence France-Presse — Getty Images.
- National Emotions Mapped>
Are Canadians as a people more emotional than Brazilians? Are Brits as emotional as Mexicans? While generalizing and mapping a nation’s emotionality is dubious at best, this map is nonetheless fascinating. From the Washington Post:
Since 2009, the Gallup polling firm has surveyed people in 150 countries and territories on, among other things, their daily emotional experience. Their survey asks five questions, meant to gauge whether the respondent felt significant positive or negative emotions the day prior to the survey. The more times that people answer “yes” to questions such as “Did you smile or laugh a lot yesterday?”, the more emotional they’re deemed to be.
Gallup has tallied up the average “yes” responses from respondents in almost every country on Earth. The results, which I’ve mapped out above, are as fascinating as they are indecipherable. The color-coded key in the map indicates the average percentage of people who answered “yes.” Dark purple countries are the most emotional, yellow the least. Here are a few takeaways.
Singapore is the least emotional country in the world. “Singaporeans recognize they have a problem,” Bloomberg Businessweek writes of the country’s “emotional deficit,” citing a culture in which schools “discourage students from thinking of themselves as individuals.” They also point to low work satisfaction, competitiveness, and the urban experience: “Staying emotionally neutral could be a way of coping with the stress of urban life in a place where 82 percent of the population lives in government-built housing.”
The Philippines is the world’s most emotional country. It’s not even close; the heavily Catholic, Southeast Asian nation, a former colony of Spain and the U.S., scores well above second-ranked El Salvador.
Post-Soviet countries are consistently among the most stoic. Other than Singapore (and, for some reason, Madagascar and Nepal), the least emotional countries in the world are all former members of the Soviet Union. They are also the greatest consumers of cigarettes and alcohol. This could be what you call a chicken-or-egg problem: if the two trends are related, which one came first? Europe appears almost like a gradient here, with emotions increasing as you move west.
People in the Americas are just exuberant. Every nation on the North and South American continents ranked highly on the survey. The United States and Canada are both among the 15 most emotional countries in the world, along with ten Latin American nations. The only countries in the top 15 from outside the Americas are the Philippines and the Arab nations of Oman and Bahrain, both of which rank very highly.
- Testosterone and the Moon>
While the United States’ military makes no comment, a number of corroborated reports suggest that the country had a plan to drop an atomic bomb on the moon during the height of the Cold War. Apparently, a Hiroshima-like explosion on our satellite would have been seen as a “show of force” by the Soviets. The sheer absurdity of this Dr. Strangelove story makes it all the more real. From the Independent:
US Military chiefs, keen to intimidate Russia during the Cold War, plotted to blow up the moon with a nuclear bomb, according to project documents kept secret for nearly 45 years.
The army chiefs allegedly developed a top-secret project called ‘A Study of Lunar Research Flights’ – or ‘Project A119’ – in the hope that their Soviet rivals would be intimidated by a display of America’s Cold War muscle.
According to The Sun newspaper the military bosses developed a classified plan to launch a nuclear weapon 238,000 miles to the moon where it would be detonated upon impact.
The planners reportedly opted for an atom bomb, rather than a hydrogen bomb, because the latter would be too heavy for the missile.
Physicist Leonard Reiffel, who says he was involved in the project, claims the hope was that the flash from the bomb would intimidate the Russians following their successful launching of the Sputnik satellite in October 1957.
The planning of the explosion reportedly included calculations by astronomer Carl Sagan, who was then a young graduate.
Documents reportedly show the plan was abandoned because of fears it would have an adverse effect on Earth should the explosion fail. Image courtesy of NASA.
- Pluralistic Ignorance>
Why study the science of climate change when you can study the complexities of climate change deniers themselves? That was the question that led several groups of independent researchers to study why some groups of people cling to mistaken beliefs and hold inaccurate views of the public consensus. From ars technica:
By just about every measure, the vast majority of scientists in general—and climate scientists in particular—have been convinced by the evidence that human activities are altering the climate. However, in several countries, a significant portion of the public has concluded that this consensus doesn’t exist. That has prompted a variety of studies aimed at understanding the large disconnect between scientists and the public, with results pointing the finger at everything from the economy to the weather. Other studies have noted societal influences on acceptance, including ideology and cultural identity.
Those studies have generally focused on the US population, but the public acceptance of climate change is fairly similar in Australia. There, a new study has looked at how societal tendencies can play a role in maintaining mistaken beliefs. The authors of the study have found evidence that two well-known behaviors—the “false consensus” and “pluralistic ignorance”—are helping to shape public opinion in Australia.
False consensus is the tendency of people to think that everyone else shares their opinions. This can arise from the fact that we tend to socialize with people who share our opinions, but the authors note that the effect is even stronger “when we hold opinions or beliefs that are unpopular, unpalatable, or that we are uncertain about.” In other words, our social habits tend to reinforce the belief that we’re part of a majority, and we have a tendency to cling to the sense that we’re not alone in our beliefs.
Pluralistic ignorance is similar, but it’s not focused on our own beliefs. Instead, sometimes the majority of people come to believe that most people think a certain way, even though the majority opinion actually resides elsewhere.
As it turns out, the authors found evidence of both these effects. They performed two identical surveys of over 5,000 Australians, done a year apart; about 1,350 people took the survey both times, which let the researchers track how opinions evolve. Participants were asked to describe their own opinion on climate change, with categories including “don’t know,” “not happening,” “a natural occurrence,” and “human-induced.” After voicing their own opinion, people were asked to estimate what percentage of the population would fall into each of these categories.
In aggregate, over 90 percent of those surveyed accepted that climate change was occurring (a rate much higher than we see in the US), with just over half accepting that humans were driving the change. Only about five percent felt it wasn’t happening, and even fewer said they didn’t know. The numbers changed only slightly between the two polls.
The false consensus effect became obvious when the researchers looked at what these people thought everyone else believed: every single group believed that their opinion represented the plurality view of the population. This was most dramatic among those who don’t think that the climate is changing; even though they represent far less than 10 percent of the population, they believed that over 40 percent of Australians shared their views. Those who profess ignorance also believed they had lots of company, estimating that their view was shared by a quarter of the populace.
Among those who took the survey twice, the effect became even more pronounced. In the year between the surveys, the respondents went from estimating that 30 percent of the population agreed with them to thinking that 45 percent did. And, in general, this group was the least likely to change its opinion between the two surveys.
But there was also evidence of pluralistic ignorance. Every single group grossly overestimated the number of people who were unsure about climate change or convinced it wasn’t occurring. Even those who were convinced that humans were changing the climate put 20 percent of Australians into each of these two groups. Image: Flood victims. Courtesy of NRDC.
- From Finely Textured Beef to Soylent Pink>
Blame corporate euphemisms and branding for the obfuscation of everyday things. More sinister yet is the constant re-working of names for our increasingly processed foodstuffs. Only last year, as several influential health studies pointed towards the detrimental health effects of high-fructose corn syrup (HFCS), did the food industry act, but not by removing copious amounts of the addictive additive from many processed foods. Rather, the industry attempted to re-brand HFCS as “corn sugar”. And now, on to the battle over “soylent pink”, also known as “pink slime”. From Slate:
What do you call a mash of beef trimmings that have been chopped and then spun in a centrifuge to remove the fatty bits and gristle? According to the government and to the company that invented the process, you call it lean finely textured beef. But to the natural-food crusaders who would have the stuff removed from the nation’s hamburgers and tacos, the protein-rich product goes by another, more disturbing name: Pink slime.
The story of this activist rebranding—from lean finely textured beef to pink slime—reveals just how much these labels matter. It was the latter phrase that, for example, birthed the great ground-beef scare of 2012. In early March, journalists at both the Daily and at ABC began reporting on a burger panic: Lax rules from the U.S. Department of Agriculture allowed producers to fill their ground-beef packs with a slimy, noxious byproduct—a mush the reporters called unsanitary and without much value as a food. Coverage linked back to a New York Times story from 2009 in which the words pink slime had appeared in public for the first time in a quote from an email written by a USDA microbiologist who was frustrated at a decision to leave the additive off labels for ground meat.
The slimy terror spread in the weeks that followed. Less than a month after ABC’s initial reports, almost a quarter million people had signed a petition to get pink slime out of public school cafeterias. Supermarket chains stopped selling burger meat that contained it—all because of a shift from four matter-of-fact words to two visceral ones.
And now that rebranding has become the basis for a 263-page lawsuit. Last month, Beef Products Inc., the first and principal producer of lean/pink/textured/slimy beef, filed a defamation claim against ABC (along with that microbiologist and a former USDA inspector) in a South Dakota court. The company says the network carried out a malicious and dishonest campaign to discredit its ground-beef additive and that this work had grievous consequences. When ABC began its coverage, Beef Products Inc. was selling 5 million pounds of slime/beef/whatever every week. Then three of its four plants were forced to close, and production dropped to 1.6 million pounds. A weekly profit of $2.3 million had turned into a $583,000 weekly loss.
At Reuters, Steven Brill argued that the suit has merit. I won’t try to comment on its legal viability, but the details of the claim do provide some useful background about how we name our processed foods, in both industry and the media. It turns out the paste now known within the business as lean finely textured beef descends from an older, less purified version of the same. Producers have long tried to salvage the trimmings from a cattle carcass by cleaning off the fat and the bacteria that often congregate on these leftover parts. At best they could achieve a not-so-lean class of meat called partially defatted chopped beef, which USDA deemed too low in quality to be a part of hamburger or ground meat.
By the late 1980s, though, Eldon Roth of Beef Products Inc. had worked out a way to make those trimmings a bit more wholesome. He’d found a way, using centrifuges, to separate the fat more fully. In 1991, USDA approved his product as fat reduced beef and signed off on its use in hamburgers. JoAnn Smith, a government official and former president of the National Cattlemen’s Association, signed off on this “euphemistic designation,” writes Marion Nestle in Food Politics. (Beef Products, Inc. maintains that this decision “was not motivated by any official’s so-called ‘links to the beef industry.’ “) So 20 years ago, the trimmings had already been reformulated and rebranded once.
But the government still said that fat reduced beef could not be used in packages marked “ground beef.” (The government distinction between hamburger and ground beef is that the former can contain added fat, while the latter can’t.) So Beef Products Inc. pressed its case, and in 1993 it convinced the USDA to approve the mash for wider use, with a new and better name: lean finely textured beef. A few years later, Roth started killing the microbes on his trimmings with ammonia gas and got approval to do that, too. With government permission, the company went on to sell several billion pounds of the stuff in the next two decades.
In the meantime, other meat processors started making something similar but using slightly different names. AFA Foods (which filed for bankruptcy in April after the recent ground-beef scandal broke) has referred to its products as boneless lean beef trimmings, a more generic term. Cargill, which decontaminates its meat with citric acid in place of ammonia gas, calls its mash of trimmings finely textured beef.
Image: Industrial ground beef. Courtesy of Wikipedia.
- GigaBytes and TeraWatts>
Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.
Yet while business models for the providers of these internet services remain ethereal, one segment of this business ecosystem is salivating at the staggering demand for electrical power: electricity companies and utilities.
From the New York Times:
Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.
The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.
Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.
That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.
They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.
A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.
Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.
To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.
Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
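The equivalence in the paragraph above is simple arithmetic and easy to check. The sketch below uses only the article’s estimates; the one-gigawatt-per-plant figure is my assumption (a typical large reactor), not a number from the Times.

```python
# Back-of-envelope check of the data center power figures quoted above.
# Inputs are the article's estimates; the per-plant output is an
# assumed round number for a large nuclear reactor.
digital_warehouses_w = 30e9   # ~30 billion watts used worldwide
nuclear_plant_w = 1e9         # assumed ~1 GW per plant

plants_equivalent = digital_warehouses_w / nuclear_plant_w
us_share_low = digital_warehouses_w * 0.25   # one-quarter of the load
us_share_high = digital_warehouses_w / 3     # one-third of the load

print(f"{plants_equivalent:.0f} nuclear plants' worth of power")
print(f"US data centers: {us_share_low / 1e9:.1f} to {us_share_high / 1e9:.1f} GW")
```

At an assumed gigawatt per plant, the “roughly 30 nuclear power plants” equivalence falls out directly, and the US share works out to 7.5 to 10 gigawatts.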
“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”
Image courtesy of the AP / Thanassis Stavrakis.
- A Link Between BPA and Obesity>
You have probably heard of BPA. It’s a compound used in the manufacture of many plastics, especially hard, polycarbonate plastics. Interestingly, it has hormone-like characteristics, mimicking estrogen. As a result, BPA crops up in many studies that show adverse health effects. As a precaution, the U.S. Food and Drug Administration (FDA) several years ago banned the use of BPA in products aimed at young children, such as baby bottles. But the evidence remains inconsistent, so BPA is still found in many products today. Now comes another study linking BPA to obesity.
From Smithsonian:
Since the 1960s, manufacturers have widely used the chemical bisphenol-A (BPA) in plastics and food packaging. Only recently, though, have scientists begun thoroughly looking into how the compound might affect human health—and what they’ve found has been a cause for concern.
Starting in 2006, a series of studies, mostly in mice, indicated that the chemical might act as an endocrine disruptor (by mimicking the hormone estrogen), cause problems during development and potentially affect the reproductive system, reducing fertility. After a 2010 Food and Drug Administration report warned that the compound could pose an especially hazardous risk for fetuses, infants and young children, BPA-free water bottles and food containers started flying off the shelves. In July, the FDA banned the use of BPA in baby bottles and sippy cups, but the chemical is still present in aluminum cans, containers of baby formula and other packaging materials.
Now comes another piece of data on a potential risk from BPA but in an area of health in which it has largely been overlooked: obesity. A study by researchers from New York University, published today in the Journal of the American Medical Association, looked at a sample of nearly 3,000 children and teens across the country and found a “significant” link between the amount of BPA in their urine and the prevalence of obesity.
“This is the first association of an environmental chemical in childhood obesity in a large, nationally representative sample,” said lead investigator Leonardo Trasande, who studies the role of environmental factors in childhood disease at NYU. “We note the recent FDA ban of BPA in baby bottles and sippy cups, yet our findings raise questions about exposure to BPA in consumer products used by older children.”
The researchers pulled data from the 2003 to 2008 National Health and Nutrition Examination Surveys, and after controlling for differences in ethnicity, age, caregiver education, income level, sex, caloric intake, television viewing habits and other factors, they found that children and adolescents with the highest levels of BPA in their urine had a 2.6 times greater chance of being obese than those with the lowest levels. Overall, 22.3 percent of those in the quartile with the highest levels of BPA were obese, compared with just 10.3 percent of those in the quartile with the lowest levels of BPA.
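The “2.6 times greater chance” is an odds ratio from the study’s covariate-adjusted model; it is worth seeing how close the raw quartile prevalences quoted above come to it on their own. A minimal sketch, using only the two percentages in the paragraph:

```python
# Sketch: how the quoted quartile prevalences relate to the reported
# "2.6 times greater chance" (an odds ratio). The 2.6 figure itself
# comes from the study's covariate-adjusted model; the raw prevalences
# give a slightly lower, unadjusted value.
p_high = 0.223   # obesity prevalence, highest-BPA quartile
p_low = 0.103    # obesity prevalence, lowest-BPA quartile

risk_ratio = p_high / p_low
odds_ratio = (p_high / (1 - p_high)) / (p_low / (1 - p_low))

print(f"risk ratio: {risk_ratio:.2f}")   # ~2.17
print(f"odds ratio: {odds_ratio:.2f}")   # ~2.50
```

The unadjusted odds ratio of roughly 2.5 sits just below the adjusted 2.6, and both are noticeably higher than the plain risk ratio of about 2.2, a common source of confusion when “times greater chance” figures are reported.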
The vast majority of BPA in our bodies comes from ingestion of contaminated food and water. The compound is often used as an internal barrier in food packaging, so that the product we eat or drink does not come into direct contact with a metal can or plastic container. When heated or washed, though, plastics containing BPA can break down and release the chemical into the food or liquid they hold. As a result, roughly 93 percent of the U.S. population has detectable levels of BPA in their urine.
The researchers point specifically to the continuing presence of BPA in aluminum cans as a major problem. “Most people agree the majority of BPA exposure in the United States comes from aluminum cans,” Trasande said. “Removing it from aluminum cans is probably one of the best ways we can limit exposure. There are alternatives that manufacturers can use to line aluminum cans.”
Image: Bisphenol A. Courtesy of Wikipedia.
- An Answer is Blowing in the Wind>
Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from several million wind turbines.
From Ars Technica:
Is there enough wind blowing across the planet to satiate our demands for electricity? And if there is, would harnessing that much of it begin to actually affect the climate?
Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.
The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?
In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by the National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.
The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.
The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobson at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?
For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
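The 59.3 percent figure is the classical Betz limit, which follows from one-dimensional actuator-disk theory: if a turbine slows the incoming wind by a fraction a at the rotor, the extracted power coefficient is Cp(a) = 4a(1 − a)², maximized at a = 1/3. A quick numerical sketch confirming the limit:

```python
# The Betz limit falls out of simple actuator-disk theory: with 'a'
# the fractional slowdown of the wind at the rotor, the power
# coefficient is Cp(a) = 4a(1-a)^2, maximized at a = 1/3.
def power_coefficient(a):
    return 4 * a * (1 - a) ** 2

# Brute-force the maximum over a fine grid of induction factors.
best_a = max((a / 1000 for a in range(1, 1000)), key=power_coefficient)
betz = 16 / 27   # the analytic maximum, Cp(1/3)

print(f"optimal a ~ {best_a:.3f}")             # ~0.333
print(f"Cp max    = {betz:.4f} ({betz:.1%})")  # 0.5926 (59.3%)
```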
Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding new turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobson and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.
Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue improper placement will lead to incorrect reductions in wind speed.
Jacobson and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are placed only on land and in shallow offshore locations, the saturation point is 80 terawatts, for 1,500 terawatts of installed capacity.
For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.
These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the numbers of turbines required aren’t plausible—300 million five-megawatt turbines in the smallest case (land plus shallow offshore).
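The article’s closing numbers can be cross-checked directly. The sketch below uses only figures quoted in the excerpt; the “effective capacity factor” line is my derived quantity, not one the studies report.

```python
# Cross-check of the headline numbers (figures as quoted in the text).
demand_tw = 18           # current global power demand
land_saturation_tw = 80  # extractable, land + shallow offshore
installed_tw = 1500      # installed capacity needed to reach saturation
turbine_mw = 5           # one large modern turbine

turbines = installed_tw * 1e6 / turbine_mw        # TW -> MW, then divide
effective_cf = land_saturation_tw / installed_tw  # derived, not quoted

print(f"turbines needed: {turbines / 1e6:.0f} million")         # 300 million
print(f"demand covered:  {land_saturation_tw / demand_tw:.1f}x")  # ~4.4x
print(f"effective capacity factor at saturation: {effective_cf:.1%}")
```

1,500 terawatts of installed capacity divided by 5 megawatts per turbine gives the quoted 300 million machines, while the 80 extractable terawatts would still cover today’s 18-terawatt demand more than four times over.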
- Air Conditioning in a Warming World> From the New York Times:
THE blackouts that left hundreds of millions of Indians sweltering in the dark last month underscored the status of air-conditioning as one of the world’s most vexing environmental quandaries.
Fact 1: Nearly all of the world’s booming cities are in the tropics and will be home to an estimated one billion new consumers by 2025. As temperatures rise, they — and we — will use more air-conditioning.
Fact 2: Air-conditioners draw copious electricity, and deliver a double whammy in terms of climate change, since both the electricity they use and the coolants they contain result in planet-warming emissions.
Fact 3: Scientific studies increasingly show that health and productivity rise significantly if indoor temperature is cooled in hot weather. So cooling is not just about comfort.
Sum up these facts and it’s hard to escape: Today’s humans probably need air-conditioning if they want to thrive and prosper. Yet if all those new city dwellers use air-conditioning the way Americans do, life could be one stuttering series of massive blackouts, accompanied by disastrous planet-warming emissions.
We can’t live with air-conditioning, but we can’t live without it.
“It is true that air-conditioning made the economy happen for Singapore and is doing so for other emerging economies,” said Pawel Wargocki, an expert on indoor air quality at the International Center for Indoor Environment and Energy at the Technical University of Denmark. “On the other hand, it poses a huge threat to global climate and energy use. The current pace is very dangerous.”
Projections of air-conditioning use are daunting. In 2007, only 11 percent of households in Brazil and 2 percent in India had air-conditioning, compared with 87 percent in the United States, which has a more temperate climate, said Michael Sivak, a research professor in energy at the University of Michigan. “There is huge latent demand,” Mr. Sivak said. “Current energy demand does not yet reflect what will happen when these countries have more money and more people can afford air-conditioning.” He has estimated that, based on its climate and the size of the population, the cooling needs of Mumbai alone could be about a quarter of those of the entire United States, which he calls “one scary statistic.”
It is easy to decry the problem but far harder to know what to do, especially in a warming world where people in the United States are using our existing air-conditioners more often. The number of cooling degree days — a measure of how often cooling is needed — was 17 percent above normal in the United States in 2010, according to the Environmental Protection Agency, leading to “an increase in electricity demand.” This July was the hottest ever in the United States.
Likewise, the blackouts in India were almost certainly related to the rising use of air-conditioning and cooling, experts say, even if the immediate culprit was a grid that did not properly balance supply and demand.
The late arrival of this year’s monsoons, which normally put an end to India’s hottest season, may have devastated the incomes of farmers who needed the rain. But it “put smiles on the faces of those who sell white goods — like air-conditioners and refrigerators — because it meant lots more sales,” said Rajendra Shende, chairman of the Terre Policy Center in Pune, India.
“Cooling is the craze in India — everyone loves cool temperatures and getting to cool temperatures as quickly as possible,” Mr. Shende said. He said that cooling has become such a cultural priority that rather than advertise a car’s acceleration, salesmen in India now emphasize how fast its air-conditioner can cool.
Scientists are scrambling to invent more efficient air-conditioners and better coolant gases to minimize electricity use and emissions. But so far the improvements have been dwarfed by humanity’s rising demands.
And recent efforts to curb the use of air-conditioning, by fiat or persuasion, have produced sobering lessons.
Image courtesy of Parkland Air Conditioning.
- When to Eat Your Fruit and Veg>
It’s time to jettison the $1.99 hyper-burger and super-sized fries and try some real fruits and vegetables. You know — the kind of produce that comes directly from the soil. But when is the best time to suck on a juicy peach or chomp some crispy radicchio?
A great chart, below, summarizes which fruits and vegetables are generally in season for the Northern Hemisphere.
Infographic courtesy of Visual News, designed by Column Five.
- Extreme Weather as the New Norm>
Melting glaciers at the poles, wildfires in the western United States, severe flooding across Europe and parts of Asia, hurricanes in northern Australia, warmer temperatures across the globe. According to many climatologists, including a growing number of ex-climate-change skeptics, this is the new normal for the foreseeable future. Welcome to the changed climate.
From the New York Times:
BY many measurements, this summer’s drought is one for the record books. But so was last year’s drought in the South Central states. And it has been only a decade since an extreme five-year drought hit the American West. Widespread annual droughts, once a rare calamity, have become more frequent and are set to become the “new normal.”
Until recently, many scientists spoke of climate change mainly as a “threat,” sometime in the future. But it is increasingly clear that we already live in the era of human-induced climate change, with a growing frequency of weather and climate extremes like heat waves, droughts, floods and fires.
Future precipitation trends, based on climate model projections for the coming fifth assessment from the Intergovernmental Panel on Climate Change, indicate that droughts of this length and severity will be commonplace through the end of the century unless human-induced carbon emissions are significantly reduced. Indeed, assuming business as usual, each of the next 80 years in the American West is expected to see less rainfall than the average of the five years of the drought that hit the region from 2000 to 2004.
That extreme drought (which we have analyzed in a new study in the journal Nature-Geoscience) had profound consequences for carbon sequestration, agricultural productivity and water resources: plants, for example, took in only half the carbon dioxide they do normally, thanks to a drought-induced drop in photosynthesis.
In the drought’s worst year, Western crop yields were down by 13 percent, with many local cases of complete crop failure. Major river basins showed 5 percent to 50 percent reductions in flow. These reductions persisted up to three years after the drought ended, because the lakes and reservoirs that feed them needed several years of average rainfall to return to predrought levels.
In terms of severity and geographic extent, the 2000-4 drought in the West exceeded such legendary events as the Dust Bowl of the 1930s. While that drought saw intervening years of normal rainfall, the years of the turn-of-the-century drought were consecutive. More seriously still, long-term climate records from tree-ring chronologies show that this drought was the most severe event of its kind in the western United States in the past 800 years. Though there have been many extreme droughts over the last 1,200 years, only three other events have been of similar magnitude, all during periods of “megadroughts.”
Most frightening is that this extreme event could become the new normal: climate models point to a warmer planet, largely because of greenhouse gas emissions. Planetary warming, in turn, is expected to create drier conditions across western North America, because of the way global-wind and atmospheric-pressure patterns shift in response.
Indeed, scientists see signs of the relationship between warming and drought in western North America by analyzing trends over the last 100 years; evidence suggests that the more frequent drought and low precipitation events observed for the West during the 20th century are associated with increasing temperatures across the Northern Hemisphere.
These climate-model projections suggest that what we consider today to be an episode of severe drought might even be classified as a period of abnormal wetness by the end of the century and that a coming megadrought — a prolonged, multidecade period of significantly below-average precipitation — is possible and likely in the American West.Image courtesy of the Sun.
- A Climate Change Skeptic Recants>
A climate change skeptic recants. Of course, disbelievers in human-influenced climate change will point out that physicist Richard Muller announced his reversal in a New York Times op-ed, and will cite the venue itself as evidence of flagrant falsehood and unmitigated bias.
Several years ago Muller set up the Berkeley Earth project to collect and analyze land-surface temperature records from sources independent of NASA and NOAA. Convinced, at the time, that climate change researchers had the numbers all wrong, Muller and team set out to find the proof.
From the New York Times:
CALL me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.
My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.
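Muller’s two figures imply an accelerating trend, which is easy to make explicit. A small arithmetic sketch on the numbers as quoted (degrees Fahrenheit):

```python
# The quoted figures imply the warming has accelerated: the average
# rate over the last 50 years is roughly three times the 250-year
# average (simple arithmetic on the article's numbers).
rise_250yr, span_250 = 2.5, 250   # F rise over the past 250 years
rise_50yr, span_50 = 1.5, 50      # F rise over the most recent 50 years

rate_long = rise_250yr / span_250    # 0.01 F/yr
rate_recent = rise_50yr / span_50    # 0.03 F/yr

print(f"long-run rate: {rate_long:.3f} F/yr")
print(f"recent rate:   {rate_recent:.3f} F/yr ({rate_recent / rate_long:.0f}x)")
```

The average rate over the most recent 50 years is three times the 250-year average.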
These findings are stronger than those of the Intergovernmental Panel on Climate Change, the United Nations group that defines the scientific and diplomatic consensus on global warming. In its 2007 report, the I.P.C.C. concluded only that most of the warming of the prior 50 years could be attributed to humans. It was possible, according to the I.P.C.C. consensus statement, that the warming before 1956 could be because of changes in solar activity, and that even a substantial part of the more recent warming could be natural.
Our Berkeley Earth approach used sophisticated statistical methods developed largely by our lead scientist, Robert Rohde, which allowed us to determine earth land temperature much further back in time. We carefully studied issues raised by skeptics: biases from urban heating (we duplicated our results using rural data alone), from data selection (prior groups selected fewer than 20 percent of the available temperature stations; we used virtually 100 percent), from poor station quality (we separately analyzed good stations and poor ones) and from human intervention and data adjustment (our work is completely automated and hands-off). In our papers we demonstrate that none of these potentially troublesome effects unduly biased our conclusions.
The historic temperature pattern we observed has abrupt dips that match the emissions of known explosive volcanic eruptions; the particulates from such events reflect sunlight, make for beautiful sunsets and cool the earth’s surface for a few years. There are small, rapid variations attributable to El Niño and other ocean currents such as the Gulf Stream; because of such oscillations, the “flattening” of the recent temperature rise that some people claim is not, in our view, statistically significant. What has caused the gradual but systematic rise of two and a half degrees? We tried fitting the shape to simple math functions (exponentials, polynomials), to solar activity and even to rising functions like world population. By far the best match was to the record of atmospheric carbon dioxide, measured from atmospheric samples and air trapped in polar ice.
Just as important, our record is long enough that we could search for the fingerprint of solar variability, based on the historical record of sunspots. That fingerprint is absent. Although the I.P.C.C. allowed for the possibility that variations in sunlight could have ended the “Little Ice Age,” a period of cooling from the 14th century to about 1850, our data argues strongly that the temperature rise of the past 250 years cannot be attributed to solar changes. This conclusion is, in retrospect, not too surprising; we’ve learned from satellite measurements that solar activity changes the brightness of the sun very little.
How definite is the attribution to humans? The carbon dioxide curve gives a better match than anything else we’ve tried. Its magnitude is consistent with the calculated greenhouse effect — extra warming from trapped heat radiation. These facts don’t prove causality and they shouldn’t end skepticism, but they raise the bar: to be considered seriously, an alternative explanation must match the data at least as well as carbon dioxide does. Adding methane, a second greenhouse gas, to our analysis doesn’t change the results. Moreover, our analysis does not depend on large, complex global climate models, the huge computer programs that are notorious for their hidden assumptions and adjustable parameters. Our result is based simply on the close agreement between the shape of the observed temperature rise and the known greenhouse gas increase.
It’s a scientist’s duty to be properly skeptical. I still find that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I’ve analyzed some of the most alarmist claims, and my skepticism about them hasn’t changed.
Image: Global land-surface temperature with a 10-year moving average. Courtesy of Berkeley Earth.
- Best Countries for Women>
If you’re female and value lengthy life expectancy, comprehensive reproductive health services, sound education and equality with males, where should you live? In short, Scandinavia, Australia and New Zealand, and Northern Europe. In a list of the 44 most well-developed nations, the United States ranks towards the middle, just below Canada and Estonia, but above Greece, Italy, Russia and most of Central and Eastern Europe.
The fascinating infographic from the National Post does a great job of summarizing the current state of women’s affairs, drawing on data gathered from 165 countries.
Read the entire article and find a higher quality infographic after the jump.
- Two Degrees>
Author and environmentalist Bill McKibben has been writing about climate change and environmental issues for over 20 years. His first book, The End of Nature, was published in 1989, and is considered to be the first book aimed at the general public on the subject of climate change.
In his latest essay in Rolling Stone, which we excerpt below, McKibben offers a sobering assessment based on our current lack of action on a global scale. He argues that in the face of governmental torpor, and with individual action almost inconsequential at this late stage, only a radical re-invention of our fossil-fuel companies into energy companies in the broadest sense can bring significant and lasting change.
Learn more about Bill McKibben here.
From Rolling Stone:
If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were about 1 in 3.7 × 10⁹⁹, a number considerably larger than the number of stars in the universe.
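A sketch of where that astronomically small chance comes from: if each month independently had a 50/50 chance of landing above the 20th-century average, a 327-month streak would have probability 0.5 raised to the 327th power. The independence assumption is a simplification of mine, not the article’s stated methodology, but it reproduces the quoted order of magnitude.

```python
# If each month independently had a 50/50 chance of landing above the
# 20th-century average, the probability of 327 above-average months in
# a row would be 0.5**327 (a simplification: months are not actually
# independent, but the order of magnitude matches the quoted figure).
p_one_month = 0.5
streak = 327

p_streak = p_one_month ** streak
print(f"P = {p_streak:.2e}")   # ~3.66e-99
```

Python doubles can represent values down to roughly 5 × 10⁻³²⁴, so this computation does not underflow.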
Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.
Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.
When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.
The First Number: 2° Celsius
If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”
In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.
The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.
Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”
Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.
Read the entire article after the jump.
Image: Emissions from industry have helped increase the levels of carbon dioxide in the atmosphere, driving climate change. Courtesy of New Scientist / Eye Ubiquitous / Rex Features.
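An aside on the odds figure quoted in the excerpt: it corresponds to treating each of the 327 consecutive warmer-than-average months as an independent coin flip with a 50/50 chance of landing above the 20th-century average. That independence assumption is a simplification (monthly temperatures are correlated), but the arithmetic is easy to check:

```python
# Model each of 327 consecutive months as an independent 50/50 chance
# of exceeding the 20th-century average -- a simplifying assumption.
months = 327
odds = 0.5 ** months
print(f"{odds:.1e}")  # about 3.7e-99, matching the article's figure
```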
- Your Life Expectancy Mapped>
Your life expectancy mapped, that is, if you live in London, U.K. So, take the iconic London tube (subway) map, then overlay it with figures for average life expectancy. Voila, you get to see how your neighbors on the Piccadilly Line fare in their longevity compared with, say, you, who happen to live near a Central Line station. It turns out that in some cases adjacent areas — as depicted by nearby but different subway stations — show an astounding gap of more than 20 years in projected life span.
So, what is at work? And, more importantly, should you move to Bond Street where the average life expectancy is 96 years, versus only 79 in Kennington, South London?
From the Atlantic:
Last year’s dystopian action flick In Time has Justin Timberlake playing a street rat who suddenly comes into a great deal of money — only the currency isn’t cash, it’s time. Hours and minutes of Timberlake’s life that can be traded just like dollars and cents in our world. Moving from poor districts to rich ones, and vice versa, requires Timberlake to pay a toll, each time shaving off a portion of his life savings.
Literally paying with your life just to get around town seems like — you guessed it — pure science fiction. It’s absolute baloney to think that driving or taking a crosstown bus could result in a shorter life (unless you count this). But a project by University College London researchers called Lives on the Line echoes something similar with a map that plots local differences in life expectancy based on the nearest Tube stop.
The trends are largely unsurprising, and correlate mostly with wealth. Britons living in the ritzier West London tend to have longer expected lifespans compared to those who live in the east or the south. Those residing near the Oxford Circus Tube stop have it the easiest, with an average life expectancy of 96 years. Going into less wealthy neighborhoods in south and east London, life expectancy begins to drop — though it still hovers in the respectable range of 78-79.
Meanwhile, differences in life expectancy between even adjacent stations can be stark. Britons living near Pimlico are predicted to live six years longer than those just across the Thames near Vauxhall. There’s about a two-decade difference between those living in central London compared to those near some stations on the Docklands Light Railway, according to the BBC. Similarly, moving from Tottenham Court Road to Holborn will also shave six years off the Londoner’s average life expectancy.
Michael Marmot, a UCL professor who wasn’t involved in the project, put the numbers in international perspective.
“The difference between Hackney and the West End,” Marmot told the BBC, “is the same as the difference between England and Guatemala in terms of life expectancy.”
Image courtesy of Atlantic / MappingLondon.co.uk.
- The North Continues to Melt Away>
On July 16, 2012, the Petermann Glacier in Greenland calved another gigantic island of ice, about twice the size of Manhattan in New York, or about 46 square miles. Climatologists armed with NASA satellite imagery have been following the glacier for many years, and first spotted the break-off point around 8 years ago. The Petermann Glacier calved a previous huge iceberg, twice this size, in 2010.
According to NASA, average temperatures in northern Greenland and the Canadian Arctic have increased by about 4 degrees Fahrenheit in the last 30 years.
So, whether it is short-term or long-term, temporary or irreversible, man-made or a natural cycle, the trend is clear: the Arctic is warming, the ice cap is shrinking and sea levels are rising.
From the Economist:
Standing on the Greenland ice cap, it is obvious why restless modern man so reveres wild places. Everywhere you look, ice draws the eye, squeezed and chiselled by a unique coincidence of forces. Gormenghastian ice ridges, silver and lapis blue, ice mounds and other frozen contortions are minutely observable in the clear Arctic air. The great glaciers impose order on the icy sprawl, flowing down to a semi-frozen sea.
The ice cap is still, frozen in perturbation. There is not a breath of wind, no engine’s sound, no bird’s cry, no hubbub at all. Instead of noise, there is its absence. You feel it as a pressure behind the temples and, if you listen hard, as a phantom roar. For generations of frosty-whiskered European explorers, and still today, the ice sheet is synonymous with the power of nature.
The Arctic is one of the world’s least explored and last wild places. Even the names of its seas and rivers are unfamiliar, though many are vast. Siberia’s Yenisey and Lena each carries more water to the sea than the Mississippi or the Nile. Greenland, the world’s biggest island, is six times the size of Germany. Yet it has a population of just 57,000, mostly Inuit scattered in tiny coastal settlements. In the whole of the Arctic—roughly defined as the Arctic Circle and a narrow margin to the south (see map)—there are barely 4m people, around half of whom live in a few cheerless post-Soviet cities such as Murmansk and Magadan. In most of the rest, including much of Siberia, northern Alaska, northern Canada, Greenland and northern Scandinavia, there is hardly anyone. Yet the region is anything but inviolate.
A heat map of the world, colour-coded for temperature change, shows the Arctic in sizzling maroon. Since 1951 it has warmed roughly twice as much as the global average. In that period the temperature in Greenland has gone up by 1.5°C, compared with around 0.7°C globally. This disparity is expected to continue. A 2°C increase in global temperatures—which appears inevitable as greenhouse-gas emissions soar—would mean Arctic warming of 3-6°C.
Almost all Arctic glaciers have receded. The area of Arctic land covered by snow in early summer has shrunk by almost a fifth since 1966. But it is the Arctic Ocean that is most changed. In the 1970s, 80s and 90s the minimum extent of polar pack ice fell by around 8% per decade. Then, in 2007, the sea ice crashed, melting to a summer minimum of 4.3m sq km (1.7m square miles), close to half the average for the 1960s and 24% below the previous minimum, set in 2005. This left the north-west passage, a sea lane through Canada’s 36,000-island Arctic Archipelago, ice-free for the first time in memory.
Scientists, scrambling to explain this, found that in 2007 every natural variation, including warm weather, clear skies and warm currents, had lined up to reinforce the seasonal melt. But last year there was no such remarkable coincidence: it was as normal as the Arctic gets these days. And the sea ice still shrank to almost the same extent.
There is no serious doubt about the basic cause of the warming. It is, in the Arctic as everywhere, the result of an increase in heat-trapping atmospheric gases, mainly carbon dioxide released when fossil fuels are burned. Because the atmosphere is shedding less solar heat, it is warming—a physical effect predicted back in 1896 by Svante Arrhenius, a Swedish scientist. But why is the Arctic warming faster than other places?
Consider, first, how very sensitive to temperature change the Arctic is because of where it is. In both hemispheres the climate system shifts heat from the steamy equator to the frozen pole. But in the north the exchange is much more efficient. This is partly because of the lofty mountain ranges of Europe, Asia and America that help mix warm and cold fronts, much as boulders churn water in a stream. Antarctica, surrounded by the vast southern seas, is subject to much less atmospheric mixing.
The land masses that encircle the Arctic also prevent the polar oceans revolving around it as they do around Antarctica. Instead they surge, north-south, between the Arctic land masses in a gigantic exchange of cold and warm water: the Pacific pours through the Bering Strait, between Siberia and Alaska, and the Atlantic through the Fram Strait, between Greenland and Norway’s Svalbard archipelago.
That keeps the average annual temperature for the high Arctic (the northernmost fringes of land and the sea beyond) at a relatively sultry -15°C; much of the rest is close to melting-point for much of the year. Even modest warming can therefore have a dramatic effect on the region’s ecosystems. The Antarctic is also warming, but with an average annual temperature of -57°C it will take more than a few hot summers for this to become obvious.
Image: Sequence of three images showing the Petermann Glacier sliding toward the sea along the northwestern coast of Greenland, terminating in a huge, new floating ice island. Courtesy: NASA.
- A Different Kind of Hotel>
Bored of the annual family trip to Disneyland? Tired of staying in a suite hotel that still offers Muzak in the lobby, floral motifs on the walls, and ashtrays and saccharin packets next to the rickety minibar? Well, leaf through this list of 10 exotic and gorgeous hotels and start planning your next real escape today.
Wadi Rum Desert Lodge – The Valley of the Moon, Jordan.
From Flavorwire:
A Backward Glance, Pulitzer Prize-winning author Edith Wharton’s gem of an autobiography, is highbrow beach reading at its very best. In the memoir, she recalls time spent with her bff traveling buddy, Henry James, and quotes his arcadian proclamation, “summer afternoon — summer afternoon; to me those have always been the two most beautiful words in the English language.” Maybe so in the less than industrious heyday of inherited wealth, but in today’s world where most people work all day for a living, those two words just don’t have the same appeal as our two favorite words: summer getaway.
Like everyone else in our overworked and overheated city, rest and relaxation are all we can think about — especially on a hot Friday afternoon like this. In considering options for our celebrated summer respite, we thought we’d take a virtual gander to check out alternatives to the usual Hamptons summer share. From a treehouse where sloths join you for morning coffee to a giant sandcastle, click through to see some of the most unusual summer getaway destinations in the world.
See more stunning hotels after the jump.
- National Education Rankings: C->
One would believe that the most affluent and open country on the planet would have one of the best, if not the best, education systems. Yet, the United States of America distinguishes itself by being thoroughly mediocre in a ranking of developed nations in science, mathematics and reading. How can we make amends for our children?
From Slate:
Take the 2009 PISA test, which assessed the knowledge of students from 65 countries and economies—34 of which are members of the development organization the OECD, including the United States—in math, science, and reading. Of the OECD countries, the United States came in 17th place in science literacy; of all countries and economies surveyed, it came in 23rd place. The U.S. score of 502 practically matched the OECD average of 501. That puts us firmly in the middle. Where we don’t want to be.
What do the leading countries do differently? To find out, Slate asked science teachers from five countries that are among the world’s best in science education—Finland, Singapore, South Korea, New Zealand, and Canada—how they approach their subject and the classroom. Their recommendations: Keep students engaged and make the science seem relevant.
Finland: “To Make Students Enjoy Chemistry Is Hard Work”
Finland was first among the 34 OECD countries in the 2009 PISA science rankings and second—behind mainland China—among all 65 nations and economies that took the test. Ari Myllyviita teaches chemistry and works with future science educators at the Viikki Teacher Training School of Helsinki University.
Finland’s National Core Curriculum is premised on the idea “that learning is a result of a student’s active and focused actions aimed to process and interpret received information in interaction with other students, teachers and the environment and on the basis of his or her existing knowledge structures.”
My conception of learning rests strongly on this citation from our curriculum. My aim is to support knowledge-building socioculturally: to create socially supported activity in the student’s zone of proximal development (the area where a student needs some support to achieve the next level of understanding or skill). The student’s previous knowledge is the starting point, and then the learning is bound to the activity during lessons—experiments, simulations, and observing phenomena.
The National Core Curriculum also states, “The purpose of instruction in chemistry is to support development of students’ scientific thinking and modern worldview.” Our teaching is based on examination and observations of substances and chemical phenomena, their structures and properties, and reactions between substances. Through experiments and theoretical models, students are taught to understand everyday life and nature. In my classroom, I use discussion, lectures, demonstrations, and experimental work—quite often based on group work. Between lessons, I use social media and other information communication technologies to stay in touch with students.
In addition to the National Core Curriculum, my school has its own. They have the same bases, but our own curriculum is more concrete. Based on these, I write my course and lesson plans. Because of different learning styles, I use different kinds of approaches, sometimes theoretical and sometimes experimental. Always there are new concepts and perhaps new models to explain the phenomena or results.
To make students enjoy learning chemistry is hard work. I think that as a teacher, you have to love your subject and enjoy teaching even when there are sometimes students who don’t pay attention to you. But I get satisfaction when I can give a purpose for the future by being a supportive teacher.
New Zealand: “Students Disengage When a Teacher Is Simply Repeating Facts or Ideas”
New Zealand came in seventh place out of 65 in the 2009 PISA assessment. Steve Martin is head of junior science at Howick College. In 2010, he received the prime minister’s award for science teaching.
Science education is an important part of preparing students for their role in the community. Scientific understanding will allow them to engage in issues that concern them now and in the future, such as genetically modified crops. In New Zealand, science is also viewed as having a crucial role to play in the future of the economic health of the country. This can be seen in the creation of the “Prime Minister’s Science Prizes,” a program that identifies the nation’s leading scientists, emerging and future scientists, and science teachers.
The New Zealand Science Curriculum allows for flexibility depending on contextual factors such as school location, interests of students, and teachers’ specialization. The curriculum has the “Nature of Science” as its foundation, which supports students learning the skills essential to a scientist, such as problem-solving and effective communication. The Nature of Science refers to the skills required to work as a scientist, how to communicate science effectively through science-specific vocabulary, and how to participate in debates and issues with a scientific perspective.
School administrators support innovation and risk-taking by teachers, which fosters the “let’s have a go” attitude. In my own classroom, I utilize computer technology to create virtual science lessons that support and encourage students to think for themselves and learn at their own pace. Virtual Lessons are Web-based documents that support learning in and outside the classroom. They include support for students of all abilities by providing digital resources targeted at different levels of thinking. These could include digital flashcards that support vocabulary development, videos that explain the relationships between ideas or facts, and links to websites that allow students to create cartoon animations. The students are then supported by the use of instant messaging, online collaborative documents, and email so they can get support from their peers and myself at anytime. I provide students with various levels of success criteria, which are statements that students and teachers use to evaluate performance. In every lesson I provide the students with three different levels of success criteria, each providing an increase in cognitive demand. The following is an example based on the topic of the carbon cycle:
I can identify the different parts of the carbon cycle.
I can explain how all the parts interact with each other to form the carbon cycle.
I can predict the effect that removing one part of the carbon cycle has on the environment.
These provide challenge for all abilities and at the same time make it clear what students need to do to be successful. I value creativity and innovation, and this greatly influences the opportunities I provide for students.
My students learn to love to be challenged and to see that all ideas help develop greater understanding. Students value the opportunity to contribute to others’ understanding, and they disengage when a teacher is simply repeating facts or ideas.
Image: Coloma 1914 Classroom. Courtesy of Coloma Convent School, Croydon UK.
- King Canute or Mother Nature in North Carolina, Virginia, Texas?>
Legislators in North Carolina recently went one better than King Cnut (Canute). The king of Denmark, England, Norway and parts of Sweden during various periods between 1018 and 1035 famously and unsuccessfully tried to hold back the incoming tide. The now-mythic story tells of Canute’s arrogance. Not to be outdone, North Carolina’s state legislature recently passed a law that bans state agencies from reporting that sea-level rise is accelerating.
The bill from North Carolina states:
“… rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly to estimate future rates of rise but shall not include scenarios of accelerated rates of sea-level rise.”
This comes hot on the heels of the recent revisionist push in Virginia, where references to phrases such as “sea level rise” and “climate change” are forbidden in official state communications. Last year, of course, Texas led the way for other states following the climate science denial program when the Texas Commission on Environmental Quality, which had commissioned a scientific study of Galveston Bay, removed all references to “rising sea levels”.
For more detailed reporting on this unsurprising and laughable state of affairs, check out this article at Skeptical Science.
From Scientific American:
Less than two weeks after the state’s senate passed a climate science-squelching bill, research shows that sea level along the coast between N.C. and Massachusetts is rising faster than anywhere on Earth.
Could nature be mocking North Carolina’s law-makers? Less than two weeks after the state’s senate passed a bill banning state agencies from reporting that sea-level rise is accelerating, research has shown that the coast between North Carolina and Massachusetts is experiencing the fastest sea-level rise in the world.
Asbury Sallenger, an oceanographer at the US Geological Survey in St Petersburg, Florida, and his colleagues analysed tide-gauge records from around North America. On 24 June, they reported in Nature Climate Change that since 1980, sea-level rise between Cape Hatteras, North Carolina, and Boston, Massachusetts, has accelerated to between 2 and 3.7 millimetres per year. That is three to four times the global average, and it means the coast could see 20–29 centimetres of sea-level rise on top of the metre predicted for the world as a whole by 2100 (A. H. Sallenger Jr et al. Nature Clim. Change http://doi.org/hz4; 2012).
“Many people mistakenly think that the rate of sea-level rise is the same everywhere as glaciers and ice caps melt,” says Marcia McNutt, director of the US Geological Survey. But variations in currents and land movements can cause large regional differences. The hotspot is consistent with the slowing measured in Atlantic Ocean circulation, which may be tied to changes in water temperature, salinity and density.
North Carolina’s senators, however, have tried to stop state-funded researchers from releasing similar reports. The law approved by the senate on 12 June banned scientists in state agencies from using exponential extrapolation to predict sea-level rise, requiring instead that they stick to linear projections based on historical data.
Following international opprobrium, the state’s House of Representatives rejected the bill on 19 June. However, a compromise between the house and the senate forbids state agencies from basing any laws or plans on exponential extrapolations for the next three to four years, while the state conducts a new sea-level study.
According to local media, the bill was the handiwork of industry lobbyists and coastal municipalities who feared that investors and property developers would be scared off by predictions of high sea-level rises. The lobbyists invoked a paper published in the Journal of Coastal Research last year by James Houston, retired director of the US Army Corps of Engineers’ research centre in Vicksburg, Mississippi, and Robert Dean, emeritus professor of coastal engineering at the University of Florida in Gainesville. They reported that global sea-level rise has slowed since 1930 (J. R. Houston and R. G. Dean J. Coastal Res. 27, 409–417; 2011) — a contention that climate sceptics around the world have seized on.
Speaking to Nature, Dean accused the oceanographic community of ideological bias. “In the United States, there is an overemphasis on unrealistically high sea-level rise,” he says. “The reason is budgets. I am retired, so I have the freedom to report what I find without any bias or need to chase funding.” But Sallenger says that Houston and Dean’s choice of data sets masks acceleration in the sea-level-rise hotspot.
Image courtesy of Policymic.
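The statistical point at the heart of the dispute is easy to see with a toy example. The sketch below (Python with NumPy; every number is invented for illustration, not real tide-gauge data) fits the same accelerating series two ways: with a straight line, as the bill mandates, and with a quadratic that can capture acceleration. On an accelerating series, the linear extrapolation inevitably under-predicts the level reached by 2100.

```python
import numpy as np

# Hypothetical tide-gauge record (mm): a 2 mm/yr trend plus a small
# acceleration term -- purely illustrative, not real North Carolina data.
years = np.arange(1900, 2013)
t = years - 1900
level_mm = 2.0 * t + 0.01 * t**2  # accelerating series

# Degree-1 fit: the "linear extrapolation" the bill mandates.
lin_coeffs = np.polyfit(t, level_mm, 1)
# Degree-2 fit: allows the accelerated scenarios the bill bans.
quad_coeffs = np.polyfit(t, level_mm, 2)

t_2100 = 2100 - 1900
linear_projection = np.polyval(lin_coeffs, t_2100)
accelerated_projection = np.polyval(quad_coeffs, t_2100)

# On an accelerating series, the straight-line fit falls short.
print(linear_projection < accelerated_projection)  # True
```

The gap between the two projections is exactly what a linear-only mandate hides: the fitted slope averages past behavior, so any acceleration present in the record is written out of the forecast.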
- Good Grades and Good Drugs?>
A sad story chronicling the rise in amphetamine use in the quest for good school grades. More frightening now is the increase in addiction among ever-younger kids, and not for the dubious goal of excelling at school. Many kids are just taking the drug to get high.
From the Telegraph:
The New York Times has finally woken up to America’s biggest unacknowledged drug problem: the massive overprescription of the amphetamine drug Adderall for Attention Deficit Hyperactivity Disorder. Kids have been selling each other this powerful – and extremely moreish – mood enhancer for years, as ADHD diagnoses and prescriptions for the drug have shot up.
Now, children are snorting the stuff, breaking open the capsules and ingesting it using the time-honoured tool of a rolled-up bank note.
The NYT seems to think these teenage drug users are interested in boosting their grades. It claims that, for children without ADHD, “just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward”.
Really? There are two problems with this.
First, the idea that ADHD kids are “normal” on Adderall and its methylphenidate alternative Ritalin – gentler in its effect but still a psychostimulant – is open to question. Read this scorching article by the child psychologist Prof L. Alan Sroufe, who says there’s no evidence that attention-deficit children are born with an organic disease, or that ADHD and non-ADHD kids react differently to their doctor-prescribed amphetamines. Yes, there’s an initial boost to concentration, but the effect wears off – and addiction often takes its place.
Second, the school pupils illicitly borrowing or buying Adderall aren’t necessarily doing it to concentrate on their work. They’re doing it to get high.
Adderall, with its mixture of amphetamine salts, has the ability to make you as euphoric as a line of cocaine – and keep you that way, particularly if it’s the slow-release version and you’re taking it for the first time. At least, that was my experience. Here’s what happened.
I was staying with a hospital consultant and his attorney wife in the East Bay just outside San Francisco. I’d driven overnight from Los Angeles after a flight from London; I was jetlagged, sleep-deprived and facing a deadline to write an article for the Spectator about, of all things, Bach cantatas.
Sitting in the courtyard garden with my laptop, I tapped and deleted one clumsy sentence after another. The sun was going down; my hostess saw me shivering and popped out with a blanket, a cup of herbal tea and ‘something to help you concentrate’.
I took the pill, didn’t notice any effect, and was glad when I was called in for dinner.
The dining room was a Californian take on the Second Empire. The lady next to me was a Southern Belle turned realtor, her eyelids already drooping from the effects of her third giant glass of Napa Valley chardonnay. She began to tell me about her divorce. Every time she refilled her glass, her new husband raised his eyes to heaven.
It felt as if I was stuck in an episode of Dallas, or a very bad Tennessee Williams play. But it didn’t matter in the least because, at some stage between the mozzarella salad and the grilled chicken, I’d become as high as a kite.
Adderall helps you concentrate, no doubt about it. I was riveted by the details of this woman’s alimony settlement. Even she, utterly self-obsessed as she was, was surprised by my gushing empathy. After dinner, I sat down at the kitchen table to finish the article. The head rush was beginning to wear off, but then, just as I started typing, a second wave of amphetamine pushed its way into my bloodstream. This was timed-release Adderall. Gratefully I plunged into 18th-century Leipzig, meticulously noting the catalogue numbers of cantatas. It was as if the great Johann Sebastian himself was looking over my shoulder. By the time I glanced at the clock, it was five in the morning. My pleasure at finishing the article was boosted by the dopamine high. What a lovely drug.
The blues didn’t hit me until the next day – and took the best part of a week to banish.
And this is what they give to nine-year-olds.
Read the entire article after the jump.
From the New York Times:
He steered into the high school parking lot, clicked off the ignition and scanned the scraps of his recent weeks. Crinkled chip bags on the dashboard. Soda cups at his feet. And on the passenger seat, a rumpled SAT practice book whose owner had been told since fourth grade he was headed to the Ivy League. Pencils up in 20 minutes.
The boy exhaled. Before opening the car door, he recalled recently, he twisted open a capsule of orange powder and arranged it in a neat line on the armrest. He leaned over, closed one nostril and snorted it.
Throughout the parking lot, he said, eight of his friends did the same thing.
The drug was not cocaine or heroin, but Adderall, an amphetamine prescribed for attention deficit hyperactivity disorder that the boy said he and his friends routinely shared to study late into the night, focus during tests and ultimately get the grades worthy of their prestigious high school in an affluent suburb of New York City. The drug did more than just jolt them awake for the 8 a.m. SAT; it gave them a tunnel focus tailor-made for the marathon of tests long known to make or break college applications.
“Everyone in school either has a prescription or has a friend who does,” the boy said.
At high schools across the United States, pressure over grades and competition for college admissions are encouraging students to abuse prescription stimulants, according to interviews with students, parents and doctors. Pills that have been a staple in some college and graduate school circles are going from rare to routine in many academically competitive high schools, where teenagers say they get them from friends, buy them from student dealers or fake symptoms to their parents and doctors to get prescriptions.
Of the more than 200 students, school officials, parents and others contacted for this article, about 40 agreed to share their experiences. Most students spoke on the condition that they be identified by only a first or middle name, or not at all, out of concern for their college prospects or their school systems’ reputations — and their own.
“It’s throughout all the private schools here,” said DeAnsin Parker, a New York psychologist who treats many adolescents from affluent neighborhoods like the Upper East Side. “It’s not as if there is one school where this is the culture. This is the culture.”
Observed Gary Boggs, a special agent for the Drug Enforcement Administration, “We’re seeing it all across the United States.”
The D.E.A. lists prescription stimulants like Adderall and Vyvanse (amphetamines) and Ritalin and Focalin (methylphenidates) as Class 2 controlled substances — the same as cocaine and morphine — because they rank among the most addictive substances that have a medical use. (By comparison, the long-abused anti-anxiety drug Valium is in the lower Class 4.) So they carry high legal risks, too, as few teenagers appreciate that merely giving a friend an Adderall or Vyvanse pill is the same as selling it and can be prosecuted as a felony.
While these medicines tend to calm people with A.D.H.D., those without the disorder find that just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward. “It’s like it does your work for you,” said William, a recent graduate of the Birch Wathen Lenox School on the Upper East Side of Manhattan.
But abuse of prescription stimulants can lead to depression and mood swings (from sleep deprivation), heart irregularities and acute exhaustion or psychosis during withdrawal, doctors say. Little is known about the long-term effects of abuse of stimulants among the young. Drug counselors say that for some teenagers, the pills eventually become an entry to the abuse of painkillers and sleep aids.
“Once you break the seal on using pills, or any of that stuff, it’s not scary anymore — especially when you’re getting A’s,” said the boy who snorted Adderall in the parking lot. He spoke from the couch of his drug counselor, detailing how he later became addicted to the painkiller Percocet and eventually heroin.
Paul L. Hokemeyer, a family therapist at Caron Treatment Centers in Manhattan, said: “Children have prefrontal cortexes that are not fully developed, and we’re changing the chemistry of the brain. That’s what these drugs do. It’s one thing if you have a real deficiency — the medicine is really important to those people — but not if your deficiency is not getting into Brown.”
The number of prescriptions for A.D.H.D. medications dispensed for young people ages 10 to 19 has risen 26 percent since 2007, to almost 21 million yearly, according to IMS Health, a health care information company — a number that experts estimate corresponds to more than two million individuals. But there is no reliable research on how many high school students take stimulants as a study aid. Doctors and teenagers from more than 15 schools across the nation with high academic standards estimated that the portion of students who do so ranges from 15 percent to 40 percent.
“They’re the A students, sometimes the B students, who are trying to get good grades,” said one senior at Lower Merion High School in Ardmore, a Philadelphia suburb, who said he makes hundreds of dollars a week selling prescription drugs, usually priced at $5 to $20 per pill, to classmates as young as freshmen. “They’re the quote-unquote good kids, basically.”
The trend was driven home last month to Nan Radulovic, a psychotherapist in Santa Monica, Calif. Within a few days, she said, an 11th grader, a ninth grader and an eighth grader asked for prescriptions for Adderall solely for better grades. From one girl, she recalled, it was not quite a request.
“If you don’t give me the prescription,” Dr. Radulovic said the girl told her, “I’ll just get it from kids at school.”
Image: Illegal use of Adderall is prevalent enough that many students seem to take it for granted. Courtesy of Minnesota Post / Flickr / CC / Hipsxxhearts.
- The 10,000 Year Clock>
Aside from the ubiquitous plastic grocery bag, will any human-made artifact last 10,000 years? Before you answer, let’s qualify the question by mandating that the artifact have some long-term value. That would seem to eliminate plastic bags, plastic toys embedded in fast food meals, and DVDs of reality “stars” ripped from YouTube. What does that leave? Most human-made products consisting of metals or biodegradable components, such as paper and wood, will rust, rot or break down in 20-300 years. Even some plastics left exposed to sun and air will break down within a thousand years. Of course, buried deep in a landfill, plastic containers, styrofoam cups and throwaway diapers may remain with us for tens or hundreds of thousands of years.
Archaeological excavations show us that artifacts made of glass and ceramic would fit the bill — lasting well into the year 12012 and beyond. But in the majority of cases we unearth only fragments of things.
But what if some ingenious humans could build something that would still be around 10,000 years from now? Better still, build something that will still function as designed 10,000 years from now. This would represent an extraordinary feat of contemporary design and engineering. And, more importantly it would provide a powerful story for countless generations beginning with ours.
So, enter Danny Hillis and the Clock of the Long Now (also known as the Millennium Clock or the 10,000 Year Clock). Danny Hillis is an inventor, scientist, and computer designer. He pioneered the concept of massively parallel computers.
In Hillis’ own words:
Ten thousand years – the life span I hope for the clock – is about as long as the history of human technology. We have fragments of pots that old. Geologically, it’s a blink of an eye. When you start thinking about building something that lasts that long, the real problem is not decay and corrosion, or even the power source. The real problem is people. If something becomes unimportant to people, it gets scrapped for parts; if it becomes important, it turns into a symbol and must eventually be destroyed. The only way to survive over the long run is to be made of materials large and worthless, like Stonehenge and the Pyramids, or to become lost. The Dead Sea Scrolls managed to survive by remaining lost for a couple millennia. Now that they’ve been located and preserved in a museum, they’re probably doomed. I give them two centuries – tops. The fate of really old things leads me to think that the clock should be copied and hidden.
Plans call for the 200-foot-tall 10,000 Year Clock to be installed inside a mountain in remote west Texas, with a second location in remote eastern Nevada. Design and engineering work on the clock, and preparation of the Clock’s Texas home, are underway.
For more on the 10,000 Year Clock, jump to the Long Now Foundation, here.
More from Rationally Speaking:
I recently read Brian Hayes’ wonderful collection of mathematically oriented essays called Group Theory In The Bedroom, and Other Mathematical Diversions. Not surprisingly, the book contained plenty of philosophical musings too. In one of the essays, called “Clock of Ages,” Hayes describes the intricacies of clock building and he provides some interesting historical fodder.
For instance, we learn that in the sixteenth century Conrad Dasypodius, a Swiss mathematician, could have chosen to restore the old Clock of the Three Kings in Strasbourg Cathedral. Dasypodius, however, preferred to build a new clock of his own rather than maintain an old one. Over two centuries later, Jean-Baptiste Schwilgue was asked to repair the clock built by Dasypodius, but he decided to build a new and better clock which would last for 10,000 years.
Did you know that a large-scale project is underway to build another clock that will be able to run with minimal maintenance and interruption for ten millennia? It’s called The 10,000 Year Clock and its construction is sponsored by The Long Now Foundation. The 10,000 Year Clock is, however, being built for more than just its precision and durability. If the creators’ intentions are realized, then the clock will serve as a symbol to encourage long-term thinking about the needs and claims of future generations. Of course, if all goes to plan, our future descendants will be left to maintain it too. The interesting question is: will they want to?
If history is any indicator, then I think you know the answer. As Hayes puts it: “The fact is, winding and dusting and fixing somebody else’s old clock is boring. Building a brand-new clock of your own is much more fun, especially if you can pretend that it’s going to inspire awe and wonder for the ages to come. So why not have the fun now and let the future generations do the boring bit.” I think Hayes is right, it seems humans are, by nature, builders and not maintainers.
Projects like The 10,000 Year Clock are often undertaken with the noblest of environmental intentions, but the old proverb is relevant here: the road to hell is paved with good intentions. What I find troubling, then, is that much of the environmental do-goodery in the world may actually be making things worse. It’s often nothing more than a form of conspicuous consumption, which is a term coined by the economist and sociologist Thorstein Veblen. When it pertains specifically to “green” purchases, I like to call it being conspicuously environmental. Let’s use cars as an example. Obviously it depends on how the calculations are processed, but in many instances keeping and maintaining an old clunker is more environmentally friendly than is buying a new hybrid. I can’t help but think that the same must be true of building new clocks.
In his book, The Conundrum, David Owen writes: “How appealing would ‘green’ seem if it meant less innovation and fewer cool gadgets — not more?” Not very, although I suppose that was meant to be a rhetorical question. I enjoy cool gadgets as much as the next person, but it’s delusional to believe that conspicuous consumption is somehow a gift to the environment.
Using insights from evolutionary psychology and signaling theory, I think there is also another issue at play here. Buying conspicuously environmental goods, like a Prius, sends a signal to others that one cares about the environment. But if it’s truly the environment (and not signaling) that one is worried about, then surely less consumption must be better than more. The homeless person ironically has a lesser environmental impact than your average yuppie, yet he is rarely recognized as an environmental hero. Using this logic I can’t help but conclude that killing yourself might just be the most environmentally friendly act of all time (if it wasn’t blatantly obvious, this is a joke). The lesson here is that we shouldn’t confuse smug signaling with actually helping.
Image: Prototype of the 10,000 Year Clock. Courtesy of the Long Now Foundation / Science Museum of London.
- High Fructose Corn Syrup = Corn Sugar?>
Hats off to the global agro-industrial complex that feeds most of the Earth’s inhabitants. With high fructose corn syrup (HFCS) getting an increasingly bad rap for helping to expand our waistlines and catalyze our diabetes, the industry is becoming more creative.
However, it’s only the type of “creativity” that a cynic would come to expect from a faceless, trillion-dollar industry; it’s not a fresh, natural innovation. The industry wants to rename HFCS to “corn sugar”, making it sound healthier and more natural in the process.
From the New York Times:
The United States Food and Drug Administration has rejected a request from the Corn Refiners Association to change the name of high-fructose corn syrup.
The association, which represents the companies that make the syrup, had petitioned the F.D.A. in September 2010 to begin calling the much-maligned sweetener “corn sugar.” The request came on the heels of a national advertising campaign promoting the syrup as a natural ingredient made from corn.
But in a letter, Michael M. Landa, director of the Center for Food Safety and Applied Nutrition at the F.D.A., denied the petition, saying that the term “sugar” is used only for food “that is solid, dried and crystallized.”
“HFCS is an aqueous solution sweetener derived from corn after enzymatic hydrolysis of cornstarch, followed by enzymatic conversion of glucose (dextrose) to fructose,” the letter stated. “Thus, the use of the term ‘sugar’ to describe HFCS, a product that is a syrup, would not accurately identify or describe the basic nature of the food or its characterizing properties.”
In addition, the F.D.A. concluded that the term “corn sugar” has been used to describe the sweetener dextrose and therefore should not be used to describe high-fructose corn syrup. The agency also said the term “corn sugar” could pose a risk to consumers who have been advised to avoid fructose because of a hereditary fructose intolerance or fructose malabsorption.
Image: Fructose vs. D-Glucose Structural Formulae. Courtesy of Wikipedia.
- The Most Beautiful Railway Stations> From Flavorwire:
In 1972, Pulitzer Prize-winning author, and The New York Times’ very first architecture critic, Ada Louise Huxtable observed that “nothing was more up-to-date when it was built, or is more obsolete today, than the railroad station.” It was a comment on the emerging age of the jetliner and a swanky commercial air travel industry that made the behemoth train stations of the time appear as cumbersome relics of an outdated industrial era. But we don’t think the judgment holds up today — at all. Like so many things that we wrote off in favor of what was seemingly more modern and efficient (ahem, vinyl records and Polaroid film), the train station is back and better than ever. So, we’re taking the time to look back at some of the greatest stations still standing.
See other beautiful stations and read the entire article after the jump.
Image: Grand Central Terminal — New York City, New York. Courtesy of Flavorwire.
- Java by the Numbers>
If you think the United States is a nation of coffee drinkers, think again. The U.S. ranks only eighth in annual java consumption per person. Way out in front is Finland. Makes one wonder if there is a correlation between coffee drinking and heavy metal music.
Infographic courtesy of Hamilton Beach.
- Human Evolution: Stalled>
It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?
From the Wall Street Journal:
If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.
I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).
More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.
Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.
Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.
Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.
So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”
Image: The “Robot Evolution”. Courtesy of STRK3.
- Reconnecting with Our Urban Selves>
Christopher Mims over at the Technology Review revisits a recent study of our social networks, both real-world and online. It’s startling to see the growth in our social isolation despite the corresponding growth in technologies that increase our ability to communicate and interact with one another. Is the suburbanization of our species to blame, and can Facebook save us?
From Technology Review:
In 2009, the Pew Internet Trust published a survey worth resurfacing for what it says about the significance of Facebook. The study was inspired by earlier research that “argued that since 1985 Americans have become more socially isolated, the size of their discussion networks has declined, and the diversity of those people with whom they discuss important matters has decreased.”
In particular, the study found that Americans have fewer close ties to those from their neighborhoods and from voluntary associations. Sociologists Miller McPherson, Lynn Smith-Lovin and Matthew Brashears suggest that new technologies, such as the internet and mobile phone, may play a role in advancing this trend.
If you read through all the results from Pew’s survey, you’ll discover two surprising things:
1. “Use of newer information and communication technologies (ICTs), such as the internet and mobile phones, is not the social change responsible for the restructuring of Americans’ core networks. We found that ownership of a mobile phone and participation in a variety of internet activities were associated with larger and more diverse core discussion networks.”
2. However, Americans on the whole are more isolated than they were in 1985. “The average size of Americans’ core discussion networks has declined since 1985; the mean network size has dropped by about one-third or a loss of approximately one confidant.” In addition, “The diversity of core discussion networks has markedly declined; discussion networks are less likely to contain non-kin – that is, people who are not relatives by blood or marriage.”
In other words, the technologies that have isolated Americans are anything but informational. It’s not hard to imagine what they are, as there’s been plenty of research on the subject. These technologies are the automobile, sprawl and suburbia. We know that neighborhoods that aren’t walkable decrease the number of our social connections and increase obesity. We know that commutes make us miserable, and that time spent in an automobile affects everything from our home life to our level of anxiety and depression.
Indirect evidence for this can be found in the demonstrated preferences of Millennials, who are opting for cell phones over automobiles and who would rather live in the urban cores their parents abandoned, ride mass transit and in all other respects physically re-integrate themselves with the sort of village life that is possible only in the most walkable portions of cities.
Meanwhile, it’s worth contemplating one of the primary factors that drove Facebook’s adoption by (soon) 1 billion people: Loneliness. Americans have less support than ever — one in eight in the Pew survey reported having no “discussion confidants.”
It’s clear that for all our fears about the ability of our mobile devices to isolate us in public, the primary way they’re actually used is for connection.
Image: Typical suburban landscape. Courtesy of Treehugger.
- Heavy Metal Density>
Heavy Metal in the musical sense, not as in elements such as iron or manganese, is really popular in Finland and Iceland. It even pops up in Iran and Saudi Arabia.
Frank Jacobs over at Strange Maps tells us more.
This map reflects the number of heavy metal bands per 100,000 inhabitants for each country in the world. It codes the result on a colour temperature scale, with blue indicating low occurrence, and red high occurrence. The data for this map is taken from the extensive Encyclopaedia Metallum, an online archive of metal music that lists bands per country, and provides some background by listing their subgenre (Progressive Death Metal, Symphonic Gothic Metal, Groove Metal, etc).
Even if you barely know your Def Leppard from your Deep Purple, you won’t be surprised by the obvious point of this map: Scandinavia is the world capital of heavy metal music. Leaders of the pack are Finland and Sweden, coloured with the hottest shade of red. With 2,825 metal bands listed in the Encyclopaedia Metallum, the figure for Finland works out to 54.3 bands per 100,000 Finns (for a total of 5.2 million inhabitants). Second is Sweden, with a whopping 3,398 band entries. For 9.1 million Swedes, that amounts to 37.3 metal bands per 100,000 inhabitants.
The next-hottest shade of red is coloured in by Norway and Iceland. The Icelandic situation is interesting: with only 71 bands listed, the country seems not particularly metal-oriented. But the total population of the North Atlantic island is a mere 313,000, which produces a result of 22.6 metal bands per 100,000 inhabitants. That’s almost double, relatively speaking, Denmark’s score of 12.9 (708 metal bands for 5.5 million Danes).
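The per-capita figures quoted above are straightforward arithmetic. As a quick sketch, using only the band counts and populations cited in this post (band counts from the Encyclopaedia Metallum as quoted; populations are the approximate national totals given in the text), the normalization looks like this:

```python
# Metal bands per 100,000 inhabitants, using the figures quoted above.
countries = {
    "Finland": (2825, 5_200_000),
    "Sweden": (3398, 9_100_000),
    "Iceland": (71, 313_000),
    "Denmark": (708, 5_500_000),
}

def bands_per_100k(bands, population):
    """Number of metal bands per 100,000 inhabitants."""
    return bands / population * 100_000

for name, (bands, pop) in countries.items():
    print(f"{name}: {bands_per_100k(bands, pop):.1f} bands per 100,000")
```

Running this reproduces the article's figures (54.3 for Finland, 37.3 for Sweden, 12.9 for Denmark); Iceland comes out at 22.7 rather than the quoted 22.6, presumably because the quoted population was rounded.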
The following shades of colour, from dark orange to light yellow, are almost all found in North America, Europe and Australasia. Notable additions to this list of usual suspects are Israel and the three countries of Latin America’s Southern Cone: Chile, Argentina and Uruguay.
Some interesting variations in Europe: Portugal is much darker – i.e. much more metal-oriented – than its Iberian neighbour Spain, and Greece is a solid southern outpost of metal on an otherwise wishy-washy Balkan Peninsula.
On the other side of the scale, light blue indicates the worst – or at least loneliest – places to be a metal fan: Papua New Guinea, North Korea, Cambodia, Afghanistan, Yemen, and most of Africa outside its northern and southern fringe. According to the Encyclopaedia Metallum, there isn’t a single metal band in any of those countries.
- Everything You Ever Wanted to Know About Plastic>
Yes, it’s like a monstrous other-worldly being that will eventually eat you; horrifying facts about plastic that you wish you had never known. This sobering infographic courtesy of ReuseThisBag.com, created by Obizmedia.
- Our Children: Independently Dependent>
Why can’t our kids tie their own shoes?
Are we raising our children to be self-obsessed, attention-seeking, helpless and dependent groupthinkers? And, why may the phenomenon of “family time” in the U.S. be a key culprit?
These are some of the questions raised by anthropologist Elinor Ochs and her colleagues. Over the last decade they have studied family life across the globe, from the Amazon region to Samoa to middle America.
From the Wall Street Journal:
Why do American children depend on their parents to do things for them that they are capable of doing for themselves? How do U.S. working parents’ views of “family time” affect their stress levels? These are just two of the questions that researchers at UCLA’s Center on Everyday Lives of Families, or CELF, are trying to answer in their work.
By studying families at home—or, as the scientists say, “in vivo”—rather than in a lab, they hope to better grasp how families with two working parents balance child care, household duties and career, and how this balance affects their health and well-being.
The center, which also includes sociologists, psychologists and archeologists, wants to understand “what the middle class thought, felt and what they did,” says Dr. Ochs. The researchers plan to publish two books this year on their work, and say they hope the findings may help families become closer and healthier.
Ten years ago, the UCLA team recorded video for a week of nearly every moment at home in the lives of 32 Southern California families. They have been picking apart the footage ever since, scrutinizing behavior, comments and even their refrigerators’ contents for clues.
The families, recruited primarily through ads, owned their own homes and had two or three children, at least one of whom was between 7 and 12 years old. About a third of the families had at least one nonwhite member, and two were headed by same-sex couples. Each family was filmed by two cameras and watched all day by at least three observers.
Among the findings: The families had a very child-centered focus, which may help explain the “dependency dilemma” seen among American middle-class families, says Dr. Ochs. Parents intend to develop their children’s independence, yet raise them to be relatively dependent, even when the kids have the skills to act on their own, she says.
In addition, these parents tended to have a very specific, idealized way of thinking about family time, says Tami Kremer-Sadlik, a former CELF research director who is now the director of programs for the division of social sciences at UCLA. These ideals appeared to generate guilt when work intruded on family life, and left parents feeling pressured to create perfect time together. The researchers noted that the presence of the observers may have altered some of the families’ behavior.
How kids develop moral responsibility is an area of focus for the researchers. Dr. Ochs, who began her career in far-off regions of the world studying the concept of “baby talk,” noticed that American children seemed relatively helpless compared with those in other cultures she and colleagues had observed.
In those cultures, young children were expected to contribute substantially to the community, says Dr. Ochs. Children in Samoa serve food to their elders, waiting patiently in front of them before they eat, as shown in one video snippet. Another video clip shows a girl around 5 years of age in Peru’s Amazon region climbing a tall tree to harvest papaya, and helping haul logs thicker than her leg to stoke a fire.
By contrast, the U.S. videos showed Los Angeles parents focusing more on the children, using simplified talk with them, doing most of the housework and intervening quickly when the kids had trouble completing a task.
In 22 of 30 families, children frequently ignored or resisted appeals to help, according to a study published in the journal Ethos in 2009. In the remaining eight families, the children weren’t asked to do much. In some cases, the children routinely asked the parents to do tasks, like getting them silverware. “How am I supposed to cut my food?” Dr. Ochs recalls one girl asking her parents.
Asking children to do a task led to much negotiation, and when parents asked, it often sounded like they were asking a favor, not making a demand, researchers said. Parents interviewed about their behavior said it was often too much trouble to ask.
For instance, one exchange caught on video shows an 8-year-old named Ben sprawled out on a couch near the front door, lifting his white, high-top sneaker to his father, the shoe laced. “Dad, untie my shoe,” he pleads. His father says Ben needs to say “please.”
“Please untie my shoe,” says the child in the same tone as before. After his father hands the shoe back to him, Ben says, “Please put my shoe on and tie it,” and his father obliges.
Read the entire article after the jump.
Image courtesy of Kyle T. Webster / Wall Street Journal.
- Skyscrapers A La Mode>
Since 2006 Evolo architecture magazine has run a competition inviting architects to bring their most fantastic skyscraper designs to life. The finalists of the 2012 competition presented some stunning ideas, topped by the winner, Himalaya Water Tower, from Zhi Zheng, Hongchuan Zhao and Dongbai Song of China.
From Evolo:
Housed within 55,000 glaciers in the Himalaya Mountains sits 40 percent of the world’s fresh water. The massive ice sheets are melting at a faster-than-ever pace due to climate change, posing possibly dire consequences for the continent of Asia and the entire world, and especially for the villages and cities that sit on the seven rivers fed by the Himalayas’ runoff, as those rivers respond with erratic flooding or drought.
The “Himalaya Water Tower” is a skyscraper located high in the mountain range that serves to store water and helps regulate its dispersal to the land below as the mountains’ natural supplies dry up. The skyscraper, which can be replicated en masse, will collect water in the rainy season, purify it, freeze it into ice and store it for future use. The water distribution schedule will evolve with the needs of residents below; while it can be used to help in times of current drought, it’s also meant to store plentiful water for future generations.
Follow the other notable finalists at Evolo magazine after the jump.
- Best Days to Avoid Car Crash - Tuesday and Wednesday>
The cool infographic below, courtesy of FlowingData, shows at a glance that Saturday is the most likely day of the week to be involved in a (fatal) car crash. So, if you’re cautious, stick to driving in the middle of the week.
The data is sourced from the National Highway Traffic Safety Administration.
- Engineering the Ultimate Solar Power Collector: The Leaf> From Cosmic Log:
Researchers have been trying for decades to improve upon Mother Nature’s favorite solar-power trick — photosynthesis — but now they finally think they see the sunlight at the end of the tunnel.
“We now understand photosynthesis much better than we did 20 years ago,” said Richard Cogdell, a botanist at the University of Glasgow who has been doing research on bacterial photosynthesis for more than 30 years. He and three colleagues discussed their efforts to tweak the process that powers the world’s plant life today in Vancouver, Canada, during the annual meeting of the American Association for the Advancement of Science.
The researchers are taking different approaches to the challenge, but what they have in common is their search for ways to get something extra out of the biochemical process that uses sunlight to turn carbon dioxide and water into sugar and oxygen. “You can really view photosynthesis as an assembly line with about 168 steps,” said Steve Long, head of the University of Illinois’ Photosynthesis and Atmospheric Change Laboratory.
Revving up Rubisco
Howard Griffiths, a plant physiologist at the University of Cambridge, just wants to make improvements in one section of that assembly line. His research focuses on ways to get more power out of the part of the process driven by an enzyme called Rubisco. He said he’s trying to do what many auto mechanics have done to make their engines run more efficiently: “You turbocharge it.”
Some plants, such as sugar cane and corn, already have a turbocharged Rubisco engine, thanks to a molecular pathway known as C4. Geneticists believe the C4 pathway started playing a significant role in plant physiology in just the past 10 million years or so. Now Griffiths is looking into strategies to add the C4 turbocharger to rice, which ranks among the world’s most widely planted staple crops.
The new cellular machinery might be packaged in a micro-compartment that operates within the plant cell. That’s the way biochemical turbochargers work in algae and cyanobacteria. Griffiths and his colleagues are looking at ways to create similar micro-compartments for higher plants. The payoff would come in the form of more efficient carbon dioxide conversion, with higher crop productivity as a result. “For a given amount of carbon gain, the plant uses less water,” Griffiths said.
Image courtesy of Kumaravel via Flickr, Creative Commons.
- Great Architecture>
Jonathan Glancey, architecture critic at the Guardian in the UK for the last fifteen years, is moving on to greener pastures, and presumably new buildings. In his final article for the newspaper he reflects on some buildings that have engendered shock and/or awe.
From the Guardian:
Fifteen years is not a long time in architecture. It is the slowest as well as the most political of the arts. This much was clear when I joined the Guardian as its architecture and design correspondent, from the Independent, in 1997. I thought the Millennium Experience (the talk of the day) decidedly dimwitted and said so in no uncertain terms; it lacked a big idea and anything like the imagination of, say, the Great Exhibition of 1851, or the Festival of Britain in 1951.
For the macho New Labour government, newly in office and all football and testosterone, criticism of this cherished project was tantamount to sedition. They lashed out like angry cats; there were complaints from 10 Downing Street’s press office about negative coverage of the Dome. Hard to believe then, much harder now. That year’s London Model Engineer Exhibition was far more exciting; here was an enthusiastic celebration of the making of things, at a time when manufacturing was becoming increasingly looked down on.
New Labour, meanwhile, promised it would do things for architecture and urban design that Roman emperors and Renaissance princes could only have dreamed of. The north Greenwich peninsula was to become a new Florence, with trams and affordable housing. As would the Thames Gateway, that Siberia stretching – marshy, mysterious, semi-industrial – to Southend Pier and the sea. To a new, fast-breeding generation of quangocrats this land looked like a blank space on the London A-Z, ready to fill with “environmentally friendly” development. Precious little has happened there since, save for some below-standard housing, Boris Johnson’s proposal for an estuary airport and – a very good thing – an RSPB visitors’ centre designed by Van Heyningen and Haward near Purfleet on the Rainham marshes.
Labour’s promises turned out to be largely tosh, of course. Architecture and urban planning are usually best when neither hyped nor hurried. Grand plans grow best over time, as serendipity and common sense soften hard edges. In 2002, Tony Blair decided to invade Iraq – not a decision that, on the face of it, has a lot to do with architecture; but one of the articles I am most proud to have written for this paper was the story of a journey I made from one end of Iraq to the other, with Stuart Freedman, an unflappable press photographer. At the time, the Blair government was denying there would be a war, yet every Iraqi we spoke to knew the bombs were about to fall. It was my credentials as a critic and architectural historian that got me my Iraqi visa. Foreign correspondents, including several I met in Baghdad’s al-Rashid hotel, were understandably finding the terrain hard-going. But handwritten in my passport was an instruction saying: “Give this man every assistance.”
We travelled to Babylon to see Saddam’s reconstruction of the fabled walled city, and to Ur, Abraham’s home, and its daunting ziggurat and then – wonder of wonders – into the forbidden southern deserts to Eridu. Here I walked on the sand-covered remains of one of the world’s first cities. This, if anywhere, is where architecture was born. At Samarra, in northern Iraq, I climbed to the top of the wondrous spiral minaret of what was once the town’s Great Mosque. How the sun shone that day. When I got to the top, there was nothing to hang on to. I was confronted by the blazing blue sky and its gods, or God; the architecture itself was all but invisible. Saddam’s soldiers, charming recruits in starched and frayed uniforms drilled by a tough and paternal sergeant, led me through the country, through miles of unexploded war material piled high along sandy tracks, and across the paths of Shia militia.
Ten years on, Zaha Hadid, a Baghdad-born architect who has risen to stellar prominence since 2002, has won her first Iraqi commission, a new headquarters for the Iraqi National Bank in Baghdad. With luck, other inspired architects will get to work in Iraq, too, reconnecting the country with its former role as a crucible of great buildings and memorable cities.
Architecture is also the stuff of construction, engineering, maths and science. Of philosophy, sociology, Le Corbusier and who knows what else. It is also, I can’t help feeling, harder to create great buildings now than it was in the past. When Eridu or the palaces and piazzas of Renaissance Italy were shaped, architecture was the most expensive and prestigious of all cultural endeavours. Today we spread our wealth more thinly, spending ever more on disposable consumer junk, building more roads to serve ever more grim private housing estates, unsustainable supermarkets and distribution depots (and container ports and their giant ships), and the landfill sites we appear to need to shore up our insatiable, throwaway culture. Architecture has been in danger, like our indefensibly mean and horrid modern housing, of becoming little more than a commodity. Government talk of building a rash of “eco-towns” proved not just unpopular but more hot air. A policy initiative too far, the idea has effectively been dropped.
And, yet, despite all these challenges, the art form survives and even thrives. I have been moved in different ways by the magnificent Neues Museum, Berlin, a 10-year project led by David Chipperfield; by the elemental European Southern Observatory Hotel by Auer + Weber, for scientists in Chile’s Atacama Desert; and by Charles Barclay’s timber Kielder Observatory, where I spent a night in 2008 watching stars hanging above the Northumbrian forest.
I have been enchanted by the 2002 Serpentine Pavilion, a glimpse into a possible future by Toyo Ito and Cecil Balmond; by the inspiring reinvention of St Pancras station by Alastair Lansley and fellow architects; and by Blur, a truly sensational pavilion by Diller + Scofidio set on a steel jetty overlooking Lake Neuchatel at Yverdon-les-Bains. A part of Switzerland’s Expo 2002, this cat’s cradle of tensile steel was a machine for making clouds. You walked through the clouds as they appeared and, when conditions were right, watched them float away over the lake.
Read the entire article here.
Image: The spiral minaret of the Great Mosque of Samarra, Iraq. Courtesy Reuters / Guardian.
- Suburbia as Mass Murderer>
Jane Brody over at the Well blog makes a compelling case for the dismantling of suburbia. After all, these so-called “built environments” in which we live, work, eat, play and raise our children are an increasingly serious health hazard.
From the New York Times:
Developers in the last half-century called it progress when they built homes and shopping malls far from city centers throughout the country, sounding the death knell for many downtowns. But now an alarmed cadre of public health experts say these expanded metropolitan areas have had a far more serious impact on the people who live there by creating vehicle-dependent environments that foster obesity, poor health, social isolation, excessive stress and depression.
As a result, these experts say, our “built environment” — where we live, work, play and shop — has become a leading cause of disability and death in the 21st century. Physical activity has been disappearing from the lives of young and old, and many communities are virtual “food deserts,” serviced only by convenience stores that stock nutrient-poor prepared foods and drinks.
According to Dr. Richard J. Jackson, professor and chairman of environmental health sciences at the University of California, Los Angeles, unless changes are made soon in the way many of our neighborhoods are constructed, people in the current generation (born since 1980) will be the first in America to live shorter lives than their parents do.
Although a decade ago urban planning was all but missing from public health concerns, a sea change has occurred. At a meeting of the American Public Health Association in October, Dr. Jackson said, there were about 300 presentations on how the built environment inhibits or fosters the ability to be physically active and get healthy food.
In a healthy environment, he said, “people who are young, elderly, sick or poor can meet their life needs without getting in a car,” which means creating places where it is safe and enjoyable to walk, bike, take in nature and socialize.
“People who walk more weigh less and live longer,” Dr. Jackson said. “People who are fit live longer. People who have friends and remain socially active live longer. We don’t need to prove all of this,” despite the plethora of research reports demonstrating the ill effects of current community structures.
Image courtesy of Duke University.
- See the Aurora, then Die>
One item that features prominently on so-called “things-to-do-before-you-die” lists is seeing the Aurora Borealis, or Northern Lights.
The recent surge in sunspot activity and solar flares has caused a corresponding uptick in geo-magnetic storms here on Earth. The resulting Aurorae have been nothing short of spectacular. More images here, courtesy of Smithsonian magazine.
- Driving Across the U.S. at 146,700 Miles per Hour>
Through the miracle of time-lapse photography we bring you a journey of 12,225 miles across 32 States in 55 days compressed into 5 minutes. Brian Defrees snapped an image every five seconds from his car-mounted camera during the adventure, which began and ended in New York, via Washington D.C., Florida, Los Angeles and Washington State, and many points in between.
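The headline figure is simple arithmetic: real-world distance divided by playback time. A quick check using only the numbers quoted above:

```python
# Effective "playback speed" of the time-lapse: the real-world distance
# covered, divided by the running time of the finished video.
trip_miles = 12_225      # total distance driven
video_minutes = 5        # length of the compressed video

effective_mph = trip_miles / (video_minutes / 60)   # convert minutes to hours
print(f"{effective_mph:,.0f} mph")   # matches the 146,700 mph in the title
```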
- Oil: Where it Comes From and Where it Goes>
Compiled from recent U.S. government and OPEC (Organization of Petroleum Exporting Countries) statistics, the infographic below highlights the global thirst for oil.
From Daily Infographic:
- The Corporate One Percent of the One Percent>
With the Occupy Wall Street movement and related protests continuing to gather steam, much recent media and public attention has focused on the 1 percent versus the remaining 99 percent of the population. By most accepted estimates, 1 percent of households control around 40 percent of global wealth, and there is a vast discrepancy between the top and bottom of the economic spectrum. While these statistics are telling, a related analysis of corporate wealth, highlighted in New Scientist, shows a much tighter concentration among a very select group of transnational corporations (TNCs).
From New Scientist:
An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.
The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.
The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).
“Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”
Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.
The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.
The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.
When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
Image courtesy of New Scientist / PLoS One. The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue.
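In graph terms, the “super-entity” the Zurich team describes, a group whose shares are held entirely by other members of the same group, shows up as a strongly connected component of the directed shareholding network. A minimal sketch of that idea, using invented company names rather than the Orbis data, and ignoring the operating-revenue weighting the actual study layers on top:

```python
from collections import defaultdict

# Toy directed ownership graph: an edge A -> B means "A holds shares in B".
# Names are invented; the real study drew on 37 million Orbis 2007 entries.
edges = [
    ("BankA", "BankB"), ("BankB", "BankC"), ("BankC", "BankA"),  # mutual core
    ("BankA", "WidgetCo"), ("BankC", "GadgetCo"),  # core owns the "real" economy
    ("PensionFund", "BankA"),                      # outside stake, not reciprocated
]

def strongly_connected_components(edges):
    """Kosaraju's algorithm: one DFS pass for finish order, one on the reverse."""
    graph, rev = defaultdict(list), defaultdict(list)
    nodes = set()
    for a, b in edges:
        graph[a].append(b)
        rev[b].append(a)
        nodes.update((a, b))

    # Pass 1: record DFS finish order (iterative, so deep graphs don't recurse).
    seen, order = set(), []
    for start in nodes:
        if start in seen:
            continue
        seen.add(start)
        stack = [(start, iter(graph[start]))]
        while stack:
            node, it = stack[-1]
            for nxt in it:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append((nxt, iter(graph[nxt])))
                    break
            else:  # all neighbours done: node is finished
                order.append(node)
                stack.pop()

    # Pass 2: flood-fill the reversed graph in reverse finish order.
    seen, sccs = set(), []
    for start in reversed(order):
        if start in seen:
            continue
        component, stack = [], [start]
        seen.add(start)
        while stack:
            node = stack.pop()
            component.append(node)
            for nxt in rev[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    stack.append(nxt)
        sccs.append(component)
    return sccs

core = max(strongly_connected_components(edges), key=len)
print(sorted(core))  # the mutually owning "super-entity": the three banks
```

The banks form the only non-trivial component, because ownership among them loops back on itself; WidgetCo, GadgetCo and the pension fund hang off the core without being part of it, which mirrors the topology the study reports.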
- The Myth of Bottled Water>
In 2010 the world spent around $50 billion on bottled water, with over a third of that accounted for by the United States alone. During this period the United States House of Representatives spent $860,000 on bottled water for its 435 members, which is close to $2,000 per person per year. (Figures according to Corporate Accountability International.)
This is despite the fact that, on average, bottled water costs around 1,900 times more than its cheaper, less glamorous sibling: tap water. Bottled water has become truly big business even though science shows no discernible benefit of bottled water over water from the faucet. In fact, around 40 percent of bottled water comes from municipal water supplies anyway.
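The House figure above is a one-line division, checked here with the numbers as given:

```python
# House of Representatives bottled-water spend, per member per year.
house_spend = 860_000   # dollars (Corporate Accountability International)
members = 435

per_member = house_spend / members
print(round(per_member))   # 1977 dollars, i.e. "close to $2,000"
```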
In 2007 Charles Fishman wrote a ground-breaking cover story on the bottled water industry for Fast Company. We excerpt part of the article, Message in a Bottle, below.
By Charles Fishman:
The largest bottled-water factory in North America is located on the outskirts of Hollis, Maine. In the back of the plant stretches the staging area for finished product: 24 million bottles of Poland Spring water. As far as the eye can see, there are double-stacked pallets packed with half-pint bottles, half-liters, liters, “Aquapods” for school lunches, and 2.5-gallon jugs for the refrigerator.
Really, it is a lake of Poland Spring water, conveniently celled off in plastic, extending across 6 acres, 8 feet high. A week ago, the lake was still underground; within five days, it will all be gone, to supermarkets and convenience stores across the Northeast, replaced by another lake’s worth of bottles.
Looking at the piles of water, you can have only one thought: Americans sure are thirsty.
Bottled water has become the indispensable prop in our lives and our culture. It starts the day in lunch boxes; it goes to every meeting, lecture hall, and soccer match; it’s in our cubicles at work; in the cup holder of the treadmill at the gym; and it’s rattling around half-finished on the floor of every minivan in America. Fiji Water shows up on the ABC show Brothers & Sisters; Poland Spring cameos routinely on NBC’s The Office. Every hotel room offers bottled water for sale, alongside the increasingly ignored ice bucket and drinking glasses. At Whole Foods, the upscale emporium of the organic and exotic, bottled water is the number-one item by units sold.
Thirty years ago, bottled water barely existed as a business in the United States. Last year, we spent more on Poland Spring, Fiji Water, Evian, Aquafina, and Dasani than we spent on iPods or movie tickets–$15 billion. It will be $16 billion this year.
Bottled water is the food phenomenon of our times. We–a generation raised on tap water and water fountains–drink a billion bottles of water a week, and we’re raising a generation that views tap water with disdain and water fountains with suspicion. We’ve come to pay good money–two or three or four times the cost of gasoline–for a product we have always gotten, and can still get, for free, from taps in our homes.
When we buy a bottle of water, what we’re often buying is the bottle itself, as much as the water. We’re buying the convenience–a bottle at the 7-Eleven isn’t the same product as tap water, any more than a cup of coffee at Starbucks is the same as a cup of coffee from the Krups machine on your kitchen counter. And we’re buying the artful story the water companies tell us about the water: where it comes from, how healthy it is, what it says about us. Surely among the choices we can make, bottled water isn’t just good, it’s positively virtuous.
Except for this: Bottled water is often simply an indulgence, and despite the stories we tell ourselves, it is not a benign indulgence. We’re moving 1 billion bottles of water around a week in ships, trains, and trucks in the United States alone. That’s a weekly convoy equivalent to 37,800 18-wheelers delivering water. (Water weighs 8 1/3 pounds a gallon. It’s so heavy you can’t fill an 18-wheeler with bottled water – you have to leave empty space.)
Meanwhile, one out of six people in the world has no dependable, safe drinking water. The global economy has contrived to deny the most fundamental element of life to 1 billion people, while delivering to us an array of water “varieties” from around the globe, not one of which we actually need. That tension is only complicated by the fact that if we suddenly decided not to purchase the lake of Poland Spring water in Hollis, Maine, none of that water would find its way to people who really are thirsty.
Please read the entire article here.
Image courtesy of Wikipedia.
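Fishman’s convoy figure can be sanity-checked from the numbers in the excerpt; the half-liter bottle size below is my assumption, not his:

```python
# Rough check of "37,800 18-wheelers a week": how much weight would each
# truck haul, given a billion bottles of water moved per week?
bottles_per_week = 1_000_000_000
liters_per_bottle = 0.5                    # assumed typical bottle size
liters_per_gallon = 3.785                  # US gallon

gallons = bottles_per_week * liters_per_bottle / liters_per_gallon
pounds = gallons * (8 + 1 / 3)             # water: ~8 1/3 lb per gallon
tons_per_truck = pounds / 37_800 / 2_000   # short tons hauled by each truck

print(round(tons_per_truck, 1))            # roughly 14-15 tons per truck
```

That lands within a legal 18-wheeler payload (roughly 20-plus tons including packaging), so the order of magnitude of the convoy claim holds up under this assumption.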
- Book Review: The Big Thirst. Charles Fishman>
Charles Fishman has a fascinating new book entitled The Big Thirst: The Secret Life and Turbulent Future of Water. In it Fishman examines the origins of water on our planet and postulates an all too probable future in which water becomes an increasingly limited and precious resource.
A brief excerpt from a recent interview, courtesy of NPR:
For most of us, even the most basic questions about water turn out to be stumpers.
Where did the water on Earth come from?
Is water still being created or added somehow?
How old is the water coming out of the kitchen faucet?
For that matter, how did the water get to the kitchen faucet?
And when we flush, where does the water in the toilet actually go?
The things we think we know about water — things we might have learned in school — often turn out to be myths.
We think of Earth as a watery planet, indeed, we call it the Blue Planet; but for all of water’s power in shaping our world, Earth turns out to be surprisingly dry. A little water goes a long way.
We think of space as not just cold and dark and empty, but as barren of water. In fact, space is pretty wet. Cosmic water is quite common.
At the most personal level, there is a bit of bad news. Not only don’t you need to drink eight glasses of water every day, you cannot in any way make your complexion more youthful by drinking water. Your body’s water-balance mechanisms are tuned with the precision of a digital chemistry lab, and you cannot possibly “hydrate” your skin from the inside by drinking an extra bottle or two of Perrier. You just end up with pee sourced in France.
In short, we know nothing of the life of water — nothing of the life of the water inside us, around us, or beyond us. But it’s a great story — captivating and urgent, surprising and funny and haunting. And if we’re going to master our relationship to water in the next few decades — really, if we’re going to remaster our relationship to water — we need to understand the life of water itself.
Read more of this article and Charles Fishman’s interview with NPR here.
- The Climate Spin Cycle>
There’s something to be said for a visual aid that puts a complex conversation into perspective. So, here we have a high-level flow chart that characterizes one of the most important debates of our time: climate change. Whether you are for or against the notion or the science, or merely perplexed by the hyperbole inside the “echo chamber”, there is no denying that this debate will remain with us for quite some time.
Chart courtesy of Riley E. Dunlap and Aaron M. McCright, “Organized Climate-Change Denial,” in J. S. Dryzek, R. B. Norgaard and D. Schlosberg (eds.), Oxford Handbook of Climate Change and Society. New York: Oxford University Press, 2011.
- The Greenest Way To Travel>
A simplistic but nonetheless useful infographic below highlights the comparative energy footprints of our most common means of transportation. Can’t beat that bicycle.
From One Block Off the Grid:
- A Medical Metaphor for Climate Risk>
While scientific evidence of climate change continues to mount, and an increasing number of studies point causal fingers at ourselves, there is perhaps another way to visualize the risk of inaction or over-reaction. Since most people can leave ideology aside when it comes to their own health, a medical metaphor, courtesy of Andrew Revkin over at Dot Earth, may help broaden acceptance of the message.
From the New York Times:
Paul C. Stern, the director of the National Research Council committee on the human dimensions of global change, has been involved in a decades-long string of studies of behavior, climate change and energy choices.
This is an arena that is often attacked by foes of cuts in greenhouse gases, who see signs of mind control and propaganda. Stern says that has nothing to do with his approach, as he made clear in “Contributions of Psychology to Limiting Climate Change,” a paper that was part of a special issue of the journal American Psychologist on climate change and behavior:
Psychological contributions to limiting climate change will come not from trying to change people’s attitudes, but by helping to make low-carbon technologies more attractive and user-friendly, economic incentives more transparent and easier to use, and information more actionable and relevant to the people who need it.
The special issue of the journal builds on a 2009 report on climate and behavior from the American Psychological Association that was covered here. Stern has now offered a reaction to the discussion last week of Princeton researcher Robert Socolow’s call for a fresh approach to climate policy that acknowledges “the news about climate change is unwelcome, that today’s climate science is incomplete, and that every ‘solution’ carries risk.” Stern’s response, centered on a medical metaphor (not the first), is worth posting as a “Your Dot” contribution. You can find my reaction to his idea below. Here’s Stern’s piece:
I agree with Robert Socolow that scientists could do better at encouraging a high quality of discussion about climate change.
But providing better technical descriptions will not help most people because they do not follow that level of detail. Psychological research shows that people often use simple, familiar mental models as analogies for complex phenomena. It will help people think through climate choices to have a mental model that is familiar and evocative and that also neatly encapsulates Socolow’s points that the news is unwelcome, that science is incomplete, and that some solutions are dangerous. There is such a model.
Too many people think of climate science as an exact science like astronomy that can make highly confident predictions, such as about lunar eclipses. That model misrepresents the science, does poorly at making Socolow’s points, and has provided an opening for commentators and bloggers seeking to use any scientific disagreement to discredit the whole body of knowledge.
A mental model from medical science might work better. In the analogy, the planet is a patient suspected of having a serious, progressive disease (anthropogenic climate change). The symptoms are not obvious, just as they are not with diabetes or hypertension, but the disease may nevertheless be serious. Humans, as guardians of the planet, must decide what to do. Scientists are in the role of physician. The guardians have been asking the physicians about the diagnosis (is this disease present?), the nature of the disease, its prognosis if untreated, and the treatment options, including possible side effects. The medical analogy helps clarify the kinds of errors that are possible and can help people better appreciate how science can help and think through policy choices.
Diagnosis. A physician must be careful to avoid two errors: misdiagnosing the patient with a dread disease that is not present, and misdiagnosing a seriously ill patient as healthy. To avoid these types of error, physicians often run diagnostic tests or observe the patient over a period of time before recommending a course of treatment. Scientists have been doing this with Earth’s climate at least since 1959, when strong signs of illness were reported from observations in Hawaii.
Scientists now have high confidence that the patient has the disease. We know the causes: fossil fuel consumption, certain land cover changes, and a few other physical processes. We know that the disease produces a complex syndrome of symptoms involving change in many planetary systems (temperature, precipitation, sea level and acidity balance, ecological regimes, etc.). The patient is showing more and more of the syndrome, and although we cannot be sure that each particular symptom is due to climate change rather than some other cause, the combined evidence justifies strong confidence that the syndrome is present.
Prognosis. Fundamental scientific principles tell us that the disease is progressive and very hard to reverse. Observations tell us that the processes that cause it have been increasing, as have the symptoms. Without treatment, they will get worse. However, because this is an extremely rare disease (in fact, the first known case), there is uncertainty about how fast it will progress. The prognosis could be catastrophic, but we cannot assign a firm probability to the worst outcomes, and we are not even sure what the most likely outcome is. We want to avoid either seriously underestimating or overestimating the seriousness of the prognosis.
Treatment. We want treatments that improve the patient’s chances at low cost and with limited adverse side effects and we want to avoid “cures” that might be worse than the disease. We want to consider the chances of improvement for each treatment, and its side effects, in addition to the untreated prognosis. We want to avoid the dangers both of under-treatment and of side effects. We know that some treatments (the ones limiting climate change) get at the causes and could alleviate all the symptoms if taken soon enough. But reducing the use of fossil fuels quickly could be painful. Other treatments, called adaptations, offer only symptomatic relief. These make sense because even with strong medicine for limiting climate change, the disease will get worse before it gets better.
Choices. There are no risk-free choices. We know that the longer treatment is postponed, the more painful it will be, and the worse the prognosis. We can also use an iterative treatment approach (as Socolow proposed), starting some treatments and monitoring their effects and side effects before raising the dose. People will disagree about the right course of treatment, but thinking about the choices in this way might give the disagreements the appropriate focus.
Read more here.
Image courtesy of Stephen Wilkes for The New York Times.
- London's Other River>
You will have heard of the River Thames, the famous swathe of grey that cuts a watery path through London. You may even have heard of several of London’s prominent canals, such as the Grand Union Canal and Regent’s Canal. But, you probably will not have heard of the mysterious River Fleet that meanders through eerie tunnels beneath the city.
The Fleet and its Victorian tunnels are available for exploration, but are not for the faint of heart or sensitive of nose.
For more stunning subterranean images, follow the full article here.
Images courtesy of Environmental Graffiti.
- Sustainable Living From Your Backyard>
Dreaming of self-sufficiency? The infographic below shows that an average U.S. household would need around 2 acres of outdoor space for the ultimate sustainable backyard.
From One Block Off the Grid:
- Data, data, data: It's Everywhere>
Cities are one of the most remarkable and peculiar inventions of our species. They provide billions of the human family with a framework for food, shelter and security. Increasingly, cities are becoming hubs in a vast data network where public officials and citizens mine and leverage vast amounts of information.
Krystal D’Costa for Scientific American:
Once upon a time there was a family that lived in homes raised on platforms in the sky. They had cars that flew and sorta drove themselves. Their sidewalks carried them to where they needed to go. Video conferencing was the norm, as were appliances which were mostly automated. And they had a robot that cleaned and dispensed sage advice.
I was always a huge fan of the Jetsons. The family dynamics I could do without—Hey, Jane, you clearly had outside interests. You totally could have pursued them, and rocked at it too!—but they were a social reflection of the times even while set in the future, so that is what it is. But their lives were a technological marvel! They could travel by tube, electronic arms dressed them (at the push of a button), and Rosie herself was astounding. If it rained, the Superintendent could move their complex to a higher altitude to enjoy the sunshine! Though it’s a little terrifying to think that Mr. Spacely could pop up on video chat at any time. Think about your boss having that sort of access. Scary, right?
The year 2062 used to seem impossibly far away. But as the setting for the space-age family’s adventures looms on the horizon, even the tech-expectant Jetsons would have to agree that our worlds are perhaps closer than we realize. The moving sidewalks and push button technology (apps, anyone?) have been realized, we’re developing cars that can drive themselves, and we’re on our way to building more Rosie-like AI. Heck, we’re even testing the limits of personal flight. No joke. We’re even working to build a smarter electrical grid, one that would automatically adjust home temperatures and more accurately measure usage.
Sure, we have a ways to go just yet, but we’re more than peering over the edge. We’ve taken the first big step in revolutionizing our management of data.
The September special issue of Scientific American focuses on the strengths of urban centers. Often disparaged for congestion, pollution, and perceived apathy, cities have a history of being vilified. And yet, they’re also seats of innovation. The Social Nexus explores the potential waiting to be unleashed by harnessing data.
If there’s one thing cities have an abundance of, it’s data. Number of riders on the subway, parking tickets given in a certain neighborhood, number of street fairs, number of parking facilities, broken parking meters—if you can imagine it, chances are the City has the data available, and it’s now open for you to review, study, compare, and shape, so that you can help build a city that’s responsive to your needs.
Image courtesy of Wikipedia / Creative Commons.
- The Right of Not Turning Left>
In 2007, UPS made headlines by declaring left-hand turns undesirable for its army of delivery truck drivers. Of course, we left-handers have always known that our left or “sinister” side is fatefully less attractive and still branded as unlucky or evil. Chinese culture brands left-handedness as improper as well.
UPS had other motives for pooh-poohing left-hand turns. For a company that runs over 95,000 big brown delivery trucks, optimizing delivery routes could result in tremendous savings. In fact, careful research showed that the company could reduce its annual delivery routes by 28.5 million miles, save around 3 million gallons of fuel and reduce CO2 emissions by over 30,000 metric tons. And eliminating or reducing left-hand turns would be safer as well. Of the 2.4 million crashes at intersections in the United States in 2007, most involved left-hand turns, according to the U.S. Federal Highway Administration.
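The quoted figures hang together, as a quick back-of-the-envelope sketch shows. (The CO2 emission factor of roughly 10.2 kg per gallon of diesel is an assumption of this sketch, not a number from the article.)

```python
# Rough consistency check of the UPS figures quoted above:
# 28.5 million fewer miles, ~3 million gallons of fuel saved,
# ~30,000 metric tons less CO2.

miles_saved = 28.5e6        # miles per year
gallons_saved = 3.0e6       # gallons per year
kg_co2_per_gallon = 10.2    # assumed emission factor for diesel

# 28.5M miles / 3M gallons implies a plausible ~9.5 mpg fleet average.
implied_mpg = miles_saved / gallons_saved

# 3M gallons * 10.2 kg/gallon / 1000 kg-per-tonne ~ 30,600 metric tons,
# consistent with the "over 30,000 metric tons" claim.
co2_metric_tons = gallons_saved * kg_co2_per_gallon / 1000

print(f"Implied fleet efficiency: {implied_mpg:.1f} mpg")
print(f"Implied CO2 savings: {co2_metric_tons:,.0f} metric tons")
```

The numbers are mutually consistent: the implied fleet fuel economy is realistic for heavy delivery trucks, and the CO2 total follows directly from the fuel saved.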
Now urban planners and highway designers in the United States are evaluating the same thing — how to reduce the need for left-hand turns. Drivers in Europe, especially the United Kingdom, will be all too familiar with the roundabout technique for reducing left-hand turns on many A and B roads. Roundabouts have yet to gain significant traction in the United States, so now comes the Diverging Diamond Interchange.
From Slate:
. . . Left turns are the bane of traffic engineers. Their idea of utopia runs clockwise. (UPS’ routing software famously has drivers turn right whenever possible, to save money and time.) The left-turning vehicle presents not only the aforementioned safety hazard, but a coagulation in the smooth flow of traffic. It’s either a car stopped in an active traffic lane, waiting to turn; or, even worse, it’s cars in a dedicated left-turn lane that, when traffic is heavy enough, requires its own “dedicated signal phase,” lengthening the delay for through traffic as well as cross traffic. And when traffic volumes really increase, as in the junction of two suburban arterials, multiple left-turn lanes are required, costing even more in space and money.
And, increasingly, because of shifting demographics and “lollipop” development patterns, suburban arterials are where the action is: They represent, according to one report, less than 10 percent of the nation’s road mileage, but account for 48 percent of its vehicle-miles traveled.
. . . What can you do when you’ve tinkered all you can with the traffic signals, added as many left-turn lanes as you can, rerouted as much traffic as you can, in areas that have already been built to a sprawling standard? Welcome to the world of the “unconventional intersection,” where left turns are engineered out of existence.
. . . “Grade separation” is the most extreme way to eliminate traffic conflicts. But it’s not only aesthetically unappealing in many environments, it’s expensive. There is, however, a cheaper, less disruptive approach, one that promises its own safety and efficiency gains, that has become recently popular in the United States: the diverging diamond interchange. There’s just one catch: You briefly have to drive the wrong way. But more on that in a bit.
The “DDI” is the brainchild of Gilbert Chlewicki, who first theorized what he called the “criss-cross interchange” as an engineering student at the University of Maryland in 2000.
The DDI is the sort of thing that is easier to visualize than describe (this simulation may help), but here, roughly, is how a DDI built under a highway overpass works: As the eastbound driver approaches the highway interchange (whose lanes run north-south), traffic lanes “criss cross” at a traffic signal. The driver will now find himself on the “left” side of the road, where he can either make an unimpeded left turn onto the highway ramp, or cross over again to the right once he has gone under the highway overpass.
- The Plastic Bag Wars> From Rolling Stone:
American shoppers use an estimated 102 billion plastic shopping bags each year — more than 500 per consumer. Named by Guinness World Records as “the most ubiquitous consumer item in the world,” the ultrathin bags have become a leading source of pollution worldwide. They litter the world’s beaches, clog city sewers, contribute to floods in developing countries and fuel a massive flow of plastic waste that is killing wildlife from sea turtles to camels. “The plastic bag has come to represent the collective sins of the age of plastic,” says Susan Freinkel, author of Plastic: A Toxic Love Story.
Many countries have instituted tough new rules to curb the use of plastic bags. Some, like China, have issued outright bans. Others, including many European nations, have imposed stiff fees to pay for the mess created by all the plastic trash. “There is simply zero justification for manufacturing them anymore, anywhere,” the United Nations Environment Programme recently declared. But in the United States, the plastics industry has launched a concerted campaign to derail and defeat anti-bag measures nationwide. The effort includes well-placed political donations, intensive lobbying at both the state and national levels, and a pervasive PR campaign designed to shift the focus away from plastic bags to the supposed threat of canvas and paper bags — including misleading claims that reusable bags “could” contain bacteria and unsafe levels of lead.
“It’s just like Big Tobacco,” says Amy Westervelt, founding editor of Plastic Free Times, a website sponsored by the nonprofit Plastic Pollution Coalition. “They’re using the same underhanded tactics — and even using the same lobbying firm that Philip Morris started and bankrolled in the Nineties. Their sole aim is to maintain the status quo and protect their profits. They will stop at nothing to suppress or discredit science that clearly links chemicals in plastic to negative impacts on human, animal and environmental health.”
Made from high-density polyethylene — a byproduct of oil and natural gas — the single-use shopping bag was invented by a Swedish company in the mid-Sixties and brought to the U.S. by ExxonMobil. Introduced to grocery-store checkout lines in 1976, the “T-shirt bag,” as it is known in the industry, can now be found literally everywhere on the planet, from the bottom of the ocean to the peaks of Mount Everest. The bags are durable, waterproof, cheaper to produce than paper bags and able to carry 1,000 times their own weight. They are also a nightmare to recycle: The flimsy bags, many thinner than a strand of human hair, gum up the sorting equipment used by most recycling facilities. “Plastic bags and other thin-film plastic is the number-one enemy of the equipment we use,” says Jeff Murray, vice president of Far West Fibers, the largest recycler in Oregon. “More than 300,000 plastic bags are removed from our machines every day — and since most of the removal has to be done by hand, that means more than 25 percent of our labor costs involves plastic-bag removal.”
- The Slow Food - Fast Food Debate>
For watchers of the human condition, dissecting and analyzing our food culture is both fascinating and troubling. The global agricultural-industrial complex, with its enormous efficiencies and finely engineered end-products, churns out mountains of foodstuffs that help feed a significant proportion of the world. And yet, many argue that the same over-refined, highly-processed, preservative-doped, high-fructose enriched, sugar and salt laden, color saturated foods are to blame for many of our modern ills. The catalog of dangers from that box of “fish” sticks, orange “cheese” and Twinkies goes something like this: heart disease, cancer, diabetes, and obesity.
To counterbalance the fast/processed-food juggernaut, the grassroots International Slow Food movement established its manifesto in 1989. Its stated vision is:
We envision a world in which all people can access and enjoy food that is good for them, good for those who grow it and good for the planet.
They go on to say:
We believe that everyone has a fundamental right to the pleasure of good food and consequently the responsibility to protect the heritage of food, tradition and culture that make this pleasure possible. Our association believes in the concept of neo-gastronomy – recognition of the strong connections between plate, planet, people and culture.
These are lofty ideals. Many would argue that the goals of the Slow Food movement, while worthy, are somewhat elitist and totally impractical in current times on our over-crowded, resource constrained little blue planet.
Krystal D’Costa over at Anthropology in Practice has a fascinating analysis and takes a more pragmatic view.
From Anthropology in Practice:
There’s a sign hanging in my local deli that offers customers some tips on what to expect in terms of quality and service. It reads:
Can be fast and good, but it won’t be cheap.
Can be fast and cheap, but it won’t be good.
Can be good and cheap, but it won’t be fast.
Pick two—because you aren’t going to get it good, cheap, and fast.
The Good/Fast/Cheap Model is certainly not new. It’s been a longstanding principle in design, and has been applied to many other things. The idea is a simple one: we can’t have our cake and eat it too. But that doesn’t mean we can’t or won’t try—and nowhere does this battle rage more fiercely than when it comes to fast food.
In a landscape dominated by golden arches, dollar menus, and value meals serving up to 2,150 calories, fast food has been much maligned. It’s fast, it’s cheap, but we know it’s generally not good for us. And yet, well-touted statistics report that Americans are spending more than ever on fast food:
In 1970, Americans spent about $6 billion on fast food; in 2000, they spent more than $110 billion. Americans now spend more money on fast food than on higher education, personal computers, computer software, or new cars. They spend more on fast food than on movies, books, magazines, newspapers, videos, and recorded music—combined.[i]
With waistlines growing at an alarming rate, fast food has become an easy target. Concern has spurred the emergence of healthier chains (where it’s good and fast, but not cheap), half servings, and posted calorie counts. We talk about awareness and “food prints” enthusiastically, aspire to incorporate more organic produce in our diets, and struggle to encourage others to do the same even while we acknowledge that differing economic means may be a limiting factor.
In short, we long to return to a simpler food time—when local harvests were common and more than adequately provided the sustenance we needed, and we relied less on processed, industrialized foods. We long for a time when home-cooked meals, from scratch, were the norm—and any number of cooking shows on the American airways today work to convince us that it’s easy to do. We’re told to shun fast food, and while it’s true that modern, fast, processed foods represent an extreme in portion size and nutrition, it is also true that our nostalgia is misguided: raw, unprocessed foods—the “natural” that we yearn for—were a challenge for our ancestors. In fact, these foods were downright dangerous.
Step back in time to when fresh meat rotted before it could be consumed and you still consumed it, to when fresh fruits were sour, vegetables were bitter, and when roots and tubers were poisonous. Nature, ever fickle, could withhold her bounty as easily as she could share it: droughts wreaked havoc on produce, storms hampered fishing, cows stopped giving milk, and hens stopped laying.[ii] What would you do then?
Images courtesy of International Slow Food Movement / Fred Meyer store by lyzadanger.
- How the Great White Egret Spurred Bird Conservation>
The infamous Dead Parrot Sketch from Monty Python’s Flying Circus continues to resonate several generations removed from its creators. One of the most treasured exchanges, between a shady pet shop owner and a prospective customer, included two immortal comedic words, “Beautiful plumage”, followed by the equally impressive retort, “The plumage don’t enter into it. It’s stone dead.”
Though utterly silly, this conversation does point toward a deeper and very ironic truth: that humans, so eager to express their status among their peers, do this by exploiting another species. Thus, the stunning white plumage of the Great White Egret proved to be its undoing, almost. So utterly sought after were the egrets’ feathers that both males and females were hunted close to extinction. And, in a final ironic twist, the near extinction of these great birds inspired the Audubon campaigns and drove legislation to curb the era of fancy feathers.
More courtesy of the Smithsonian:
I’m not the only one who has been dazzled by the egret’s feathers, though. At the turn of the 20th century, these feathers were a huge hit in the fashion world, to the detriment of the species, as Thor Hanson explains in his new book Feathers: The Evolution of a Natural Miracle:
One particular group of birds suffered near extermination at the hands of feather hunters, and their plight helped awaken a conservation ethic that still resonates in the modern environmental movement. With striking white plumes and crowded, conspicuous nesting colonies, Great Egrets and Snowy Egrets faced an unfortunate double jeopardy: their feathers fetched a high price, and their breeding habits made them an easy mark. To make matters worse, both sexes bore the fancy plumage, so hunters didn’t just target the males; they decimated entire rookeries. At the peak of the trade, an ounce of egret plume fetched the modern equivalent of two thousand dollars, and successful hunters could net a cool hundred grand in a single season. But every ounce of breeding plumes represented six dead adults, and each slain pair left behind three to five starving nestlings. Millions of birds died, and by the turn of the century this once common species survived only in the deep Everglades and other remote wetlands.
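The scale of a single hunter’s season can be worked out from the figures in the excerpt. This is a rough sketch; it assumes, as the passage implies but does not state outright, that every ounce of plume came from breeding birds.

```python
# Back-of-the-envelope tally from the quoted figures:
# ~$2,000 (modern equivalent) per ounce of plumes, a successful
# hunter netting ~$100,000 in a season, six dead adults per ounce,
# and 3-5 starving nestlings left behind per slain pair.

price_per_ounce = 2_000
season_earnings = 100_000
adults_per_ounce = 6
nestlings_per_pair = (3, 5)   # low and high estimates

ounces = season_earnings / price_per_ounce        # 50 oz per season
adults_killed = ounces * adults_per_ounce         # 300 adults
pairs = adults_killed / 2                         # 150 breeding pairs
nestlings_lost = (pairs * nestlings_per_pair[0],
                  pairs * nestlings_per_pair[1])  # 450-750 chicks

print(f"{adults_killed:.0f} adults and {nestlings_lost[0]:.0f}-"
      f"{nestlings_lost[1]:.0f} nestlings per hunter per season")
```

One successful hunter, in other words, accounted for on the order of a thousand birds in a single season — which makes the collapse to a few remote Everglades refuges unsurprising.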
This slaughter inspired Audubon members to campaign for environmental protections and bird preservation, at the state, national and international levels.
Image courtesy of Antonio Soto for the Smithsonian.
- Green Bootleggers and Baptists> Bjørn Lomborg for Project Syndicate:
In May, the United Nations’ Intergovernmental Panel on Climate Change made media waves with a new report on renewable energy. As in the past, the IPCC first issued a short summary; only later would it reveal all of the data. So it was left up to the IPCC’s spin-doctors to present the take-home message for journalists.
The first line of the IPCC’s press release declared, “Close to 80% of the world‘s energy supply could be met by renewables by mid-century if backed by the right enabling public policies.” That story was repeated by media organizations worldwide.
Last month, the IPCC released the full report, together with the data behind this startlingly optimistic claim. Only then did it emerge that it was based solely on the most optimistic of 164 modeling scenarios that researchers investigated. And this single scenario stemmed from a single study that was traced back to a report by the environmental organization Greenpeace. The author of that report – a Greenpeace staff member – was one of the IPCC’s lead authors.
The claim rested on the assumption of a large reduction in global energy use. Given the number of people climbing out of poverty in China and India, that is a deeply implausible scenario.
When the IPCC first made the claim, global-warming activists and renewable-energy companies cheered. “The report clearly demonstrates that renewable technologies could supply the world with more energy than it would ever need,” boasted Steve Sawyer, Secretary-General of the Global Wind Energy Council.
This sort of behavior – with activists and big energy companies uniting to applaud anything that suggests a need for increased subsidies to alternative energy – was famously captured by the so-called “bootleggers and Baptists” theory of politics.
The theory grew out of the experience of the southern United States, where many jurisdictions required stores to close on Sunday, thus preventing the sale of alcohol. The regulation was supported by religious groups for moral reasons, but also by bootleggers, because they had the market to themselves on Sundays. Politicians would adopt the Baptists’ pious rhetoric, while quietly taking campaign contributions from the criminals.
Of course, today’s climate-change “bootleggers” are not engaged in any illegal behavior. But the self-interest of energy companies, biofuel producers, insurance firms, lobbyists, and others in supporting “green” policies is a point that is often missed.
Indeed, the “bootleggers and Baptists” theory helps to account for other developments in global warming policy over the past decade or so. For example, the Kyoto Protocol would have cost trillions of dollars, but would have achieved a practically indiscernible difference in stemming the rise in global temperature. Yet activists claimed that there was a moral obligation to cut carbon-dioxide emissions, and were cheered on by businesses that stood to gain.
More from the source here.
- Jevons Paradox: Energy Efficiency Increases Consumption?>
Energy efficiency sounds simple, but it’s rather difficult to measure. Sure, when you purchase a shiny new, more energy-efficient washing machine to replace your previous model, you’re making a personal dent in energy consumption. But what if, in aggregate, overall consumption increases because more people want that energy-efficient model? In a nutshell, that’s Jevons Paradox, named after a 19th-century British economist, William Jevons. He observed that while the steam engine extracted energy from coal more efficiently, it also stimulated so much economic growth that coal consumption actually increased. Thus, Jevons argued that improvements in fuel efficiency tend to increase, rather than decrease, fuel use.
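The logic of the paradox can be made concrete with a toy model. This is an illustrative sketch only, not an empirical claim: it assumes demand for an energy service (say, miles driven or loads washed) responds to the service’s effective price with a constant elasticity, a standard textbook simplification.

```python
# Toy illustration of the Jevons effect: efficiency lowers the
# effective price of an energy service, demand for the service
# rises in response, and total energy use can go either way.

def energy_use(efficiency, elasticity, base_demand=100.0, energy_price=1.0):
    """Total energy consumed to supply the demanded service.

    efficiency: units of service delivered per unit of energy.
    elasticity: price elasticity of demand for the service (positive).
    """
    cost_per_service = energy_price / efficiency
    # Constant-elasticity demand: service demand rises as its cost falls.
    service_demand = base_demand * cost_per_service ** (-elasticity)
    return service_demand / efficiency

# With inelastic demand (elasticity 0.5), doubling efficiency cuts
# energy use; with elastic demand (elasticity 1.5), doubling
# efficiency *increases* total energy use -- Jevons' "backfire".
print(energy_use(1.0, 0.5), energy_use(2.0, 0.5))   # 100.0 -> ~70.7
print(energy_use(1.0, 1.5), energy_use(2.0, 1.5))   # 100.0 -> ~141.4
```

The crossover sits at an elasticity of 1: below it, efficiency gains are partly eroded by extra demand (the “rebound”); above it, they are more than erased, which is exactly what Jevons claimed happened with coal and the steam engine.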
John Tierney over at the New York Times brings Jevons into the 21st century and discovers that the issues remain the same.
From the New York Times:
For the sake of a cleaner planet, should Americans wear dirtier clothes?
This is not a simple question, but then, nothing about dirty laundry is simple anymore. We’ve come far since the carefree days of 1996, when Consumer Reports tested some midpriced top-loaders and reported that “any washing machine will get clothes clean.”
In this year’s report, no top-loading machine got top marks for cleaning. The best performers were front-loaders costing on average more than $1,000. Even after adjusting for inflation, that’s still $350 more than the top-loaders of 1996.
What happened to yesterday’s top-loaders? To comply with federal energy-efficiency requirements, manufacturers made changes like reducing the quantity of hot water. The result was a bunch of what Consumer Reports called “washday wash-outs,” which left some clothes “nearly as stained after washing as they were when we put them in.”
Now, you might think that dirtier clothes are a small price to pay to save the planet. Energy-efficiency standards have been embraced by politicians of both parties as one of the easiest ways to combat global warming. Making appliances, cars, buildings and factories more efficient is called the “low-hanging fruit” of strategies to cut greenhouse emissions.
But a growing number of economists say that the environmental benefits of energy efficiency have been oversold. Paradoxically, there could even be more emissions as a result of some improvements in energy efficiency, these economists say.
The problem is known as the energy rebound effect. While there’s no doubt that fuel-efficient cars burn less gasoline per mile, the lower cost at the pump tends to encourage extra driving. There’s also an indirect rebound effect as drivers use the money they save on gasoline to buy other things that produce greenhouse emissions, like new electronic gadgets or vacation trips on fuel-burning planes.
Read more here.
Image courtesy of Wikipedia, Popular Science Monthly / Creative Commons.
- The Strange Forests that Drink—and Eat—Fog> From Discover:
On the rugged roadway approaching Fray Jorge National Park in north-central Chile, you are surrounded by desert. This area receives less than six inches of rain a year, and the dry terrain is more suggestive of the badlands of the American Southwest than of the lush landscapes of the Amazon. Yet as the road climbs, there is an improbable shift. Perched atop the coastal mountains here, some 1,500 to 2,000 feet above the level of the nearby Pacific Ocean, are patches of vibrant rain forest covering up to 30 acres apiece. Trees stretch as much as 100 feet into the sky, with ferns, mosses, and bromeliads adorning their canopies. Then comes a second twist: As you leave your car and follow a rising path from the shrub into the forest, it suddenly starts to rain. This is not rain from clouds in the sky above, but fog dripping from the tree canopy. These trees are so efficient at snatching moisture out of the air that the fog provides them with three-quarters of all the water they need.
Understanding these pocket rain forests and how they sustain themselves in the middle of a rugged desert has become the life’s work of a small cadre of scientists who are only now beginning to fully appreciate Fray Jorge’s third and deepest surprise: The trees that grow here do more than just drink the fog. They eat it too.
Fray Jorge lies at the north end of a vast rain forest belt that stretches southward some 600 miles to the tip of Chile. In the more southerly regions of this zone, the forest is wetter, thicker, and more contiguous, but it still depends on fog to survive dry summer conditions. Kathleen C. Weathers, an ecosystem scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York, has been studying the effects of fog on forest ecosystems for 25 years, and she still cannot quite believe how it works. “One step inside a fog forest and it’s clear that you’ve entered a remarkable ecosystem,” she says. “The ways in which trees, leaves, mosses, and bromeliads have adapted to harvest tiny droplets of water that hang in the atmosphere is unparalleled.”
Image courtesy of Juan J. Armesto / Foundation Senda Darwin Archive.