- Self-Assured Destruction (SAD)>
The Cold War between the former U.S.S.R. and the United States brought us the perfect acronym for the ultimate human “game” of brinkmanship — it was called MAD, for mutually assured destruction.
Now, thanks to ever-evolving technology, increasing military capability, growing environmental exploitation, and unceasing human stupidity, we have reached an era that we have dubbed SAD, for self-assured destruction. During the MAD period, the thinking was that it would take the combined efforts of the world’s two superpowers to wreak global catastrophe. Now, as a sign of our so-called progress — in the era of SAD — it only takes one major nation to ensure the destruction of the planet. Few would call this progress. Noam Chomsky offers some choice words on our continuing folly.
What is the future likely to bring? A reasonable stance might be to try to look at the human species from the outside. So imagine that you’re an extraterrestrial observer who is trying to figure out what’s happening here or, for that matter, imagine you’re an historian 100 years from now – assuming there are any historians 100 years from now, which is not obvious – and you’re looking back at what’s happening today. You’d see something quite remarkable.
For the first time in the history of the human species, we have clearly developed the capacity to destroy ourselves. That’s been true since 1945. It’s now being finally recognized that there are more long-term processes like environmental destruction leading in the same direction, maybe not to total destruction, but at least to the destruction of the capacity for a decent existence.
And there are other dangers like pandemics, which have to do with globalization and interaction. So there are processes underway and institutions right in place, like nuclear weapons systems, which could lead to a serious blow to, or maybe the termination of, an organized existence.
The question is: What are people doing about it? None of this is a secret. It’s all perfectly open. In fact, you have to make an effort not to see it.
There has been a range of reactions. There are those who are trying hard to do something about these threats, and others who are acting to escalate them. If you look at who they are, this future historian or extraterrestrial observer would see something strange indeed. Trying to mitigate or overcome these threats are the least developed societies, the indigenous populations, or the remnants of them, tribal societies and First Nations in Canada. They’re not talking about nuclear war but environmental disaster, and they’re really trying to do something about it.
In fact, all over the world – Australia, India, South America – there are battles going on, sometimes wars. In India, it’s a major war over direct environmental destruction, with tribal societies trying to resist resource extraction operations that are extremely harmful locally, but also in their general consequences. In societies where indigenous populations have an influence, many are taking a strong stand. The strongest of any country with regard to global warming is in Bolivia, which has an indigenous majority and constitutional requirements that protect the “rights of nature.”
Ecuador, which also has a large indigenous population, is the only oil exporter I know of where the government is seeking aid to help keep that oil in the ground, instead of producing and exporting it – and the ground is where it ought to be.
Venezuelan President Hugo Chavez, who died recently and was the object of mockery, insult, and hatred throughout the Western world, attended a session of the U.N. General Assembly a few years ago where he elicited all sorts of ridicule for calling George W. Bush a devil. He also gave a speech there that was quite interesting. Of course, Venezuela is a major oil producer. Oil is practically their whole gross domestic product. In that speech, he warned of the dangers of the overuse of fossil fuels and urged producer and consumer countries to get together and try to work out ways to reduce fossil fuel use. That was pretty amazing on the part of an oil producer. You know, he was part Indian, of indigenous background. Unlike the funny things he did, this aspect of his actions at the U.N. was never even reported.
So, at one extreme you have indigenous, tribal societies trying to stem the race to disaster. At the other extreme, the richest, most powerful societies in world history, like the United States and Canada, are racing full-speed ahead to destroy the environment as quickly as possible. Unlike Ecuador, and indigenous societies throughout the world, they want to extract every drop of hydrocarbons from the ground with all possible speed.
Both political parties, President Obama, the media, and the international press seem to be looking forward with great enthusiasm to what they call “a century of energy independence” for the United States. Energy independence is an almost meaningless concept, but put that aside. What they mean is: we’ll have a century in which to maximize the use of fossil fuels and contribute to destroying the world.
And that’s pretty much the case everywhere. Admittedly, when it comes to alternative energy development, Europe is doing something. Meanwhile, the United States, the richest and most powerful country in world history, is the only nation among perhaps 100 relevant ones that doesn’t have a national policy for restricting the use of fossil fuels, that doesn’t even have renewable energy targets. It’s not because the population doesn’t want it. Americans are pretty close to the international norm in their concern about global warming. It’s institutional structures that block change. Business interests don’t want it and they’re overwhelmingly powerful in determining policy, so you get a big gap between opinion and policy on lots of issues, including this one.
So that’s what the future historian – if there is one – would see. He might also read today’s scientific journals. Just about every one you open has a more dire prediction than the last.
The other issue is nuclear war. It’s been known for a long time that if there were to be a first strike by a major power, even with no retaliation, it would probably destroy civilization just because of the nuclear-winter consequences that would follow. You can read about it in the Bulletin of the Atomic Scientists. It’s well understood. So the danger has always been a lot worse than we thought it was.
We’ve just passed the 50th anniversary of the Cuban Missile Crisis, which was called “the most dangerous moment in history” by historian Arthur Schlesinger, President John F. Kennedy’s advisor. Which it was. It was a very close call, and not the only time either. In some ways, however, the worst aspect of these grim events is that the lessons haven’t been learned.
What happened in the missile crisis in October 1962 has been prettified to make it look as if acts of courage and thoughtfulness abounded. The truth is that the whole episode was almost insane. There was a point, as the missile crisis was reaching its peak, when Soviet Premier Nikita Khrushchev wrote to Kennedy offering to settle it by a public announcement of a withdrawal of Russian missiles from Cuba and U.S. missiles from Turkey. Actually, Kennedy hadn’t even known that the U.S. had missiles in Turkey at the time. They were being withdrawn anyway, because they were being replaced by more lethal Polaris nuclear submarines, which were invulnerable.
So that was the offer. Kennedy and his advisors considered it – and rejected it. At the time, Kennedy himself was estimating the likelihood of nuclear war at a third to a half. So Kennedy was willing to accept a very high risk of massive destruction in order to establish the principle that we – and only we – have the right to offensive missiles beyond our borders, in fact anywhere we like, no matter what the risk to others – and to ourselves, if matters fall out of control. We have that right, but no one else does.
Kennedy did, however, accept a secret agreement to withdraw the missiles the U.S. was already withdrawing, as long as it was never made public. Khrushchev, in other words, had to openly withdraw the Russian missiles while the US secretly withdrew its obsolete ones; that is, Khrushchev had to be humiliated and Kennedy had to maintain his macho image. He’s greatly praised for this: courage and coolness under threat, and so on. The horror of his decisions is not even mentioned – try to find it on the record.
And to add a little more, a couple of months before the crisis blew up the United States had sent missiles with nuclear warheads to Okinawa. These were aimed at China during a period of great regional tension.
Well, who cares? We have the right to do anything we want anywhere in the world. That was one grim lesson from that era, but there were others to come.
Ten years after that, in 1973, Secretary of State Henry Kissinger called a high-level nuclear alert. It was his way of warning the Russians not to interfere in the ongoing Israel-Arab war and, in particular, not to interfere after he had informed the Israelis that they could violate a ceasefire the U.S. and Russia had just agreed upon. Fortunately, nothing happened.
Ten years later, President Ronald Reagan was in office. Soon after he entered the White House, he and his advisors had the Air Force start penetrating Russian air space to try to elicit information about Russian warning systems, Operation Able Archer. Essentially, these were mock attacks. The Russians were uncertain, some high-level officials fearing that this was a step towards a real first strike. Fortunately, they didn’t react, though it was a close call. And it goes on like that.
At the moment, the nuclear issue is regularly on front pages in the cases of North Korea and Iran. There are ways to deal with these ongoing crises. Maybe they wouldn’t work, but at least you could try. They are, however, not even being considered, not even reported.
Read the entire article here.
Image: President Kennedy signs Cuba quarantine proclamation, 23 October 1962. Courtesy of Wikipedia.
- Living Long and Prospering on Ikaria>
It’s safe to suggest that most of us above a certain age — let’s say 30 — wish to stay young. It is even safer to suggest, in the absence of a solution to this first wish, that many of us wish to age gracefully and happily. Yet most of us, especially in the West, age less than gracefully, amid colorful medicines, lengthy tubes, and unpronounceable procedures. We are collectively living longer, but the quality of those extra years leaves much to be desired.
In a quest to understand the process of aging more thoroughly, researchers regularly descend on areas the world over that are known to have higher-than-average populations of healthy older people. These have become known as “Blue Zones”. One such place is a small, idyllic (there’s a clue right there) Greek island called Ikaria.
From the Guardian:
Gregoris Tsahas has smoked a packet of cigarettes every day for 70 years. High up in the hills of Ikaria, in his favourite cafe, he draws on what must be around his half-millionth fag. I tell him smoking is bad for the health and he gives me an indulgent smile, which suggests he’s heard the line before. He’s 100 years old and, aside from appendicitis, has never known a day of illness in his life.
Tsahas has short-cropped white hair, a robustly handsome face and a bone-crushing handshake. He says he drinks two glasses of red wine a day, but on closer interrogation he concedes that, like many other drinkers, he has underestimated his consumption by a couple of glasses.
The secret of a good marriage, he says, is never to return drunk to your wife. He’s been married for 60 years. “I’d like another wife,” he says. “Ideally one about 55.”
Tsahas is known at the cafe as a bit of a gossip and a joker. He goes there twice a day. It’s a 1km walk from his house over uneven, sloping terrain. That’s four hilly kilometres a day. Not many people half his age manage that far in Britain.
In Ikaria, a Greek island in the far east of the Mediterranean, about 30 miles from the Turkish coast, characters such as Gregoris Tsahas are not exceptional. With its beautiful coves, rocky cliffs, steep valleys and broken canopy of scrub and olive groves, Ikaria looks similar to any number of other Greek islands. But there is one vital difference: people here live much longer than the population on other islands and on the mainland. In fact, people here live on average 10 years longer than those in the rest of Europe and America – around one in three Ikarians lives into their 90s. Not only that, but they also have much lower rates of cancer and heart disease, suffer significantly less depression and dementia, maintain a sex life into old age and remain physically active deep into their 90s. What is the secret of Ikaria? What do its inhabitants know that the rest of us don’t?
The island is named after Icarus, the young man in Greek mythology who flew too close to the sun and plunged into the sea, according to legend, close to Ikaria. Thoughts of plunging into the sea are very much in my mind as the propeller plane from Athens comes in to land. There is a fierce wind blowing – the island is renowned for its wind – and the aircraft appears to stall as it turns to make its final descent, tipping this way and that until, at the last moment, the pilot takes off upwards and returns to Athens. Nor are there any ferries, owing to a strike. “They’re always on strike,” an Athenian back at the airport tells me.
Stranded in Athens for the night, I discover that a fellow thwarted passenger is Dan Buettner, author of a book called The Blue Zones, which details the five small areas in the world where the population outlive the American and western European average by around a decade: Okinawa in Japan, Sardinia, the Nicoya peninsula in Costa Rica, Loma Linda in California and Ikaria.
Tall and athletic, 52-year-old Buettner, who used to be a long-distance cyclist, looks a picture of well-preserved youth. He is a fellow with National Geographic magazine and became interested in longevity while researching Okinawa’s aged population. He tells me there are several other passengers on the plane who are interested in Ikaria’s exceptional demographics. “It would have been ironic, don’t you think,” he notes drily, “if a group of people looking for the secret of longevity crashed into the sea and died.”
Chatting to locals on the plane the following day, I learn that several have relations who are centenarians. One woman says her aunt is 111. The problem for demographers with such claims is that they are often very difficult to stand up. Going back to Methuselah, history is studded with exaggerations of age. In the last century, longevity became yet another battleground in the cold war. The Soviet authorities let it be known that people in the Caucasus were living deep into their hundreds. But subsequent studies have shown these claims lacked evidential foundation.
Since then, various societies and populations have reported advanced ageing, but few are able to supply convincing proof. “I don’t believe Korea or China,” Buettner says. “I don’t believe the Hunza Valley in Pakistan. None of those places has good birth certificates.”
However, Ikaria does. It has also been the subject of a number of scientific studies. Aside from the demographic surveys that Buettner helped organise, there was also the University of Athens’ Ikaria Study. One of its members, Dr Christina Chrysohoou, a cardiologist at the university’s medical school, found that the Ikarian diet featured a lot of beans and not much meat or refined sugar. The locals also feast on locally grown and wild greens, some of which contain 10 times more antioxidants than are found in red wine, as well as potatoes and goat’s milk.
Chrysohoou thinks the food is distinct from that eaten on other Greek islands with lower life expectancy. “Ikarians’ diet may have some differences from other islands’ diets,” she says. “The Ikarians drink a lot of herb tea and small quantities of coffee; daily calorie consumption is not high. Ikaria is still an isolated island, without tourists, which means that, especially in the villages in the north, where the highest longevity rates have been recorded, life is largely unaffected by the westernised way of living.”
But she also refers to research that suggests the Ikarian habit of taking afternoon naps may help extend life. One extensive study of Greek adults showed that regular napping reduced the risk of heart disease by almost 40%. What’s more, Chrysohoou’s preliminary studies revealed that 80% of Ikarian males between the ages of 65 and 100 were still having sex. And, of those, a quarter did so with “good duration” and “achievement”. “We found that most males between 65 and 88 reported sexual activity, but after the age of 90, very few continued to have sex.”
Read the entire article here.
Image: Agios Giorgis Beach, Ikaria. Courtesy of Island-Ikaria travel guide.
- MondayMap: The Double Edge of Climate Change>
So the changing global climate will imperil our coasts, flood low-lying lands, fuel more droughts, increase weather extremes, and generally make the planet more toasty. But a new study — for the first time — links increasing levels of CO2 to an increase in global vegetation. Perhaps this portends our eventual fate — ceding the Earth back to the plants — unless humans make some drastic behavioral changes.
From the New Scientist:
The planet is getting lusher, and we are responsible. Carbon dioxide generated by human activity is stimulating photosynthesis and causing a beneficial greening of the Earth’s surface.
For the first time, researchers claim to have shown that the increase in plant cover is due to this “CO2 fertilisation effect” rather than other causes. However, it remains unclear whether the effect can counter any negative consequences of global warming, such as the spread of deserts.
Recent satellite studies have shown that the planet is harbouring more vegetation overall, but pinning down the cause has been difficult. Factors such as higher temperatures, extra rainfall, and an increase in atmospheric CO2 – which helps plants use water more efficiently – could all be boosting vegetation.
To home in on the effect of CO2, Randall Donohue of Australia’s national research institute, the CSIRO in Canberra, monitored vegetation at the edges of deserts in Australia, southern Africa, the US Southwest, North Africa, the Middle East and central Asia. These are regions where there is ample warmth and sunlight, but only just enough rainfall for vegetation to grow, so any change in plant cover must be the result of a change in rainfall patterns or CO2 levels, or both.
If CO2 levels were constant, then the amount of vegetation per unit of rainfall ought to be constant, too. However, the team found that this figure rose by 11 per cent in these areas between 1982 and 2010, mirroring the rise in CO2 (Geophysical Research Letters, doi.org/mqx). Donohue says this lends “strong support” to the idea that CO2 fertilisation drove the greening.
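The ratio test described above can be sketched in a few lines. Everything below — the site, the cover fractions, the rainfall figures — is invented for illustration; the sketch only shows how vegetation cover per unit of rainfall is computed and compared across years, not Donohue's actual data or method details.

```python
# Minimal sketch of the ratio test: if CO2 had no effect, vegetation
# cover per unit of rainfall should stay roughly flat over time.
# All numbers below are hypothetical, for illustration only.

def veg_per_rainfall(cover, rainfall_mm):
    """Fractional foliage cover divided by annual rainfall (mm)."""
    return cover / rainfall_mm

def percent_change(series):
    """Percent change from the first to the last value in a series."""
    return 100.0 * (series[-1] - series[0]) / series[0]

# Hypothetical observations for one arid-zone site at the start and
# end of the study window (1982 and 2010).
cover = [0.20, 0.23]    # fractional vegetation cover (invented)
rain = [250.0, 252.0]   # annual rainfall in mm (invented)

ratios = [veg_per_rainfall(c, r) for c, r in zip(cover, rain)]
change = percent_change(ratios)
```

With rainfall nearly unchanged but cover up, the ratio rises — the signature the study attributes to CO2 fertilisation rather than wetter weather.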
Climate change studies have predicted that many dry areas will get drier and that some deserts will expand. Donohue’s findings make this less certain.
However, the greening effect may not apply to the world’s driest regions. Beth Newingham of the University of Idaho, Moscow, recently published the result of a 10-year experiment involving a greenhouse set up in the Mojave desert of Nevada. She found “no sustained increase in biomass” when extra CO2 was pumped into the greenhouse. “You cannot assume that all these deserts respond the same,” she says. “Enough water needs to be present for the plants to respond at all.”
The extra plant growth could have knock-on effects on climate, Donohue says, by increasing rainfall, affecting river flows and changing the likelihood of wildfires. It will also absorb more CO2 from the air, potentially damping down global warming but also limiting the CO2 fertilisation effect itself.
Read the entire article here.
Image: Global vegetation mapped: Normalized Difference Vegetation Index (NDVI) from Nov. 1, 2007, to Dec. 1, 2007, during autumn in the Northern Hemisphere. This monthly average is based on observations from the Moderate Resolution Imaging Spectroradiometer (MODIS) on NASA’s Terra satellite. The greenness values depict vegetation density; higher values (dark greens) show land areas with plenty of leafy green vegetation, such as the Amazon Rainforest. Lower values (beige to white) show areas with little or no vegetation, including sand seas and Arctic areas. Areas with moderate amounts of vegetation are pale green. Land areas with no data appear gray, and water appears blue. Courtesy of NASA.
- Your Home As Eco-System>
For centuries biologists, zoologists and ecologists have been mapping the wildlife that surrounds us in the great outdoors. Now a group led by microbiologist Noah Fierer at the University of Colorado Boulder is pursuing flora and fauna in one of the last unexplored eco-systems — the home. (Not for the faint of heart).
From the New York Times:
On a sunny Wednesday, with a faint haze hanging over the Rockies, Noah Fierer eyed the field site from the back of his colleague’s Ford Explorer. Two blocks east of a strip mall in Longmont, one of the world’s last underexplored ecosystems had come into view: a sandstone-colored ranch house, code-named Q. A pair of dogs barked in the backyard.
Dr. Fierer, 39, a microbiologist at the University of Colorado Boulder and self-described “natural historian of cooties,” walked across the front lawn and into the house, joining a team of researchers inside. One swabbed surfaces with sterile cotton swabs. Others logged the findings from two humming air samplers: clothing fibers, dog hair, skin flakes, particulate matter and microbial life.
Ecologists like Dr. Fierer have begun peering into an intimate, overlooked world that barely existed 100,000 years ago: the great indoors. They want to know what lives in our homes with us and how we “colonize” spaces with other species — viruses, bacteria, microbes. Homes, they’ve found, contain identifiable ecological signatures of their human inhabitants. Even dogs exert a significant influence on the tiny life-forms living on our pillows and television screens. Once ecologists have more thoroughly identified indoor species, they hope to come up with strategies to scientifically manage homes, by eliminating harmful taxa and fostering species beneficial to our health.
But the first step is simply to take a census of what’s already living with us, said Dr. Fierer; only then can scientists start making sense of their effects. “We need to know what’s out there first. If you don’t know that, you’re wandering blind in the wilderness.”
Here’s an undeniable fact: We are an indoor species. We spend close to 90 percent of our lives in drywalled caves. Yet traditionally, ecologists ventured outdoors to observe nature’s biodiversity, in the Amazon jungles, the hot springs of Yellowstone or the subglacial lakes of Antarctica. (“When you train as an ecologist, you imagine yourself tromping around in the forest,” Dr. Fierer said. “You don’t imagine yourself swabbing a toilet seat.”)
But as humdrum as a home might first appear, it is a veritable wonderland. Ecology does not stop at the front door; a home to you is also home to an incredible array of wildlife.
Besides the charismatic fauna commonly observed in North American homes — dogs, cats, the occasional freshwater fish — ants and roaches, crickets and carpet bugs, mites and millions upon millions of microbes, including hundreds of multicellular species and thousands of unicellular species, also thrive in them. The “built environment” doubles as a complex ecosystem that evolves under the selective pressure of its inhabitants, their behavior and the building materials. As microbial ecologists swab DNA from our homes, they’re creating an atlas of life much as 19th-century naturalists like Alfred Russel Wallace once logged flora and fauna on the Malay Archipelago.
Take an average kitchen. In a study published in February in the journal Environmental Microbiology, Dr. Fierer’s lab examined 82 surfaces in four Boulder kitchens. Predictable patterns emerged. Bacterial species associated with human skin, like Staphylococcaceae or Corynebacteriaceae, predominated. Evidence of soil showed up on the floor, and species associated with raw produce (Enterobacteriaceae, for example) appeared on countertops. Microbes common in moist areas — including sphingomonads, some strains infamous for their ability to survive in the most toxic sites — splashed in a kind of jungle above the faucet.
A hot spot of unrivaled biodiversity was discovered on the stove exhaust vent, probably the result of forced air and settling. The counter and refrigerator, places seemingly as disparate as temperate and alpine grasslands, shared a similar assemblage of microbial species — probably less because of temperature and more a consequence of cleaning. Dr. Fierer’s lab also found a few potential pathogens, like Campylobacter, lurking on the cupboards. There was evidence of the bacterium on a microwave panel, too, presumably a microbial “fingerprint” left by a cook handling raw chicken.
If a kitchen represents a temperate forest, few of its plants would be poison ivy. Most of the inhabitants are relatively benign. In any event, eradicating them is neither possible nor desirable. Dr. Fierer wants to make visible this intrinsic, if unseen, aspect of everyday life. “For a lot of the general public, they don’t care what’s in soil,” he said. “People care more about what’s on their pillowcase.” (Spoiler alert: The microbes living on your pillowcase are not all that different from those living on your toilet seat. Both surfaces come in regular contact with exposed skin.)
Read the entire article after the jump.
Image: Animals commonly found in the home. Courtesy of North Carolina State University.
- Your State Bird>
The official national bird of the United States is the Bald Eagle. For that matter, it’s also the official animal. Thankfully, it was removed from the endangered species list a mere five years ago. Aside from the bird itself, Americans love the symbolism that the eagle embodies — strength, speed, leadership and achievement. But do Americans know their state birds? A recent article from the bird lovers over at Slate will refresh your memory, and also recommend a more relevant alternative for each.
I drove over a bridge from Maryland into Virginia today and on the big “Welcome to Virginia” sign was an image of the state bird, the northern cardinal—with a yellow bill. I should have scoffed, but it hardly registered. Everyone knows that state birds are a big joke. There are a million cardinals, a scattering of robins, and just a general lack of thought put into the whole thing.
States should have to put more thought into their state bird than I put into picking my socks in the morning. “Ugh, state bird? I dunno, what’re the guys next to us doing? Cardinal? OK, let’s do that too. Yeah put it on all the signs. Nah, no time to research the bill color, let’s just go.” It’s the official state bird! Well, since all these jackanape states are too busy passing laws requiring everyone to own guns or whatever to consider what their state bird should be, I guess I’ll have to do it.
1. Alabama. Official state bird: yellowhammer
Right out of the gate with this thing. Yellowhammer? C’mon. I Asked Jeeves and it told me that Yellowhammer is some backwoods name for a yellow-shafted flicker. The origin story dates to the Civil War, when some Alabama troops wore yellow-trimmed uniforms. Sorry, but that’s dumb, mostly because it’s just a coincidence and has nothing to do with the actual bird. If you want a woodpecker, go for something with a little more cachet, something that’s at least a full species.
What it should be: red-cockaded woodpecker
2. Alaska. Official state bird: willow ptarmigan
Willow Ptarmigans are the dumbest-sounding birds on Earth, sorry. They sound like rejected Star Wars aliens, angrily standing outside the Mos Eisley Cantina because their IDs were rejected. Why go with these dopes, Alaska, when you’re the best state to see the most awesome falcon on Earth?
What it should be: gyrfalcon
3. Arizona. Official state bird: cactus wren
Cactus Wren is like the only boring bird in the entire state. I can’t believe it.
What it should be: red-faced warbler
4. Arkansas. Official state bird: northern mockingbird
Christ. What makes this even less funny is that there are like eight other states with mockingbird as their official bird. I’m convinced that the guy whose job it was to report to the state’s legislature on what the official bird should be forgot until the day it was due and he was in line for a breakfast sandwich at Burger King. In a panic he walked outside and selected the first bird he could find, a dirty mockingbird singing its stupid head off on top of a dumpster.
What it should be: painted bunting
5. California. Official state bird: California quail
… Or perhaps the largest, most radical bird on the continent?
What it should be: California condor
6. Colorado. Official state bird: lark bunting
Read the entire article here.
Image: Bald Eagle, Kodiak Alaska, 2010. Courtesy of Yathin S Krishnappa / Wikipedia.
- MondayMap: Global Intolerance>
Following on from last week’s MondayMap post on intolerance and hatred within the United States — according to tweets on the social media site Twitter — we expand our view this week to cover the globe. This map is based on a more detailed, global research study of people’s attitudes to having neighbors of a different race.
From the Washington Post:
When two Swedish economists set out to examine whether economic freedom made people any more or less racist, they knew how they would gauge economic freedom, but they needed to find a way to measure a country’s level of racial tolerance. So they turned to something called the World Values Survey, which has been measuring global attitudes and opinions for decades.
Among the dozens of questions that World Values asks, the Swedish economists found one that, they believe, could be a pretty good indicator of tolerance for other races. The survey asked respondents in more than 80 different countries to identify kinds of people they would not want as neighbors. Some respondents, picking from a list, chose “people of a different race.” The more frequently that people in a given country say they don’t want neighbors from other races, the economists reasoned, the less racially tolerant you could call that society. (The study concluded that economic freedom had no correlation with racial tolerance, but it does appear to correlate with tolerance toward homosexuals.)
Unfortunately, the Swedish economists did not include all of the World Values Survey data in their final research paper. So I went back to the source, compiled the original data and mapped it out on the infographic above. In the bluer countries, fewer people said they would not want neighbors of a different race; in red countries, more people did.
If we treat this data as indicative of racial tolerance, then we might conclude that people in the bluer countries are the least likely to express racist attitudes, while the people in red countries are the most likely.
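The measure behind the map is simple to state in code. The country names and responses below are invented for illustration; the sketch only shows how a per-country share is derived from individual survey answers of the kind the World Values Survey collects.

```python
# Minimal sketch of the tolerance measure: for each country, the share
# of respondents who picked "people of a different race" from the list
# of unwanted neighbors. All data below is hypothetical.

UNWANTED = "people of a different race"

def intolerance_share(responses):
    """Fraction of respondents whose chosen categories include UNWANTED."""
    flagged = sum(1 for chosen in responses if UNWANTED in chosen)
    return flagged / len(responses)

# Each respondent's answer is the set of neighbor types they rejected.
survey = {
    "Countria": [{UNWANTED}, set(), set(), set()],   # 1 of 4 respondents
    "Otherland": [set(), set(), set(), set()],       # 0 of 4 respondents
}

shares = {country: intolerance_share(r) for country, r in survey.items()}
```

On the map, higher shares color a country red and lower shares blue; the hypothetical "Countria" above would sit redder than "Otherland".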
Update: Compare the results to this map of the world’s most and least diverse countries.
Before we dive into the data, a couple of caveats. First, it’s entirely likely that some people lied when answering this question; it would be surprising if they hadn’t. But the operative question, unanswerable, is whether people in certain countries were more or less likely to answer the question honestly. For example, while the data suggest that Swedes are more racially tolerant than Finns, it’s possible that the two groups are equally tolerant but that Finns are just more honest. The willingness to state such a preference out loud, though, might be an indicator of racial attitudes in itself. Second, the survey is not conducted every year; some of the results are very recent and some are several years old, so we’re assuming the results are static, which might not be the case.
• Anglo and Latin countries most tolerant. People in the survey were most likely to embrace a racially diverse neighbor in the United Kingdom and its Anglo former colonies (the United States, Canada, Australia and New Zealand) and in Latin America. The only real exceptions were oil-rich Venezuela, where income inequality sometimes breaks along racial lines, and the Dominican Republic, perhaps because of its adjacency to troubled Haiti. Scandinavian countries also scored high.
• India, Jordan, Bangladesh and Hong Kong by far the least tolerant. In only four of 81 surveyed countries, more than 40 percent of respondents said they would not want a neighbor of a different race. This included 43.5 percent of Indians, 51.4 percent of Jordanians and an astonishingly high 71.8 percent of Hong Kongers and 71.7 percent of Bangladeshis.
Read more about this map here.
- 1920s London in Moving Color>
A recently unearthed celluloid (yes, celluloid) film of London in 1927 shows the capital in bustling, colorful splendor. The film was shot by Claude Friese-Greene, a pioneer of colour film in the UK.
Film courtesy of Claude Friese-Greene archives / Telegraph.
- MondayMap: Intolerance and Hatred>
A fascinating map of tweets espousing hatred and racism across the United States. The data analysis and map were developed by researchers at Humboldt State University.
From the Guardian:
[T]he students and professors at Humboldt State University who produced this map read the entirety of the 150,000 geo-coded tweets they analysed.
Using humans rather than machines means that this research was able to avoid the basic pitfall of most semantic analysis, where a tweet stating ‘the word homo is unacceptable’ would still be classed as hate speech. The data has also been ‘normalised’, meaning that the scale accounts for the total Twitter traffic in each county, so that the final result shows the frequency of hateful words on Twitter. The only question that remains is whether the views of US Twitter users are a reliable indication of the views of US citizens.
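The ‘normalisation’ step described above, dividing each county's hateful-tweet count by its total geocoded Twitter traffic, can be sketched as follows. The county names and counts here are hypothetical; the Humboldt State data is only described in the article, not reproduced:

```python
# Hypothetical per-county counts of geocoded tweets.
county_tweets = {
    "County A": {"hateful": 12, "total": 4_000},
    "County B": {"hateful": 12, "total": 40_000},
}

def normalized_rate(counts):
    """Hateful tweets as a share of all geocoded tweets in each county,
    so that busy counties are not penalized merely for tweeting more."""
    return {c: v["hateful"] / v["total"] for c, v in counts.items()}

rates = normalized_rate(county_tweets)
# Both counties have the same raw count (12), but County A's normalized
# rate is ten times County B's.
```

Without this step, a map of raw counts would mostly just reproduce a population-density map.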
See the interactive map and read the entire article here.
- More CO2 is Good, Right?>
Yesterday, May 10, 2013, scientists published new measurements of atmospheric carbon dioxide (CO2). For the first time in human history, daily average CO2 levels reached 400 parts per million (ppm). This is particularly troubling since CO2 is one of the principal heat-trapping gases in the atmosphere. The sobering milestone was recorded at the Mauna Loa Observatory in Hawaii, where monitoring has been underway since the late 1950s.
This has many climate scientists redoubling their efforts to warn of the consequences of climate change, which is believed to be driven by human activity, specifically the generation of atmospheric CO2 in ever-increasing quantities. But not to be outdone, the venerable Wall Street Journal — seldom known for its well-reasoned scientific journalism — chimed in with an op-ed on the subject. According to the WSJ we have nothing to worry about, because increased levels of CO2 are good for certain crops and the Earth historically had much higher levels of CO2 (though long before humanity).
Ashutosh Jogalekar over at The Curious Wavefunction dissects the WSJ article line by line:
Since we were discussing the differences between climate change “skeptics” and “deniers” (or “denialists”, whatever you want to call them) the other day this piece is timely. The Wall Street Journal is not exactly known for reasoned discussion of climate change, but this Op-Ed piece may set a new standard even for its own naysayers and skeptics. It’s a piece by William Happer and Harrison Schmitt that’s so one-sided, sparse on detail, misleading and ultimately pointless that I am wondering if it’s a spoof.
Happer and Schmitt’s thesis can be summed up in one line: More CO2 in the atmosphere is a good thing because it’s good for one particular type of crop plant. That’s basically it. No discussion of the downsides, not even a pretense of a balanced perspective. Unfortunately it’s not hard to classify their piece as a denialist article because it conforms to some of the classic features of denial; it’s entirely one sided, it’s very short on detail, it does a poor job even with the little details that it does present and it simply ignores the massive amount of research done on the topic. In short it’s grossly misleading.
First of all Happer and Schmitt simply dismiss any connection that might exist between CO2 levels and rising temperatures, in the process consigning a fair amount of basic physics and chemistry to the dustbin. There are no references and no actual discussion of why they don’t believe there’s a connection. That’s a shoddy start to put it mildly; you would expect a legitimate skeptic to start with some actual evidence and references. Most of the article after that consists of a discussion of the differences between so-called C3 plants (like rice) and C4 plants (like corn and sugarcane). This is standard stuff found in college biochemistry textbooks, nothing revealing here. But Happer and Schmitt leverage a fundamental difference between the two – the fact that C4 plants can utilize CO2 more efficiently than C3 plants under certain conditions – into an argument for increasing CO2 levels in the atmosphere.
This of course completely ignores all the other potentially catastrophic effects that CO2 could have on agriculture, climate, biodiversity etc. You don’t even have to be a big believer in climate change to realize that focusing on only a single effect of a parameter on a complicated system is just bad science. Happer and Schmitt’s argument is akin to the argument that everyone should get themselves addicted to meth because one of meth’s effects is euphoria. So ramping up meth consumption will make everyone feel happier, right?
But even if you consider that extremely narrowly defined effect of CO2 on C3 and C4 plants, there’s still a problem. What’s interesting is that the argument has been countered by Matt Ridley in the pages of this very publication:
But it is not quite that simple. Surprisingly, the C4 strategy first became common in the repeated ice ages that began about four million years ago. This was because the ice ages were a very dry time in the tropics and carbon-dioxide levels were very low—about half today’s levels. C4 plants are better at scavenging carbon dioxide (the source of carbon for sugars) from the air and waste much less water doing so. In each glacial cold spell, forests gave way to seasonal grasslands on a huge scale. Only about 4% of plant species use C4, but nearly half of all grasses do, and grasses are among the newest kids on the ecological block.
So whereas rising temperatures benefit C4, rising carbon-dioxide levels do not. In fact, C3 plants get a greater boost from high carbon dioxide levels than C4. Nearly 500 separate experiments confirm that if carbon-dioxide levels roughly double from preindustrial levels, rice and wheat yields will be on average 36% and 33% higher, while corn yields will increase by only 24%.
So no, the situation is more subtle than the authors think. In fact I am surprised that, given that C4 plants actually do grow better at higher temperatures, Happer and Schmitt missed an opportunity for making the case for a warmer planet. In any case, there’s a big difference between improving yields of C4 plants under controlled greenhouse conditions and expecting these yields to improve without affecting other components of the ecosystem by doing a giant planetary experiment.
Read the entire article after the jump.
Image courtesy of Sierra Club.
- Your Weekly Groceries>
Photographer Peter Menzel traveled to over 20 countries to compile his culinary atlas Hungry Planet. But this is no ordinary cookbook or trove of local delicacies. The book is a visual catalog of a family’s average weekly grocery shopping.
It is both enlightening and sobering to see the nutritional inventory of a Western family juxtaposed with that of a sub-Saharan African family. It puts into perspective the internal debate within the United States of the 1 percent versus the 99 percent. Those of us lucky enough to have been born in one of the world’s richer nations, even if we are part of the 99 percent, are still truly among the haves rather than the have-nots.
For more on Menzel’s book jump over to Amazon.
The Melander family from Bargteheide, Germany, who spend around £320 [$480] on a week’s worth of food.
Images courtesy of Peter Menzel /Barcroft Media.
- Anti-Eco-Friendly Consumption>
It should come as no surprise that those who deny the science of climate change and humanity’s impact on the environment would also shun products and services that are friendly to the environment.
A recent study shows how political persuasion sways the purchase of light bulbs: conservatives are more likely to buy incandescent bulbs, while moderates and liberals lean toward more eco-friendly alternatives.
Joe Barton, U.S. Representative from Texas, sums up the issue of light bulb choice quite neatly, “… it is about personal freedom”. All the while our children shake their heads in disbelief.
Presumably many climate change skeptics prefer to purchase items that are harmful to the environment, and to humans, just to make a political statement. This might include continuing to buy products containing questionable chemicals hidden behind unpronounceable acronyms: rBGH (recombinant bovine growth hormone) in milk, BPA (bisphenol A) in plastic utensils and bottles, KBrO3 (potassium bromate) in highly processed flour, BHA (butylated hydroxyanisole) as a food preservative, and azodicarbonamide in dough.
Freedom truly does come at a cost.
From the Guardian:
Eco-friendly labels on energy-saving bulbs are a turn-off for conservative shoppers, a new study has found.
The findings, published this week in the Proceedings of the National Academy of Sciences, suggest that it could be counterproductive to advertise the environmental benefits of efficient bulbs in the US. This could make it even more difficult for America to adopt energy-saving technologies as a solution to climate change.
Consumers took their ideological beliefs with them when they went shopping, and conservatives switched off when they saw labels reading “protect the environment”, the researchers said.
The study looked at the choices of 210 consumers, about two-thirds of them women. All were briefed on the benefits of compact fluorescent (CFL) bulbs over old-fashioned incandescents.
When both bulbs were priced the same, shoppers across the political spectrum were uniformly inclined to choose CFL bulbs over incandescents, even those with environmental labels, the study found.
But when the fluorescent bulb cost more – $1.50 instead of $0.50 for an incandescent – the conservatives who reached for the CFL bulb chose the one without the eco-friendly label.
“The more moderate and conservative participants preferred to bear a long-term financial cost to avoid purchasing an item associated with valuing environmental protections,” the study said.
The findings suggest the extreme political polarisation over environment and climate change had now expanded to energy-saving devices – which were once supported by right and left alike because of their money-saving potential.
“The research demonstrates how promoting the environment can negatively affect adoption of energy efficiency in the United States because of the political polarisation surrounding environmental issues,” the researchers said.
Earlier this year Harvard academic Theda Skocpol produced a paper tracking how climate change and the environment became a defining issue for conservatives, and for Republican-elected officials.
Conservative activists elevated opposition to the science behind climate change, and to action on climate change, to core beliefs, Skocpol wrote.
There was even a special place for incandescent bulbs. Republicans in Congress two years ago fought hard to repeal a law phasing out incandescent bulbs – even over the objections of manufacturers who had already switched their product lines to the new energy-saving technology.
Republicans at the time cast the battle of the bulb as an issue of liberty. “This is about more than just energy consumption. It is about personal freedom,” said Joe Barton, the Texas Republican behind the effort to keep the outdated bulbs burning.
Read the entire article following the jump.
Image courtesy of Housecraft.
- Science and Art of the Brain>
Nobel laureate and professor of brain science Eric Kandel describes how our perception of art can help us define a better functional map of the mind.
From the New York Times:
This month, President Obama unveiled a breathtakingly ambitious initiative to map the human brain, the ultimate goal of which is to understand the workings of the human mind in biological terms.
Many of the insights that have brought us to this point arose from the merger over the past 50 years of cognitive psychology, the science of mind, and neuroscience, the science of the brain. The discipline that has emerged now seeks to understand the human mind as a set of functions carried out by the brain.
This new approach to the science of mind not only promises to offer a deeper understanding of what makes us who we are, but also opens dialogues with other areas of study — conversations that may help make science part of our common cultural experience.
Consider what we can learn about the mind by examining how we view figurative art. In a recently published book, I tried to explore this question by focusing on portraiture, because we are now beginning to understand how our brains respond to the facial expressions and bodily postures of others.
The portraiture that flourished in Vienna at the turn of the 20th century is a good place to start. Not only does this modernist school hold a prominent place in the history of art, it consists of just three major artists — Gustav Klimt, Oskar Kokoschka and Egon Schiele — which makes it easier to study in depth.
As a group, these artists sought to depict the unconscious, instinctual strivings of the people in their portraits, but each painter developed a distinctive way of using facial expressions and hand and body gestures to communicate those mental processes.
Their efforts to get at the truth beneath the appearance of an individual both paralleled and were influenced by similar efforts at the time in the fields of biology and psychoanalysis. Thus the portraits of the modernists in the period known as “Vienna 1900” offer a great example of how artistic, psychological and scientific insights can enrich one another.
The idea that truth lies beneath the surface derives from Carl von Rokitansky, a gifted pathologist who was dean of the Vienna School of Medicine in the middle of the 19th century. Baron von Rokitansky compared what his clinician colleague Josef Skoda heard and saw at the bedsides of his patients with autopsy findings after their deaths. This systematic correlation of clinical and pathological findings taught them that only by going deep below the skin could they understand the nature of illness.
This same notion — that truth is hidden below the surface — was soon steeped in the thinking of Sigmund Freud, who trained at the Vienna School of Medicine in the Rokitansky era and who used psychoanalysis to delve beneath the conscious minds of his patients and reveal their inner feelings. That, too, is what the Austrian modernist painters did in their portraits.
Klimt’s drawings display a nuanced intuition of female sexuality and convey his understanding of sexuality’s link with aggression, picking up on things that even Freud missed. Kokoschka and Schiele grasped the idea that insight into another begins with understanding of oneself. In honest self-portraits with his lover Alma Mahler, Kokoschka captured himself as hopelessly anxious, certain that he would be rejected — which he was. Schiele, the youngest of the group, revealed his vulnerability more deeply, rendering himself, often nude and exposed, as subject to the existential crises of modern life.
Such real-world collisions of artistic, medical and biological modes of thought raise the question: How can art and science be brought together?
Alois Riegl, of the Vienna School of Art History in 1900, was the first to truly address this question. He understood that art is incomplete without the perceptual and emotional involvement of the viewer. Not only does the viewer collaborate with the artist in transforming a two-dimensional likeness on a canvas into a three-dimensional depiction of the world, the viewer interprets what he or she sees on the canvas in personal terms, thereby adding meaning to the picture. Riegl called this phenomenon the “beholder’s involvement” or the “beholder’s share.”
Art history was now aligned with psychology. Ernst Kris and Ernst Gombrich, two of Riegl’s disciples, argued that a work of art is inherently ambiguous and therefore that each person who sees it has a different interpretation. In essence, the beholder recapitulates in his or her own brain the artist’s creative steps.
This insight implied that the brain is a creativity machine, which obtains incomplete information from the outside world and completes it. We can see this with illusions and ambiguous figures that trick our brain into thinking that we see things that are not there. In this sense, a task of figurative painting is to convince the beholder that an illusion is true.
Some of this creative process is determined by the way the structure of our brain develops, which is why we all see the world in pretty much the same way. However, our brains also have differences that are determined in part by our individual experiences.
Read the entire article following the jump.
- Cheap Hydrogen>
Researchers at the University of Glasgow, Scotland, have discovered an alternative and possibly more efficient way to make hydrogen at industrial scales. Typically, hydrogen is produced by reacting high-temperature steam with methane or natural gas. A small fraction, less than five percent of annual production, is also made through electrolysis — passing an electric current through water.
This new method of production appears to be less costly, less dangerous and also more environmentally sound.
From the Independent:
Scientists have harnessed the principles of photosynthesis to develop a new way of producing hydrogen – in a breakthrough that offers a possible solution to global energy problems.
The researchers claim the development could help unlock the potential of hydrogen as a clean, cheap and reliable power source.
Unlike fossil fuels, hydrogen can be burned to produce energy without producing emissions. It is also the most abundant element on the planet.
Hydrogen gas is produced by splitting water into its constituent elements – hydrogen and oxygen. But scientists have been struggling for decades to find a way of extracting these elements at different times, which would make the process more energy-efficient and reduce the risk of dangerous explosions.
In a paper published today in the journal Nature Chemistry, scientists at the University of Glasgow outline how they have managed to replicate the way plants use the sun’s energy to split water molecules into hydrogen and oxygen at separate times and at separate physical locations.
Experts heralded the “important” discovery yesterday, saying it could make hydrogen a more practicable source of green energy.
Professor Xile Hu, director of the Laboratory of Inorganic Synthesis and Catalysis at the Swiss Federal Institute of Technology in Lausanne, said: “This work provides an important demonstration of the principle of separating hydrogen and oxygen production in electrolysis and is very original. Of course, further developments are needed to improve the capacity of the system, energy efficiency, lifetime and so on. But this research already offers potential and promise and can help in making the storage of green energy cheaper.”
Until now, scientists have separated hydrogen and oxygen atoms using electrolysis, which involves running electricity through water. This is energy-intensive and potentially explosive, because the oxygen and hydrogen are removed at the same time.
But in the new variation of electrolysis developed at the University of Glasgow, hydrogen and oxygen are produced from the water at different times, thanks to what researchers call an “electron-coupled proton buffer”. This acts to collect and store hydrogen while the current runs through the water, meaning that in the first instance only oxygen is released. The hydrogen can then be released when convenient.
Because pure hydrogen does not occur naturally, it takes energy to make it. This new version of electrolysis takes longer, but is safer and uses less energy per minute, making it easier to rely on renewable energy sources for the electricity needed to separate the atoms.
Dr Mark Symes, the report’s co-author, said: “What we have developed is a system for producing hydrogen on an industrial scale much more cheaply and safely than is currently possible. Currently much of the industrial production of hydrogen relies on reformation of fossil fuels, but if the electricity is provided via solar, wind or wave sources we can create an almost totally clean source of power.”
Professor Lee Cronin, the other author of the research, said: “The existing gas infrastructure which brings gas to homes across the country could just as easily carry hydrogen as it currently does methane. If we were to use renewable power to generate hydrogen using the cheaper, more efficient decoupled process we’ve created, the country could switch to hydrogen to generate our electrical power at home. It would also allow us to significantly reduce the country’s carbon footprint.”
Nathan Lewis, a chemistry professor at the California Institute of Technology and a green energy expert, said: “This seems like an interesting scientific demonstration that may possibly address one of the problems involved with water electrolysis, which remains a relatively expensive method of producing hydrogen.”
Read the entire article following the jump.
- Dark Lightning>
It’s fascinating how a seemingly well-understood phenomenon, such as lightning, can still yield enormous surprises. Researchers have found that visible flashes of lightning can be accompanied by invisible, and more harmful, radiation such as X-rays and gamma rays.
From the Washington Post:
A lightning bolt is one of nature’s most over-the-top phenomena, rarely failing to elicit at least a ping of awe no matter how many times a person has witnessed one. With his iconic kite-and-key experiments in the mid-18th century, Benjamin Franklin showed that lightning is an electrical phenomenon, and since then the general view has been that lightning bolts are big honking sparks no different in kind from the little ones generated by walking in socks across a carpeted room.
But scientists recently discovered something mind-bending about lightning: Sometimes its flashes are invisible, just sudden pulses of unexpectedly powerful radiation. It’s what Joseph Dwyer, a lightning researcher at the Florida Institute of Technology, has termed dark lightning.
Unknown to Franklin but now clear to a growing roster of lightning researchers and astronomers is that along with bright thunderbolts, thunderstorms unleash sprays of X-rays and even intense bursts of gamma rays, a form of radiation normally associated with such cosmic spectacles as collapsing stars. The radiation in these invisible blasts can carry a million times as much energy as the radiation in visible lightning, but that energy dissipates quickly in all directions rather than remaining in a stiletto-like lightning bolt.
Dark lightning appears sometimes to compete with normal lightning as a way for thunderstorms to vent the electrical energy that gets pent up inside their roiling interiors, Dwyer says. Unlike with regular lightning, though, people struck by dark lightning, most likely while flying in an airplane, would not get hurt. But according to Dwyer’s calculations, they might receive in an instant the maximum safe lifetime dose of ionizing radiation — the kind that wreaks the most havoc on the human body.
The only way to determine whether an airplane had been struck by dark lightning, Dwyer says, “would be to use a radiation detector. Right in the middle of [a flash], a very brief bluish-purple glow around the plane might be perceptible. Inside an aircraft, a passenger would probably not be able to feel or hear much of anything, but the radiation dose could be significant.”
However, because there’s only about one dark lightning occurrence for every thousand visible flashes and because pilots take great pains to avoid thunderstorms, Dwyer says, the risk of injury is quite limited. No one knows for sure if anyone has ever been hit by dark lightning.
About 25 million visible thunderbolts hit the United States every year, killing about 30 people and many farm animals, says John Jensenius, a lightning safety specialist with the National Weather Service in Gray, Maine. Worldwide, thunderstorms produce about a billion or so lightning bolts annually.
Read the entire article after the jump.
Image: Lightning in Foshan, China. Courtesy of Telegraph.
- Farmscrapers>
No, the drawing is not a construction from the mind of sci-fi illustrator extraordinaire Michael Whelan. This is reality. Or, to be more precise, an architectural rendering of buildings to come — in China, of course.
From the Independent:
A French architecture firm has unveiled their new ambitious ‘farmscraper’ project – six towering structures which promise to change the way that we think about green living.
Vincent Callebaut Architects’ innovative Asian Cairns was planned specifically for Chinese city Shenzhen in response to the growing population, increasing CO2 emissions and urban development.
The structures will consist of a series of pebble-shaped levels – each connected by a central spinal column – which will contain residential areas, offices, and leisure spaces.
Sustainability is key to the innovative project – wind turbines will cover the roof of each tower, water recycling systems will be in place to recycle waste water, and solar panels will be installed on the buildings, providing renewable energy. The structures will also have gardens on the exterior, further adding to the project’s green credentials.
Vincent Callebaut, the Belgian architect behind the firm, is well-known for his ambitious, eco-friendly projects, winning many awards over the years.
His self-sufficient amphibious city Lilypad – ‘a floating ecopolis for climate refugees’ – is perhaps his most famous design. The model has been proposed as a long-term solution to rising water levels, and successfully meets the four challenges of climate, biodiversity, water, and health, that the OECD laid out in 2008.
Vincent Callebaut Architects said: “It is a prototype to build a green, dense, smart city connected by technology and eco-designed from biotechnologies.”
Read the entire article and see more illustrations after the jump.
Image: “Farmscrapers” take eco-friendly architecture to dizzying heights in China. Courtesy of Vincent Callebaut Architects / Independent.
- The Richest Person in the Solar System>
Forget Warren Buffett, Bill Gates and Carlos Slim or the Russian oligarchs and the emirs of the Persian Gulf. These guys are merely multi-billionaires. Their fortunes — combined — account for less than half of 1 percent of the net worth of Dennis Hope, the world’s first trillionaire. In fact, you could describe Dennis as the solar system’s first trillionaire, with an estimated wealth of $100 trillion.
So, why have you never heard of Dennis Hope, trillionaire? Where does he invest his money? And how did he amass this jaw-dropping uber-fortune? The answer to the first question is that he lives a relatively ordinary and quiet life in Nevada. The answer to the second question is: property. The answer to the third, and most fascinating, question: well, he owns most of the Moon. He also owns the majority of the planets Mars, Venus and Mercury, and 90 or so other celestial plots. You too could become an interplanetary property investor for the very modest starting sum of $19.99. Please write your check to… Dennis Hope.
The New York Times has a recent story and documentary on Mr. Hope here.
From Discover:
Dennis Hope, self-proclaimed Head Cheese of the Lunar Embassy, will promise you the moon. Or at least a piece of it. Since 1980, Hope has raked in over $9 million selling acres of lunar real estate for $19.99 a pop. So far, 4.25 million people have purchased a piece of the moon, including celebrities like Barbara Walters, George Lucas, Ronald Reagan, and even the first President Bush. Hope says he exploited a loophole in the 1967 United Nations Outer Space Treaty, which prohibits nations from owning the moon.
Because the law says nothing about individual holders, he says, his claim—which he sent to the United Nations—has some clout. “It was unowned land,” he says. “For private property claims, 197 countries at one time or another had a basis by which private citizens could make claims on land and not make payment. There are no standardized rules.”
Hope is right that the rules are somewhat murky—both Japan and the United States have plans for moon colonies—and lunar property ownership might be a powder keg waiting to spark. But Ram Jakhu, law professor at the Institute of Air and Space Law at McGill University in Montreal, says that Hope’s claims aren’t likely to hold much weight. Nor, for that matter, would any nation’s. “I don’t see a loophole,” Jakhu says. “The moon is a common property of the international community, so individuals and states cannot own it. That’s very clear in the U.N. treaty. Individuals’ rights cannot prevail over the rights and obligations of a state.”
Jakhu, a director of the International Institute for Space Law, believes that entrepreneurs like Hope have misread the treaty and that the 1967 legislation came about to block property claims in outer space. Historically, “the ownership of private property has been a major cause of war,” he says. “No one owns the moon. No one can own any property in outer space.”
Hope refuses to be discouraged. And he’s focusing on expansion. “I own about 95 different planetary bodies,” he says. “The total amount of property I currently own is about 7 trillion acres. The value of that property is about $100 trillion. And that doesn’t even include mineral rights.”
Video courtesy of the New York Times.
- MondayMap: New Jersey Under Water>
We love maps here at theDiagonal. So much so that we’ve begun a new feature: MondayMap. As the name suggests, we plan to feature fascinating new maps on Mondays. For our readers who prefer their plots served up on a Saturday, sorry. Usually we like to highlight maps that cause us to look at our world differently or provide a degree of welcome amusement, such as the wonderful trove of maps over at Strange Maps curated by Frank Jacobs.
However, this first MondayMap is a little different and serious. It’s an interactive map that shows the impact of estimated sea level rise on the streets of New Jersey. Obviously, such a tool would be a great boon for emergency services and urban planners. For the rest of us, whether we live in New Jersey or not, maps like this one — of extreme weather events and projections — are likely to become much more common over the coming decades. Kudos to researchers at Rutgers University for developing the NJ Flood Mapper.
From the Wall Street Journal:
While superstorm Sandy revealed the Northeast’s vulnerability, a new map by New Jersey scientists suggests how rising seas could make future storms even worse.
The map shows ocean waters surging more than a mile into communities along Raritan Bay, engulfing nearly all of New Jersey’s barrier islands and covering northern sections of the New Jersey Turnpike and land surrounding the Port Newark Container Terminal.
Such damage could occur under a scenario in which sea levels rise 6 feet—or a 3-foot rise in tandem with a powerful coastal storm, according to the map produced by Rutgers University researchers.
The satellite-based tool, one of the first comprehensive, state-specific maps of its kind, uses a Google-maps-style interface that allows viewers to zoom into street-level detail.
“We are not trying to unduly frighten people,” said Rick Lathrop, director of the Grant F. Walton Center for Remote Sensing and Spatial Analysis at Rutgers, who led the map’s development. “This is providing people a look at where our vulnerability is.”
Still, the implications of the Rutgers project unnerve residents of Surf City, on Long Beach Island, where the map shows water pouring over nearly all of the barrier island’s six municipalities with a 6-foot increase in sea levels.
“The water is going to come over the island and there will be no island,” said Barbara Epstein, a 73-year-old resident of nearby Barnegat Light, who added that she is considering moving after 12 years there. “The storms are worsening.”
To be sure, not everyone agrees that climate change will make sea-level rise more pronounced.
Politically, climate change remains an issue of debate. New York Gov. Andrew Cuomo has said Sandy showed the need to address the issue, while New Jersey Gov. Chris Christie has declined to comment on whether Sandy was linked to climate change.
Scientists have gone ahead and started to map sea-level-rise scenarios in New Jersey, New York City and flood-prone communities along the Gulf of Mexico to help guide local development and planning.
Sea levels rose by 1.3 feet near Atlantic City and 0.9 feet near Battery Park between 1911 and 2006, according to data from the National Oceanic and Atmospheric Administration.
A serious storm could add at least another 3 feet, with historic storm surges—Sandy-scale—registering at 9 feet. So when planning for future coastal flooding, 6 feet or higher isn’t far-fetched when combining sea-level rise with high tides and storm surges, Mr. Lathrop said.
NOAA estimated in December that increasing ocean temperatures could cause sea levels to rise by 1.6 feet in 100 years, and by 3.9 feet if considering some level of Arctic ice-sheet melt.
Such an increase amounts to 0.16 inches per year, but the eventual impact could mean that a small storm could “do the same damage that Sandy did,” said Peter Howd, co-author of a 2012 U.S. Geological Survey report that found the rate of sea level rise had increased in the Northeast.

Image: NJ Flood Mapper. Courtesy of Grant F. Walton Center for Remote Sensing and Spatial Analysis (CRSSA), Rutgers University, in partnership with the Jacques Cousteau National Estuarine Research Reserve (JCNERR), and in collaboration with the NOAA Coastal Services Center (CSC).
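The rates quoted in the article can be sanity-checked with a few lines of arithmetic. A minimal sketch, using only the figures quoted above (the per-year conversion is our own, and assumes the tide-gauge records span the full 1911-2006 period):

```python
# Sanity check on the sea-level figures quoted above (our arithmetic).
RECORD_YEARS = 2006 - 1911          # 95-year NOAA tide-gauge record

def inches_per_year(total_rise_ft: float, years: int) -> float:
    """Convert a total rise in feet over a period to inches per year."""
    return total_rise_ft * 12 / years

# Observed rise cited in the article:
print(f"Atlantic City: {inches_per_year(1.3, RECORD_YEARS):.2f} in/yr")
print(f"Battery Park:  {inches_per_year(0.9, RECORD_YEARS):.2f} in/yr")

# Planning scenario: a 3-foot rise combined with a storm adding
# "at least another 3 feet" gives the map's 6-foot scenario.
print(f"Combined scenario: {3.0 + 3.0:.0f} ft")
```

The Atlantic City record works out to roughly 0.16 inches per year, consistent with the rate quoted in the article.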
- Engineering Your Food Addiction>
Fast food, snack foods and all manner of processed foods are a multi-billion dollar global industry. So, it’s no surprise that companies collectively spend hundreds of millions of dollars each year perfecting the perfect bite. Importantly, part of this perfection (for the businesses) is to ensure that you keep coming back for more.
By all accounts the “cheeto” is as close to processed-food-addiction-heaven as we can get — so far. It has just the right amounts of salt (too much) and fat (too much), the right crunchiness, and something known as vanishing caloric density (it melts in the mouth at the optimum rate). Aesthetically sad, but scientifically true.

From the New York Times:
On the evening of April 8, 1999, a long line of Town Cars and taxis pulled up to the Minneapolis headquarters of Pillsbury and discharged 11 men who controlled America’s largest food companies. Nestlé was in attendance, as were Kraft and Nabisco, General Mills and Procter & Gamble, Coca-Cola and Mars. Rivals any other day, the C.E.O.’s and company presidents had come together for a rare, private meeting. On the agenda was one item: the emerging obesity epidemic and how to deal with it. While the atmosphere was cordial, the men assembled were hardly friends. Their stature was defined by their skill in fighting one another for what they called “stomach share” — the amount of digestive space that any one company’s brand can grab from the competition.
James Behnke, a 55-year-old executive at Pillsbury, greeted the men as they arrived. He was anxious but also hopeful about the plan that he and a few other food-company executives had devised to engage the C.E.O.’s on America’s growing weight problem. “We were very concerned, and rightfully so, that obesity was becoming a major issue,” Behnke recalled. “People were starting to talk about sugar taxes, and there was a lot of pressure on food companies.” Getting the company chiefs in the same room to talk about anything, much less a sensitive issue like this, was a tricky business, so Behnke and his fellow organizers had scripted the meeting carefully, honing the message to its barest essentials. “C.E.O.’s in the food industry are typically not technical guys, and they’re uncomfortable going to meetings where technical people talk in technical terms about technical things,” Behnke said. “They don’t want to be embarrassed. They don’t want to make commitments. They want to maintain their aloofness and autonomy.”
A chemist by training with a doctoral degree in food science, Behnke became Pillsbury’s chief technical officer in 1979 and was instrumental in creating a long line of hit products, including microwaveable popcorn. He deeply admired Pillsbury but in recent years had grown troubled by pictures of obese children suffering from diabetes and the earliest signs of hypertension and heart disease. In the months leading up to the C.E.O. meeting, he was engaged in conversation with a group of food-science experts who were painting an increasingly grim picture of the public’s ability to cope with the industry’s formulations — from the body’s fragile controls on overeating to the hidden power of some processed foods to make people feel hungrier still. It was time, he and a handful of others felt, to warn the C.E.O.’s that their companies may have gone too far in creating and marketing products that posed the greatest health concerns.
The discussion took place in Pillsbury’s auditorium. The first speaker was a vice president of Kraft named Michael Mudd. “I very much appreciate this opportunity to talk to you about childhood obesity and the growing challenge it presents for us all,” Mudd began. “Let me say right at the start, this is not an easy subject. There are no easy answers — for what the public health community must do to bring this problem under control or for what the industry should do as others seek to hold it accountable for what has happened. But this much is clear: For those of us who’ve looked hard at this issue, whether they’re public health professionals or staff specialists in your own companies, we feel sure that the one thing we shouldn’t do is nothing.”
As he spoke, Mudd clicked through a deck of slides — 114 in all — projected on a large screen behind him. The figures were staggering. More than half of American adults were now considered overweight, with nearly one-quarter of the adult population — 40 million people — clinically defined as obese. Among children, the rates had more than doubled since 1980, and the number of kids considered obese had shot past 12 million. (This was still only 1999; the nation’s obesity rates would climb much higher.) Food manufacturers were now being blamed for the problem from all sides — academia, the Centers for Disease Control and Prevention, the American Heart Association and the American Cancer Society. The secretary of agriculture, over whom the industry had long held sway, had recently called obesity a “national epidemic.”
Mudd then did the unthinkable. He drew a connection to the last thing in the world the C.E.O.’s wanted linked to their products: cigarettes. First came a quote from a Yale University professor of psychology and public health, Kelly Brownell, who was an especially vocal proponent of the view that the processed-food industry should be seen as a public health menace: “As a culture, we’ve become upset by the tobacco companies advertising to children, but we sit idly by while the food companies do the very same thing. And we could make a claim that the toll taken on the public health by a poor diet rivals that taken by tobacco.”
“If anyone in the food industry ever doubted there was a slippery slope out there,” Mudd said, “I imagine they are beginning to experience a distinct sliding sensation right about now.”
Mudd then presented the plan he and others had devised to address the obesity problem. Merely getting the executives to acknowledge some culpability was an important first step, he knew, so his plan would start off with a small but crucial move: the industry should use the expertise of scientists — its own and others — to gain a deeper understanding of what was driving Americans to overeat. Once this was achieved, the effort could unfold on several fronts. To be sure, there would be no getting around the role that packaged foods and drinks play in overconsumption. They would have to pull back on their use of salt, sugar and fat, perhaps by imposing industrywide limits. But it wasn’t just a matter of these three ingredients; the schemes they used to advertise and market their products were critical, too. Mudd proposed creating a “code to guide the nutritional aspects of food marketing, especially to children.”
“We are saying that the industry should make a sincere effort to be part of the solution,” Mudd concluded. “And that by doing so, we can help to defuse the criticism that’s building against us.”
What happened next was not written down. But according to three participants, when Mudd stopped talking, the one C.E.O. whose recent exploits in the grocery store had awed the rest of the industry stood up to speak. His name was Stephen Sanger, and he was also the person — as head of General Mills — who had the most to lose when it came to dealing with obesity. Under his leadership, General Mills had overtaken not just the cereal aisle but other sections of the grocery store. The company’s Yoplait brand had transformed traditional unsweetened breakfast yogurt into a veritable dessert. It now had twice as much sugar per serving as General Mills’ marshmallow cereal Lucky Charms. And yet, because of yogurt’s well-tended image as a wholesome snack, sales of Yoplait were soaring, with annual revenue topping $500 million. Emboldened by the success, the company’s development wing pushed even harder, inventing a Yoplait variation that came in a squeezable tube — perfect for kids. They called it Go-Gurt and rolled it out nationally in the weeks before the C.E.O. meeting. (By year’s end, it would hit $100 million in sales.)
According to the sources I spoke with, Sanger began by reminding the group that consumers were “fickle.” (Sanger declined to be interviewed.) Sometimes they worried about sugar, other times fat. General Mills, he said, acted responsibly to both the public and shareholders by offering products to satisfy dieters and other concerned shoppers, from low sugar to added whole grains. But most often, he said, people bought what they liked, and they liked what tasted good. “Don’t talk to me about nutrition,” he reportedly said, taking on the voice of the typical consumer. “Talk to me about taste, and if this stuff tastes better, don’t run around trying to sell stuff that doesn’t taste good.”
To react to the critics, Sanger said, would jeopardize the sanctity of the recipes that had made his products so successful. General Mills would not pull back. He would push his people onward, and he urged his peers to do the same. Sanger’s response effectively ended the meeting.
“What can I say?” James Behnke told me years later. “It didn’t work. These guys weren’t as receptive as we thought they would be.” Behnke chose his words deliberately. He wanted to be fair. “Sanger was trying to say, ‘Look, we’re not going to screw around with the company jewels here and change the formulations because a bunch of guys in white coats are worried about obesity.’ ”
The meeting was remarkable, first, for the insider admissions of guilt. But I was also struck by how prescient the organizers of the sit-down had been. Today, one in three adults is considered clinically obese, along with one in five kids, and 24 million Americans are afflicted by type 2 diabetes, often caused by poor diet, with another 79 million people having pre-diabetes. Even gout, a painful form of arthritis once known as “the rich man’s disease” for its associations with gluttony, now afflicts eight million Americans.
The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.

Image: Cheeto puffs. Courtesy of tumblr.
- Geoengineering As a Solution to Climate Change>
Experimental physicist David Keith has a plan: dump hundreds of thousands of tons of atomized sulfuric acid into the upper atmosphere; watch the acid particles reflect additional sunlight; wait for the global temperature to drop. Many of Keith’s peers think this geoengineering scheme is crazy, not least because of its possible unknown and unmeasured side effects, but that hasn’t stopped a healthy debate. One thing is becoming increasingly clear — humans need to take collective action.

From Technology Review:
Here is the plan. Customize several Gulfstream business jets with military engines and with equipment to produce and disperse fine droplets of sulfuric acid. Fly the jets up around 20 kilometers—significantly higher than the cruising altitude for a commercial jetliner but still well within their range. At that altitude in the tropics, the aircraft are in the lower stratosphere. The planes spray the sulfuric acid, carefully controlling the rate of its release. The sulfur combines with water vapor to form sulfate aerosols, fine particles less than a micrometer in diameter. These get swept upward by natural wind patterns and are dispersed over the globe, including the poles. Once spread across the stratosphere, the aerosols will reflect about 1 percent of the sunlight hitting Earth back into space. Increasing what scientists call the planet’s albedo, or reflective power, will partially offset the warming effects caused by rising levels of greenhouse gases.
The author of this so-called geoengineering scheme, David Keith, doesn’t want to implement it anytime soon, if ever. Much more research is needed to determine whether injecting sulfur into the stratosphere would have dangerous consequences such as disrupting precipitation patterns or further eating away the ozone layer that protects us from damaging ultraviolet radiation. Even thornier, in some ways, are the ethical and governance issues that surround geoengineering—questions about who should be allowed to do what and when. Still, Keith, a professor of applied physics at Harvard University and a leading expert on energy technology, has done enough analysis to suspect it could be a cheap and easy way to head off some of the worst effects of climate change.
According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft.
One of the startling things about Keith’s proposal is just how little sulfur would be required. A few grams of it in the stratosphere will offset the warming caused by a ton of carbon dioxide, according to his estimate. And even the amount that would be needed by 2070 is dwarfed by the roughly 50 million metric tons of sulfur emitted by the burning of fossil fuels every year. Most of that pollution stays in the lower atmosphere, and the sulfur molecules are washed out in a matter of days. In contrast, sulfate particles remain in the stratosphere for a few years, making them more effective at reflecting sunlight.
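Keith’s logistical figures invite a quick back-of-the-envelope check. A minimal sketch using only the numbers quoted above (the per-ton cost and the emissions fraction are our own derivations, not the article’s):

```python
# Back-of-the-envelope check on the geoengineering figures quoted above.

# The 2040 program described in the article:
TONS_PER_YEAR_2040 = 250_000        # metric tons of sulfuric acid per year
ANNUAL_COST_2040 = 700e6            # dollars per year

cost_per_ton = ANNUAL_COST_2040 / TONS_PER_YEAR_2040
print(f"Implied delivery cost: ${cost_per_ton:,.0f} per metric ton")

# The 2070 program versus existing sulfur pollution:
TONS_PER_YEAR_2070 = 1_000_000      # "a bit more than a million tons"
FOSSIL_SULFUR_TONS = 50_000_000     # emitted by burning fossil fuels yearly

fraction = TONS_PER_YEAR_2070 / FOSSIL_SULFUR_TONS
print(f"2070 injection vs. annual fossil-fuel sulfur: {fraction:.0%}")
```

Even at its 2070 peak, the injected tonnage would be on the order of 2 percent of the sulfur already emitted by fossil-fuel burning each year, which is the scale comparison the article draws.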
The idea of using sulfate aerosols to offset climate warming is not new. Crude versions of the concept have been around at least since a Russian climate scientist named Mikhail Budyko proposed the idea in the mid-1970s, and more refined descriptions of how it might work have been discussed for decades. These days the idea of using sulfur particles to counteract warming—often known as solar radiation management, or SRM—is the subject of hundreds of papers in academic journals by scientists who use computer models to try to predict its consequences.
But Keith, who has published on geoengineering since the early 1990s, has emerged as a leading figure in the field because of his aggressive public advocacy for more research on the technology—and his willingness to talk unflinchingly about how it might work. Add to that his impeccable academic credentials—last year Harvard lured him away from the University of Calgary with a joint appointment in the school of engineering and the Kennedy School of Government—and Keith is one of the world’s most influential voices on solar geoengineering. He is one of the few who have done detailed engineering studies and logistical calculations on just how SRM might be carried out. And if he and his collaborator James Anderson, a prominent atmospheric chemist at Harvard, gain public funding, they plan to conduct some of the first field experiments to assess the risks of the technique.
Leaning forward from the edge of his chair in a small, sparse Harvard office on an unusually warm day this winter, he explains his urgency. Whether or not greenhouse-gas emissions are cut sharply—and there is little evidence that such reductions are coming—”there is a realistic chance that [solar geoengineering] technologies could actually reduce climate risk significantly, and we would be negligent if we didn’t look at that,” he says. “I’m not saying it will work, and I’m not saying we should do it.” But “it would be reckless not to begin serious research on it,” he adds. “The sooner we find out whether it works or not, the better.”
The overriding reason why Keith and other scientists are exploring solar geoengineering is simple and well documented, though often overlooked: the warming caused by atmospheric carbon dioxide buildup is for all practical purposes irreversible, because the climate change is directly related to the total cumulative emissions. Even if we halt carbon dioxide emissions entirely, the elevated concentrations of the gas in the atmosphere will persist for decades. And according to recent studies, the warming itself will continue largely unabated for at least 1,000 years. If we find in, say, 2030 or 2040 that climate change has become intolerable, cutting emissions alone won’t solve the problem.
“That’s the key insight,” says Keith. While he strongly supports cutting carbon dioxide emissions as rapidly as possible, he says that if the climate “dice” roll against us, that won’t be enough: “The only thing that we think might actually help [reverse the warming] in our lifetime is in fact geoengineering.”
- From Sea to Shining Sea - By Rail>
Now that air travel has become well and truly commoditized, and for most of us, a nightmare, it’s time, again, to revisit the romance of rail. After all, the elitist romance of air travel passed away about 40-50 years ago. Now all we are left with is parking trauma at the airport; endless lines at check-in, security, the gate and while boarding and disembarking; inane airport announcements and beeping golf carts; coughing, tweeting passengers crammed shoulder to shoulder in far too small seats; poor-quality air and poor-quality service in the cabin. It’s even dangerous to open the shade and look out of the aircraft window, for fear of waking a cranky neighbor or, more calamitous still, washing out the in-seat displays showing the latest reality TV videos.
Some of you, surely, still pine for a quiet and calming ride across the country, taking in the local sights at a more leisurely pace. Alfred Twu, who helped define the 2008 high speed rail proposal for California, would have us zooming across the entire United States in trains again. So, it would not be a leisurely ride — think more like 200-300 miles per hour — but it may well bring us closer to what we truly miss when suspended at 30,000 ft. We can’t wait.

From the Guardian:
I created this US High Speed Rail Map as a composite of several proposed maps from 2009, when government agencies and advocacy groups were talking big about rebuilding America’s train system.
Having worked on getting California’s high speed rail approved in the 2008 elections, I’ve long sung the economic and environmental benefits of fast trains.
This latest map comes more from the heart. It speaks more to bridging regional and urban-rural divides than about reducing airport congestion or even creating jobs, although it would likely do that as well.
Instead of detailing construction phases and service speeds, I took a little artistic license and chose colors and linked lines to celebrate America’s many distinct but interwoven regional cultures.
The response to my map this week went above and beyond my wildest expectations, sparking vigorous political discussion between thousands of Americans ranging from off-color jokes about rival cities to poignant reflections on how this kind of rail network could change long-distance relationships and the lives of faraway family members.
Commenters from New York and Nebraska talked about “wanting to ride the red line”. Journalists from Chattanooga, Tennessee (population 167,000) asked to reprint the map because they were excited to be on the map. Hundreds more shouted “this should have been built yesterday”.
It’s clear that high speed rail is more than just a way to save energy or extend economic development to smaller cities.
More than mere steel wheels on tracks, high speed rail shrinks space and brings far-flung families back together. It keeps couples in touch when distant career or educational opportunities beckon. It calls to adventure and travel. It is duct tape and string to reconnect politically divided regions. Its colorful threads weave new American Dreams.
That said, while trains still live large in the popular imagination, decades of limited service have left some blind spots in the collective consciousness. I’ll address a few here:
Myth: High speed rail is just for big city people.
Fact: Unlike airplanes or buses, which must make detours to drop off passengers at intermediate points, trains glide into and out of stations with little delay, pausing for under a minute to unload passengers from multiple doors. Trains can, do, and will continue to serve small towns and suburbs effectively, whereas bus service increasingly bypasses them.
I do hear the complaint: “But it doesn’t stop in my town!” In the words of one commenter, “the train doesn’t need to stop on your front porch.” Local transit, rental cars, taxis, biking, and walking provide access to and from stations.
Myth: High speed rail is only useful for short distances.
Fact: Express trains that skip stops allow lines to serve many intermediate cities while still providing some fast end-to-end service. Overnight sleepers with lie-flat beds, where one boards around dinner and arrives after breakfast, have been successful in the US before and are in use on China’s newest 2,300km high speed line.

Image: U.S. High Speed Rail System proposal. Alfred Twu created this map to showcase what could be possible.
- Beware North Korea, Google is Watching You>
This week Google refreshed its maps of North Korea. What was previously a blank canvas with only the country’s capital — Pyongyang — visible, now boasts roads, hotels, monuments and even some North Korean internment camps. While this is not the first detailed map of the secretive state, it is an important milestone in Google’s quest to map us all.

From the Washington Post:
Until Tuesday, North Korea appeared on Google Maps as a near-total white space — no roads, no train lines, no parks and no restaurants. The only thing labeled was the capital city, Pyongyang.
This all changed when Google, on Tuesday, rolled out a detailed map of one of the world’s most secretive states. The new map labels everything from Pyongyang’s subway stops to the country’s several city-sized gulags, as well as its monuments, hotels, hospitals and department stores.
According to a Google blog post, the maps were created by a group of volunteer “citizen cartographers,” through an interface known as Google Map Maker. That program — much like Wikipedia — allows users to submit their own data, which is then fact-checked by other users, and sometimes altered many times over. Similar processes were used in other once-unmapped countries like Afghanistan and Burma.
In the case of North Korea, those volunteers worked from outside of the country, beginning in 2009. They used information that was already public, compiling details from existing analog maps, satellite images, or other Web-based materials. Much of the information was already available on the Internet, said Hwang Min-woo, 28, a volunteer mapmaker from Seoul who worked for two years on the project.
North Korea was the last country virtually unmapped by Google, but other — even more detailed — maps of the North existed before this. Most notable is a map created by Curtis Melvin, who runs the North Korea Economy Watch blog and spent years identifying thousands of landmarks in the North: tombs, textile factories, film studios, even rumored spy training locations. Melvin’s map is available as a downloadable Google Earth file.
Google’s map is important, though, because it is so readily accessible. The map is unlikely to have an immediate influence in the North, where Internet use is restricted to all but a handful of elites. But it could prove beneficial for outside analysts and scholars, providing an easy-to-access record of North Korea’s provinces, roads and landmarks, as well as hints about its many unseen horrors.

Read the entire article and check out more maps after the jump.
- So, You Want to Be a Brit?>
The United Kingdom government has just published its updated 180-page handbook for new residents. So, those seeking to become subjects of Her Majesty will need to brush up on more than Admiral Nelson, Churchill, Spitfires, Chaucer and the Black Death. Now, if you are one of the approximately 150,000 new residents each year, you may well have to learn about Morecambe and Wise, Roald Dahl, and Monty Python. Nudge-nudge, wink-wink!

From the Telegraph:
It has been described as “essential reading” for migrants and takes readers on a whirlwind historical tour of Britain from Stone Age hunter-gatherers to Morecambe and Wise, skipping lightly through the Black Death and Tudor England.
The latest Home Office citizenship handbook, Life in the United Kingdom: A Guide for New Residents, has scrapped sections on claiming benefits, written under the Labour government in 2007, for a triumphalist vision of events and people that helped make Britain a “great place to live”.
The Home Office said it had stripped out “mundane information” about water meters, how to find train timetables, and using the internet.
The guide’s 180 pages, filled with pictures of the Queen, Spitfires and Churchill, are a primer for citizenship tests taken by around 150,000 migrants a year.
Comedies such as Monty Python and The Morecambe and Wise Show are highlighted as examples of British people’s “unique sense of humour and satire”, while Olympic athletes including Jessica Ennis and Sir Chris Hoy are included for the first time.
Previously, historical information was included in the handbook but was not tested. Now the book features sections on Roman, Anglo-Saxon and Viking Britain to give migrants an “understanding of how modern Britain has evolved”.
They can equally expect to be quizzed on the children’s author Roald Dahl, the Harrier jump jet and the Turing machine – a theoretical device proposed by Alan Turing and seen as a precursor to the modern computer.
The handbook also refers to the works of William Shakespeare, Geoffrey Chaucer and Jane Austen alongside Coronation Street. Meanwhile, Christmas pudding, the Last Night of the Proms and cricket matches are described as typical “indulgences”.
The handbook goes on sale today and forms the basis of the 45-minute exam in which migrants must gain marks of 75 per cent to pass.

Image: Group shot of the Monty Python crew in 1969. Courtesy of Wikipedia.
- Las Vegas, Tianducheng and Paris: Cultural Borrowing>
These three locations in Nevada, China (near Hangzhou) and Paris, France, have something in common. People the world over travel to these three places to see what they share. But only one has the original. In this case, we’re talking about the Eiffel Tower.
Now, this architectural grand theft is the subject of a lengthy debate — the merits of mimicry, on a vast scale. There is even a fascinating coffee-table book dedicated to this growing trend: Original Copies: Architectural Mimicry in Contemporary China, by Bianca Bosker.
Interestingly, the copycat trend only seems worrisome if those doing the copying are in a powerful and growing nation, and the copying is done on a national scale, perhaps for some form of cultural assimilation. After all, we don’t hear similar cries when developers put up a copy of Venice in Las Vegas — that’s just for entertainment we are told.
Yet haven’t civilizations borrowed, and stolen, ideas both good and bad throughout the ages? The answer of course is an unequivocal yes. Humans are avaricious collectors of memes that work — it’s more efficient to borrow than to invent. The Greeks borrowed from the Egyptians; the Romans borrowed from the Greeks; the Turks borrowed from the Romans; the Arabs borrowed from the Turks; the Spanish from the Arabs, the French from the Spanish, the British from the French, and so on. Of course what seems to be causing a more recent stir is that China is doing the borrowing, and on such a rapid and grand scale — the nation is copying not just buildings (and most other products) but entire urban landscapes. However, this is one way that empires emerge and evolve. In this case, China’s acquisitive impulses could, perhaps, be tempered if most nations of the world borrowed less from the Chinese — money that is. But that’s another story.

From the Atlantic:
The latest and most famous case of Chinese architectural mimicry doesn’t look much like its predecessors. On December 28, German news weekly Der Spiegel reported that the Wangjing Soho, Zaha Hadid’s soaring new office and retail development under construction in Beijing, is being replicated, wall for wall and window for window, in Chongqing, a city in central China.
To most outside observers, this bold and quickly commissioned counterfeit represents a familiar form of piracy. In fashion, technology, and architecture, great ideas trickle down, often against the wishes of their progenitors. But in China, architectural copies don’t usually ape the latest designs.
In the vast space between Beijing and Chongqing lies a whole world of Chinese architectural simulacra that quietly aspire to a different ideal. In suburbs around China’s booming cities, developers build replicas of towns like Hallstatt, Austria and Dorchester, England. Individual homes and offices, too, are designed to look like Versailles or the Chrysler Building. The most popular facsimile in China is the White House. The fastest-urbanizing country in history isn’t scanning design magazines for inspiration; it’s watching movies.
At Beijing’s Palais de Fortune, two hundred chateaus sit behind gold-tipped fences. At Chengdu’s British Town, pitched roofs and cast-iron street lamps dot the streets. At Shanghai’s Thames Town, a Gothic cathedral has become a tourist attraction in itself. Other developments have names like “Top Aristocrat” (Beijing), “the Garden of Monet” (Shanghai), and “Galaxy Dante” (Shenzhen).
Architects and critics within and beyond China have treated these derivative designs with scorn, as shameless kitsch or simply trash. Others cite China’s larger knock-off culture, from handbags to housing, as evidence of the innovation gap between China and the United States. For a larger audience on the Internet, they are merely a punchline, another example of China’s endlessly entertaining wackiness.
In short, the majority of Chinese architectural imitation, oozing with historical romanticism, is not taken seriously.
But perhaps it ought to be.
In Original Copies: Architectural Mimicry in Contemporary China, the first detailed book on the subject, Bianca Bosker argues that the significance of these constructions has been unfairly discounted. Bosker, a senior technology editor at the Huffington Post, has been visiting copycat Chinese villages for some six years, and in her view, these distorted impressions of the West offer a glance at the hopes, dreams and contradictions of China’s middle class.
“Clearly there’s an acknowledgement that there’s something great about Paris,” says Bosker. “But it’s also: ‘We can do it ourselves.’”
Armed with firsthand observation, field research, interviews, and a solid historical background, Bosker’s book is an attempt to change the way we think about Chinese duplitecture. “We’re seeing the Chinese dream in action,” she says. “It has to do with this ability to take control of your life. There’s now this plethora of options to choose from.” That is something new in China, as is the role that private enterprise is taking in molding built environments that will respond to people’s fantasies.
While the experts scoff, the people who build and inhabit these places are quite proud of them. As the saying goes, “The way to live best is to eat Chinese food, drive an American car, and live in a British house. That’s the ideal life.” The Chinese middle class is living in Orange County, Beijing, the same way you listen to reggae music or lounge on Danish furniture.
In practice, though, the depth and scale of this phenomenon has few parallels. No one knows how many facsimile communities there are in China, but the number is increasing every day. “Every time I go looking for more,” Bosker says, “I find more.”
How many are there?
“At least hundreds.”

Image: Tianducheng, 13th arrondissement, Paris in China. Courtesy of Bianca Bosker/University of Hawaii Press.
- Light From Gravity>
Often the best creative ideas and the most elegant solutions are the simplest. GravityLight is an example of this type of innovation. Here’s the problem: replace damaging and expensive kerosene fuel lamps in Africa with a less harmful and cheaper alternative. And, the solution:

From ars technica:
A London design consultancy has developed a cheap, clean, and safer alternative to the kerosene lamp. Kerosene burning lamps are thought to be used by over a billion people in developing nations, often in remote rural parts where electricity is either prohibitively expensive or simply unavailable. Kerosene’s potential replacement, GravityLight, is powered by gravity without the need of a battery—it’s also seen by its creators as a superior alternative to solar-powered lamps.
Kerosene lamps are problematic in three ways: they release pollutants which can contribute to respiratory disease; they pose a fire risk; and, thanks to the ongoing need to buy kerosene fuel, they are expensive to run. Research out of Brown University from July of last year called kerosene lamps a “significant contributor to respiratory diseases, which kill over 1.5 million people every year” in developing countries. The same paper found that kerosene lamps were responsible for 70 percent of fires (which cause 300,000 deaths every year) and 80 percent of burns. The World Bank has compared the indoor use of a kerosene lamp with smoking two packs of cigarettes per day.
The economics of the kerosene lamps are nearly as problematic, with the fuel costing many rural families a significant proportion of their income. The designers of the GravityLight say 10 to 20 percent of household income is typical, and they describe kerosene as a poverty trap, locking people into a “permanent state of subsistence living.” Considering that the median rural price of kerosene in Tanzania, Mali, Ghana, Kenya, and Senegal is $1.30 per liter, and the average rural income in Tanzania is under $9 per month, the designers’ figures seem depressingly plausible.
Approached by the charity Solar Aid to design a solar-powered LED alternative, London design consultancy Therefore shifted the emphasis away from solar, which requires expensive batteries that degrade over time. The company’s answer is both simpler and more radical: an LED lamp driven by a bag of sand, earth, or stones, pulled toward the Earth by gravity.
It takes only seconds to hoist the bag into place, after which the lamp provides up to half an hour of ambient light, or about 18 minutes of brighter task lighting. Though it isn’t clear quite how much light the GravityLight emits, its makers insist it is more than a kerosene lamp. Also unclear are the precise inner workings of the device, though clearly the weighted bag pulls a cord, driving an inner mechanism with a low-powered dynamo, with the aid of some robust plastic gearing. Talking to Ars by telephone, Therefore’s Jim Fullalove was loath to divulge details, but did reveal the gearing took the kinetic energy from a weighted bag descending at a rate of a millimeter per second to power a dynamo spinning at 2,000 rpm.

Read more about GravityLight after the jump.

Video courtesy of GravityLight.
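The figures Fullalove quotes imply a tiny mechanical power budget, which a quick back-of-envelope calculation makes concrete. A sketch; the descent rate and dynamo speed come from the article, while the bag mass and drum diameter are assumed values for illustration only:

```python
import math

# Back-of-envelope power budget for a gravity-powered lamp.  The 1 mm/s
# descent rate and 2,000 rpm dynamo speed come from the article; the
# 12 kg bag mass and 3 cm drum diameter are assumptions.

g = 9.81           # m/s^2, gravitational acceleration
mass = 12.0        # kg, assumed mass of the sand/earth bag
descent = 0.001    # m/s, descent rate quoted by the designers

power_w = mass * g * descent   # mechanical power available, in watts
print(f"Mechanical power: {power_w * 1000:.0f} mW")  # ~118 mW

# Implied gear-up: a dynamo at 2,000 rpm from a cord creeping at 1 mm/s,
# assuming the cord turns a drum of 3 cm diameter.
drum_rpm = descent / (math.pi * 0.03) * 60
gear_ratio = 2000 / drum_rpm
print(f"Drum: {drum_rpm:.2f} rpm, so roughly {gear_ratio:.0f}:1 gearing")
```

A tenth of a watt sounds meager, but it is in the right range for a modern LED to produce usable ambient light, which is presumably why the trade-off works at all.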
- Map as Illusion>
We love maps here at theDiagonal. We also love ideas that challenge the status quo. And this latest Strange Map, courtesy of Frank Jacobs over at Big Think, does both. What we appreciate about his cartographic masterpiece is that it challenges our visual perception, and, more importantly, challenges our assumed hemispheric worldview.

Read more of this article after the jump.
- National Geographic Hits 125>
Chances are that if you don’t have some ancient National Geographic magazines hidden in a box in your attic, then you know someone who does. If not, it’s time to see what you have been missing all these years. National Geographic celebrates 125 years in 2013, and what better way to do this than to look back through some of its glorious photographic archives.

See more classic images after the jump.

Image: 1964, Tanzania: a touching moment between the primatologist and National Geographic grantee Jane Goodall and a young chimpanzee called Flint at Tanzania’s Gombe Stream reserve. Courtesy of Guardian / National Geographic.
- Climate Change Report>
No pithy headline. The latest U.S. National Climate Assessment makes for sobering reading. The full 1,146-page report is available for download here.
Over the next 30 years (and beyond), it warns of projected sea-level rises along the Eastern Seaboard of the United States, warmer temperatures across much of the nation, and generally warmer and more acidic oceans. More worrying still are the less direct consequences of climate change: increased threats to human health due to severe weather such as storms, drought and wildfires; more vulnerable infrastructure in regions subject to increasingly volatile weather; and rising threats to regional stability and national security due to a less reliable national and global water supply.

From Scientific American:
The consequences of climate change are now hitting the United States on several fronts, including health, infrastructure, water supply, agriculture and especially more frequent severe weather, a congressionally mandated study has concluded.
A draft of the U.S. National Climate Assessment, released on Friday, said observable change to the climate in the past half-century “is due primarily to human activities, predominantly the burning of fossil fuel,” and that no areas of the United States were immune to change.
“Corn producers in Iowa, oyster growers in Washington State, and maple syrup producers in Vermont have observed changes in their local climate that are outside of their experience,” the report said.
Months after Superstorm Sandy hurtled into the U.S. East Coast, causing billions of dollars in damage, the report concluded that severe weather was the new normal.
“Certain types of weather events have become more frequent and/or intense, including heat waves, heavy downpours, and, in some regions, floods and droughts,” the report said, days after scientists at the National Oceanic and Atmospheric Administration declared 2012 the hottest year ever in the United States.
Some environmentalists looked for the report to energize climate efforts by the White House or Congress, although many Republican lawmakers are wary of declaring a definitive link between human activity and evidence of a changing climate.
The U.S. Congress has been mostly silent on climate change since efforts to pass “cap-and-trade” legislation collapsed in the Senate in mid-2010.
The advisory committee behind the report was established by the U.S. Department of Commerce to integrate federal research on environmental change and its implications for society. It made two earlier assessments, in 2000 and 2009.
Thirteen departments and agencies, from the Agriculture Department to NASA, are part of the committee, which also includes academics, businesses, nonprofits and others.
‘A WARNING TO ALL OF US’
The report noted that of an increase in average U.S. temperatures of about 1.5 degrees F (0.83 degrees C) since 1895, when reliable national record-keeping began, more than 80 percent had occurred in the past three decades.
With heat-trapping gases already in the atmosphere, temperatures could rise by a further 2 to 4 degrees F (1.1 to 2.2 degrees C) in most parts of the country over the next few decades, the report said.
- Plagiarism is the Sincerest Form of Capitalism>
Plagiarism is a fine art in China. But it’s also very big business. The nation knocks off everything, from Hollywood and Bollywood movies, to software, electronics, appliances, drugs, and military equipment. Now it’s moved on to copying architectural plans.

From the Telegraph:
China is famous for its copy-cat architecture: you can find replicas of everything from the Eiffel Tower and the White House to an Austrian village across its vast land. But now they have gone one step further: recreating a building that hasn’t even been finished yet. A building designed by the Iraqi-British architect Dame Zaha Hadid for Beijing has been copied by a developer in Chongqing, south-west China, and now the two projects are racing to be completed first.
Dame Zaha, whose Wangjing Soho complex consists of three pebble-like constructions and will house an office and retail complex, unveiled her designs in August 2011 and hopes to complete the project next year.
Meanwhile, a remarkably similar project called Meiquan 22nd Century is being constructed in Chongqing, which experts (and anyone with eyes, really) deem a rip-off. The developers of the Soho complex are concerned that the copy is being built at a much faster rate than their own project.
“It is possible that the Chongqing pirates got hold of some digital files or renderings of the project,” Satoshi Ohashi, project director at Zaha Hadid Architects, told Der Spiegel online. “[From these] you could work out a similar building if you are technically very capable, but this would only be a rough simulation of the architecture.”
So where does the law stand? Reporting on the intriguing case, China Intellectual Property magazine commented, “Up to now, there is no special law in China which has specific provisions on IP rights related to architecture.” They added that if it went to court, the likely outcome would be payment of compensation to Dame Zaha’s firm, rather than the defendant being forced to pull the building down. However, Dame Zaha seems somewhat unfazed about the structure, simply remarking that if the finished building contains a certain amount of innovation then “that could be quite exciting”. One of the world’s most celebrated architects, Dame Zaha – who recently designed the Aquatics Centre for the London Olympics – has 11 current projects in China. She is quite the star over there: 15,000 fans flocked to see her give a talk at the unveiling of the designs for the complex.

Image: Wangjing Soho Architecture. Courtesy of Zaha Hadid Architects.
- The Future of the Grid>
Two common complaints dog the sustainable energy movement: first, energy generated from the sun and wind is not always available; second, renewable energy is too costly. A new study debunks these notions, and shows that cost-effective renewable energy could power our needs 99.9 percent of the time by 2030.

From ars technica:
You’ve probably heard the argument: wind and solar power are well and good, but what about when the wind doesn’t blow and the sun doesn’t shine? But it’s always windy and sunny somewhere. Given a sufficient distribution of energy resources and a large enough network of electrically conducting tubes, plus a bit of storage, these problems can be overcome—technologically, at least.
But is it cost-effective to do so? A new study from the University of Delaware finds that renewable energy sources can, with the help of storage, power a large regional grid for up to 99.9 percent of the time using current technology. By 2030, the cost of doing so will hit parity with current methods. Further, if you can live with renewables meeting your energy needs for only 90 percent of the time, the economics become positively compelling.
“These results break the conventional wisdom that renewable energy is too unreliable and expensive,” said study co-author Willett Kempton, a professor at the University of Delaware’s School of Marine Science and Policy. “The key is to get the right combination of electricity sources and storage—which we did by an exhaustive search—and to calculate costs correctly.”
By exhaustive, Kempton is referring to the 28 billion combinations of inland and offshore wind and photovoltaic solar sources combined with centralized hydrogen, centralized batteries, and grid-integrated vehicles analyzed in the study. The researchers deliberately excluded constant renewable sources of energy such as geothermal and hydro power on the grounds that they are less widely available geographically.
These technologies were applied to a real-world test case: that of the PJM Interconnection regional grid, which covers parts of states from New Jersey to Indiana, and south to North Carolina. The model used hourly consumption data from the years 1999 to 2002; during that time, the grid had a generational capacity of 72GW catering to an average demand of 31.5GW. Taking in 13 states, either whole or in part, the PJM Interconnection constitutes one fifth of the USA’s grid. “Large” is no overstatement, even before considering more recent expansions that don’t apply to the dataset used.
The researchers constructed a computer model using standard solar and wind analysis tools. They then fed in hourly weather data from the region for the whole four-year period—35,040 hours worth. The goal was to find the minimum cost at which the energy demand could be met entirely by renewables for a given proportion of the time, based on the following game plan:
- When there’s enough renewable energy direct from source to meet demand, use it. Store any surplus.
- When there is not enough renewable energy direct from source, meet the shortfall with the stored energy.
- When there is not enough renewable energy direct from source, and the stored energy reserves are insufficient to bridge the shortfall, top up the remaining few percent of the demand with fossil fuels.
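The three rules above amount to a simple hourly merit-order loop. A toy sketch with made-up hourly figures, not the study's actual model (which ran over 35,040 hours of real PJM weather and load data):

```python
# Toy sketch of the hourly dispatch rule listed above: renewables
# first, then storage, then a fossil top-up.  All numbers are
# hypothetical and for illustration only.

def dispatch(renewable, demand, storage, capacity):
    """One hour of dispatch. Returns (fossil_used, new_storage)."""
    if renewable >= demand:
        # Rule 1: meet demand directly and store any surplus.
        return 0.0, min(capacity, storage + (renewable - demand))
    shortfall = demand - renewable
    # Rule 2: bridge the shortfall from storage.
    from_storage = min(storage, shortfall)
    # Rule 3: cover whatever storage can't with fossil fuel.
    return shortfall - from_storage, storage - from_storage

# Four hypothetical hours (generation, demand) in MW, 50 MWh storage.
storage, capacity = 0.0, 50.0
fossil_total = 0.0
for gen, load in [(120, 100), (130, 100), (60, 100), (40, 100)]:
    fossil, storage = dispatch(gen, load, storage, capacity)
    fossil_total += fossil

print(f"Fossil backup used: {fossil_total:.0f} MWh")  # only the last, calm hour
```

In this trace the two windy hours fill the store, the third hour drains it, and only the fourth hour needs the fossil top-up, which is the shape of result the study reports at grid scale.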
Perhaps unsurprisingly, the precise mix required depends upon exactly how much time you want renewables to meet the full load. Much more surprising is the amount of excess renewable infrastructure the model proposes as the most economic. To achieve a 90-percent target, the renewable infrastructure should be capable of generating 180 percent of the load. To meet demand 99.9 percent of the time, that rises to 290 percent.
“So much excess generation of renewables is a new idea, but it is not problematic or inefficient, any more than it is problematic to build a thermal power plant requiring fuel input at 250 percent of the electrical output, as we do today,” the study argues.

Image: Bangui Windfarm, Ilocos Norte, Philippines. Courtesy of
- Places to Visit Before World's End>
In case you missed all the apocalyptic hoopla, the world is supposed to end today. Now, if you’re reading this, you obviously still have a little time, since the Mayans apparently did not specify a precise time for the prophesied end. So, we highly recommend that you visit one or more of these beautiful places, immediately. Of course, if we’re all still here tomorrow, you will have some extra time to take in these breathtaking sights before the next planned doomsday.

Check out the top 100 places according to the Telegraph after the jump.

Image: Lapland for the northern lights. Courtesy of ALAMY / Telegraph.
- Climate Change: Not in My Neighborhood>
It’s no surprise that in our daily lives we seek information that reinforces our perceptions, opinions and beliefs about the world around us. It’s also the case that if we reject a particular position, we will overlook any evidence in our immediate surroundings that contradicts our stance — climate change is no different.

From ars technica:
We all know it’s hard to change someone’s mind. In an ideal, rational world, a person’s opinion about some topic would be based on several pieces of evidence. If you were to supply that person with several pieces of stronger evidence that point in another direction, you might expect them to accept the new information and agree with you.
However, this is not that world, and rarely do we find ourselves in a debate with Star Trek’s Spock. There are a great many reasons that we behave differently. One is the way we rate incoming information for trustworthiness and importance. Once we form an opinion, we rate information that confirms our opinion more highly than information that challenges it. This is one form of “motivated reasoning.” We like to think we’re right, and so we are motivated to come to the conclusion that the facts are still on our side.
Publicly contentious issues often put a spotlight on these processes—issues like climate change, for example. In a recent paper published in Nature Climate Change, researchers from George Mason and Yale explore how motivated reasoning influences whether people believe they have personally experienced the effects of climate change.
When it comes to communicating the science of global warming, a common strategy is to focus on the concrete here-and-now rather than the abstract and distant future. The former is easier for people to relate to and connect with. Glazed eyes are the standard response to complicated graphs of projected sea level rise, with ranges of uncertainty and several scenarios of future emissions. Show somebody that their favorite ice fishing spot is iced over for several fewer weeks each winter than it was in the late 1800s, though, and you might have their attention.
Public polls show that acceptance of a warming climate correlates with agreement that one has personally experienced its effects. That could be affirmation that personal experience is a powerful force for the acceptance of climate science. Obviously, there’s another possibility—that those who accept that the climate is warming are more likely to believe they’ve experienced the effects themselves, whereas those who deny that warming is taking place are unlikely to see evidence of it in daily life. That’s, at least partly, motivated reasoning at work. (And of course, this cuts both ways. Individuals who agree that the Earth is warming may erroneously interpret unrelated events as evidence of that fact.)
The survey used for this study was unique in that the same people were polled twice, two and a half years apart, to see how their views changed over time. For the group as a whole, there was evidence for both possibilities—experience affected acceptance, and acceptance predicted statements about experience.
Fortunately, the details were a bit more interesting than that. When you categorize individuals by engagement—essentially how confident and knowledgeable they feel about the facts of the issue—differences are revealed. For the highly-engaged groups (on both sides), opinions about whether climate is warming appeared to drive reports of personal experience. That is, motivated reasoning was prevalent. On the other hand, experience really did change opinions for the less-engaged group, and motivated reasoning took a back seat.

Image courtesy of: New York Times / Steen Ulrik Johannessen / Agence France-Presse — Getty Images.
- National Emotions Mapped>
Are Canadians as a people more emotional than Brazilians? Are Brits as emotional as Mexicans? While generalizing and mapping a nation’s emotionality is dubious at best, this map is nonetheless fascinating.From the Washington Post:
Since 2009, the Gallup polling firm has surveyed people in 150 countries and territories on, among other things, their daily emotional experience. Their survey asks five questions, meant to gauge whether the respondent felt significant positive or negative emotions the day prior to the survey. The more times that people answer “yes” to questions such as “Did you smile or laugh a lot yesterday?”, the more emotional they’re deemed to be.
Gallup has tallied up the average “yes” responses from respondents in almost every country on Earth. The results, which I’ve mapped out above, are as fascinating as they are indecipherable. The color-coded key in the map indicates the average percentage of people who answered “yes.” Dark purple countries are the most emotional, yellow the least. Here are a few takeaways.
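The scoring Gallup describes reduces to a small calculation: each respondent answers five yes/no questions, and a country's figure is the average share of "yes" answers. A minimal sketch with made-up respondents (the question wording beyond the smiling example is hypothetical):

```python
# Minimal sketch of the scoring described above: five yes/no questions
# per respondent, averaged across respondents.  Respondent data is
# invented for illustration.

def emotionality(responses):
    """responses: list of 5-tuples of booleans, one per respondent.
    Returns the average percentage of 'yes' answers."""
    shares = [sum(r) / len(r) for r in responses]
    return 100 * sum(shares) / len(shares)

# Three hypothetical respondents, each answering questions like
# "Did you smile or laugh a lot yesterday?"
country_a = [(True, True, False, True, True),
             (True, False, False, True, False),
             (True, True, True, True, True)]
print(f"{emotionality(country_a):.0f}%")  # → 73%
```

On the map, that percentage is what drives a country's color, from dark purple at the emotional top toward yellow at the stoic bottom.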
Singapore is the least emotional country in the world. “Singaporeans recognize they have a problem,” Bloomberg Businessweek writes of the country’s “emotional deficit,” citing a culture in which schools “discourage students from thinking of themselves as individuals.” They also point to low work satisfaction, competitiveness, and the urban experience: “Staying emotionally neutral could be a way of coping with the stress of urban life in a place where 82 percent of the population lives in government-built housing.”
The Philippines is the world’s most emotional country. It’s not even close; the heavily Catholic, Southeast Asian nation, a former colony of Spain and the U.S., scores well above second-ranked El Salvador.
Post-Soviet countries are consistently among the most stoic. Other than Singapore (and, for some reason, Madagascar and Nepal), the least emotional countries in the world are all former members of the Soviet Union. They are also the greatest consumers of cigarettes and alcohol. This could be what you call a chicken-or-egg problem: if the two trends are related, which one came first? Europe appears almost like a gradient here, with emotions increasing as you move west.
People in the Americas are just exuberant. Every nation on the North and South American continents ranked highly on the survey. The United States and Canada are both among the 15 most emotional countries in the world, as are ten Latin American countries. The only countries in the top 15 from outside the Americas, other than the Philippines, are the Arab nations of Oman and Bahrain, both of which rank very highly.
- Testosterone and the Moon>
While the United States military makes no comment, a number of corroborating reports suggest that the country had a plan to drop an atomic bomb on the moon during the height of the Cold War. Apparently, a Hiroshima-like explosion on our satellite would have been seen as a “show of force” by the Soviets. The sheer absurdity of this Dr. Strangelove story makes it all the more real.

From the Independent:
US military chiefs, keen to intimidate Russia during the Cold War, plotted to blow up the moon with a nuclear bomb, according to project documents kept secret for nearly 45 years.
The army chiefs allegedly developed a top-secret project called ‘A Study of Lunar Research Flights’ – or ‘Project A119’ – in the hope that their Soviet rivals would be intimidated by a display of America’s Cold War muscle.
According to The Sun newspaper the military bosses developed a classified plan to launch a nuclear weapon 238,000 miles to the moon where it would be detonated upon impact.
The planners reportedly opted for an atom bomb, rather than a hydrogen bomb, because the latter would be too heavy for the missile.
Physicist Leonard Reiffel, who says he was involved in the project, claims the hope was that the flash from the bomb would intimidate the Russians following their successful launching of the Sputnik satellite in October 1957.
The planning of the explosion reportedly included calculations by astronomer Carl Sagan, who was then a young graduate.
Documents reportedly show the plan was abandoned because of fears it would have an adverse effect on Earth should the explosion fail.

Image courtesy of NASA.
- Pluralistic Ignorance>
Why study the science of climate change when you can study the complexities of climate change deniers themselves? That was the question that led several groups of independent researchers to study why some groups of people cling to mistaken beliefs and hold inaccurate views of the public consensus.

From ars technica:
By just about every measure, the vast majority of scientists in general—and climate scientists in particular—have been convinced by the evidence that human activities are altering the climate. However, in several countries, a significant portion of the public has concluded that this consensus doesn’t exist. That has prompted a variety of studies aimed at understanding the large disconnect between scientists and the public, with results pointing the finger at everything from the economy to the weather. Other studies have noted societal influences on acceptance, including ideology and cultural identity.
Those studies have generally focused on the US population, but the public acceptance of climate change is fairly similar in Australia. There, a new study has looked at how societal tendencies can play a role in maintaining mistaken beliefs. The authors of the study have found evidence that two well-known behaviors—the “false consensus” and “pluralistic ignorance”—are helping to shape public opinion in Australia.
False consensus is the tendency of people to think that everyone else shares their opinions. This can arise from the fact that we tend to socialize with people who share our opinions, but the authors note that the effect is even stronger “when we hold opinions or beliefs that are unpopular, unpalatable, or that we are uncertain about.” In other words, our social habits tend to reinforce the belief that we’re part of a majority, and we have a tendency to cling to the sense that we’re not alone in our beliefs.
Pluralistic ignorance is similar, but it’s not focused on our own beliefs. Instead, sometimes the majority of people come to believe that most people think a certain way, even though the majority opinion actually resides elsewhere.
As it turns out, the authors found evidence of both these effects. They performed two identical surveys of over 5,000 Australians, done a year apart; about 1,350 people took the survey both times, which let the researchers track how opinions evolve. Participants were asked to describe their own opinion on climate change, with categories including “don’t know,” “not happening,” “a natural occurrence,” and “human-induced.” After voicing their own opinion, people were asked to estimate what percentage of the population would fall into each of these categories.
In aggregate, over 90 percent of those surveyed accepted that climate change was occurring (a rate much higher than we see in the US), with just over half accepting that humans were driving the change. Only about five percent felt it wasn’t happening, and even fewer said they didn’t know. The numbers changed only slightly between the two polls.
The false consensus effect became obvious when the researchers looked at what these people thought everyone else believed: every single group was convinced that its opinion represented the plurality view of the population. This was most dramatic among those who don’t think that the climate is changing; even though they represent far less than 10 percent of the population, they believed that over 40 percent of Australians shared their views. Those who profess ignorance also believed they had lots of company, estimating that their view was shared by a quarter of the populace.
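The effect can be quantified as the gap between a group's perceived and actual share of the population. A small sketch using approximate figures from the survey results reported above; the "natural" share is inferred from the article's totals and is an assumption:

```python
# False-consensus gap: each group's estimate of its own support minus
# its actual share of respondents.  Percentages are approximations of
# the survey results described in the article.

actual_share = {          # percent of respondents holding each view
    "human-induced": 50,  # "just over half"
    "natural": 40,        # assumed: ~90% total acceptance minus the above
    "not happening": 5,
    "don't know": 3,      # "even fewer"
}
own_estimate = {          # what each group thought its own share was
    "not happening": 43,  # "over 40 percent"
    "don't know": 25,     # "a quarter of the populace"
}

gaps = {view: est - actual_share[view] for view, est in own_estimate.items()}
for view, gap in gaps.items():
    print(f"{view}: perceived support overshoots reality by {gap} points")
```

The deniers' roughly 38-point overshoot is what makes this the most dramatic case in the survey.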
Among those who took the survey twice, the effect became even more pronounced. In the year between the surveys, these respondents went from estimating that 30 percent of the population agreed with them to thinking that 45 percent did. And, in general, this group was the least likely to change its opinion between the two surveys.
But there was also evidence of pluralistic ignorance. Every single group grossly overestimated the number of people who were unsure about climate change or convinced it wasn’t occurring. Even those who were convinced that humans were changing the climate put 20 percent of Australians into each of these two groups.

Image: Flood victims. Courtesy of NRDC.
- From Finely Textured Beef to Soylent Pink>
Blame corporate euphemisms and branding for the obfuscation of everyday things. More sinister yet is the constant re-working of names for our increasingly processed foodstuffs. Only last year, as several influential health studies pointed towards the detrimental health effects of high-fructose corn syrup (HFCS), did the food industry act, but not by removing copious amounts of the addictive additive from many processed foods. Rather, the industry attempted to re-brand HFCS as “corn sugar”. And now, on to the battle over “soylent pink”, also known as “pink slime”.

From Slate:
What do you call a mash of beef trimmings that have been chopped and then spun in a centrifuge to remove the fatty bits and gristle? According to the government and to the company that invented the process, you call it lean finely textured beef. But to the natural-food crusaders who would have the stuff removed from the nation’s hamburgers and tacos, the protein-rich product goes by another, more disturbing name: Pink slime.
The story of this activist rebranding—from lean finely textured beef to pink slime—reveals just how much these labels matter. It was the latter phrase that, for example, birthed the great ground-beef scare of 2012. In early March, journalists at both the Daily and at ABC began reporting on a burger panic: Lax rules from the U.S. Department of Agriculture allowed producers to fill their ground-beef packs with a slimy, noxious byproduct—a mush the reporters called unsanitary and without much value as a food. Coverage linked back to a New York Times story from 2009 in which the words pink slime had appeared in public for the first time in a quote from an email written by a USDA microbiologist who was frustrated at a decision to leave the additive off labels for ground meat.
The slimy terror spread in the weeks that followed. Less than a month after ABC’s initial reports, almost a quarter million people had signed a petition to get pink slime out of public school cafeterias. Supermarket chains stopped selling burger meat that contained it—all because of a shift from four matter-of-fact words to two visceral ones.
And now that rebranding has become the basis for a 263-page lawsuit. Last month, Beef Products Inc., the first and principal producer of lean/pink/textured/slimy beef, filed a defamation claim against ABC (along with that microbiologist and a former USDA inspector) in a South Dakota court. The company says the network carried out a malicious and dishonest campaign to discredit its ground-beef additive and that this work had grievous consequences. When ABC began its coverage, Beef Products Inc. was selling 5 million pounds of slime/beef/whatever every week. Then three of its four plants were forced to close, and production dropped to 1.6 million pounds. A weekly profit of $2.3 million had turned into a $583,000 weekly loss.
At Reuters, Steven Brill argued that the suit has merit. I won’t try to comment on its legal viability, but the details of the claim do provide some useful background about how we name our processed foods, in both industry and the media. It turns out the paste now known within the business as lean finely textured beef descends from an older, less purified version of the same. Producers have long tried to salvage the trimmings from a cattle carcass by cleaning off the fat and the bacteria that often congregate on these leftover parts. At best they could achieve a not-so-lean class of meat called partially defatted chopped beef, which USDA deemed too low in quality to be a part of hamburger or ground meat.
By the late 1980s, though, Eldon Roth of Beef Products Inc. had worked out a way to make those trimmings a bit more wholesome. He’d found a way, using centrifuges, to separate the fat more fully. In 1991, USDA approved his product as fat reduced beef and signed off on its use in hamburgers. JoAnn Smith, a government official and former president of the National Cattlemen’s Association, signed off on this “euphemistic designation,” writes Marion Nestle in Food Politics. (Beef Products, Inc. maintains that this decision “was not motivated by any official’s so-called ‘links to the beef industry.’ ”) So 20 years ago, the trimmings had already been reformulated and rebranded once.
But the government still said that fat reduced beef could not be used in packages marked “ground beef.” (The government distinction between hamburger and ground beef is that the former can contain added fat, while the latter can’t.) So Beef Products Inc. pressed its case, and in 1993 it convinced the USDA to approve the mash for wider use, with a new and better name: lean finely textured beef. A few years later, Roth started killing the microbes on his trimmings with ammonia gas and got approval to do that, too. With government permission, the company went on to sell several billion pounds of the stuff in the next two decades.
In the meantime, other meat processors started making something similar but using slightly different names. AFA Foods (which filed for bankruptcy in April after the recent ground-beef scandal broke), has referred to its products as boneless lean beef trimmings, a more generic term. Cargill, which decontaminates its meat with citric acid in place of ammonia gas, calls its mash of trimmings finely textured beef.
Image: Industrial ground beef. Courtesy of Wikipedia.
- GigaBytes and TeraWatts>
Online social networks have expanded to include hundreds of millions of twitterati and their followers. An ever increasing volume of data, images, videos and documents continues to move into the expanding virtual “cloud”, hosted in many nameless data centers. Virtual processing and computation on demand is growing by leaps and bounds.
Yet while business models for the providers of these internet services remain ethereal, one segment of this business ecosystem — electricity companies and utilities — is salivating at the staggering demand for electrical power. From the New York Times:
Jeff Rothschild’s machines at Facebook had a problem he knew he had to solve immediately. They were about to melt.
The company had been packing a 40-by-60-foot rental space here with racks of computer servers that were needed to store and process information from members’ accounts. The electricity pouring into the computers was overheating Ethernet sockets and other crucial components.
Thinking fast, Mr. Rothschild, the company’s engineering chief, took some employees on an expedition to buy every fan they could find — “We cleaned out all of the Walgreens in the area,” he said — to blast cool air at the equipment and prevent the Web site from going down.
That was in early 2006, when Facebook had a quaint 10 million or so users and the one main server site. Today, the information generated by nearly one billion people requires outsize versions of these facilities, called data centers, with rows and rows of servers spread over hundreds of thousands of square feet, and all with industrial cooling systems.
They are a mere fraction of the tens of thousands of data centers that now exist to support the overall explosion of digital information. Stupendous amounts of data are set in motion each day as, with an innocuous click or tap, people download movies on iTunes, check credit card balances through Visa’s Web site, send Yahoo e-mail with files attached, buy products on Amazon, post on Twitter or read newspapers online.
A yearlong examination by The New York Times has revealed that this foundation of the information industry is sharply at odds with its image of sleek efficiency and environmental friendliness.
Most data centers, by design, consume vast amounts of energy in an incongruously wasteful manner, interviews and documents show. Online companies typically run their facilities at maximum capacity around the clock, whatever the demand. As a result, data centers can waste 90 percent or more of the electricity they pull off the grid, The Times found.
To guard against a power failure, they further rely on banks of generators that emit diesel exhaust. The pollution from data centers has increasingly been cited by the authorities for violating clean air regulations, documents show. In Silicon Valley, many data centers appear on the state government’s Toxic Air Contaminant Inventory, a roster of the area’s top stationary diesel polluters.
Worldwide, the digital warehouses use about 30 billion watts of electricity, roughly equivalent to the output of 30 nuclear power plants, according to estimates industry experts compiled for The Times. Data centers in the United States account for one-quarter to one-third of that load, the estimates show.
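The quoted figures are easy to sanity-check with quick arithmetic. This sketch assumes a round 1 gigawatt per nuclear plant (our assumption, not the article's underlying data):

```python
# Back-of-the-envelope check of the estimates quoted above.
# The 1 GW-per-plant figure is our own round assumption.
GLOBAL_DATA_CENTER_W = 30e9            # ~30 billion watts worldwide
NUCLEAR_PLANT_W = 1e9                  # assumed output of one large plant
US_SHARE_LOW, US_SHARE_HIGH = 1 / 4, 1 / 3

plants = GLOBAL_DATA_CENTER_W / NUCLEAR_PLANT_W
us_low_gw = GLOBAL_DATA_CENTER_W * US_SHARE_LOW / 1e9
us_high_gw = GLOBAL_DATA_CENTER_W * US_SHARE_HIGH / 1e9

print(f"Equivalent nuclear plants: {plants:.0f}")            # 30
print(f"U.S. data-center load: {us_low_gw:.1f}-{us_high_gw:.1f} GW")
```

At a quarter to a third of the worldwide load, U.S. data centers alone draw roughly 7.5 to 10 gigawatts.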
“It’s staggering for most people, even people in the industry, to understand the numbers, the sheer size of these systems,” said Peter Gross, who helped design hundreds of data centers. “A single data center can take more power than a medium-size town.”
Image courtesy of the AP / Thanassis Stavrakis.
- A Link Between BPA and Obesity>
You have probably heard of BPA. It’s a compound used in the manufacture of many plastics, especially hard, polycarbonate plastics. Interestingly, it has hormone-like characteristics, mimicking estrogen. As a result, BPA crops up in many studies that show adverse health effects. As a precaution, the U.S. Food and Drug Administration (FDA) several years ago banned the use of BPA in products aimed at young children, such as baby bottles. But evidence remains inconsistent, so BPA is still found in many products today. Now comes another study linking BPA to obesity. From Smithsonian:
Since the 1960s, manufacturers have widely used the chemical bisphenol-A (BPA) in plastics and food packaging. Only recently, though, have scientists begun thoroughly looking into how the compound might affect human health—and what they’ve found has been a cause for concern.
Starting in 2006, a series of studies, mostly in mice, indicated that the chemical might act as an endocrine disruptor (by mimicking the hormone estrogen), cause problems during development and potentially affect the reproductive system, reducing fertility. After a 2010 Food and Drug Administration report warned that the compound could pose an especially hazardous risk for fetuses, infants and young children, BPA-free water bottles and food containers started flying off the shelves. In July, the FDA banned the use of BPA in baby bottles and sippy cups, but the chemical is still present in aluminum cans, containers of baby formula and other packaging materials.
Now comes another piece of data on a potential risk from BPA but in an area of health in which it has largely been overlooked: obesity. A study by researchers from New York University, published today in the Journal of the American Medical Association, looked at a sample of nearly 3,000 children and teens across the country and found a “significant” link between the amount of BPA in their urine and the prevalence of obesity.
“This is the first association of an environmental chemical in childhood obesity in a large, nationally representative sample,” said lead investigator Leonardo Trasande, who studies the role of environmental factors in childhood disease at NYU. “We note the recent FDA ban of BPA in baby bottles and sippy cups, yet our findings raise questions about exposure to BPA in consumer products used by older children.”
The researchers pulled data from the 2003 to 2008 National Health and Nutrition Examination Surveys, and after controlling for differences in ethnicity, age, caregiver education, income level, sex, caloric intake, television viewing habits and other factors, they found that children and adolescents with the highest levels of BPA in their urine had a 2.6 times greater chance of being obese than those with the lowest levels. Overall, 22.3 percent of those in the quartile with the highest levels of BPA were obese, compared with just 10.3 percent of those in the quartile with the lowest levels of BPA.
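The 2.6 figure is an odds ratio adjusted for those covariates; a crude, unadjusted version can be recovered from the two quartile prevalences quoted above. This is a sketch, not the study's actual model:

```python
# Crude unadjusted odds ratio from the quartile prevalences quoted above.
p_high, p_low = 0.223, 0.103   # obesity prevalence: highest vs. lowest BPA quartile

def odds(p):
    """Convert a prevalence into odds."""
    return p / (1 - p)

odds_ratio = odds(p_high) / odds(p_low)
print(f"Unadjusted odds ratio: {odds_ratio:.1f}")   # ~2.5, near the adjusted 2.6
```

That the raw ratio lands close to the adjusted one suggests the covariates shifted the estimate only modestly.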
The vast majority of BPA in our bodies comes from ingestion of contaminated food and water. The compound is often used as an internal barrier in food packaging, so that the product we eat or drink does not come into direct contact with a metal can or plastic container. When heated or washed, though, plastics containing BPA can break down and release the chemical into the food or liquid they hold. As a result, roughly 93 percent of the U.S. population has detectable levels of BPA in their urine.
The researchers point specifically to the continuing presence of BPA in aluminum cans as a major problem. “Most people agree the majority of BPA exposure in the United States comes from aluminum cans,” Trasande said. “Removing it from aluminum cans is probably one of the best ways we can limit exposure. There are alternatives that manufacturers can use to line aluminum cans.”
Image: Bisphenol A. Courtesy of Wikipedia.
- An Answer is Blowing in the Wind>
Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from several million wind turbines. From Ars Technica:
Is there enough wind blowing across the planet to satisfy our demands for electricity? And if there is, would harnessing that much of it begin to actually affect the climate?
Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.
The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?
In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by the National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.
The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.
The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobson at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?
For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
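The Betz limit isn't arbitrary; it falls out of idealized actuator-disk theory, in which the power coefficient as a function of the axial induction factor a is Cp(a) = 4a(1 − a)², maximized at a = 1/3. A quick numerical check of that standard result:

```python
# Actuator-disk sketch: Cp(a) = 4a(1-a)^2 peaks at a = 1/3 with Cp = 16/27.
def cp(a):
    """Power coefficient of an ideal actuator disk at axial induction factor a."""
    return 4 * a * (1 - a) ** 2

a_grid = [i / 10000 for i in range(5000)]   # scan a over [0, 0.5)
a_best = max(a_grid, key=cp)
print(f"a* = {a_best:.3f}, Cp_max = {cp(a_best):.4f}")   # a* = 0.333, Cp_max = 0.5926
```

The maximum, 16/27 ≈ 0.593, is the 59.3 percent ceiling cited above; real blade designs then shave off further efficiency.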
Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding more turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobson and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.
Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue that improper placement in a model leads to inaccurate estimates of the reductions in wind speed.
Jacobson and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are placed only on land and in shallow offshore locations, the saturation point is 80 terawatts, for 1,500 terawatts of installed capacity.
For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.
These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the number of turbines required isn’t plausible: 300 million five-megawatt turbines in the smallest case (land plus shallow offshore).
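Those round numbers can be cross-checked in a couple of lines; the figures come from the text, while the per-installed-watt yield is our own derived quantity:

```python
# Scale check on the land-plus-shallow-offshore case described above.
N_TURBINES = 300e6          # turbines required
RATING_W = 5e6              # five megawatts apiece
SATURATION_W = 80e12        # saturated extractable power, land + shallow offshore
DEMAND_W = 18e12            # current global power demand

installed_tw = N_TURBINES * RATING_W / 1e12
print(f"Installed capacity: {installed_tw:.0f} TW")                     # 1500 TW
print(f"Saturated output vs. demand: {SATURATION_W / DEMAND_W:.1f}x")   # ~4.4x
print(f"Output per installed watt: {SATURATION_W / (installed_tw * 1e12):.1%}")
```

The implied yield of about 5 percent of nameplate capacity underlines why the turbine counts balloon: at saturation, each additional machine contributes very little.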
- Air Conditioning in a Warming World> From the New York Times:
The blackouts that left hundreds of millions of Indians sweltering in the dark last month underscored the status of air-conditioning as one of the world’s most vexing environmental quandaries.
Fact 1: Nearly all of the world’s booming cities are in the tropics and will be home to an estimated one billion new consumers by 2025. As temperatures rise, they — and we — will use more air-conditioning.
Fact 2: Air-conditioners draw copious electricity, and deliver a double whammy in terms of climate change, since both the electricity they use and the coolants they contain result in planet-warming emissions.
Fact 3: Scientific studies increasingly show that health and productivity rise significantly if indoor temperature is cooled in hot weather. So cooling is not just about comfort.
Sum up these facts and it’s hard to escape: Today’s humans probably need air-conditioning if they want to thrive and prosper. Yet if all those new city dwellers use air-conditioning the way Americans do, life could be one stuttering series of massive blackouts, accompanied by disastrous planet-warming emissions.
We can’t live with air-conditioning, but we can’t live without it.
“It is true that air-conditioning made the economy happen for Singapore and is doing so for other emerging economies,” said Pawel Wargocki, an expert on indoor air quality at the International Center for Indoor Environment and Energy at the Technical University of Denmark. “On the other hand, it poses a huge threat to global climate and energy use. The current pace is very dangerous.”
Projections of air-conditioning use are daunting. In 2007, only 11 percent of households in Brazil and 2 percent in India had air-conditioning, compared with 87 percent in the United States, which has a more temperate climate, said Michael Sivak, a research professor in energy at the University of Michigan. “There is huge latent demand,” Mr. Sivak said. “Current energy demand does not yet reflect what will happen when these countries have more money and more people can afford air-conditioning.” He has estimated that, based on its climate and the size of the population, the cooling needs of Mumbai alone could be about a quarter of those of the entire United States, which he calls “one scary statistic.”
It is easy to decry the problem but far harder to know what to do, especially in a warming world where people in the United States are using our existing air-conditioners more often. The number of cooling degree days — a measure of how often cooling is needed — was 17 percent above normal in the United States in 2010, according to the Environmental Protection Agency, leading to “an increase in electricity demand.” This July was the hottest ever in the United States.
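For readers unfamiliar with the metric, cooling degree days accumulate how far each day's mean temperature exceeds a base temperature, conventionally 65°F in the United States. A minimal sketch with invented readings:

```python
# Cooling degree days: sum of each day's mean-temperature excess over a base.
# The 65 F base is U.S. convention; the readings are made-up illustration data.
BASE_F = 65.0
daily_mean_temps_f = [72.0, 80.5, 64.0, 90.0, 68.5]

cdd = sum(max(0.0, t - BASE_F) for t in daily_mean_temps_f)
print(f"Cooling degree days: {cdd:.1f}")   # 51.0
```

A season 17 percent above normal on this measure translates fairly directly into extra hours of air-conditioner run time.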
Likewise, the blackouts in India were almost certainly related to the rising use of air-conditioning and cooling, experts say, even if the immediate culprit was a grid that did not properly balance supply and demand.
The late arrival of this year’s monsoons, which normally put an end to India’s hottest season, may have devastated the incomes of farmers who needed the rain. But it “put smiles on the faces of those who sell white goods — like air-conditioners and refrigerators — because it meant lots more sales,” said Rajendra Shende, chairman of the Terre Policy Center in Pune, India.
“Cooling is the craze in India — everyone loves cool temperatures and getting to cool temperatures as quickly as possible,” Mr. Shende said. He said that cooling has become such a cultural priority that rather than advertise a car’s acceleration, salesmen in India now emphasize how fast its air-conditioner can cool.
Scientists are scrambling to invent more efficient air-conditioners and better coolant gases to minimize electricity use and emissions. But so far the improvements have been dwarfed by humanity’s rising demands.
And recent efforts to curb the use of air-conditioning, by fiat or persuasion, have produced sobering lessons.
Image courtesy of Parkland Air Conditioning.
- When to Eat Your Fruit and Veg>
It’s time to jettison the $1.99 hyper-burger and super-sized fries and try some real fruits and vegetables. You know — the kind of produce that comes directly from the soil. But when is the best time to suck on a juicy peach or chomp some crispy radicchio?
A great chart, below, summarizes which fruits and vegetables are generally in season for the Northern Hemisphere.
Infographic courtesy of Visual News, designed by Column Five.
- Extreme Weather as the New Norm>
Melting glaciers at the poles, wildfires in the western United States, severe flooding across Europe and parts of Asia, hurricanes in northern Australia, warmer temperatures across the globe. According to many climatologists, including a growing number of ex-climate-change skeptics, this is the new normal for the foreseeable future. Welcome to the changed climate. From the New York Times:
By many measurements, this summer’s drought is one for the record books. But so was last year’s drought in the South Central states. And it has been only a decade since an extreme five-year drought hit the American West. Widespread annual droughts, once a rare calamity, have become more frequent and are set to become the “new normal.”
Until recently, many scientists spoke of climate change mainly as a “threat,” sometime in the future. But it is increasingly clear that we already live in the era of human-induced climate change, with a growing frequency of weather and climate extremes like heat waves, droughts, floods and fires.
Future precipitation trends, based on climate model projections for the coming fifth assessment from the Intergovernmental Panel on Climate Change, indicate that droughts of this length and severity will be commonplace through the end of the century unless human-induced carbon emissions are significantly reduced. Indeed, assuming business as usual, each of the next 80 years in the American West is expected to see less rainfall than the average of the five years of the drought that hit the region from 2000 to 2004.
That extreme drought (which we have analyzed in a new study in the journal Nature-Geoscience) had profound consequences for carbon sequestration, agricultural productivity and water resources: plants, for example, took in only half the carbon dioxide they do normally, thanks to a drought-induced drop in photosynthesis.
In the drought’s worst year, Western crop yields were down by 13 percent, with many local cases of complete crop failure. Major river basins showed 5 percent to 50 percent reductions in flow. These reductions persisted up to three years after the drought ended, because the lakes and reservoirs that feed them needed several years of average rainfall to return to predrought levels.
In terms of severity and geographic extent, the 2000-4 drought in the West exceeded such legendary events as the Dust Bowl of the 1930s. While that drought saw intervening years of normal rainfall, the years of the turn-of-the-century drought were consecutive. More seriously still, long-term climate records from tree-ring chronologies show that this drought was the most severe event of its kind in the western United States in the past 800 years. Though there have been many extreme droughts over the last 1,200 years, only three other events have been of similar magnitude, all during periods of “megadroughts.”
Most frightening is that this extreme event could become the new normal: climate models point to a warmer planet, largely because of greenhouse gas emissions. Planetary warming, in turn, is expected to create drier conditions across western North America, because of the way global-wind and atmospheric-pressure patterns shift in response.
Indeed, scientists see signs of the relationship between warming and drought in western North America by analyzing trends over the last 100 years; evidence suggests that the more frequent drought and low precipitation events observed for the West during the 20th century are associated with increasing temperatures across the Northern Hemisphere.
These climate-model projections suggest that what we consider today to be an episode of severe drought might even be classified as a period of abnormal wetness by the end of the century and that a coming megadrought — a prolonged, multidecade period of significantly below-average precipitation — is possible and likely in the American West.
Image courtesy of the Sun.
- A Climate Change Skeptic Recants>
A climate change skeptic recants. Of course, disbelievers in human-influenced climate change will point to the venue physicist Richard Muller chose for his recantation, an op-ed in the New York Times, as evidence of flagrant falsehood and unmitigated bias.
Several years ago Muller set up the Berkeley Earth project to collect and analyze land-surface temperature records from sources independent of NASA and NOAA. Convinced at the time that climate change researchers had the numbers all wrong, Muller and his team set out to find the proof. From the New York Times:
Call me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.
My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.
These findings are stronger than those of the Intergovernmental Panel on Climate Change, the United Nations group that defines the scientific and diplomatic consensus on global warming. In its 2007 report, the I.P.C.C. concluded only that most of the warming of the prior 50 years could be attributed to humans. It was possible, according to the I.P.C.C. consensus statement, that the warming before 1956 could be because of changes in solar activity, and that even a substantial part of the more recent warming could be natural.
Our Berkeley Earth approach used sophisticated statistical methods developed largely by our lead scientist, Robert Rohde, which allowed us to determine earth land temperature much further back in time. We carefully studied issues raised by skeptics: biases from urban heating (we duplicated our results using rural data alone), from data selection (prior groups selected fewer than 20 percent of the available temperature stations; we used virtually 100 percent), from poor station quality (we separately analyzed good stations and poor ones) and from human intervention and data adjustment (our work is completely automated and hands-off). In our papers we demonstrate that none of these potentially troublesome effects unduly biased our conclusions.
The historic temperature pattern we observed has abrupt dips that match the emissions of known explosive volcanic eruptions; the particulates from such events reflect sunlight, make for beautiful sunsets and cool the earth’s surface for a few years. There are small, rapid variations attributable to El Niño and other ocean currents such as the Gulf Stream; because of such oscillations, the “flattening” of the recent temperature rise that some people claim is not, in our view, statistically significant. What has caused the gradual but systematic rise of two and a half degrees? We tried fitting the shape to simple math functions (exponentials, polynomials), to solar activity and even to rising functions like world population. By far the best match was to the record of atmospheric carbon dioxide, measured from atmospheric samples and air trapped in polar ice.
Just as important, our record is long enough that we could search for the fingerprint of solar variability, based on the historical record of sunspots. That fingerprint is absent. Although the I.P.C.C. allowed for the possibility that variations in sunlight could have ended the “Little Ice Age,” a period of cooling from the 14th century to about 1850, our data argues strongly that the temperature rise of the past 250 years cannot be attributed to solar changes. This conclusion is, in retrospect, not too surprising; we’ve learned from satellite measurements that solar activity changes the brightness of the sun very little.
How definite is the attribution to humans? The carbon dioxide curve gives a better match than anything else we’ve tried. Its magnitude is consistent with the calculated greenhouse effect — extra warming from trapped heat radiation. These facts don’t prove causality and they shouldn’t end skepticism, but they raise the bar: to be considered seriously, an alternative explanation must match the data at least as well as carbon dioxide does. Adding methane, a second greenhouse gas, to our analysis doesn’t change the results. Moreover, our analysis does not depend on large, complex global climate models, the huge computer programs that are notorious for their hidden assumptions and adjustable parameters. Our result is based simply on the close agreement between the shape of the observed temperature rise and the known greenhouse gas increase.
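Muller's fitting exercise can be illustrated in miniature: regress a temperature-like series against candidate explanatory curves and compare residuals. Everything below is synthetic toy data of our own invention, not Berkeley Earth's records, and the functional forms are stand-ins for the real candidates.

```python
import numpy as np

# Toy illustration of comparing candidate fits; all series are invented.
rng = np.random.default_rng(0)
years = np.arange(1850, 2011).astype(float)
co2 = 285 + 2e-5 * (years - 1850) ** 3                       # toy CO2-like curve
temp = 0.01 * (co2 - 285) + rng.normal(0, 0.05, years.size)  # toy temperature

def rms_residual(x, y):
    """Least-squares fit y ~ a*x + b; return the RMS residual."""
    A = np.column_stack([x, np.ones_like(x)])
    coef, *_ = np.linalg.lstsq(A, y, rcond=None)
    return float(np.sqrt(np.mean((A @ coef - y) ** 2)))

# The CO2-shaped regressor should leave only the noise floor behind,
# while a simple linear trend leaves a systematic residual.
print(f"residual vs. linear trend: {rms_residual(years, temp):.3f}")
print(f"residual vs. CO2 curve:    {rms_residual(co2, temp):.3f}")
```

The point of the comparison, as in the op-ed, is that the regressor whose shape matches the record leaves the smallest residual; here, by construction, that is the CO2-like curve.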
It’s a scientist’s duty to be properly skeptical. I still find that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I’ve analyzed some of the most alarmist claims, and my skepticism about them hasn’t changed.
Image: Global land-surface temperature with a 10-year moving average. Courtesy of Berkeley Earth.
- Best Countries for Women>
If you’re female and value lengthy life expectancy, comprehensive reproductive health services, sound education and equality with males, where should you live? In short, Scandinavia, Australia and New Zealand, and Northern Europe. In a list of the 44 most well-developed nations, the United States ranks towards the middle, just below Canada and Estonia, but above Greece, Italy, Russia and most of Central and Eastern Europe.
The fascinating infographic from the National Post does a great job of summarizing the current state of women’s affairs, from data gathered from 165 countries. Read the entire article and find a higher quality infographic after the jump.
- Two Degrees>
Author and environmentalist Bill McKibben has been writing about climate change and environmental issues for over 20 years. His first book, The End of Nature, was published in 1989, and is considered to be the first book aimed at the general public on the subject of climate change.
In his latest essay in Rolling Stone, which we excerpt below, McKibben offers a sobering assessment based on our current lack of action on a global scale. He argues that in the face of governmental torpor, and with individual action almost inconsequential at this late stage, only a radical re-invention of our fossil-fuel companies, into energy companies in the broadest sense, can bring significant and lasting change.
Learn more about Bill McKibben here. From Rolling Stone:
If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were one in 3.7 x 10^99, a number considerably larger than the number of stars in the universe.
Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.
Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.
When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.
The First Number: 2° Celsius
If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”
In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.
The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.
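For readers checking the parenthetical conversion above: a temperature *difference* scales by 9/5 between Celsius and Fahrenheit, with no +32 offset (the offset applies only to absolute readings). A minimal sketch:

```python
def delta_c_to_delta_f(delta_c):
    """Convert a temperature *difference* (not an absolute reading)
    from Celsius to Fahrenheit; differences scale by 9/5, no offset."""
    return delta_c * 9 / 5

print(delta_c_to_delta_f(2.0))   # 3.6  -- the two-degree target in Fahrenheit
print(delta_c_to_delta_f(0.8))   # 1.44 -- the warming observed so far
```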
Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”
Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.

Read the entire article after the jump.

Image: Emissions from industry have helped increase the levels of carbon dioxide in the atmosphere, driving climate change. Courtesy of New Scientist / Eye Ubiquitous / Rex Features.
- Your Life Expectancy Mapped>
Your life expectancy mapped, that is, if you live in London, U.K. So, take the iconic London tube (subway) map, then overlay it with figures for average life expectancy. Voila, you get to see how your neighbors on the Piccadilly Line fare in their longevity compared with, say, you, who happen to live near a Central Line station. It turns out that in some cases adjacent areas — as depicted by nearby but different subway stations — show an astounding gap of more than 20 years in projected life span.
So, what is at work? And, more importantly, should you move to Bond Street where the average life expectancy is 96 years, versus only 79 in Kennington, South London?

From the Atlantic:
Last year’s dystopian action flick In Time has Justin Timberlake playing a street rat who suddenly comes into a great deal of money — only the currency isn’t cash, it’s time. Hours and minutes of Timberlake’s life that can be traded just like dollars and cents in our world. Moving from poor districts to rich ones, and vice versa, requires Timberlake to pay a toll, each time shaving off a portion of his life savings.
Literally paying with your life just to get around town seems like — you guessed it — pure science fiction. It’s absolute baloney to think that driving or taking a crosstown bus could result in a shorter life (unless you count this). But a project by University College London researchers called Lives on the Line echoes something similar with a map that plots local differences in life expectancy based on the nearest Tube stop.
The trends are largely unsurprising, and correlate mostly with wealth. Britons living in the ritzier West London tend to have longer expected lifespans compared to those who live in the east or the south. Those residing near the Oxford Circus Tube stop have it the easiest, with an average life expectancy of 96 years. Going into less wealthy neighborhoods in south and east London, life expectancy begins to drop — though it still hovers in the respectable range of 78-79.
Meanwhile, differences in life expectancy between even adjacent stations can be stark. Britons living near Pimlico are predicted to live six years longer than those just across the Thames near Vauxhall. There’s about a two-decade difference between those living in central London compared to those near some stations on the Docklands Light Railway, according to the BBC. Similarly, moving from Tottenham Court Road to Holborn will also shave six years off the Londoner’s average life expectancy.
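The comparisons above reduce to a simple lookup table. In the sketch below, the Oxford Circus and Kennington figures come from this post; the Pimlico and Vauxhall values are hypothetical placeholders chosen only to reproduce the six-year gap the article reports.

```python
# Life expectancy in years by nearest Tube stop. Oxford Circus and
# Kennington figures are quoted above; Pimlico and Vauxhall are
# hypothetical values that reproduce the reported six-year gap.
life_expectancy = {
    "Oxford Circus": 96,
    "Kennington": 79,
    "Pimlico": 84,   # placeholder
    "Vauxhall": 78,  # placeholder
}

def gap(station_a, station_b):
    """Difference in projected life span between two stations."""
    return abs(life_expectancy[station_a] - life_expectancy[station_b])

print(gap("Pimlico", "Vauxhall"))          # 6
print(gap("Oxford Circus", "Kennington"))  # 17
```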
Michael Marmot, a UCL professor who wasn’t involved in the project, put the numbers in international perspective.
“The difference between Hackney and the West End,” Marmot told the BBC, “is the same as the difference between England and Guatemala in terms of life expectancy.”

Image courtesy of Atlantic / MappingLondon.co.uk.
- The North Continues to Melt Away>
On July 16, 2012 the Petermann Glacier in Greenland calved another gigantic island of ice, about twice the size of Manhattan in New York, or about 46 square miles. Climatologists armed with NASA satellite imagery have been following the glacier for many years, and first spotted the break-off point around 8 years ago. The Petermann Glacier calved a previous huge iceberg, twice this size, in 2010.
According to NASA average temperatures in northern Greenland and the Canadian Arctic have increased by about 4 degrees Fahrenheit in the last 30 years.
So, driven by climate change or not, regardless of whether it is short-term or long-term, temporary or irreversible, man-made or a natural cycle, the trend is clear — the Arctic is warming, the ice cap is shrinking and sea-levels are rising.

From the Economist:
Standing on the Greenland ice cap, it is obvious why restless modern man so reveres wild places. Everywhere you look, ice draws the eye, squeezed and chiselled by a unique coincidence of forces. Gormenghastian ice ridges, silver and lapis blue, ice mounds and other frozen contortions are minutely observable in the clear Arctic air. The great glaciers impose order on the icy sprawl, flowing down to a semi-frozen sea.
The ice cap is still, frozen in perturbation. There is not a breath of wind, no engine’s sound, no bird’s cry, no hubbub at all. Instead of noise, there is its absence. You feel it as a pressure behind the temples and, if you listen hard, as a phantom roar. For generations of frosty-whiskered European explorers, and still today, the ice sheet is synonymous with the power of nature.
The Arctic is one of the world’s least explored and last wild places. Even the names of its seas and rivers are unfamiliar, though many are vast. Siberia’s Yenisey and Lena each carries more water to the sea than the Mississippi or the Nile. Greenland, the world’s biggest island, is six times the size of Germany. Yet it has a population of just 57,000, mostly Inuit scattered in tiny coastal settlements. In the whole of the Arctic—roughly defined as the Arctic Circle and a narrow margin to the south (see map)—there are barely 4m people, around half of whom live in a few cheerless post-Soviet cities such as Murmansk and Magadan. In most of the rest, including much of Siberia, northern Alaska, northern Canada, Greenland and northern Scandinavia, there is hardly anyone. Yet the region is anything but inviolate.
A heat map of the world, colour-coded for temperature change, shows the Arctic in sizzling maroon. Since 1951 it has warmed roughly twice as much as the global average. In that period the temperature in Greenland has gone up by 1.5°C, compared with around 0.7°C globally. This disparity is expected to continue. A 2°C increase in global temperatures—which appears inevitable as greenhouse-gas emissions soar—would mean Arctic warming of 3-6°C.
Almost all Arctic glaciers have receded. The area of Arctic land covered by snow in early summer has shrunk by almost a fifth since 1966. But it is the Arctic Ocean that is most changed. In the 1970s, 80s and 90s the minimum extent of polar pack ice fell by around 8% per decade. Then, in 2007, the sea ice crashed, melting to a summer minimum of 4.3m sq km (1.7m square miles), close to half the average for the 1960s and 24% below the previous minimum, set in 2005. This left the north-west passage, a sea lane through Canada’s 36,000-island Arctic Archipelago, ice-free for the first time in memory.
Scientists, scrambling to explain this, found that in 2007 every natural variation, including warm weather, clear skies and warm currents, had lined up to reinforce the seasonal melt. But last year there was no such remarkable coincidence: it was as normal as the Arctic gets these days. And the sea ice still shrank to almost the same extent.
There is no serious doubt about the basic cause of the warming. It is, in the Arctic as everywhere, the result of an increase in heat-trapping atmospheric gases, mainly carbon dioxide released when fossil fuels are burned. Because the atmosphere is shedding less solar heat, it is warming—a physical effect predicted back in 1896 by Svante Arrhenius, a Swedish scientist. But why is the Arctic warming faster than other places?
Consider, first, how very sensitive to temperature change the Arctic is because of where it is. In both hemispheres the climate system shifts heat from the steamy equator to the frozen pole. But in the north the exchange is much more efficient. This is partly because of the lofty mountain ranges of Europe, Asia and America that help mix warm and cold fronts, much as boulders churn water in a stream. Antarctica, surrounded by the vast southern seas, is subject to much less atmospheric mixing.
The land masses that encircle the Arctic also prevent the polar oceans revolving around it as they do around Antarctica. Instead they surge, north-south, between the Arctic land masses in a gigantic exchange of cold and warm water: the Pacific pours through the Bering Strait, between Siberia and Alaska, and the Atlantic through the Fram Strait, between Greenland and Norway’s Svalbard archipelago.
That keeps the average annual temperature for the high Arctic (the northernmost fringes of land and the sea beyond) at a relatively sultry -15°C; much of the rest is close to melting-point for much of the year. Even modest warming can therefore have a dramatic effect on the region’s ecosystems. The Antarctic is also warming, but with an average annual temperature of -57°C it will take more than a few hot summers for this to become obvious.

Image: Sequence of three images showing the Petermann Glacier sliding toward the sea along the northwestern coast of Greenland, terminating in a huge, new floating ice island. Courtesy: NASA.
- A Different Kind of Hotel>
Bored of the annual family trip to Disneyland? Tired of staying in a suite hotel that still offers Muzak in the lobby, floral motifs on the walls, and ashtrays and saccharin packets next to the rickety minibar? Well, leaf through this list of 10 exotic and gorgeous hotels and start planning your next real escape today.
Wadi Rum Desert Lodge – The Valley of the Moon, Jordan.

From Flavorwire:
A Backward Glance, Pulitzer Prize-winning author Edith Wharton’s gem of an autobiography, is highbrow beach reading at its very best. In the memoir, she recalls time spent with her bff traveling buddy, Henry James, and quotes his arcadian proclamation, “summer afternoon — summer afternoon; to me those have always been the two most beautiful words in the English language.” Maybe so in the less than industrious heyday of inherited wealth, but in today’s world where most people work all day for a living, those two words just don’t have the same appeal as our two favorite words: summer getaway.
Like everyone else in our overworked and overheated city, we can think of little but rest and relaxation — especially on a hot Friday afternoon like this. In considering options for our celebrated summer respite, we thought we’d take a virtual gander to check out alternatives to the usual Hamptons summer share. From a treehouse where sloths join you for morning coffee to a giant sandcastle, click through to see some of the most unusual summer getaway destinations in the world.

See more stunning hotels after the jump.
- National Education Rankings: C->
One would believe that the most affluent and open country on the planet would have one of the best, if not the best, education systems. Yet, the United States of America distinguishes itself by being thoroughly mediocre in a ranking of developed nations in science, mathematics and reading. How can we make amends for our children?

From Slate:
Take the 2009 PISA test, which assessed the knowledge of students from 65 countries and economies—34 of which are members of the development organization the OECD, including the United States—in math, science, and reading. Of the OECD countries, the United States came in 17th place in science literacy; of all countries and economies surveyed, it came in 23rd place. The U.S. score of 502 practically matched the OECD average of 501. That puts us firmly in the middle. Where we don’t want to be.
What do the leading countries do differently? To find out, Slate asked science teachers from five countries that are among the world’s best in science education—Finland, Singapore, South Korea, New Zealand, and Canada—how they approach their subject and the classroom. Their recommendations: Keep students engaged and make the science seem relevant.
Finland: “To Make Students Enjoy Chemistry Is Hard Work”
Finland was first among the 34 OECD countries in the 2009 PISA science rankings and second—behind mainland China—among all 65 nations and economies that took the test. Ari Myllyviita teaches chemistry and works with future science educators at the Viikki Teacher Training School of Helsinki University.
Finland’s National Core Curriculum is premised on the idea “that learning is a result of a student’s active and focused actions aimed to process and interpret received information in interaction with other students, teachers and the environment and on the basis of his or her existing knowledge structures.”
My conception of learning rests strongly on this citation from our curriculum. My aim is to support knowledge-building, socioculturally: to create socially supported activity in the student’s zone of proximal development (the area where the student needs some support to achieve the next level of understanding or skill). The student’s previous knowledge is the starting point, and then the learning is bound to the activity during lessons—experiments, simulations, and observing phenomena.
The National Core Curriculum also states, “The purpose of instruction in chemistry is to support development of students’ scientific thinking and modern worldview.” Our teaching is based on examination and observations of substances and chemical phenomena, their structures and properties, and reactions between substances. Through experiments and theoretical models, students are taught to understand everyday life and nature. In my classroom, I use discussion, lectures, demonstrations, and experimental work—quite often based on group work. Between lessons, I use social media and other information communication technologies to stay in touch with students.
In addition to the National Core Curriculum, my school has its own. They have the same bases, but our own curriculum is more concrete. Based on these, I write my course and lesson plans. Because of different learning styles, I use different kinds of approaches, sometimes theoretical and sometimes experimental. Always there are new concepts and perhaps new models to explain the phenomena or results.
To make students enjoy learning chemistry is hard work. I think that as a teacher, you have to love your subject and enjoy teaching even when there are sometimes students who don’t pay attention to you. But I get satisfaction when I can give a purpose for the future by being a supportive teacher.
New Zealand: “Students Disengage When a Teacher Is Simply Repeating Facts or Ideas”
New Zealand came in seventh place out of 65 in the 2009 PISA assessment. Steve Martin is head of junior science at Howick College. In 2010, he received the prime minister’s award for science teaching.
Science education is an important part of preparing students for their role in the community. Scientific understanding will allow them to engage in issues that concern them now and in the future, such as genetically modified crops. In New Zealand, science is also viewed as having a crucial role to play in the future of the economic health of the country. This can be seen in the creation of the “Prime Minister’s Science Prizes,” a program that identifies the nation’s leading scientists, emerging and future scientists, and science teachers.
The New Zealand Science Curriculum allows for flexibility depending on contextual factors such as school location, interests of students, and teachers’ specialization. The curriculum has the “Nature of Science” as its foundation, which supports students learning the skills essential to a scientist, such as problem-solving and effective communication. The Nature of Science refers to the skills required to work as a scientist, how to communicate science effectively through science-specific vocabulary, and how to participate in debates and issues with a scientific perspective.
School administrators support innovation and risk-taking by teachers, which fosters the “let’s have a go” attitude. In my own classroom, I utilize computer technology to create virtual science lessons that support and encourage students to think for themselves and learn at their own pace. Virtual Lessons are Web-based documents that support learning in and outside the classroom. They include support for students of all abilities by providing digital resources targeted at different levels of thinking. These could include digital flashcards that support vocabulary development, videos that explain the relationships between ideas or facts, and links to websites that allow students to create cartoon animations. The students are then supported by the use of instant messaging, online collaborative documents, and email so they can get support from their peers and myself at anytime. I provide students with various levels of success criteria, which are statements that students and teachers use to evaluate performance. In every lesson I provide the students with three different levels of success criteria, each providing an increase in cognitive demand. The following is an example based on the topic of the carbon cycle:
I can identify the different parts of the carbon cycle.
I can explain how all the parts interact with each other to form the carbon cycle.
I can predict the effect that removing one part of the carbon cycle has on the environment.
These provide challenge for all abilities and at the same time make it clear what students need to do to be successful. I value creativity and innovation, and this greatly influences the opportunities I provide for students.
My students learn to love to be challenged and to see that all ideas help develop greater understanding. Students value the opportunity to contribute to others’ understanding, and they disengage when a teacher is simply repeating facts or ideas.

Image: Coloma 1914 Classroom. Courtesy of Coloma Convent School, Croydon UK.
- King Canute or Mother Nature in North Carolina, Virginia, Texas?>
Legislators in North Carolina recently went one better than King Cnut (Canute). The king of Denmark, England, Norway and parts of Sweden during various periods between 1018 and 1035, famously and unsuccessfully tried to hold back the incoming tide. The now mythic story tells of Canute’s arrogance. Not to be outdone, North Carolina’s state legislature recently passed a law that bans state agencies from reporting that sea-level rise is accelerating.
The bill from North Carolina states:
“… rates shall only be determined using historical data, and these data shall be limited to the time period following the year 1900. Rates of sea-level rise may be extrapolated linearly to estimate future rates of rise but shall not include scenarios of accelerated rates of sea-level rise.”
This comes hot on the heels of the recent revisionist push in Virginia, where references to phrases such as “sea level rise” and “climate change” are forbidden in official state communications. Last year, of course, Texas led the way for other states following the climate science denial program when the Texas Commission on Environmental Quality, which had commissioned a scientific study of Galveston Bay, removed all references to “rising sea levels”.
For more detailed reporting on this unsurprising and laughable state of affairs check out this article at Skeptical Science.

From Scientific American:
Less than two weeks after the state’s senate passed a climate science-squelching bill, research shows that sea level along the coast between N.C. and Massachusetts is rising faster than anywhere on Earth.
Could nature be mocking North Carolina’s law-makers? Less than two weeks after the state’s senate passed a bill banning state agencies from reporting that sea-level rise is accelerating, research has shown that the coast between North Carolina and Massachusetts is experiencing the fastest sea-level rise in the world.
Asbury Sallenger, an oceanographer at the US Geological Survey in St Petersburg, Florida, and his colleagues analysed tide-gauge records from around North America. On 24 June, they reported in Nature Climate Change that since 1980, sea-level rise between Cape Hatteras, North Carolina, and Boston, Massachusetts, has accelerated to between 2 and 3.7 millimetres per year. That is three to four times the global average, and it means the coast could see 20–29 centimetres of sea-level rise on top of the metre predicted for the world as a whole by 2100 (A. H. Sallenger Jr et al., Nature Clim. Change, http://doi.org/hz4; 2012).
“Many people mistakenly think that the rate of sea-level rise is the same everywhere as glaciers and ice caps melt,” says Marcia McNutt, director of the US Geological Survey. But variations in currents and land movements can cause large regional differences. The hotspot is consistent with the slowing measured in Atlantic Ocean circulation, which may be tied to changes in water temperature, salinity and density.
North Carolina’s senators, however, have tried to stop state-funded researchers from releasing similar reports. The law approved by the senate on 12 June banned scientists in state agencies from using exponential extrapolation to predict sea-level rise, requiring instead that they stick to linear projections based on historical data.
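The technical nub of the dispute is the extrapolation method. The sketch below uses purely synthetic data — the 2 mm/yr base rate and the quadratic acceleration term are illustrative, not measured values — to show why it matters: if the underlying rise is accelerating, a straight-line fit to the historical record systematically underpredicts future levels.

```python
def linear_fit(xs, ys):
    """Ordinary least-squares slope and intercept for y = slope*x + intercept."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    slope = (sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
             / sum((x - mean_x) ** 2 for x in xs))
    return slope, mean_y - slope * mean_x

# Synthetic, accelerating sea level: 2 mm/yr base rate plus a small
# quadratic term (illustrative numbers only -- not real gauge data).
years = list(range(101))                          # t = 0..100, i.e. a century of records
level = [2.0 * t + 0.01 * t * t for t in years]   # mm above baseline

slope, intercept = linear_fit(years, level)
linear_projection = slope * 200 + intercept       # linear extrapolation to t = 200
true_level = 2.0 * 200 + 0.01 * 200 ** 2          # what the accelerating model gives

# The linear projection falls well short of the accelerating trend.
print(round(linear_projection), round(true_level))
```

The linear fit does absorb part of the historical acceleration into its slope, so it is not flat-out wrong over the fitted period — it only fails when projected forward, which is precisely the use the legislation mandates.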
Following international opprobrium, the state’s House of Representatives rejected the bill on 19 June. However, a compromise between the house and the senate forbids state agencies from basing any laws or plans on exponential extrapolations for the next three to four years, while the state conducts a new sea-level study.
According to local media, the bill was the handiwork of industry lobbyists and coastal municipalities who feared that investors and property developers would be scared off by predictions of high sea-level rises. The lobbyists invoked a paper published in the Journal of Coastal Research last year by James Houston, retired director of the US Army Corps of Engineers’ research centre in Vicksburg, Mississippi, and Robert Dean, emeritus professor of coastal engineering at the University of Florida in Gainesville. They reported that global sea-level rise has slowed since 1930 (J. R. Houston and R. G. Dean, J. Coastal Res. 27, 409–417; 2011) — a contention that climate sceptics around the world have seized on.
Speaking to Nature, Dean accused the oceanographic community of ideological bias. “In the United States, there is an overemphasis on unrealistically high sea-level rise,” he says. “The reason is budgets. I am retired, so I have the freedom to report what I find without any bias or need to chase funding.” But Sallenger says that Houston and Dean’s choice of data sets masks acceleration in the sea-level-rise hotspot.

Image courtesy of Policymic.
- Good Grades and Good Drugs?>
A sad story chronicling the rise in amphetamine use in the quest for good school grades. More frightening now is the increase in addiction among ever-younger kids, and not for the dubious goal of excelling at school. Many kids are just taking the drug to get high.

From the Telegraph:
The New York Times has finally woken up to America’s biggest unacknowledged drug problem: the massive overprescription of the amphetamine drug Adderall for Attention Deficit Hyperactivity Disorder. Kids have been selling each other this powerful – and extremely moreish – mood enhancer for years, as ADHD diagnoses and prescriptions for the drug have shot up.
Now, children are snorting the stuff, breaking open the capsules and ingesting it using the time-honoured tool of a rolled-up bank note.
The NYT seems to think these teenage drug users are interested in boosting their grades. It claims that, for children without ADHD, “just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward”.
Really? There are two problems with this.
First, the idea that ADHD kids are “normal” on Adderall and its methylphenidate alternative Ritalin – gentler in its effect but still a psychostimulant – is open to question. Read this scorching article by the child psychologist Prof L Alan Sroufe, who says there’s no evidence that attention-deficit children are born with an organic disease, or that ADHD and non-ADHD kids react differently to their doctor-prescribed amphetamines. Yes, there’s an initial boost to concentration, but the effect wears off – and addiction often takes its place.
Second, the school pupils illicitly borrowing or buying Adderall aren’t necessarily doing it to concentrate on their work. They’re doing it to get high.
Adderall, with its mixture of amphetamine salts, has the ability to make you as euphoric as a line of cocaine – and keep you that way, particularly if it’s the slow-release version and you’re taking it for the first time. At least, that was my experience. Here’s what happened.
I was staying with a hospital consultant and his attorney wife in the East Bay just outside San Francisco. I’d driven overnight from Los Angeles after a flight from London; I was jetlagged, sleep-deprived and facing a deadline to write an article for the Spectator about, of all things, Bach cantatas.
Sitting in the courtyard garden with my laptop, I tapped and deleted one clumsy sentence after another. The sun was going down; my hostess saw me shivering and popped out with a blanket, a cup of herbal tea and ‘something to help you concentrate’.
I took the pill, didn’t notice any effect, and was glad when I was called in for dinner.
The dining room was a Californian take on the Second Empire. The lady next to me was a Southern Belle turned realtor, her eyelids already drooping from the effects of her third giant glass of Napa Valley chardonnay. She began to tell me about her divorce. Every time she refilled her glass, her new husband raised his eyes to heaven.
It felt as if I was stuck in an episode of Dallas, or a very bad Tennessee Williams play. But it didn’t matter in the least because, at some stage between the mozzarella salad and the grilled chicken, I’d become as high as a kite.
Adderall helps you concentrate, no doubt about it. I was riveted by the details of this woman’s alimony settlement. Even she, utterly self-obsessed as she was, was surprised by my gushing empathy. After dinner, I sat down at the kitchen table to finish the article. The head rush was beginning to wear off, but then, just as I started typing, a second wave of amphetamine pushed its way into my bloodstream. This was timed-release Adderall. Gratefully I plunged into 18th-century Leipzig, meticulously noting the catalogue numbers of cantatas. It was as if the great Johann Sebastian himself was looking over my shoulder. By the time I glanced at the clock, it was five in the morning. My pleasure at finishing the article was boosted by the dopamine high. What a lovely drug.
The blues didn’t hit me until the next day – and took the best part of a week to banish.
And this is what they give to nine-year-olds.
Read the entire article after the jump.
From the New York Times:
He steered into the high school parking lot, clicked off the ignition and scanned the scraps of his recent weeks. Crinkled chip bags on the dashboard. Soda cups at his feet. And on the passenger seat, a rumpled SAT practice book whose owner had been told since fourth grade he was headed to the Ivy League. Pencils up in 20 minutes.
The boy exhaled. Before opening the car door, he recalled recently, he twisted open a capsule of orange powder and arranged it in a neat line on the armrest. He leaned over, closed one nostril and snorted it.
Throughout the parking lot, he said, eight of his friends did the same thing.
The drug was not cocaine or heroin, but Adderall, an amphetamine prescribed for attention deficit hyperactivity disorder that the boy said he and his friends routinely shared to study late into the night, focus during tests and ultimately get the grades worthy of their prestigious high school in an affluent suburb of New York City. The drug did more than just jolt them awake for the 8 a.m. SAT; it gave them a tunnel focus tailor-made for the marathon of tests long known to make or break college applications.
“Everyone in school either has a prescription or has a friend who does,” the boy said.
At high schools across the United States, pressure over grades and competition for college admissions are encouraging students to abuse prescription stimulants, according to interviews with students, parents and doctors. Pills that have been a staple in some college and graduate school circles are going from rare to routine in many academically competitive high schools, where teenagers say they get them from friends, buy them from student dealers or fake symptoms to their parents and doctors to get prescriptions.
Of the more than 200 students, school officials, parents and others contacted for this article, about 40 agreed to share their experiences. Most students spoke on the condition that they be identified by only a first or middle name, or not at all, out of concern for their college prospects or their school systems’ reputations — and their own.
“It’s throughout all the private schools here,” said DeAnsin Parker, a New York psychologist who treats many adolescents from affluent neighborhoods like the Upper East Side. “It’s not as if there is one school where this is the culture. This is the culture.”
Observed Gary Boggs, a special agent for the Drug Enforcement Administration, “We’re seeing it all across the United States.”
The D.E.A. lists prescription stimulants like Adderall and Vyvanse (amphetamines) and Ritalin and Focalin (methylphenidates) as Class 2 controlled substances — the same as cocaine and morphine — because they rank among the most addictive substances that have a medical use. (By comparison, the long-abused anti-anxiety drug Valium is in the lower Class 4.) So they carry high legal risks, too, as few teenagers appreciate that merely giving a friend an Adderall or Vyvanse pill is the same as selling it and can be prosecuted as a felony.
While these medicines tend to calm people with A.D.H.D., those without the disorder find that just one pill can jolt them with the energy and focus to push through all-night homework binges and stay awake during exams afterward. “It’s like it does your work for you,” said William, a recent graduate of the Birch Wathen Lenox School on the Upper East Side of Manhattan.
But abuse of prescription stimulants can lead to depression and mood swings (from sleep deprivation), heart irregularities and acute exhaustion or psychosis during withdrawal, doctors say. Little is known about the long-term effects of abuse of stimulants among the young. Drug counselors say that for some teenagers, the pills eventually become an entry to the abuse of painkillers and sleep aids.
“Once you break the seal on using pills, or any of that stuff, it’s not scary anymore — especially when you’re getting A’s,” said the boy who snorted Adderall in the parking lot. He spoke from the couch of his drug counselor, detailing how he later became addicted to the painkiller Percocet and eventually heroin.
Paul L. Hokemeyer, a family therapist at Caron Treatment Centers in Manhattan, said: “Children have prefrontal cortexes that are not fully developed, and we’re changing the chemistry of the brain. That’s what these drugs do. It’s one thing if you have a real deficiency — the medicine is really important to those people — but not if your deficiency is not getting into Brown.”
The number of prescriptions for A.D.H.D. medications dispensed for young people ages 10 to 19 has risen 26 percent since 2007, to almost 21 million yearly, according to IMS Health, a health care information company — a number that experts estimate corresponds to more than two million individuals. But there is no reliable research on how many high school students take stimulants as a study aid. Doctors and teenagers from more than 15 schools across the nation with high academic standards estimated that the portion of students who do so ranges from 15 percent to 40 percent.
“They’re the A students, sometimes the B students, who are trying to get good grades,” said one senior at Lower Merion High School in Ardmore, a Philadelphia suburb, who said he makes hundreds of dollars a week selling prescription drugs, usually priced at $5 to $20 per pill, to classmates as young as freshmen. “They’re the quote-unquote good kids, basically.”
The trend was driven home last month to Nan Radulovic, a psychotherapist in Santa Monica, Calif. Within a few days, she said, an 11th grader, a ninth grader and an eighth grader asked for prescriptions for Adderall solely for better grades. From one girl, she recalled, it was not quite a request.
“If you don’t give me the prescription,” Dr. Radulovic said the girl told her, “I’ll just get it from kids at school.”
Image: Illegal use of Adderall is prevalent enough that many students seem to take it for granted. Courtesy of Minnesota Post / Flickr / CC / Hipsxxhearts.
- The 10,000 Year Clock>
Aside from the ubiquitous plastic grocery bag, will any human-made artifact last 10,000 years? Before you answer, let’s qualify the question by mandating that the artifact have some long-term value. That would seem to eliminate plastic bags, plastic toys embedded in fast-food meals, and DVDs of reality “stars” ripped from YouTube. What does that leave? Most human-made products consisting of metals or biodegradable components, such as paper and wood, will rust, rot or break down in 20-300 years. Even some plastics left exposed to sun and air will break down within a thousand years. Of course, buried deep in a landfill, plastic containers, styrofoam cups and throwaway diapers may remain with us for tens or hundreds of thousands of years.
Archaeological excavations show us that artifacts made of glass and ceramic would fit the bill — lasting well into the year 12012 and beyond. But in most cases we unearth only fragments of things.
But what if some ingenious humans could build something that would still be around 10,000 years from now? Better still, build something that will still function as designed 10,000 years from now. This would represent an extraordinary feat of contemporary design and engineering. And, more importantly, it would provide a powerful story for countless generations, beginning with ours.
So, enter Danny Hillis and the Clock of the Long Now (also known as the Millennium Clock or the 10,000 Year Clock). Danny Hillis is an inventor, scientist, and computer designer who pioneered the concept of massively parallel computers.
In Hillis’ own words:
Ten thousand years – the life span I hope for the clock – is about as long as the history of human technology. We have fragments of pots that old. Geologically, it’s a blink of an eye. When you start thinking about building something that lasts that long, the real problem is not decay and corrosion, or even the power source. The real problem is people. If something becomes unimportant to people, it gets scrapped for parts; if it becomes important, it turns into a symbol and must eventually be destroyed. The only way to survive over the long run is to be made of materials large and worthless, like Stonehenge and the Pyramids, or to become lost. The Dead Sea Scrolls managed to survive by remaining lost for a couple millennia. Now that they’ve been located and preserved in a museum, they’re probably doomed. I give them two centuries – tops. The fate of really old things leads me to think that the clock should be copied and hidden.
Plans call for the 200-foot-tall 10,000 Year Clock to be installed inside a mountain in remote west Texas, with a second location in remote eastern Nevada. Design and engineering work on the clock, and preparation of the Clock’s Texas home, are underway.
For more on the 10,000 Year Clock, jump to the Long Now Foundation, here.
More from Rationally Speaking:
I recently read Brian Hayes’ wonderful collection of mathematically oriented essays called Group Theory In The Bedroom, and Other Mathematical Diversions. Not surprisingly, the book contained plenty of philosophical musings too. In one of the essays, called “Clock of Ages,” Hayes describes the intricacies of clock building and he provides some interesting historical fodder.
For instance, we learn that in the sixteenth century Conrad Dasypodius, a Swiss mathematician, could have chosen to restore the old Clock of the Three Kings in Strasbourg Cathedral. Dasypodius, however, preferred to build a new clock of his own rather than maintain an old one. Over two centuries later, Jean-Baptiste Schwilgue was asked to repair the clock built by Dasypodius, but he decided to build a new and better clock which would last for 10,000 years.
Did you know that a large-scale project is underway to build another clock that will be able to run with minimal maintenance and interruption for ten millennia? It’s called The 10,000 Year Clock and its construction is sponsored by The Long Now Foundation. The 10,000 Year Clock is, however, being built for more than just its precision and durability. If the creators’ intentions are realized, then the clock will serve as a symbol to encourage long-term thinking about the needs and claims of future generations. Of course, if all goes to plan, our future descendants will be left to maintain it too. The interesting question is: will they want to?
If history is any indicator, then I think you know the answer. As Hayes puts it: “The fact is, winding and dusting and fixing somebody else’s old clock is boring. Building a brand-new clock of your own is much more fun, especially if you can pretend that it’s going to inspire awe and wonder for the ages to come. So why not have the fun now and let the future generations do the boring bit.” I think Hayes is right; it seems humans are, by nature, builders and not maintainers.
Projects like The 10,000 Year Clock are often undertaken with the noblest of environmental intentions, but the old proverb is relevant here: the road to hell is paved with good intentions. What I find troubling, then, is that much of the environmental do-goodery in the world may actually be making things worse. It’s often nothing more than a form of conspicuous consumption, which is a term coined by the economist and sociologist Thorstein Veblen. When it pertains specifically to “green” purchases, I like to call it being conspicuously environmental. Let’s use cars as an example. Obviously it depends on how the calculations are processed, but in many instances keeping and maintaining an old clunker is more environmentally friendly than is buying a new hybrid. I can’t help but think that the same must be true of building new clocks.
In his book, The Conundrum, David Owen writes: “How appealing would ‘green’ seem if it meant less innovation and fewer cool gadgets — not more?” Not very, although I suppose that was meant to be a rhetorical question. I enjoy cool gadgets as much as the next person, but it’s delusional to believe that conspicuous consumption is somehow a gift to the environment.
Using insights from evolutionary psychology and signaling theory, I think there is also another issue at play here. Buying conspicuously environmental goods, like a Prius, sends a signal to others that one cares about the environment. But if it’s truly the environment (and not signaling) that one is worried about, then surely less consumption must be better than more. The homeless person ironically has a lesser environmental impact than your average yuppie, yet he is rarely recognized as an environmental hero. Using this logic I can’t help but conclude that killing yourself might just be the most environmentally friendly act of all time (if it wasn’t blatantly obvious, this is a joke). The lesson here is that we shouldn’t confuse smug signaling with actually helping.
Image: Prototype of the 10,000 Year Clock. Courtesy of the Long Now Foundation / Science Museum of London.
- High Fructose Corn Syrup = Corn Sugar?>
Hats off to the global agro-industrial complex that feeds most of the Earth’s inhabitants. With high fructose corn syrup (HFCS) getting an increasingly bad rap for helping to expand our waistlines and catalyze our diabetes, the industry is becoming more creative.
However, it’s only the type of “creativity” that a cynic would come to expect from a faceless, trillion-dollar industry; it’s not a fresh, natural innovation. The industry wants to rename HFCS to “corn sugar”, making it sound healthier and more natural in the process.
From the New York Times:
The United States Food and Drug Administration has rejected a request from the Corn Refiners Association to change the name of high-fructose corn syrup.
The association, which represents the companies that make the syrup, had petitioned the F.D.A. in September 2010 to begin calling the much-maligned sweetener “corn sugar.” The request came on the heels of a national advertising campaign promoting the syrup as a natural ingredient made from corn.
But in a letter, Michael M. Landa, director of the Center for Food Safety and Applied Nutrition at the F.D.A., denied the petition, saying that the term “sugar” is used only for food “that is solid, dried and crystallized.”
“HFCS is an aqueous solution sweetener derived from corn after enzymatic hydrolysis of cornstarch, followed by enzymatic conversion of glucose (dextrose) to fructose,” the letter stated. “Thus, the use of the term ‘sugar’ to describe HFCS, a product that is a syrup, would not accurately identify or describe the basic nature of the food or its characterizing properties.”
In addition, the F.D.A. concluded that the term “corn sugar” has been used to describe the sweetener dextrose and therefore should not be used to describe high-fructose corn syrup. The agency also said the term “corn sugar” could pose a risk to consumers who have been advised to avoid fructose because of a hereditary fructose intolerance or fructose malabsorption.
Image: Fructose vs. D-Glucose Structural Formulae. Courtesy of Wikipedia.
- The Most Beautiful Railway Stations>
From Flavorwire:
In 1972, Pulitzer Prize-winning author, and The New York Times’ very first architecture critic, Ada Louise Huxtable observed that “nothing was more up-to-date when it was built, or is more obsolete today, than the railroad station.” That was a comment on the emerging age of the jetliner and a swanky commercial air travel industry that made the behemoth train stations of the time look like cumbersome relics of an outdated industrial era. We don’t think the judgment holds up today — at all. Like so many things that we wrote off in favor of what was seemingly more modern and efficient (ahem, vinyl records and Polaroid film), the train station is back and better than ever. So, we’re taking the time to look back at some of the greatest stations still standing.
See other beautiful stations and read the entire article after the jump.
Image: Grand Central Terminal — New York City, New York. Courtesy of Flavorwire.
- Java by the Numbers>
If you think the United States is a nation of coffee drinkers, think again. The U.S. ranks only eighth in annual java consumption per person. Way out in front is Finland. Makes one wonder if there is a correlation between coffee drinking and heavy metal music.
Infographic courtesy of Hamilton Beach.
- Human Evolution: Stalled>
It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?
From the Wall Street Journal:
If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.
I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).
More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.
Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.
Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.
Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.
So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”
Image: The “Robot Evolution”. Courtesy of STRK3.
- Reconnecting with Our Urban Selves>
Christopher Mims over at the Technology Review revisits a recent study of our social networks, both real-world and online. It’s startling to see the growth in our social isolation despite the corresponding growth in technologies that increase our ability to communicate and interact with one another. Is the suburbanization of our species to blame, and can Facebook save us?
From Technology Review:
In 2009, the Pew Internet Trust published a survey worth resurfacing for what it says about the significance of Facebook. The study was inspired by earlier research that “argued that since 1985 Americans have become more socially isolated, the size of their discussion networks has declined, and the diversity of those people with whom they discuss important matters has decreased.”
In particular, the study found that Americans have fewer close ties to those from their neighborhoods and from voluntary associations. Sociologists Miller McPherson, Lynn Smith-Lovin and Matthew Brashears suggest that new technologies, such as the internet and mobile phone, may play a role in advancing this trend.
If you read through all the results from Pew’s survey, you’ll discover two surprising things:
1. “Use of newer information and communication technologies (ICTs), such as the internet and mobile phones, is not the social change responsible for the restructuring of Americans’ core networks. We found that ownership of a mobile phone and participation in a variety of internet activities were associated with larger and more diverse core discussion networks.”
2. However, Americans on the whole are more isolated than they were in 1985. “The average size of Americans’ core discussion networks has declined since 1985; the mean network size has dropped by about one-third or a loss of approximately one confidant.” In addition, “The diversity of core discussion networks has markedly declined; discussion networks are less likely to contain non-kin – that is, people who are not relatives by blood or marriage.”
In other words, the technologies that have isolated Americans are anything but informational. It’s not hard to imagine what they are, as there’s been plenty of research on the subject. These technologies are the automobile, sprawl and suburbia. We know that neighborhoods that aren’t walkable decrease the number of our social connections and increase obesity. We know that commutes make us miserable, and that time spent in an automobile affects everything from our home life to our level of anxiety and depression.
Indirect evidence for this can be found in the demonstrated preferences of Millennials, who are opting for cell phones over automobiles and who would rather live in the urban cores their parents abandoned, ride mass transit and in all other respects physically re-integrate themselves with the sort of village life that is possible only in the most walkable portions of cities.
Meanwhile, it’s worth contemplating one of the primary factors that drove Facebook’s adoption by (soon) 1 billion people: Loneliness. Americans have less support than ever — one in eight in the Pew survey reported having no “discussion confidants.”
It’s clear that for all our fears about the ability of our mobile devices to isolate us in public, the primary way they’re actually used is for connection.
Image: Typical suburban landscape. Courtesy of Treehugger.
- Heavy Metal Density>
Heavy metal in the musical sense, not as in elements such as iron or manganese, is really popular in Finland and Iceland. It even pops up in Iran and Saudi Arabia.
Frank Jacobs over at Strange Maps tells us more.
This map reflects the number of heavy metal bands per 100,000 inhabitants for each country in the world. It codes the result on a colour temperature scale, with blue indicating low occurrence, and red high occurrence. The data for this map is taken from the extensive Encyclopaedia Metallum, an online archive of metal music that lists bands per country, and provides some background by listing their subgenre (Progressive Death Metal, Symphonic Gothic Metal, Groove Metal, etc).
Even if you barely know your Def Leppard from your Deep Purple, you won’t be surprised by the obvious point of this map: Scandinavia is the world capital of heavy metal music. Leaders of the pack are Finland and Sweden, coloured with the hottest shade of red. With 2,825 metal bands listed in the Encyclopaedia Metallum, the figure for Finland works out to 54.3 bands per 100,000 Finns (for a total of 5.2 million inhabitants). Second is Sweden, with a whopping 3,398 band entries. For 9.1 million Swedes, that amounts to 37.3 metal bands per 100,000 inhabitants.
The next-hottest shade of red is coloured in by Norway and Iceland. The Icelandic situation is interesting: with only 71 bands listed, the country seems not particularly metal-oriented. But the total population of the North Atlantic island is a mere 313,000, which produces a result of 22.6 metal bands per 100,000 inhabitants. That’s almost double, relatively speaking, Denmark’s score of 12.9 (708 metal bands for 5.5 million Danes).
The following shades of colour, from dark orange to light yellow, are almost all found in North America, Europe and Australasia. A notable addition to this list of usual suspects are Israel, and the three countries of Latin America’s Southern Cone: Chile, Argentina and Uruguay.
Some interesting variations in Europe: Portugal is much darker – i.e. much more metal-oriented – than its Iberian neighbour Spain, and Greece is a solid southern outpost of metal on an otherwise wishy-washy Balkan Peninsula.
On the other side of the scale, light blue indicates the worst – or at least loneliest – places to be a metal fan: Papua New Guinea, North Korea, Cambodia, Afghanistan, Yemen, and most of Africa outside its northern and southern fringe. According to the Encyclopaedia Metallum, there isn’t a single metal band in any of those countries.
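The per-100,000 figures quoted above are simple per-capita arithmetic: bands divided by population, scaled to 100,000. A minimal sketch in Python (band counts and populations copied from the text; the `bands_per_100k` helper name is our own, and note the article appears to truncate Iceland’s 22.68… down to 22.6 rather than round it):

```python
# Recompute the band-density figures quoted above as a sanity check.
# Data (band count, population) is copied from the Strange Maps excerpt.

def bands_per_100k(bands: int, population: int) -> float:
    """Metal bands per 100,000 inhabitants, rounded to one decimal place."""
    return round(bands / population * 100_000, 1)

countries = {
    "Finland": (2_825, 5_200_000),
    "Sweden": (3_398, 9_100_000),
    "Iceland": (71, 313_000),
    "Denmark": (708, 5_500_000),
}

for name, (bands, pop) in countries.items():
    print(f"{name}: {bands_per_100k(bands, pop)} bands per 100,000")
```

Run as-is, this reproduces the article’s figures for Finland (54.3), Sweden (37.3) and Denmark (12.9).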
- Everything You Ever Wanted to Know About Plastic>
Yes, it’s like a monstrous other-worldly being that will eventually eat you; horrifying facts about plastic that you wish you had never known. This sobering infographic courtesy of ReuseThisBag.com, created by Obizmedia.
- Our Children: Independently Dependent>
Why can’t our kids tie their own shoes?
Are we raising our children to be self-obsessed, attention-seeking, helpless and dependent groupthinkers? And, why may the phenomenon of “family time” in the U.S. be a key culprit?
These are some of the questions raised by anthropologist Elinor Ochs and her colleagues. Over the last decade they have studied family life across the globe, from the Amazon region to Samoa to middle America.
From the Wall Street Journal:
Why do American children depend on their parents to do things for them that they are capable of doing for themselves? How do U.S. working parents’ views of “family time” affect their stress levels? These are just two of the questions that researchers at UCLA’s Center on Everyday Lives of Families, or CELF, are trying to answer in their work.
By studying families at home—or, as the scientists say, “in vivo”—rather than in a lab, they hope to better grasp how families with two working parents balance child care, household duties and career, and how this balance affects their health and well-being.
The center, which also includes sociologists, psychologists and archeologists, wants to understand “what the middle class thought, felt and what they did,” says Dr. Ochs. The researchers plan to publish two books this year on their work, and say they hope the findings may help families become closer and healthier.
Ten years ago, the UCLA team recorded video for a week of nearly every moment at home in the lives of 32 Southern California families. They have been picking apart the footage ever since, scrutinizing behavior, comments and even their refrigerators’ contents for clues.
The families, recruited primarily through ads, owned their own homes and had two or three children, at least one of whom was between 7 and 12 years old. About a third of the families had at least one nonwhite member, and two were headed by same-sex couples. Each family was filmed by two cameras and watched all day by at least three observers.
Among the findings: the families had a very child-centered focus, which may help explain the “dependency dilemma” seen among American middle-class families, says Dr. Ochs. Parents intend to develop their children’s independence, yet raise them to be relatively dependent, even when the kids have the skills to act on their own, she says.
In addition, these parents tended to have a very specific, idealized way of thinking about family time, says Tami Kremer-Sadlik, a former CELF research director who is now the director of programs for the division of social sciences at UCLA. These ideals appeared to generate guilt when work intruded on family life, and left parents feeling pressured to create perfect time together. The researchers noted that the presence of the observers may have altered some of the families’ behavior.
How kids develop moral responsibility is an area of focus for the researchers. Dr. Ochs, who began her career in far-off regions of the world studying the concept of “baby talk,” noticed that American children seemed relatively helpless compared with those in other cultures she and colleagues had observed.
In those cultures, young children were expected to contribute substantially to the community, says Dr. Ochs. Children in Samoa serve food to their elders, waiting patiently in front of them before they eat, as shown in one video snippet. Another video clip shows a girl around 5 years of age in Peru’s Amazon region climbing a tall tree to harvest papaya, and helping haul logs thicker than her leg to stoke a fire.
By contrast, the U.S. videos showed Los Angeles parents focusing more on the children, using simplified talk with them, doing most of the housework and intervening quickly when the kids had trouble completing a task.
In 22 of 30 families, children frequently ignored or resisted appeals to help, according to a study published in the journal Ethos in 2009. In the remaining eight families, the children weren’t asked to do much. In some cases, the children routinely asked the parents to do tasks, like getting them silverware. “How am I supposed to cut my food?” Dr. Ochs recalls one girl asking her parents.
Asking children to do a task led to much negotiation, and when parents asked, it often sounded as if they were asking a favor, not making a demand, researchers said. Parents interviewed about their behavior said it was often too much trouble to ask.
For instance, one exchange caught on video shows an 8-year-old named Ben sprawled out on a couch near the front door, lifting his white, high-top sneaker to his father, the shoe laced. “Dad, untie my shoe,” he pleads. His father says Ben needs to say “please.”
“Please untie my shoe,” says the child in an identical tone as before. After his father hands the shoe back to him, Ben says, “Please put my shoe on and tie it,” and his father obliges.
Read the entire article after the jump.
Image courtesy of Kyle T. Webster / Wall Street Journal.
- Skyscrapers A La Mode>
Since 2006 Evolo architecture magazine has run a competition inviting architects to bring life to their most fantastic skyscraper designs. The finalists of the 2012 competition all presented stunning ideas, topped by the winner, Himalaya Water Tower, from Zhi Zheng, Hongchuan Zhao and Dongbai Song of China.
From Evolo:
Housed within 55,000 glaciers in the Himalaya Mountains sits 40 percent of the world’s fresh water. The massive ice sheets are melting at a faster-than-ever pace due to climate change, posing possibly dire consequences for the continent of Asia and the entire world, and especially for the villages and cities that sit along the seven rivers fed by the Himalayas’ runoff, which respond with erratic flooding or drought.
The “Himalaya Water Tower” is a skyscraper located high in the mountain range that serves to store water and helps regulate its dispersal to the land below as the mountains’ natural supplies dry up. The skyscraper, which can be replicated en masse, will collect water in the rainy season, purify it, freeze it into ice and store it for future use. The water distribution schedule will evolve with the needs of residents below; while it can be used to help in times of current drought, it’s also meant to store plentiful water for future generations.
Follow the other notable finalists at Evolo magazine after the jump.
- Best Days to Avoid Car Crash - Tuesday and Wednesday>
The cool infographic below, courtesy of FlowingData, shows us at a glance that Saturday is the most likely day of the week to be involved in a (fatal) car crash. So, if you’re cautious, stick to driving in the middle of the week.
The data is sourced from the National Highway Traffic Safety Administration.
- Engineering the Ultimate Solar Power Collector: The Leaf>
From Cosmic Log:
Researchers have been trying for decades to improve upon Mother Nature’s favorite solar-power trick — photosynthesis — but now they finally think they see the sunlight at the end of the tunnel.
“We now understand photosynthesis much better than we did 20 years ago,” said Richard Cogdell, a botanist at the University of Glasgow who has been doing research on bacterial photosynthesis for more than 30 years. He and three colleagues discussed their efforts to tweak the process that powers the world’s plant life today in Vancouver, Canada, during the annual meeting of the American Association for the Advancement of Science.
The researchers are taking different approaches to the challenge, but what they have in common is their search for ways to get something extra out of the biochemical process that uses sunlight to turn carbon dioxide and water into sugar and oxygen. “You can really view photosynthesis as an assembly line with about 168 steps,” said Steve Long, head of the University of Illinois’ Photosynthesis and Atmospheric Change Laboratory.
Revving up Rubisco
Howard Griffiths, a plant physiologist at the University of Cambridge, just wants to make improvements in one section of that assembly line. His research focuses on ways to get more power out of the part of the process driven by an enzyme called Rubisco. He said he’s trying to do what many auto mechanics have done to make their engines run more efficiently: “You turbocharge it.”
Some plants, such as sugar cane and corn, already have a turbocharged Rubisco engine, thanks to a molecular pathway known as C4. Geneticists believe the C4 pathway started playing a significant role in plant physiology in just the past 10 million years or so. Now Griffiths is looking into strategies to add the C4 turbocharger to rice, which ranks among the world’s most widely planted staple crops.
The new cellular machinery might be packaged in a micro-compartment that operates within the plant cell. That’s the way biochemical turbochargers work in algae and cyanobacteria. Griffiths and his colleagues are looking at ways to create similar micro-compartments for higher plants. The payoff would come in the form of more efficient carbon dioxide conversion, with higher crop productivity as a result. “For a given amount of carbon gain, the plant uses less water,” Griffiths said.
Image courtesy of Kumaravel via Flickr, Creative Commons.
- Great Architecture>
Jonathan Glancey, architecture critic at the Guardian in the UK for the last fifteen years, is moving on to greener pastures, and presumably new buildings. In his final article for the newspaper he reflects on some buildings that have engendered shock and/or awe.
From the Guardian:
Fifteen years is not a long time in architecture. It is the slowest as well as the most political of the arts. This much was clear when I joined the Guardian as its architecture and design correspondent, from the Independent, in 1997. I thought the Millennium Experience (the talk of the day) decidedly dimwitted and said so in no uncertain terms; it lacked a big idea and anything like the imagination of, say, the Great Exhibition of 1851, or the Festival of Britain in 1951.
For the macho New Labour government, newly in office and all football and testosterone, criticism of this cherished project was tantamount to sedition. They lashed out like angry cats; there were complaints from 10 Downing Street’s press office about negative coverage of the Dome. Hard to believe then, much harder now. That year’s London Model Engineer Exhibition was far more exciting; here was an enthusiastic celebration of the making of things, at a time when manufacturing was becoming increasingly looked down on.
New Labour, meanwhile, promised it would do things for architecture and urban design that Roman emperors and Renaissance princes could only have dreamed of. The north Greenwich peninsula was to become a new Florence, with trams and affordable housing. As would the Thames Gateway, that Siberia stretching – marshy, mysterious, semi-industrial – to Southend Pier and the sea. To a new, fast-breeding generation of quangocrats this land looked like a blank space on the London A-Z, ready to fill with “environmentally friendly” development. Precious little has happened there since, save for some below-standard housing, Boris Johnson’s proposal for an estuary airport and – a very good thing – an RSPB visitors’ centre designed by Van Heyningen and Haward near Purfleet on the Rainham marshes.
Labour’s promises turned out to be largely tosh, of course. Architecture and urban planning are usually best when neither hyped nor hurried. Grand plans grow best over time, as serendipity and common sense soften hard edges. In 2002, Tony Blair decided to invade Iraq – not a decision that, on the face of it, has a lot to do with architecture; but one of the articles I am most proud to have written for this paper was the story of a journey I made from one end of Iraq to the other, with Stuart Freedman, an unflappable press photographer. At the time, the Blair government was denying there would be a war, yet every Iraqi we spoke to knew the bombs were about to fall. It was my credentials as a critic and architectural historian that got me my Iraqi visa. Foreign correspondents, including several I met in Baghdad’s al-Rashid hotel, were understandably finding the terrain hard-going. But handwritten in my passport was an instruction saying: “Give this man every assistance.”
We travelled to Babylon to see Saddam’s reconstruction of the fabled walled city, and to Ur, Abraham’s home, and its daunting ziggurat and then – wonder of wonders – into the forbidden southern deserts to Eridu. Here I walked on the sand-covered remains of one of the world’s first cities. This, if anywhere, is where architecture was born. At Samarra, in northern Iraq, I climbed to the top of the wondrous spiral minaret of what was once the town’s Great Mosque. How the sun shone that day. When I got to the top, there was nothing to hang on to. I was confronted by the blazing blue sky and its gods, or God; the architecture itself was all but invisible. Saddam’s soldiers, charming recruits in starched and frayed uniforms drilled by a tough and paternal sergeant, led me through the country, through miles of unexploded war material piled high along sandy tracks, and across the paths of Shia militia.
Ten years on, Zaha Hadid, a Baghdad-born architect who has risen to stellar prominence since 2002, has won her first Iraqi commission, a new headquarters for the Iraqi National Bank in Baghdad. With luck, other inspired architects will get to work in Iraq, too, reconnecting the country with its former role as a crucible of great buildings and memorable cities.
Architecture is also the stuff of construction, engineering, maths and science. Of philosophy, sociology, Le Corbusier and who knows what else. It is also, I can’t help feeling, harder to create great buildings now than it was in the past. When Eridu or the palaces and piazzas of Renaissance Italy were shaped, architecture was the most expensive and prestigious of all cultural endeavours. Today we spread our wealth more thinly, spending ever more on disposable consumer junk, building more roads to serve ever more grim private housing estates, unsustainable supermarkets and distribution depots (and container ports and their giant ships), and the landfill sites we appear to need to shore up our insatiable, throwaway culture. Architecture has been in danger, like our indefensibly mean and horrid modern housing, of becoming little more than a commodity. Government talk of building a rash of “eco-towns” proved not just unpopular but more hot air. A policy initiative too far, the idea has effectively been dropped.
And, yet, despite all these challenges, the art form survives and even thrives. I have been moved in different ways by the magnificent Neues Museum, Berlin, a 10-year project led by David Chipperfield; by the elemental European Southern Observatory Hotel by Auer + Weber, for scientists in Chile’s Atacama Desert; and by Charles Barclay’s timber Kielder Observatory, where I spent a night in 2008 watching stars hanging above the Northumbrian forest.
I have been enchanted by the 2002 Serpentine Pavilion, a glimpse into a possible future by Toyo Ito and Cecil Balmond; by the inspiring reinvention of St Pancras station by Alastair Lansley and fellow architects; and by Blur, a truly sensational pavilion by Diller + Scofidio set on a steel jetty overlooking Lake Neuchatel at Yverdon-les-Bains. A part of Switzerland’s Expo 2002, this cat’s cradle of tensile steel was a machine for making clouds. You walked through the clouds as they appeared and, when conditions were right, watched them float away over the lake.
Read the entire article here.
Image: The spiral minaret of the Great Mosque of Samarra, Iraq. Courtesy Reuters / Guardian.
- Suburbia as Mass Murderer>
Jane Brody over at the Well blog makes a compelling case for the dismantling of suburbia. After all, these so-called “built environments” where we live, work, eat, play and raise our children, are an increasingly serious health hazard.
From the New York Times:
Developers in the last half-century called it progress when they built homes and shopping malls far from city centers throughout the country, sounding the death knell for many downtowns. But now an alarmed cadre of public health experts say these expanded metropolitan areas have had a far more serious impact on the people who live there by creating vehicle-dependent environments that foster obesity, poor health, social isolation, excessive stress and depression.
As a result, these experts say, our “built environment” — where we live, work, play and shop — has become a leading cause of disability and death in the 21st century. Physical activity has been disappearing from the lives of young and old, and many communities are virtual “food deserts,” serviced only by convenience stores that stock nutrient-poor prepared foods and drinks.
According to Dr. Richard J. Jackson, professor and chairman of environmental health sciences at the University of California, Los Angeles, unless changes are made soon in the way many of our neighborhoods are constructed, people in the current generation (born since 1980) will be the first in America to live shorter lives than their parents do.
Although a decade ago urban planning was all but missing from public health concerns, a sea change has occurred. At a meeting of the American Public Health Association in October, Dr. Jackson said, there were about 300 presentations on how the built environment inhibits or fosters the ability to be physically active and get healthy food.
In a healthy environment, he said, “people who are young, elderly, sick or poor can meet their life needs without getting in a car,” which means creating places where it is safe and enjoyable to walk, bike, take in nature and socialize.
“People who walk more weigh less and live longer,” Dr. Jackson said. “People who are fit live longer. People who have friends and remain socially active live longer. We don’t need to prove all of this,” despite the plethora of research reports demonstrating the ill effects of current community structures.
Image courtesy of Duke University.
- See the Aurora, then Die>
One item that features prominently on so-called “things-to-do-before-you-die” lists is seeing the Aurora Borealis, or Northern Lights.
The recent surge in sunspot activity and solar flares has caused a corresponding uptick in geo-magnetic storms here on Earth. The resulting Aurorae have been nothing short of spectacular. More images here, courtesy of Smithsonian magazine.
- Driving Across the U.S. at 146,700 Miles per Hour>
Through the miracle of time-lapse photography we bring you a journey of 12,225 miles across 32 States in 55 days compressed into 5 minutes. Brian Defrees snapped an image every five seconds from his car-mounted camera during the adventure, which began and ended in New York, via Washington D.C., Florida, Los Angeles and Washington State, and many points in between.
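The headline speed is simply the real trip distance compressed into the video’s five-minute runtime. A quick check, using only the figures quoted above:

```python
# Effective "playback speed" of the time-lapse: a 12,225-mile route
# compressed into a 5-minute video.
distance_miles = 12_225
video_minutes = 5

mph = distance_miles * 60 / video_minutes  # miles per hour
print(mph)  # 146700.0
```

The arithmetic matches the title exactly.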
- Oil: Where it Comes From and Where it Goes>
Compiled from recent U.S. government and OPEC (Organization of Petroleum Exporting Countries) statistics, the infographic below highlights the global thirst for oil.
From Daily Infographic:
- The Corporate One Percent of the One Percent>
With the Occupy Wall Street movement and related protests continuing to gather steam, much recent media and public attention has focused on the 1 percent versus the remaining 99 percent of the population. By most accepted estimates, 1 percent of households control around 40 percent of the global wealth, and there is a vast discrepancy between the top and bottom of the economic spectrum. While these statistics are telling, a related analysis of corporate wealth, highlighted in the New Scientist, shows a much tighter concentration among a very select group of transnational corporations (TNCs).
From New Scientist:
An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.
The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.
The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).
“Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”
Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.
The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.
The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.
When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
Image courtesy of New Scientist / PLoS One. The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue.
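A group of companies whose ownership is held entirely within the group is, in graph terms, a strongly connected component of the ownership network: every member can reach every other by following shareholding ties. Here is a minimal sketch of that idea on a toy graph with made-up company names; this is an illustration of the concept, not the Zurich team’s actual data or method:

```python
from collections import defaultdict

# Toy directed ownership graph: an edge (A, B) means A holds shares in B.
# All company names are hypothetical.
edges = [("BankA", "BankB"), ("BankB", "BankC"), ("BankC", "BankA"),  # tight core
         ("BankA", "WidgetCo"), ("BankC", "GadgetCo")]              # periphery

graph = defaultdict(set)
for owner, owned in edges:
    graph[owner].add(owned)

def reachable(start):
    """All companies reachable from `start` via ownership ties."""
    seen, stack = set(), [start]
    while stack:
        node = stack.pop()
        if node not in seen:
            seen.add(node)
            stack.extend(graph[node])
    return seen

# Companies that can each reach one another form a strongly connected
# component -- the graph-theoretic shape of a "super-entity".
nodes = {n for e in edges for n in e}
core = {a for a in nodes
        if sum(1 for b in nodes
               if a in reachable(b) and b in reachable(a)) > 1}
print(sorted(core))  # ['BankA', 'BankB', 'BankC']
```

The peripheral firms are owned by the core but own nothing back, so they drop out, just as the study distinguishes the 147-company core from the wider network it controls.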
- The Myth of Bottled Water>
In 2010 the world spent around $50 billion on bottled water, with over a third accounted for by the United States alone. During this period the United States House of Representatives spent $860,000 on bottled water for its 435 members. This is close to $2,000 per person per year. (Figures according to Corporate Accountability International.)
This is despite the fact that on average bottled water costs around 1,900 times more than its cheaper, less glamorous sibling — tap water. Bottled water has become a truly big business even though science shows no discernible benefit of bottled water over that from the faucet. In fact, around 40 percent of bottled water comes from municipal water supplies anyway.
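The House per-member figure quoted above is easy to verify from the spending and headcount as given:

```python
# House bottled-water spend per member, from the figures quoted above.
total_spend = 860_000  # dollars per year
members = 435

per_member = total_spend / members
print(round(per_member, 2))  # 1977.01
```

Just shy of $2,000 a head, as stated.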
In 2007 Charles Fishman wrote a ground-breaking cover story on the bottled water industry for Fast Company. We excerpt part of the article, Message in a Bottle, below.
By Charles Fishman:
The largest bottled-water factory in North America is located on the outskirts of Hollis, Maine. In the back of the plant stretches the staging area for finished product: 24 million bottles of Poland Spring water. As far as the eye can see, there are double-stacked pallets packed with half-pint bottles, half-liters, liters, “Aquapods” for school lunches, and 2.5-gallon jugs for the refrigerator.
Really, it is a lake of Poland Spring water, conveniently celled off in plastic, extending across 6 acres, 8 feet high. A week ago, the lake was still underground; within five days, it will all be gone, to supermarkets and convenience stores across the Northeast, replaced by another lake’s worth of bottles.
Looking at the piles of water, you can have only one thought: Americans sure are thirsty.
Bottled water has become the indispensable prop in our lives and our culture. It starts the day in lunch boxes; it goes to every meeting, lecture hall, and soccer match; it’s in our cubicles at work; in the cup holder of the treadmill at the gym; and it’s rattling around half-finished on the floor of every minivan in America. Fiji Water shows up on the ABC show Brothers & Sisters; Poland Spring cameos routinely on NBC’s The Office. Every hotel room offers bottled water for sale, alongside the increasingly ignored ice bucket and drinking glasses. At Whole Foods, the upscale emporium of the organic and exotic, bottled water is the number-one item by units sold.
Thirty years ago, bottled water barely existed as a business in the United States. Last year, we spent more on Poland Spring, Fiji Water, Evian, Aquafina, and Dasani than we spent on iPods or movie tickets–$15 billion. It will be $16 billion this year.
Bottled water is the food phenomenon of our times. We–a generation raised on tap water and water fountains–drink a billion bottles of water a week, and we’re raising a generation that views tap water with disdain and water fountains with suspicion. We’ve come to pay good money–two or three or four times the cost of gasoline–for a product we have always gotten, and can still get, for free, from taps in our homes.
When we buy a bottle of water, what we’re often buying is the bottle itself, as much as the water. We’re buying the convenience–a bottle at the 7-Eleven isn’t the same product as tap water, any more than a cup of coffee at Starbucks is the same as a cup of coffee from the Krups machine on your kitchen counter. And we’re buying the artful story the water companies tell us about the water: where it comes from, how healthy it is, what it says about us. Surely among the choices we can make, bottled water isn’t just good, it’s positively virtuous.
Except for this: Bottled water is often simply an indulgence, and despite the stories we tell ourselves, it is not a benign indulgence. We’re moving 1 billion bottles of water around a week in ships, trains, and trucks in the United States alone. That’s a weekly convoy equivalent to 37,800 18-wheelers delivering water. (Water weighs 8 1/3 pounds a gallon. It’s so heavy you can’t fill an 18-wheeler with bottled water–you have to leave empty space.)
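Fishman’s convoy figure can be roughly sanity-checked from the billion-bottle number. Assuming every bottle is a half-liter (an assumption for illustration; actual sizes vary from half-pints to 2.5-gallon jugs), the weekly load works out to:

```python
# Rough sanity check on the weekly convoy figure, assuming half-liter
# bottles (an assumption; the article describes many bottle sizes).
bottles_per_week = 1_000_000_000
liters_per_bottle = 0.5
liters_per_gallon = 3.785
lbs_per_gallon = 8 + 1/3   # "8 1/3 pounds a gallon"
trucks = 37_800

gallons = bottles_per_week * liters_per_bottle / liters_per_gallon
per_truck_lbs = gallons * lbs_per_gallon / trucks
print(round(per_truck_lbs))  # ~29,000 lbs of water per 18-wheeler
```

Under that assumed bottle size, each truck carries on the order of 29,000 pounds of water, well within a typical semi’s payload, so the figures hang together.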
Meanwhile, one out of six people in the world has no dependable, safe drinking water. The global economy has contrived to deny the most fundamental element of life to 1 billion people, while delivering to us an array of water “varieties” from around the globe, not one of which we actually need. That tension is only complicated by the fact that if we suddenly decided not to purchase the lake of Poland Spring water in Hollis, Maine, none of that water would find its way to people who really are thirsty.
Please read the entire article here.
Image courtesy of Wikipedia.
- Book Review: The Big Thirst. Charles Fishman>
Charles Fishman has a fascinating new book entitled The Big Thirst: The Secret Life and Turbulent Future of Water. In it Fishman examines the origins of water on our planet and postulates an all too probable future where water becomes an increasingly limited and precious resource.
A brief excerpt from a recent interview, courtesy of NPR:
For most of us, even the most basic questions about water turn out to be stumpers.
Where did the water on Earth come from?
Is water still being created or added somehow?
How old is the water coming out of the kitchen faucet?
For that matter, how did the water get to the kitchen faucet?
And when we flush, where does the water in the toilet actually go?
The things we think we know about water — things we might have learned in school — often turn out to be myths.
We think of Earth as a watery planet, indeed, we call it the Blue Planet; but for all of water’s power in shaping our world, Earth turns out to be surprisingly dry. A little water goes a long way.
We think of space as not just cold and dark and empty, but as barren of water. In fact, space is pretty wet. Cosmic water is quite common.
At the most personal level, there is a bit of bad news. Not only don’t you need to drink eight glasses of water every day, you cannot in any way make your complexion more youthful by drinking water. Your body’s water-balance mechanisms are tuned with the precision of a digital chemistry lab, and you cannot possibly “hydrate” your skin from the inside by drinking an extra bottle or two of Perrier. You just end up with pee sourced in France.
In short, we know nothing of the life of water — nothing of the life of the water inside us, around us, or beyond us. But it’s a great story — captivating and urgent, surprising and funny and haunting. And if we’re going to master our relationship to water in the next few decades — really, if we’re going to remaster our relationship to water — we need to understand the life of water itself.
Read more of this article and Charles Fishman’s interview with NPR here.
- The Climate Spin Cycle>
There’s something to be said for a visual aid that puts a complex conversation about simple ideas into perspective. So, here we have a high-level flow chart that characterizes one of the most important debates of our time — climate change. Whether you are for or against the notion or the science, or merely perplexed by the hyperbole inside the “echo chamber,” there is no denying that this debate will remain with us for quite some time.
Chart courtesy of Riley E. Dunlap and Aaron M. McCright, “Organized Climate-Change Denial,” in J. S. Dryzek, R. B. Norgaard and D. Schlosberg (eds.), Oxford Handbook of Climate Change and Society. New York: Oxford University Press, 2011.
- The Greenest Way To Travel>
A simplistic but nonetheless useful infographic below highlights the comparative energy footprints of our most common means of transportation. Can’t beat that bicycle.
From One Block Off the Grid:
- A Medical Metaphor for Climate Risk>
While scientific evidence of climate change continues to mount and an increasing number of studies point causal fingers at ourselves, there is perhaps another way to visualize the risk of inaction or over-reaction. So, since most people can leave ideology aside when it comes to their own health, a medical metaphor, courtesy of Andrew Revkin over at Dot Earth, may be of use to broaden acceptance of the message.
From the New York Times:
Paul C. Stern, the director of the National Research Council committee on the human dimensions of global change, has been involved in a decades-long string of studies of behavior, climate change and energy choices.
This is an arena that is often attacked by foes of cuts in greenhouse gases, who see signs of mind control and propaganda. Stern says that has nothing to do with his approach, as he made clear in “Contributions of Psychology to Limiting Climate Change,” a paper that was part of a special issue of the journal American Psychologist on climate change and behavior:
Psychological contributions to limiting climate change will come not from trying to change people’s attitudes, but by helping to make low-carbon technologies more attractive and user-friendly, economic incentives more transparent and easier to use, and information more actionable and relevant to the people who need it.
The special issue of the journal builds on a 2009 report on climate and behavior from the American Psychological Association that was covered here. Stern has now offered a reaction to the discussion last week of Princeton researcher Robert Socolow’s call for a fresh approach to climate policy that acknowledges “the news about climate change is unwelcome, that today’s climate science is incomplete, and that every ‘solution’ carries risk.” Stern’s response, centered on a medical metaphor (not the first), is worth posting as a “Your Dot” contribution. You can find my reaction to his idea below. Here’s Stern’s piece:
I agree with Robert Socolow that scientists could do better at encouraging a high quality of discussion about climate change.
But providing better technical descriptions will not help most people because they do not follow that level of detail. Psychological research shows that people often use simple, familiar mental models as analogies for complex phenomena. It will help people think through climate choices to have a mental model that is familiar and evocative and that also neatly encapsulates Socolow’s points that the news is unwelcome, that science is incomplete, and that some solutions are dangerous. There is such a model.
Too many people think of climate science as an exact science like astronomy that can make highly confident predictions, such as about lunar eclipses. That model misrepresents the science, does poorly at making Socolow’s points, and has provided an opening for commentators and bloggers seeking to use any scientific disagreement to discredit the whole body of knowledge.
A mental model from medical science might work better. In the analogy, the planet is a patient suspected of having a serious, progressive disease (anthropogenic climate change). The symptoms are not obvious, just as they are not with diabetes or hypertension, but the disease may nevertheless be serious. Humans, as guardians of the planet, must decide what to do. Scientists are in the role of physician. The guardians have been asking the physicians about the diagnosis (is this disease present?), the nature of the disease, its prognosis if untreated, and the treatment options, including possible side effects. The medical analogy helps clarify the kinds of errors that are possible and can help people better appreciate how science can help and think through policy choices.
Diagnosis. A physician must be careful to avoid two errors: misdiagnosing the patient with a dread disease that is not present, and misdiagnosing a seriously ill patient as healthy. To avoid these types of error, physicians often run diagnostic tests or observe the patient over a period of time before recommending a course of treatment. Scientists have been doing this with Earth’s climate at least since 1959, when strong signs of illness were reported from observations in Hawaii.
Scientists now have high confidence that the patient has the disease. We know the causes: fossil fuel consumption, certain land cover changes, and a few other physical processes. We know that the disease produces a complex syndrome of symptoms involving change in many planetary systems (temperature, precipitation, sea level and acidity balance, ecological regimes, etc.). The patient is showing more and more of the syndrome, and although we cannot be sure that each particular symptom is due to climate change rather than some other cause, the combined evidence justifies strong confidence that the syndrome is present.
Prognosis. Fundamental scientific principles tell us that the disease is progressive and very hard to reverse. Observations tell us that the processes that cause it have been increasing, as have the symptoms. Without treatment, they will get worse. However, because this is an extremely rare disease (in fact, the first known case), there is uncertainty about how fast it will progress. The prognosis could be catastrophic, but we cannot assign a firm probability to the worst outcomes, and we are not even sure what the most likely outcome is. We want to avoid either seriously underestimating or overestimating the seriousness of the prognosis.
Treatment. We want treatments that improve the patient’s chances at low cost and with limited adverse side effects and we want to avoid “cures” that might be worse than the disease. We want to consider the chances of improvement for each treatment, and its side effects, in addition to the untreated prognosis. We want to avoid the dangers both of under-treatment and of side effects. We know that some treatments (the ones limiting climate change) get at the causes and could alleviate all the symptoms if taken soon enough. But reducing the use of fossil fuels quickly could be painful. Other treatments, called adaptations, offer only symptomatic relief. These make sense because even with strong medicine for limiting climate change, the disease will get worse before it gets better.
Choices. There are no risk-free choices. We know that the longer treatment is postponed, the more painful it will be, and the worse the prognosis. We can also use an iterative treatment approach (as Socolow proposed), starting some treatments and monitoring their effects and side effects before raising the dose. People will disagree about the right course of treatment, but thinking about the choices in this way might give the disagreements the appropriate focus.
Read more here.
Image courtesy of Stephen Wilkes for The New York Times.
- London's Other River>
You will have heard of the River Thames, the famous swathe of grey that cuts a watery path through London. You may even have heard of several of London’s prominent canals, such as the Grand Union Canal and Regent’s Canal. But, you probably will not have heard of the mysterious River Fleet that meanders through eerie tunnels beneath the city.
The Fleet and its Victorian tunnels are available for exploration, but are not for the faint of heart or sensitive of nose.
For more stunning subterranean images follow the full article here. Images courtesy of Environmental Graffiti.
- Sustainable Living From Your Backyard>
Dreaming of self-sufficiency? The infographic below shows that an average U.S. household would need around 2 acres of outdoor space for the ultimate sustainable backyard. From One Block Off the Grid:
- Data, data, data: It's Everywhere>
Cities are one of the most remarkable and peculiar inventions of our species. They provide billions of people with a framework for food, shelter and security. Increasingly, cities are becoming hubs in a vast data network where public officials and citizens mine and leverage vast amounts of information. Krystal D’Costa for Scientific American:
Once upon a time there was a family that lived in homes raised on platforms in the sky. They had cars that flew and sorta drove themselves. Their sidewalks carried them to where they needed to go. Video conferencing was the norm, as were appliances which were mostly automated. And they had a robot that cleaned and dispensed sage advice.
I was always a huge fan of the Jetsons. The family dynamics I could do without—Hey, Jane, you clearly had outside interests. You totally could have pursued them, and rocked at it too!—but they were a social reflection of the times even while set in the future, so that is what it is. But their lives were a technological marvel! They could travel by tube, electronic arms dressed them (at the push of the button), and Rosie herself was astounding. If it rained, the Superintendent could move their complex to a higher altitude to enjoy the sunshine! Though it’s a little terrifying to think that Mr. Spacely could pop up on video chat at any time. Think about your boss having that sort of access. Scary, right?
The year 2062 used to seem impossibly far away. But as the setting for the space-age family’s adventures looms on the horizon, even the tech-expectant Jetsons would have to agree that our worlds are perhaps closer than we realize. The moving sidewalks and push button technology (apps, anyone?) have been realized, we’re developing cars that can drive themselves, and we’re on our way to building more Rosie-like AI. Heck, we’re even testing the limits of personal flight. No joke. We’re even working to build a smarter electrical grid, one that would automatically adjust home temperatures and more accurately measure usage.
Sure, we have a ways to go just yet, but we’re more than peering over the edge. We’ve taken the first big step in revolutionizing our management of data.
The September special issue of Scientific American focuses on the strengths of urban centers. Often disparaged for congestion, pollution, and perceived apathy, cities have a history of being vilified. And yet, they’re also seats of innovation. The Social Nexus explores the potential waiting to be unleashed by harnessing data.
If there’s one thing cities have an abundance of, it’s data. Number of riders on the subway, parking tickets given in a certain neighborhood, number of street fairs, number of parking facilities, broken parking meters—if you can imagine it, chances are the City has the data available, and it’s now open for you to review, study, compare, and shape, so that you can help build a city that’s responsive to your needs. Image courtesy of Wikipedia / Creative Commons.
- The Right of Not Turning Left>
In 2007 UPS made headlines by declaring left-hand turns undesirable for its army of delivery truck drivers. Of course, we left-handers have always known that our left or “sinister” side is branded as unlucky or evil; Chinese culture regards left-handedness as improper as well.
UPS had other motives for pooh-poohing left-hand turns. For a company that runs more than 95,000 big brown delivery trucks, optimizing delivery routes could result in tremendous savings. In fact, careful research showed that the company could shorten its annual delivery routes by 28.5 million miles, save around 3 million gallons of fuel and reduce CO2 emissions by over 30,000 metric tons. And eliminating or reducing left-hand turns would be safer as well: of the 2.4 million crashes at intersections in the United States in 2007, most involved left-hand turns, according to the U.S. Federal Highway Administration.
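The three UPS figures above hang together arithmetically. A minimal sanity check, where the diesel emission factor of roughly 10.2 kg CO2 per gallon is my assumption (an EPA-style figure), not a number from the article:

```python
# Back-of-the-envelope check of the UPS figures quoted above.
miles_saved = 28.5e6       # annual route miles eliminated
gallons_saved = 3.0e6      # fuel saved
co2_per_gallon_kg = 10.2   # assumed diesel emission factor (not from the article)

# Miles saved per gallon saved implies the fleet's effective fuel economy.
implied_mpg = miles_saved / gallons_saved

# Fuel saved times the emission factor gives the CO2 reduction in metric tons.
co2_tonnes = gallons_saved * co2_per_gallon_kg / 1000

print(f"Implied fleet fuel economy: {implied_mpg:.1f} mpg")
print(f"Implied CO2 reduction: {co2_tonnes:,.0f} metric tons")
```

The implied fuel economy of about 9.5 mpg is plausible for a loaded delivery truck, and the emissions figure lands just above the article's "over 30,000 metric tons."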
Now urban planners and highway designers in the United States are evaluating the same thing: how to reduce the need for left-hand turns. Drivers in Europe, especially the United Kingdom, will be all too familiar with the roundabout, which eliminates turns across oncoming traffic (right-hand turns in the UK) on many A and B roads. Roundabouts have yet to gain significant traction in the United States, so now comes the Diverging Diamond Interchange. From Slate:
. . . Left turns are the bane of traffic engineers. Their idea of utopia runs clockwise. (UPS’ routing software famously has drivers turn right whenever possible, to save money and time.) The left-turning vehicle presents not only the aforementioned safety hazard, but a coagulation in the smooth flow of traffic. It’s either a car stopped in an active traffic lane, waiting to turn; or, even worse, it’s cars in a dedicated left-turn lane that, when traffic is heavy enough, requires its own “dedicated signal phase,” lengthening the delay for through traffic as well as cross traffic. And when traffic volumes really increase, as in the junction of two suburban arterials, multiple left-turn lanes are required, costing even more in space and money.
And, increasingly, because of shifting demographics and “lollipop” development patterns, suburban arterials are where the action is: They represent, according to one report, less than 10 percent of the nation’s road mileage, but account for 48 percent of its vehicle-miles traveled.
. . . What can you do when you’ve tinkered all you can with the traffic signals, added as many left-turn lanes as you can, rerouted as much traffic as you can, in areas that have already been built to a sprawling standard? Welcome to the world of the “unconventional intersection,” where left turns are engineered out of existence.
. . . “Grade separation” is the most extreme way to eliminate traffic conflicts. But it’s not only aesthetically unappealing in many environments, it’s expensive. There is, however, a cheaper, less disruptive approach, one that promises its own safety and efficiency gains, that has become recently popular in the United States: the diverging diamond interchange. There’s just one catch: You briefly have to drive the wrong way. But more on that in a bit.
The “DDI” is the brainchild of Gilbert Chlewicki, who first theorized what he called the “criss-cross interchange” as an engineering student at the University of Maryland in 2000.
The DDI is the sort of thing that is easier to visualize than describe (this simulation may help), but here, roughly, is how a DDI built under a highway overpass works: As the eastbound driver approaches the highway interchange (whose lanes run north-south), traffic lanes “criss cross” at a traffic signal. The driver will now find himself on the “left” side of the road, where he can either make an unimpeded left turn onto the highway ramp, or cross over again to the right once he has gone under the highway overpass.
- The Plastic Bag Wars> From Rolling Stone:
American shoppers use an estimated 102 billion plastic shopping bags each year — more than 500 per consumer. Named by Guinness World Records as “the most ubiquitous consumer item in the world,” the ultrathin bags have become a leading source of pollution worldwide. They litter the world’s beaches, clog city sewers, contribute to floods in developing countries and fuel a massive flow of plastic waste that is killing wildlife from sea turtles to camels. “The plastic bag has come to represent the collective sins of the age of plastic,” says Susan Freinkel, author of Plastic: A Toxic Love Story.
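The two headline numbers in the excerpt above are consistent with each other, as a quick division shows (a sketch of the arithmetic only; the figures themselves come from the article):

```python
# Quick consistency check on the Rolling Stone figures quoted above.
bags_per_year = 102e9      # estimated U.S. plastic shopping bags used annually
bags_per_consumer = 500    # "more than 500 per consumer"

# Dividing the two gives the implied number of consumers.
implied_consumers = bags_per_year / bags_per_consumer
print(f"Implied number of consumers: {implied_consumers / 1e6:.0f} million")
```

The result, about 204 million consumers, is roughly the adult population of the United States at the time, so the per-person figure checks out.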
Many countries have instituted tough new rules to curb the use of plastic bags. Some, like China, have issued outright bans. Others, including many European nations, have imposed stiff fees to pay for the mess created by all the plastic trash. “There is simply zero justification for manufacturing them anymore, anywhere,” the United Nations Environment Programme recently declared. But in the United States, the plastics industry has launched a concerted campaign to derail and defeat anti-bag measures nationwide. The effort includes well-placed political donations, intensive lobbying at both the state and national levels, and a pervasive PR campaign designed to shift the focus away from plastic bags to the supposed threat of canvas and paper bags — including misleading claims that reusable bags “could” contain bacteria and unsafe levels of lead.
“It’s just like Big Tobacco,” says Amy Westervelt, founding editor of Plastic Free Times, a website sponsored by the nonprofit Plastic Pollution Coalition. “They’re using the same underhanded tactics — and even using the same lobbying firm that Philip Morris started and bankrolled in the Nineties. Their sole aim is to maintain the status quo and protect their profits. They will stop at nothing to suppress or discredit science that clearly links chemicals in plastic to negative impacts on human, animal and environmental health.”
Made from high-density polyethylene — a byproduct of oil and natural gas — the single-use shopping bag was invented by a Swedish company in the mid-Sixties and brought to the U.S. by ExxonMobil. Introduced to grocery-store checkout lines in 1976, the “T-shirt bag,” as it is known in the industry, can now be found literally everywhere on the planet, from the bottom of the ocean to the peaks of Mount Everest. The bags are durable, waterproof, cheaper to produce than paper bags and able to carry 1,000 times their own weight. They are also a nightmare to recycle: The flimsy bags, many thinner than a strand of human hair, gum up the sorting equipment used by most recycling facilities. “Plastic bags and other thin-film plastic is the number-one enemy of the equipment we use,” says Jeff Murray, vice president of Far West Fibers, the largest recycler in Oregon. “More than 300,000 plastic bags are removed from our machines every day — and since most of the removal has to be done by hand, that means more than 25 percent of our labor costs involves plastic-bag removal.”
- The Slow Food - Fast Food Debate>
For watchers of the human condition, dissecting and analyzing our food culture is both fascinating and troubling. The global agricultural-industrial complex, with its enormous efficiencies and finely engineered end-products, churns out mountains of foodstuffs that help feed a significant proportion of the world. And yet, many argue that the same over-refined, highly processed, preservative-doped, high-fructose-enriched, sugar- and salt-laden, color-saturated foods are to blame for many of our modern ills. The catalog of dangers from that box of “fish” sticks, orange “cheese” and Twinkies goes something like this: heart disease, cancer, diabetes, and obesity.
To counterbalance the fast/processed-food juggernaut, the grassroots International Slow Food movement established its manifesto in 1989. Its stated vision is:
We envision a world in which all people can access and enjoy food that is good for them, good for those who grow it and good for the planet.
They go on to say:
We believe that everyone has a fundamental right to the pleasure of good food and consequently the responsibility to protect the heritage of food, tradition and culture that make this pleasure possible. Our association believes in the concept of neo-gastronomy – recognition of the strong connections between plate, planet, people and culture.
These are lofty ideals. Many would argue that the goals of the Slow Food movement, while worthy, are somewhat elitist and totally impractical in current times on our over-crowded, resource constrained little blue planet.
Krystal D’Costa over at Anthropology in Practice has a fascinating analysis and takes a more pragmatic view:
There’s a sign hanging in my local deli that offers customers some tips on what to expect in terms of quality and service. It reads:
Can be fast and good, but it won’t be cheap.
Can be fast and cheap, but it won’t be good.
Can be good and cheap, but it won’t be fast.
Pick two—because you aren’t going to get it good, cheap, and fast.
The Good/Fast/Cheap Model is certainly not new. It’s been a longstanding principle in design, and has been applied to many other things. The idea is a simple one: we can’t have our cake and eat it too. But that doesn’t mean we can’t or won’t try—and nowhere does this battle rage more fiercely than when it comes to fast food.
In a landscape dominated by golden arches, dollar menus, and value meals serving up to 2,150 calories, fast food has been much maligned. It’s fast, it’s cheap, but we know it’s generally not good for us. And yet, well-touted statistics report that Americans are spending more than ever on fast food:
In 1970, Americans spent about $6 billion on fast food; in 2000, they spent more than $110 billion. Americans now spend more money on fast food than on higher education, personal computers, computer software, or new cars. They spend more on fast food than on movies, books, magazines, newspapers, videos, and recorded music—combined.[i]
With waistlines growing at an alarming rate, fast food has become an easy target. Concern has spurred the emergence of healthier chains (where it’s good and fast, but not cheap), half servings, and posted calorie counts. We talk about awareness and “food prints” enthusiastically, aspire to incorporate more organic produce in our diets, and struggle to encourage others to do the same even while we acknowledge that differing economic means may be a limiting factor.
In short, we long to return to a simpler food time—when local harvests were common and more than adequately provided the sustenance we needed, and we relied less on processed, industrialized foods. We long for a time when home-cooked meals, from scratch, were the norm—and any number of cooking shows on the American airways today work to convince us that it’s easy to do. We’re told to shun fast food, and while it’s true that modern, fast, processed foods represent an extreme in portion size and nutrition, it is also true that our nostalgia is misguided: raw, unprocessed foods—the “natural” that we yearn for—were a challenge for our ancestors. In fact, these foods were downright dangerous.
Step back in time to when fresh meat rotted before it could be consumed and you still consumed it, to when fresh fruits were sour, vegetables were bitter, and when roots and tubers were poisonous. Nature, ever fickle, could withhold her bounty as easily as she could share it: droughts wreaked havoc on produce, storms hampered fishing, cows stopped giving milk, and hens stopped laying.[ii] What would you do then? Images courtesy of International Slow Food Movement / Fred Meyer store by lyzadanger.
- How the Great White Egret Spurred Bird Conservation>
The infamous Dead Parrot Sketch from Monty Python’s Flying Circus continues to resonate several generations removed from its creators. One of the most treasured exchanges, between a shady pet shop owner and a prospective customer, included two immortal comedic words, “Beautiful plumage”, followed by the equally impressive retort, “The plumage don’t enter into it. It’s stone dead.”
Though utterly silly, this conversation points toward a deeper and very ironic truth: that humans, so eager to express their status among their peers, do so by exploiting other species. Thus, the stunning white plumage of the Great White Egret proved to be its undoing, almost. So sought after were the egrets’ feathers that both males and females were hunted close to extinction. And in a final ironic twist, the near extinction of these great birds inspired the Audubon campaigns and drove the legislation that ended the era of fancy feathers. More courtesy of the Smithsonian:
I’m not the only one who has been dazzled by the egret’s feathers, though. At the turn of the 20th century, these feathers were a huge hit in the fashion world, to the detriment of the species, as Thor Hanson explains in his new book Feathers: The Evolution of a Natural Miracle:
One particular group of birds suffered near extermination at the hands of feather hunters, and their plight helped awaken a conservation ethic that still resonates in the modern environmental movement. With striking white plumes and crowded, conspicuous nesting colonies, Great Egrets and Snowy Egrets faced an unfortunate double jeopardy: their feathers fetched a high price, and their breeding habits made them an easy mark. To make matters worse, both sexes bore the fancy plumage, so hunters didn’t just target the males; they decimated entire rookeries. At the peak of the trade, an ounce of egret plume fetched the modern equivalent of two thousand dollars, and successful hunters could net a cool hundred grand in a single season. But every ounce of breeding plumes represented six dead adults, and each slain pair left behind three to five starving nestlings. Millions of birds died, and by the turn of the century this once common species survived only in the deep Everglades and other remote wetlands.
This slaughter inspired Audubon members to campaign for environmental protections and bird preservation, at the state, national and international levels. Image courtesy of Antonio Soto for the Smithsonian.
- Green Bootleggers and Baptists> Bjørn Lomborg for Project Syndicate:
In May, the United Nations’ Intergovernmental Panel on Climate Change made media waves with a new report on renewable energy. As in the past, the IPCC first issued a short summary; only later would it reveal all of the data. So it was left up to the IPCC’s spin-doctors to present the take-home message for journalists.
The first line of the IPCC’s press release declared, “Close to 80% of the world‘s energy supply could be met by renewables by mid-century if backed by the right enabling public policies.” That story was repeated by media organizations worldwide.
Last month, the IPCC released the full report, together with the data behind this startlingly optimistic claim. Only then did it emerge that it was based solely on the most optimistic of 164 modeling scenarios that researchers investigated. And this single scenario stemmed from a single study that was traced back to a report by the environmental organization Greenpeace. The author of that report – a Greenpeace staff member – was one of the IPCC’s lead authors.
The claim rested on the assumption of a large reduction in global energy use. Given the number of people climbing out of poverty in China and India, that is a deeply implausible scenario.
When the IPCC first made the claim, global-warming activists and renewable-energy companies cheered. “The report clearly demonstrates that renewable technologies could supply the world with more energy than it would ever need,” boasted Steve Sawyer, Secretary-General of the Global Wind Energy Council.
This sort of behavior – with activists and big energy companies uniting to applaud anything that suggests a need for increased subsidies to alternative energy – was famously captured by the so-called “bootleggers and Baptists” theory of politics.
The theory grew out of the experience of the southern United States, where many jurisdictions required stores to close on Sunday, thus preventing the sale of alcohol. The regulation was supported by religious groups for moral reasons, but also by bootleggers, because they had the market to themselves on Sundays. Politicians would adopt the Baptists’ pious rhetoric, while quietly taking campaign contributions from the criminals.
Of course, today’s climate-change “bootleggers” are not engaged in any illegal behavior. But the self-interest of energy companies, biofuel producers, insurance firms, lobbyists, and others in supporting “green” policies is a point that is often missed.
Indeed, the “bootleggers and Baptists” theory helps to account for other developments in global warming policy over the past decade or so. For example, the Kyoto Protocol would have cost trillions of dollars, but would have achieved a practically indiscernible difference in stemming the rise in global temperature. Yet activists claimed that there was a moral obligation to cut carbon-dioxide emissions, and were cheered on by businesses that stood to gain. More from the source here.
- Jevons Paradox: Energy Efficiency Increases Consumption?>
Energy efficiency sounds simple, but it’s rather difficult to measure. Sure, when you purchase a shiny new washing machine that is more energy efficient than your previous model, you’re making a personal dent in energy consumption. But what if aggregate consumption increases because more people want that energy-efficient model? In a nutshell, that’s Jevons Paradox, named after the 19th-century British economist William Jevons. He observed that although the steam engine extracted energy from coal more efficiently, it also stimulated so much economic growth that coal consumption actually increased. Thus, Jevons argued that improvements in fuel efficiency tend to increase, rather than decrease, fuel use.
John Tierney over at the New York Times brings Jevons into the 21st century and discovers that the issues remain the same. From the New York Times:
For the sake of a cleaner planet, should Americans wear dirtier clothes?
This is not a simple question, but then, nothing about dirty laundry is simple anymore. We’ve come far since the carefree days of 1996, when Consumer Reports tested some midpriced top-loaders and reported that “any washing machine will get clothes clean.”
In this year’s report, no top-loading machine got top marks for cleaning. The best performers were front-loaders costing on average more than $1,000. Even after adjusting for inflation, that’s still $350 more than the top-loaders of 1996.
What happened to yesterday’s top-loaders? To comply with federal energy-efficiency requirements, manufacturers made changes like reducing the quantity of hot water. The result was a bunch of what Consumer Reports called “washday wash-outs,” which left some clothes “nearly as stained after washing as they were when we put them in.”
Now, you might think that dirtier clothes are a small price to pay to save the planet. Energy-efficiency standards have been embraced by politicians of both parties as one of the easiest ways to combat global warming. Making appliances, cars, buildings and factories more efficient is called the “low-hanging fruit” of strategies to cut greenhouse emissions.
But a growing number of economists say that the environmental benefits of energy efficiency have been oversold. Paradoxically, there could even be more emissions as a result of some improvements in energy efficiency, these economists say.
The problem is known as the energy rebound effect. While there’s no doubt that fuel-efficient cars burn less gasoline per mile, the lower cost at the pump tends to encourage extra driving. There’s also an indirect rebound effect as drivers use the money they save on gasoline to buy other things that produce greenhouse emissions, like new electronic gadgets or vacation trips on fuel-burning planes. Read more here. Image courtesy of Wikipedia, Popular Science Monthly / Creative Commons.
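The direct rebound effect described in the excerpt can be sketched with a toy model. All numbers here are illustrative assumptions, not figures from the article: efficiency lowers the cost per mile, driving responds to that cheaper mile through a constant price elasticity, and net fuel use is miles driven divided by fuel economy.

```python
# A minimal sketch of the direct rebound effect described above.
# Illustrative model only; no parameters come from the article.
def net_fuel_change(efficiency_gain, elasticity):
    """Fractional change in fuel use after an efficiency improvement.

    efficiency_gain: fractional improvement in miles per gallon (e.g. 0.25)
    elasticity: price elasticity of miles driven w.r.t. cost per mile
                (negative; -0.3 means a 10% cheaper mile -> 3% more driving)
    """
    # Cost per mile falls as fuel economy rises.
    cost_per_mile_ratio = 1 / (1 + efficiency_gain)
    # Miles driven respond to the cheaper mile (constant-elasticity demand).
    miles_ratio = cost_per_mile_ratio ** elasticity
    # Fuel use = miles driven / miles per gallon.
    fuel_ratio = miles_ratio / (1 + efficiency_gain)
    return fuel_ratio - 1

# A 25% mpg gain with elasticity -0.3: fuel use falls, but by less
# than 25%, because the rebound claws back part of the saving.
print(f"{net_fuel_change(0.25, -0.3):+.1%}")
```

In this toy model an elasticity of exactly -1 wipes out the saving entirely, and anything stronger makes fuel use rise, which is the "backfire" case Jevons described for coal.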
- The Strange Forests that Drink—and Eat—Fog> From Discover:
On the rugged roadway approaching Fray Jorge National Park in north-central Chile, you are surrounded by desert. This area receives less than six inches of rain a year, and the dry terrain is more suggestive of the badlands of the American Southwest than of the lush landscapes of the Amazon. Yet as the road climbs, there is an improbable shift. Perched atop the coastal mountains here, some 1,500 to 2,000 feet above the level of the nearby Pacific Ocean, are patches of vibrant rain forest covering up to 30 acres apiece. Trees stretch as much as 100 feet into the sky, with ferns, mosses, and bromeliads adorning their canopies. Then comes a second twist: As you leave your car and follow a rising path from the shrub into the forest, it suddenly starts to rain. This is not rain from clouds in the sky above, but fog dripping from the tree canopy. These trees are so efficient at snatching moisture out of the air that the fog provides them with three-quarters of all the water they need.
Understanding these pocket rain forests and how they sustain themselves in the middle of a rugged desert has become the life’s work of a small cadre of scientists who are only now beginning to fully appreciate Fray Jorge’s third and deepest surprise: The trees that grow here do more than just drink the fog. They eat it too.
Fray Jorge lies at the north end of a vast rain forest belt that stretches southward some 600 miles to the tip of Chile. In the more southerly regions of this zone, the forest is wetter, thicker, and more contiguous, but it still depends on fog to survive dry summer conditions. Kathleen C. Weathers, an ecosystem scientist at the Cary Institute of Ecosystem Studies in Millbrook, New York, has been studying the effects of fog on forest ecosystems for 25 years, and she still cannot quite believe how it works. “One step inside a fog forest and it’s clear that you’ve entered a remarkable ecosystem,” she says. “The ways in which trees, leaves, mosses, and bromeliads have adapted to harvest tiny droplets of water that hang in the atmosphere is unparalleled.” Image courtesy of Juan J. Armesto / Foundation Senda Darwin Archive.
Essentials
theDiagonal is a personal blog by Mike Gerra: skeptic, technologist, psychologist, artist, humanist, collector of grand, eclectic ideas. theDiagonal connects the dots across multiple disciplines for inquisitive, objective and critical thinkers, exploring the vertices of big science, disruptive innovation, global sustainability, illuminating literature and leftfield art. It is on this diagonal that creativity thrives, big ideas take flight and reason triumphs.