Tag Archives: energy

Peak Tech Bubble

The following story is surely a sign of the impending implosion of the next tech bubble — too much easy money flowing to too many bad and lazy ideas.

While an increasing number of people dream of a future built on renewable, clean energy, some entrepreneurs are devising ways to make gasoline (petrol) consumption even more convenient for consumers. Welcome to Uber-style, on-demand gas delivery.

This casts my mind back to the mid-1960s and the deliveries of black, sooty coal to our cellar (basement) coal bunker. Thankfully, the UK’s Clean Air Acts of the 1950s and 60s finally paved the way for cleaner fuels and cleared the skies of unhealthy, mid-century London smog.

Surely, these modern-day counterparts to those coal deliverers are heading in the wrong direction just to make a quick buck.

From the Guardian:

It is hard to imagine a less hospitable niche for a startup to enter than gasoline – a combustible commodity that is (one hopes) being innovated into obsolescence.

And yet, over the past 18 months, at least six startups have launched some variation on the theme of “Uber for gas” – your car’s tank gets refilled while it is parked somewhere.

The gas delivery startup founders all share similar stories of discovering the wannabe entrepreneur’s holy grail: a point of friction that can be translated into an app.

“David, one of the co-founders, basically said, ‘I hate going to the gas station’,” said Nick Alexander, the other co-founder of Yoshi, of their company’s origins. “I think he had run out of gas recently, so he said, ‘What about an idea where someone comes and fills your car up?’”

For Ale Donzis, co-founder of WeFuel, the moment came when he was trying to get gas in the middle of winter in upstate New York and realized he had forgotten his gloves. For Frank Mycroft, founder and CEO of Booster Fuels, it was during his wife’s pregnancy when he started refueling her car as well as his own.

“It wore on me,” Mycroft said. “I didn’t like doing it.”

The tales of gas station woe are the kind of first-world problems that have inspired a thousand parodies of startup culture. (A customer testimonial on the website of Purple, another gas delivery service, reads: “I live across the street from a gas station, but I don’t always have time to make the stop.”)

But delivering large quantities of a toxic and flammable liquid is significantly more complicated – and regulated – than delivering sandwiches. The companies generally source their gasoline from the same distributors that supply 10,000-gallon tankers to retail gas stations. But the app companies put the fuel into the back of pickup trucks or specially designed mini-tankers. Booster Fuels only services cars in open air, corporate parking lots on private property, but other companies offer to refill your car wherever it’s parked.

Read the entire story here.

Image: Nelson’s Column during the Great Smog of London, 1952. Courtesy of N T Stobbs, CC BY-SA 2.0.

c2=e/m

Particle physicists will soon attempt to reverse the direction of Einstein’s famous equation delineating energy-matter equivalence, e=mc2. Next year, they plan to crash quanta of light into each other to create matter. Cool or what!

From the Guardian:

Researchers have worked out how to make matter from pure light and are drawing up plans to demonstrate the feat within the next 12 months.

The theory underpinning the idea was first described 80 years ago by two physicists who later worked on the first atomic bomb. At the time they considered the conversion of light into matter impossible in a laboratory.

But in a report published on Sunday, physicists at Imperial College London claim to have cracked the problem using high-powered lasers and other equipment now available to scientists.

“We have shown in principle how you can make matter from light,” said Steven Rose at Imperial. “If you do this experiment, you will be taking light and turning it into matter.”

The scientists are not on the verge of a machine that can create everyday objects from a sudden blast of laser energy. The kind of matter they aim to make comes in the form of subatomic particles invisible to the naked eye.

The original idea was written down by two US physicists, Gregory Breit and John Wheeler, in 1934. They worked out that – very rarely – two particles of light, or photons, could combine to produce an electron and its antimatter equivalent, a positron. Electrons are particles of matter that form the outer shells of atoms in the everyday objects around us.

But Breit and Wheeler had no expectations that their theory would be proved any time soon. In their study, the physicists noted that the process was so rare and hard to produce that it would be “hopeless to try to observe the pair formation in laboratory experiments”.

Oliver Pike, the lead researcher on the study, said the process was one of the most elegant demonstrations of Einstein’s famous relationship that shows matter and energy are interchangeable currencies. “The Breit-Wheeler process is the simplest way matter can be made from light and one of the purest demonstrations of E=mc2,” he said.
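
To see why Pike calls the process one of the purest demonstrations of E=mc2, it helps to write down the threshold condition for two-photon pair production. This is textbook kinematics for an idealised head-on collision, not a calculation taken from the Nature Photonics paper:

```latex
% Breit-Wheeler threshold for an idealised head-on photon-photon collision.
% The pair's invariant mass must at least equal the combined rest energy
% of an electron and a positron:
\begin{align*}
  \sqrt{s} = \sqrt{4\,E_1 E_2} \;&\ge\; 2\,m_e c^2 \\
  \Longrightarrow\quad E_1 E_2 \;&\ge\; (m_e c^2)^2 \approx (0.511~\mathrm{MeV})^2
\end{align*}
% Two 0.511 MeV gamma rays meeting head-on are just enough to create one
% electron-positron pair, with nothing left over for kinetic energy.
```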

Writing in the journal Nature Photonics, the scientists describe how they could turn light into matter through a number of separate steps. The first step fires electrons at a slab of gold to produce a beam of high-energy photons. Next, they fire a high-energy laser into a tiny gold capsule called a hohlraum, from the German for “empty room”. This produces light as bright as that emitted from stars. In the final stage, they send the first beam of photons into the hohlraum where the two streams of photons collide.

The scientists’ calculations show that the setup squeezes enough particles of light with high enough energies into a small enough volume to create around 100,000 electron-positron pairs.

The process is one of the most spectacular predictions of a theory called quantum electrodynamics (QED) that was developed in the run up to the second world war. “You might call it the most dramatic consequence of QED and it clearly shows that light and matter are interchangeable,” Rose told the Guardian.

The scientists hope to demonstrate the process in the next 12 months. There are a number of sites around the world that have the technology. One is the huge Omega laser in Rochester, New York. But another is the Orion laser at Aldermaston, the atomic weapons facility in Berkshire.

A successful demonstration will encourage physicists who have been eyeing the prospect of a photon-photon collider as a tool to study how subatomic particles behave. “Such a collider could be used to study fundamental physics with a very clean experimental setup: pure light goes in, matter comes out. The experiment would be the first demonstration of this,” Pike said.

Read the entire story here.

Image: Feynman diagram for gluon radiation. Courtesy of Wikipedia.


The Coming Energy Crash

By some accounts, the financial crash that began in 2008 is a mere economic hiccup compared with the next big economic (and environmental) disaster: a fossil-fuel crisis compounded by institutionalised denial of risk.

From the New Scientist:

Five years ago the world was in the grip of a financial crisis that is still reverberating around the globe. Much of the blame for that can be attributed to weaknesses in human psychology: we have a collective tendency to be blind to the kind of risks that can crash economies and imperil civilisations.

Today, our risk blindness is threatening an even bigger crisis. In my book The Energy of Nations, I argue that the energy industry’s leaders are guilty of a risk blindness that, unless action is taken, will lead to a global crash – and not just because of the climate change they fuel.

Let me begin by explaining where I come from. I used to be a creature of the oil and gas industry. As a geologist on the faculty at Imperial College London, I was funded by BP, Shell and others, and worked on oil and gas in shale deposits, among other things. But I became worried about society’s overdependency on fossil fuels, and acted on my concerns.

In 1989, I quit Imperial College to become a climate campaigner. A decade later I set up a solar energy business. In 2000 I co-founded a private equity fund investing in renewables.

In these capacities, I have watched captains of the energy and financial industries at work – frequently close to, often behind closed doors – as the financial crisis has played out and the oil price continued its inexorable rise. I have concluded that too many people across the top levels of business and government have found ways to close their eyes and ears to systemic risk-taking. Denial, I believe, has become institutionalised.

As a result of their complacency we face four great risks. The first and biggest is no surprise: climate change. We have way more unburned conventional fossil fuel than is needed to wreck the climate. Yet much of the energy industry is discovering and developing unconventional deposits – shale gas and tar sands, for example – to pile onto the fire, while simultaneously abandoning solar power just as it begins to look promising. It has been vaguely terrifying to watch how CEOs of the big energy companies square that circle.

Second, we risk creating a carbon bubble in the capital markets. If policymakers are to achieve their goal of limiting global warming to 2 °C, 60 to 80 per cent of proved reserves of fossil fuels will have to remain in the ground unburned. If so, the value of oil and gas companies would crash and a lot of people would lose a lot of money.

I am chairman of Carbon Tracker, a financial think tank that aims to draw attention to that risk. Encouragingly, some financial institutions have begun withdrawing investment in fossil fuels after reading our warnings. The latest report from the Intergovernmental Panel on Climate Change (IPCC) should spread appreciation of how crazy it is to have energy markets that are allowed to account for assets as though climate policymaking doesn’t exist.

Third, we risk being surprised by the boom in shale gas production. That, too, may prove to be a bubble, maybe even a Ponzi scheme. Production from individual shale wells declines rapidly, and large amounts of capital have to be borrowed to drill replacements. This will surprise many people who make judgement calls based on the received wisdom that limits to shale drilling are few. But I am not alone in these concerns.

Even if the US shale gas drilling isn’t a bubble, it remains unprofitable overall and environmental downsides are emerging seemingly by the week. According to the Texas Commission on Environmental Quality, whole towns in Texas are now running out of water, having sold their aquifers for fracking. I doubt that this is a boom that is going to appeal to the rest of the world; many others agree.

Fourth, we court disaster with assumptions about oil depletion. Most of us believe the industry mantra that there will be adequate flows of just-about-affordable oil for decades to come. I am in a minority who don’t. Crude oil production peaked in 2005, and oil fields are depleting at more than 6 per cent per year, according to the International Energy Agency. The much-hyped 2 million barrels a day of new US production capacity from shale needs to be put in context: we live in a world that consumes 90 million barrels a day.
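
A quick back-of-envelope check puts those numbers in the context the author asks for. The 6 per cent decline rate and the 90 and 2 million barrel figures come from the excerpt; treating the decline as applying to roughly the whole 90 million barrels a day is a simplification on my part:

```python
# Back-of-envelope context for the oil-depletion argument, using only the
# round numbers quoted in the excerpt (a simplification, not industry data).

world_consumption_mbd = 90.0   # million barrels of oil consumed per day, globally
decline_rate = 0.06            # existing fields depleting at "more than 6 per cent per year"
new_us_shale_mbd = 2.0         # the much-hyped new US shale production capacity

# Production capacity lost to depletion in a single year:
annual_decline_mbd = world_consumption_mbd * decline_rate
print(f"Capacity lost to depletion each year: ~{annual_decline_mbd:.1f} million barrels/day")

# How many months of that ongoing decline the one-off shale addition offsets:
months_covered = 12 * new_us_shale_mbd / annual_decline_mbd
print(f"The US shale addition offsets roughly {months_covered:.0f} months of decline")
```

On these figures the shale addition covers only a few months of a single year's decline, which is the point being made.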

It is because of the sheer prevalence of risk blindness, overlain with the pervasiveness of oil dependency in modern economies, that I conclude system collapse is probably inevitable within a few years.

Mine is a minority position, but it would be wise to remember how few whistleblowers there were in the run-up to the financial crash, and how they were vilified in the same way “peakists” – believers in premature peak oil – are today.

Read the entire article here.

Image: power plant. Courtesy of Think Progress.

The Future of the Grid

Two common complaints dog the sustainable energy movement: first, energy generated from the sun and wind is not always available; second, renewable energy is too costly. A new study debunks these notions, showing that renewables backed by storage could meet a large regional grid's demand 99.9 percent of the time with current technology, at costs reaching parity with conventional generation by 2030.

[div class=attrib]From ars technica:[end-div]

You’ve probably heard the argument: wind and solar power are well and good, but what about when the wind doesn’t blow and the sun doesn’t shine? But it’s always windy and sunny somewhere. Given a sufficient distribution of energy resources and a large enough network of electrically conducting tubes, plus a bit of storage, these problems can be overcome—technologically, at least.

But is it cost-effective to do so? A new study from the University of Delaware finds that renewable energy sources can, with the help of storage, power a large regional grid for up to 99.9 percent of the time using current technology. By 2030, the cost of doing so will hit parity with current methods. Further, if you can live with renewables meeting your energy needs for only 90 percent of the time, the economics become positively compelling.

“These results break the conventional wisdom that renewable energy is too unreliable and expensive,” said study co-author Willett Kempton, a professor at the University of Delaware’s School of Marine Science and Policy. “The key is to get the right combination of electricity sources and storage—which we did by an exhaustive search—and to calculate costs correctly.”

By exhaustive, Kempton is referring to the 28 billion combinations of inland and offshore wind and photovoltaic solar sources combined with centralized hydrogen, centralized batteries, and grid-integrated vehicles analyzed in the study. The researchers deliberately overlooked constant renewable sources of energy such as geothermal and hydro power on the grounds that they are less widely available geographically.

These technologies were applied to a real-world test case: that of the PJM Interconnection regional grid, which covers parts of states from New Jersey to Indiana, and south to North Carolina. The model used hourly consumption data from the years 1999 to 2002; during that time, the grid had a generating capacity of 72 GW serving an average demand of 31.5 GW. Taking in 13 states, either whole or in part, the PJM Interconnection constitutes one fifth of the USA’s grid. “Large” is no overstatement, even before considering more recent expansions that don’t apply to the dataset used.

The researchers constructed a computer model using standard solar and wind analysis tools. They then fed in hourly weather data from the region for the whole four-year period—35,040 hours worth. The goal was to find the minimum cost at which the energy demand could be met entirely by renewables for a given proportion of the time, based on the following game plan:

  1. When there’s enough renewable energy direct from source to meet demand, use it. Store any surplus.
  2. When there is not enough renewable energy direct from source, meet the shortfall with the stored energy.
  3. When there is not enough renewable energy direct from source, and the stored energy reserves are insufficient to bridge the shortfall, top up the remaining few percent of the demand with fossil fuels.
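
The three-step dispatch order above translates directly into code. Here is a minimal sketch for a single hour of operation; the function name and all of the numbers are placeholders of mine, not parameters from the University of Delaware model, and storage losses are ignored for brevity:

```python
# Minimal sketch of the study's three-step dispatch rule for one hour of
# grid operation. All values below are illustrative placeholders, not
# parameters from the University of Delaware model; storage round-trip
# losses are ignored.

def dispatch_hour(renewable_mw, demand_mw, storage_mwh, storage_cap_mwh):
    """Return (fossil_mw, updated_storage_mwh) for a single hour."""
    if renewable_mw >= demand_mw:
        # Step 1: renewables cover demand directly; store any surplus.
        surplus = renewable_mw - demand_mw
        return 0.0, min(storage_cap_mwh, storage_mwh + surplus)
    shortfall = demand_mw - renewable_mw
    if storage_mwh >= shortfall:
        # Step 2: meet the shortfall from stored energy.
        return 0.0, storage_mwh - shortfall
    # Step 3: drain storage, then top up the last few percent with fossil fuel.
    return shortfall - storage_mwh, 0.0

# Toy example: a windy hour followed by a calm hour.
fossil, store = dispatch_hour(renewable_mw=40_000, demand_mw=31_500,
                              storage_mwh=0.0, storage_cap_mwh=20_000)
fossil, store = dispatch_hour(renewable_mw=10_000, demand_mw=31_500,
                              storage_mwh=store, storage_cap_mwh=20_000)
print(fossil, store)   # fossil covers whatever storage could not
```

The study applies a rule of this kind hour by hour across all 35,040 hours and 28 billion technology combinations to find the cheapest mix.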

Perhaps unsurprisingly, the precise mix required depends upon exactly how much time you want renewables to meet the full load. Much more surprising is the amount of excess renewable infrastructure the model proposes as the most economic. To achieve a 90-percent target, the renewable infrastructure should be capable of generating 180 percent of the load. To meet demand 99.9 percent of the time, that rises to 290 percent.

“So much excess generation of renewables is a new idea, but it is not problematic or inefficient, any more than it is problematic to build a thermal power plant requiring fuel input at 250 percent of the electrical output, as we do today,” the study argues.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bangui Windfarm, Ilocos Norte, Philippines. Courtesy of Wikipedia.[end-div]

Steam Without Boiling Water

Despite what seems to be an overwhelmingly digital shift in our lives, we still live in a world of steam. Steam plays a vital role in generating most of the world’s electricity; steam heats our buildings (especially if you live in New York City); steam sterilizes our medical supplies.

So, in a research discovery with far-reaching implications, scientists have succeeded in making steam without bringing the bulk of the water anywhere near its boiling point. All courtesy of some ingenious nanoparticles.

[div class=attrib]From Technology Review:[end-div]

Steam is a key ingredient in a wide range of industrial and commercial processes—including electricity generation, water purification, alcohol distillation, and medical equipment sterilization.

Generating that steam, however, typically requires vast amounts of energy to heat and eventually boil water or another fluid. Now researchers at Rice University have found a shortcut. Using light-absorbing nanoparticles suspended in water, the group was able to turn the water molecules surrounding the nanoparticles into steam while scarcely raising the temperature of the remaining water. The trick could dramatically reduce the cost of many steam-reliant processes.

The Rice team used a Fresnel lens to focus sunlight on a small tube of water containing high concentrations of nanoparticles suspended in the fluid. The water, which had been cooled to near freezing, began generating steam within five to 20 seconds, depending on the type of nanoparticles used. Changes in temperature, pressure, and mass revealed that 82 percent of the sunlight absorbed by the nanoparticles went directly to generating steam while only 18 percent went to heating water.
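
The 82/18 split can be turned into a rough feel for output using textbook properties of water. The absorbed power below is a hypothetical figure of mine, not one reported by the Rice team; the heat-of-vaporisation and specific-heat values are standard:

```python
# Rough scale of steam production implied by the reported 82/18 energy split.
# The absorbed solar power is a made-up example value; the water properties
# are standard textbook figures.

absorbed_power_w = 100.0                   # hypothetical sunlight absorbed by the nanoparticles (W)
power_to_steam = 0.82 * absorbed_power_w   # 82% reported to go directly into steam
power_to_bulk = 0.18 * absorbed_power_w    # 18% reported to warm the remaining water

# Energy needed per kilogram of steam, starting from near-freezing water:
sensible_heat = 4186.0 * 100.0   # J/kg to heat water from ~0 C to 100 C
latent_heat = 2.26e6             # J/kg to vaporise water at 100 C
energy_per_kg = sensible_heat + latent_heat

steam_kg_per_hour = power_to_steam * 3600.0 / energy_per_kg
print(f"~{steam_kg_per_hour:.2f} kg of steam per hour from {absorbed_power_w:.0f} W absorbed")
```

In practice only the thin shell of water around each particle is driven to vapour, which is precisely the trick; the sketch just shows the overall energy scale.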

“It’s a new way to make steam without boiling water,” says Naomi Halas, director of the Laboratory for Nanophotonics at Rice University. Halas says that the work “opens up a lot of interesting doors in terms of what you can use steam for.”

The new technique could, for instance, lead to inexpensive steam-generation devices for small-scale water purification, sterilization of medical instruments, and sewage treatment in developing countries with limited resources and infrastructure.

The use of nanoparticles to increase heat transfer in water and other fluids has been well studied, but few researchers have looked at using the particles to absorb light and generate steam.

In the current study, Halas and colleagues used nanoparticles optimized to absorb the widest possible spectrum of sunlight. When light hits the particles, their temperature quickly rises to well above 100 °C, the boiling point of water, causing surrounding water molecules to vaporize.

Precisely how the particles and water molecules interact remains somewhat of a mystery. Conventional heat-transfer models suggest that the absorbed sunlight should dissipate into the surrounding fluid before causing any water to boil. “There seems to be some nanoscale thermal barrier, because it’s clearly making steam like crazy,” Halas says.

The system devised by Halas and colleagues exhibited an efficiency of 24 percent in converting sunlight to steam.

Todd Otanicar, a mechanical engineer at the University of Tulsa who was not involved in the current study, says the findings could have significant implications for large-scale solar thermal energy generation. Solar thermal power stations typically use concentrated sunlight to heat a fluid such as oil, which is then used to heat water to generate steam. Otanicar estimates that by generating steam directly with nanoparticles in water, such a system could see an increased efficiency of 3 to 5 percent and a cost savings of 10 percent because a less complex design could be used.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Stott Park Bobbin Mill Steam Engine. Courtesy of Wikipedia.[end-div]

An Answer Is Blowing in the Wind

Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from wind turbines, though it would take hundreds of millions of them.

[div class=attrib]From Ars Technica:[end-div]

Is there enough wind blowing across the planet to satiate our demands for electricity? If there is, would harnessing that much of it begin to actually affect the climate?

Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.

The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?

In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by the National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.

The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.

The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobson at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?

For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
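
To make the Betz limit concrete, here is the standard formula for the power a turbine can extract from moving air. The rotor size and wind speed are illustrative values of mine for a modern multi-megawatt machine, not figures from either study:

```python
import math

# Standard turbine power formula: P = 1/2 * rho * A * v^3 * Cp, where the
# power coefficient Cp can never exceed the Betz limit of 16/27 (~59.3%).
# Rotor size and wind speed are illustrative, not taken from the studies.

rho = 1.225               # air density at sea level, kg/m^3
rotor_diameter_m = 126.0  # typical of a ~5 MW class turbine
wind_speed = 12.0         # m/s, roughly a rated wind speed

swept_area = math.pi * (rotor_diameter_m / 2) ** 2
power_in_wind = 0.5 * rho * swept_area * wind_speed ** 3   # kinetic power through the rotor disc

betz_limit = 16 / 27
print(f"Power in the wind:        {power_in_wind / 1e6:.1f} MW")
print(f"Betz-limited maximum:     {betz_limit * power_in_wind / 1e6:.1f} MW")
print(f"With a realistic Cp=0.45: {0.45 * power_in_wind / 1e6:.1f} MW")
```

The last figure is why the five-megawatt turbines mentioned below need rotors well over a hundred metres across.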

Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding any turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobson and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.

Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue improper placement will lead to incorrect reductions in wind speed.

Jacobson and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are placed only on land and in shallow offshore locations, the saturation point is 80 terawatts for 1,500 terawatts of installed power.

For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.

These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the numbers of turbines required aren’t plausible—300 million five-megawatt turbines in the smallest case (land plus shallow offshore).
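
The scale of that smallest case is easy to check against the other numbers above. The 300 million turbines, the 5 MW rating, the 80 TW saturation point and the 18 TW global demand all come from the text; only the arithmetic is mine:

```python
# Sanity check of the land-plus-shallow-offshore case using figures quoted above.

turbines = 300e6            # 300 million turbines
rating_w = 5e6              # five megawatts each
saturation_tw = 80.0        # power actually extracted at saturation
global_demand_tw = 18.0     # current global power demand

installed_tw = turbines * rating_w / 1e12
print(f"Installed capacity:        {installed_tw:,.0f} TW")
print(f"Delivered at saturation:   {saturation_tw / installed_tw:.1%} of installed capacity")
print(f"Multiple of global demand: {saturation_tw / global_demand_tw:.1f}x")
```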

[div class=attrib]Read the entire article after the jump.[end-div]

The Inevitability of Life: A Tale of Protons and Mitochondria

A fascinating article by Nick Lane, a leading researcher into the origins of life. Lane is a Research Fellow at University College London.

He suggests that it would be surprising if simple, bacteria-like life were not common throughout the universe. However, the acquisition of one cell by another, the event that led to all higher organisms on planet Earth, is an altogether rarer occurrence. So are we alone in the universe?

[div class=attrib]From the New Scientist:[end-div]

Under the intense stare of the Kepler space telescope, more and more planets similar to our own are revealing themselves to us. We haven’t found one exactly like Earth yet, but so many are being discovered that it appears the galaxy must be teeming with habitable planets.

These discoveries are bringing an old paradox back into focus. As physicist Enrico Fermi asked in 1950, if there are many suitable homes for life out there and alien life forms are common, where are they all? More than half a century of searching for extraterrestrial intelligence has so far come up empty-handed.

Of course, the universe is a very big place. Even Frank Drake’s famously optimistic “equation” for life’s probability suggests that we will be lucky to stumble across intelligent aliens: they may be out there, but we’ll never know it. That answer satisfies no one, however.

There are deeper explanations. Perhaps alien civilisations appear and disappear in a galactic blink of an eye, destroying themselves long before they become capable of colonising new planets. Or maybe life very rarely gets started even when conditions are perfect.

If we cannot answer these kinds of questions by looking out, might it be possible to get some clues by looking in? Life arose only once on Earth, and if a sample of one were all we had to go on, no grand conclusions could be drawn. But there is more than that. Looking at a vital ingredient for life – energy – suggests that simple life is common throughout the universe, but it does not inevitably evolve into more complex forms such as animals. I might be wrong, but if I’m right, the immense delay between life first appearing on Earth and the emergence of complex life points to another, very different explanation for why we have yet to discover aliens.

Living things consume an extraordinary amount of energy, just to go on living. The food we eat gets turned into the fuel that powers all living cells, called ATP. This fuel is continually recycled: over the course of a day, humans each churn through 70 to 100 kilograms of the stuff. This huge quantity of fuel is made by enzymes, biological catalysts fine-tuned over aeons to extract every last joule of usable energy from reactions.

The enzymes that powered the first life cannot have been as efficient, and the first cells must have needed a lot more energy to grow and divide – probably thousands or millions of times as much energy as modern cells. The same must be true throughout the universe.

This phenomenal energy requirement is often left out of considerations of life’s origin. What could the primordial energy source have been here on Earth? Old ideas of lightning or ultraviolet radiation just don’t pass muster. Aside from the fact that no living cells obtain their energy this way, there is nothing to focus the energy in one place. The first life could not go looking for energy, so it must have arisen where energy was plentiful.

Today, most life ultimately gets its energy from the sun, but photosynthesis is complex and probably didn’t power the first life. So what did? Reconstructing the history of life by comparing the genomes of simple cells is fraught with problems. Nevertheless, such studies all point in the same direction. The earliest cells seem to have gained their energy and carbon from the gases hydrogen and carbon dioxide. The reaction of H2 with CO2 produces organic molecules directly, and releases energy. That is important, because it is not enough to form simple molecules: it takes buckets of energy to join them up into the long chains that are the building blocks of life.
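
Two representative reactions of this kind, written out for illustration. They are standard examples of hydrogen reducing carbon dioxide (acetogenesis and methanogenesis), not reactions named in the article, and both release energy under suitable conditions:

```latex
% Two standard examples (for illustration only) of H2 reducing CO2 to
% simple carbon compounds; both release energy under suitable conditions.
% They are not reactions named in the article itself.
\begin{align*}
  2\,\mathrm{CO_2} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_3COOH} + 2\,\mathrm{H_2O}
      && \text{(acetate, a simple organic acid)} \\
  \mathrm{CO_2} + 4\,\mathrm{H_2} &\longrightarrow \mathrm{CH_4} + 2\,\mathrm{H_2O}
      && \text{(methane)}
\end{align*}
```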

A second clue to how the first life got its energy comes from the energy-harvesting mechanism found in all known life forms. This mechanism was so unexpected that there were two decades of heated altercations after it was proposed by British biochemist Peter Mitchell in 1961.

Universal force field

Mitchell suggested that cells are powered not by chemical reactions, but by a kind of electricity, specifically by a difference in the concentration of protons (the charged nuclei of hydrogen atoms) across a membrane. Because protons have a positive charge, the concentration difference produces an electrical potential difference between the two sides of the membrane of about 150 millivolts. It might not sound like much, but because it operates over only 5 millionths of a millimetre, the field strength over that tiny distance is enormous, around 30 million volts per metre. That’s equivalent to a bolt of lightning.
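
The “bolt of lightning” figure is simply the quoted voltage divided by the quoted membrane thickness:

```python
# The quoted field strength is just voltage over distance.
membrane_potential_v = 0.150      # about 150 millivolts
membrane_thickness_m = 5e-9       # "5 millionths of a millimetre", i.e. 5 nanometres

field_strength = membrane_potential_v / membrane_thickness_m
print(f"{field_strength:.0e} V/m")   # 3e+07 V/m, roughly 30 million volts per metre
```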

Mitchell called this electrical driving force the proton-motive force. It sounds like a term from Star Wars, and that’s not inappropriate. Essentially, all cells are powered by a force field as universal to life on Earth as the genetic code. This tremendous electrical potential can be tapped directly, to drive the motion of flagella, for instance, or harnessed to make the energy-rich fuel ATP.

However, the way in which this force field is generated and tapped is extremely complex. The enzyme that makes ATP is a rotating motor powered by the inward flow of protons. Another protein that helps to generate the membrane potential, NADH dehydrogenase, is like a steam engine, with a moving piston for pumping out protons. These amazing nanoscopic machines must be the product of prolonged natural selection. They could not have powered life from the beginning, which leaves us with a paradox.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Transmission electron microscope image of a thin section cut through an area of mammalian lung tissue. The high-magnification image shows a mitochondrion. Courtesy of Wikipedia.[end-div]

The Right of Not Turning Left

In 2007 UPS made headlines by declaring left-hand turns undesirable for its army of delivery truck drivers. Of course, we left-handers have always known that our left, or “sinister,” side is fated to be seen as less attractive and is still branded as unlucky or even evil. Chinese culture brands left-handedness as improper as well.

UPS had other motives for pooh-poohing left-hand turns. For a company that runs over 95,000 big brown delivery trucks, optimizing delivery routes could yield tremendous savings. In fact, careful research showed that the company could cut its annual delivery routes by 28.5 million miles, save around 3 million gallons of fuel, and reduce CO2 emissions by over 30,000 metric tons. Eliminating or reducing left-hand turns would be safer as well: of the 2.4 million crashes at intersections in the United States in 2007, most involved left-hand turns, according to the U.S. Federal Highway Administration.
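
Those three savings figures hang together arithmetically. The emission factor below is a standard approximation for diesel that I have assumed, not a number UPS published:

```python
# Quick consistency check of the UPS figures quoted above. The CO2-per-gallon
# factor is an assumed standard value for diesel, not a UPS-published number.

miles_saved = 28.5e6          # annual route miles eliminated
gallons_saved = 3.0e6         # gallons of fuel saved
kg_co2_per_gallon = 10.2      # approximate CO2 from burning a gallon of diesel

implied_mpg = miles_saved / gallons_saved
co2_avoided_tonnes = gallons_saved * kg_co2_per_gallon / 1000.0

print(f"Implied fleet fuel economy: ~{implied_mpg:.1f} miles per gallon")
print(f"Implied CO2 avoided:        ~{co2_avoided_tonnes:,.0f} metric tons")
```

About 9.5 miles per gallon and roughly 30,000 metric tons, which is consistent with the reported figures for a fleet of heavy delivery trucks.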

Now urban planners and highway designers in the United States are evaluating the same thing: how to reduce the need for left-hand turns. Drivers in Europe, especially the United Kingdom, will be all too familiar with roundabouts as a way of avoiding turns across oncoming traffic on many A and B roads. Roundabouts have yet to gain significant traction in the United States, so now comes the Diverging Diamond Interchange.

[div class=attrib]From Slate:[end-div]

. . . Left turns are the bane of traffic engineers. Their idea of utopia runs clockwise. (UPS’ routing software famously has drivers turn right whenever possible, to save money and time.) The left-turning vehicle presents not only the aforementioned safety hazard, but a coagulation in the smooth flow of traffic. It’s either a car stopped in an active traffic lane, waiting to turn; or, even worse, it’s cars in a dedicated left-turn lane that, when traffic is heavy enough, requires its own “dedicated signal phase,” lengthening the delay for through traffic as well as cross traffic. And when traffic volumes really increase, as in the junction of two suburban arterials, multiple left-turn lanes are required, costing even more in space and money.

And, increasingly, because of shifting demographics and “lollipop” development patterns, suburban arterials are where the action is: They represent, according to one report, less than 10 percent of the nation’s road mileage, but account for 48 percent of its vehicle-miles traveled.

. . . What can you do when you’ve tinkered all you can with the traffic signals, added as many left-turn lanes as you can, rerouted as much traffic as you can, in areas that have already been built to a sprawling standard? Welcome to the world of the “unconventional intersection,” where left turns are engineered out of existence.

. . . “Grade separation” is the most extreme way to eliminate traffic conflicts. But it’s not only aesthetically unappealing in many environments, it’s expensive. There is, however, a cheaper, less disruptive approach, one that promises its own safety and efficiency gains, that has become recently popular in the United States: the diverging diamond interchange. There’s just one catch: You briefly have to drive the wrong way. But more on that in a bit.

The “DDI” is the brainchild of Gilbert Chlewicki, who first theorized what he called the “criss-cross interchange” as an engineering student at the University of Maryland in 2000.

The DDI is the sort of thing that is easier to visualize than describe (this simulation may help), but here, roughly, is how a DDI built under a highway overpass works: As the eastbound driver approaches the highway interchange (whose lanes run north-south), traffic lanes “criss cross” at a traffic signal. The driver will now find himself on the “left” side of the road, where he can either make an unimpeded left turn onto the highway ramp, or cross over again to the right once he has gone under the highway overpass.

[div class=attrib]More from theSource here.[end-div]

Jevons Paradox: Energy Efficiency Increases Consumption?

Energy efficiency sounds simple, but it’s rather difficult to measure. Sure, when you replace your old washing machine with a shiny new, more energy-efficient model, you’re making a personal dent in energy consumption. But what if, in aggregate, overall consumption increases because more people want that energy-efficient model? In a nutshell, that’s the Jevons Paradox, named after the 19th-century British economist William Stanley Jevons. He observed that as steam engines extracted energy from coal more efficiently, they stimulated so much economic growth that coal consumption actually increased. Thus, Jevons argued that improvements in fuel efficiency tend to increase, rather than decrease, fuel use.

John Tierney over at the New York Times brings Jevons into the 21st century and discovers that the issues remain the same.

[div class=attrib]From the New York Times:[end-div]

For the sake of a cleaner planet, should Americans wear dirtier clothes?

This is not a simple question, but then, nothing about dirty laundry is simple anymore. We’ve come far since the carefree days of 1996, when Consumer Reports tested some midpriced top-loaders and reported that “any washing machine will get clothes clean.”

In this year’s report, no top-loading machine got top marks for cleaning. The best performers were front-loaders costing on average more than $1,000. Even after adjusting for inflation, that’s still $350 more than the top-loaders of 1996.

What happened to yesterday’s top-loaders? To comply with federal energy-efficiency requirements, manufacturers made changes like reducing the quantity of hot water. The result was a bunch of what Consumer Reports called “washday wash-outs,” which left some clothes “nearly as stained after washing as they were when we put them in.”

Now, you might think that dirtier clothes are a small price to pay to save the planet. Energy-efficiency standards have been embraced by politicians of both parties as one of the easiest ways to combat global warming. Making appliances, cars, buildings and factories more efficient is called the “low-hanging fruit” of strategies to cut greenhouse emissions.

But a growing number of economists say that the environmental benefits of energy efficiency have been oversold. Paradoxically, there could even be more emissions as a result of some improvements in energy efficiency, these economists say.

The problem is known as the energy rebound effect. While there’s no doubt that fuel-efficient cars burn less gasoline per mile, the lower cost at the pump tends to encourage extra driving. There’s also an indirect rebound effect as drivers use the money they save on gasoline to buy other things that produce greenhouse emissions, like new electronic gadgets or vacation trips on fuel-burning planes.
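
The direct rebound effect is easy to illustrate with a toy calculation; the percentages are invented for the example, not estimates from the column or from any study:

```python
# Toy illustration of the direct rebound effect. The percentages are invented
# for the example, not estimates from the column or from any study.

efficiency_gain = 0.20    # a car becomes 20% more fuel-efficient
extra_driving = 0.10      # cheaper miles induce 10% more driving (the rebound)

fuel_per_mile = 1.0 - efficiency_gain            # normalised fuel use per mile
fuel_use = fuel_per_mile * (1.0 + extra_driving)

print(f"Naive expectation: 20% less fuel. With rebound: {1.0 - fuel_use:.0%} less.")
# -> 12% less: part of the efficiency gain is taken back as extra driving,
#    before counting any indirect rebound from re-spent savings.
```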

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image courtesy of Wikipedia, Popular Science Monthly / Creative Commons.[end-div]

A Plan to Keep Carbon in Check

[div class=attrib]By Robert H. Socolow and Stephen W. Pacala, From Scientific American:[end-div]

Getting a grip on greenhouse gases is daunting but doable. The technologies already exist. But there is no time to lose.

Retreating glaciers, stronger hurricanes, hotter summers, thinner polar bears: the ominous harbingers of global warming are driving companies and governments to work toward an unprecedented change in the historical pattern of fossil-fuel use. Faster and faster, year after year for two centuries, human beings have been transferring carbon to the atmosphere from below the surface of the earth. Today the world’s coal, oil and natural gas industries dig up and pump out about seven billion tons of carbon a year, and society burns nearly all of it, releasing carbon dioxide (CO2). Ever more people are convinced that prudence dictates a reversal of the present course of rising CO2 emissions.

The boundary separating the truly dangerous consequences of emissions from the merely unwise is probably located near (but below) a doubling of the concentration of CO2 that was in the atmosphere in the 18th century, before the Industrial Revolution began. Every increase in concentration carries new risks, but avoiding that danger zone would reduce the likelihood of triggering major, irreversible climate changes, such as the disappearance of the Greenland ice cap. Two years ago the two of us provided a simple framework to relate future CO2 emissions to this goal.

[div class=attrib]More from theSource here.[end-div]

Plan B for Energy

[div class=attrib]From Scientific American:[end-div]

If efficiency improvements and incremental advances in today’s technologies fail to halt global warming, could revolutionary new carbon-free energy sources save the day? Don’t count on it–but don’t count it out, either.

To keep this world tolerable for life as we like it, humanity must complete a marathon of technological change whose finish line lies far over the horizon. Robert H. Socolow and Stephen W. Pacala of Princeton University have compared the feat to a multigenerational relay race [see their article “A Plan to Keep Carbon in Check”]. They outline a strategy to win the first 50-year leg by reining back carbon dioxide emissions from a century of unbridled acceleration. Existing technologies, applied both wisely and promptly, should carry us to this first milestone without trampling the global economy. That is a sound plan A.

The plan is far from foolproof, however. It depends on societies ramping up an array of carbon-reducing practices to form seven “wedges,” each of which keeps 25 billion tons of carbon in the ground and out of the air. Any slow starts or early plateaus will pull us off track. And some scientists worry that stabilizing greenhouse gas emissions will require up to 18 wedges by 2056, not the seven that Socolow and Pacala forecast in their most widely cited model.
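
The 25-billion-ton figure per wedge follows from Socolow and Pacala's definition: each wedge is an activity that grows linearly from zero to one billion tons of carbon avoided per year over 50 years. The arithmetic:

```python
# A stabilisation "wedge" ramps linearly from zero to 1 billion tons of carbon
# avoided per year over 50 years; its total is the area of that triangle.

years = 50
final_rate_gtc_per_year = 1.0
carbon_per_wedge_gt = 0.5 * years * final_rate_gtc_per_year
print(f"Carbon kept out of the atmosphere per wedge: {carbon_per_wedge_gt:.0f} billion tons")

for wedges in (7, 18):   # the two wedge counts discussed above
    print(f"{wedges} wedges avoid ~{wedges * carbon_per_wedge_gt:.0f} billion tons by 2056")
```
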
[div class=attrib]More from theSource here.[end-div]