The How and Why of Supersized Sodas

Apparently the Great Depression in the United States is to blame for the mega-sized soda drinks that many now consume on a daily basis, except in New York City of course (the city's Board of Health approved a ban on the sale of sugary drinks larger than 16 oz in restaurants on September 13, 2012).

[div class=attrib]From Wired:[end-div]

The New York City Board of Health voted Thursday to ban the sale of sugary soft drinks larger than 16 ounces at restaurants, a move that has sparked intense debate between public health advocates and beverage industry lobbyists. When did sodas get so big in the first place?

In the 1930s. At the beginning of the Great Depression, the 6-ounce Coca-Cola bottle was the undisputed king of soft drinks. The situation began to change in 1934, when the smallish Pepsi-Cola company began selling 12-ounce bottles for the same nickel price as 6 ounces of Coke. The move was brilliant. Distribution, bottling, and advertising accounted for most of the company’s costs, so adding six free ounces hardly mattered. In addition, the 12-ounce size enabled Pepsi-Cola to use the same bottles as beer-makers, cutting container costs. The company pursued a similar strategy at the nation’s soda fountains, selling enough syrup to make 10 ounces for the same price as 6 ounces’ worth of Coca-Cola. Pepsi sales soared, and the company soon produced a jingle about their supersize bottles: “Pepsi-Cola hits the spot, 12 full ounces, that’s a lot. Twice as much for a nickel, too. Pepsi-Cola is the drink for you.” Pepsi’s value-for-volume gambit kicked off a decades-long industry trend.

Coke was slow to respond at first, according to author Mark Pendergrast, who chronicled the company’s history in For God, Country, and Coca-Cola: The Definitive History of the Great American Soft Drink and the Company That Makes It. President Robert Woodruff held firm to the 6-ounce size, even as his subordinates warned him that Pepsi was onto something. By the 1950s, industry observers predicted that Coca-Cola might lose its dominant position, and top company executives were threatening to resign if Woodruff didn’t bend on bottle size. In 1955, 10- and 12-ounce “King Size” Coke bottles hit the market, along with a 26-ounce “Family Size.” Although the new flexibility helped Coca-Cola regain its footing, the brave new world of giant bottles was hard to accept for some. Company vice president Ed Forio noted that “bringing out another bottle was like being unfaithful to your wife.”

The trend toward larger sizes occurred in all sectors of the market. When Coca-Cola partnered with McDonald’s in the 1950s, the original fountain soda at the restaurant chain more closely approximated the classic Coke bottle at seven ounces. The largest cup size grew to 16 ounces in the 1960s and hit 21 ounces by 1974.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Big Gulp. Courtesy of Chicago Tribune.[end-div]

Fusion and the Z Machine

The quest to tap fusion as an energy source here on Earth continues to inch forward with some promising new developments. Of course, we mean nuclear fusion — the kind that makes our parent star shine, not the now debunked “cold fusion” supposedly demonstrated in a test tube in the late 1980s.

[div class=attrib]From Wired:[end-div]

In the high-stakes race to realize fusion energy, a smaller lab may be putting the squeeze on the big boys. Worldwide efforts to harness fusion—the power source of the sun and stars—for energy on Earth currently focus on two multibillion dollar facilities: the ITER fusion reactor in France and the National Ignition Facility (NIF) in California. But other, cheaper approaches exist—and one of them may have a chance to be the first to reach “break-even,” a key milestone in which a process produces more energy than needed to trigger the fusion reaction.

Researchers at the Sandia National Laboratory in Albuquerque, New Mexico, will announce in a Physical Review Letters (PRL) paper accepted for publication that their process, known as magnetized liner inertial fusion (MagLIF) and first proposed 2 years ago, has passed the first of three tests, putting it on track for an attempt at the coveted break-even. Tests of the remaining components of the process will continue next year, and the team expects to take its first shot at fusion before the end of 2013.

Fusion reactors heat and squeeze a plasma—an ionized gas—composed of the hydrogen isotopes deuterium and tritium, compressing the isotopes until their nuclei overcome their mutual repulsion and fuse together. Out of this pressure-cooker emerge helium nuclei, neutrons, and a lot of energy. The temperature required for fusion is more than 100 million°C—so you have to put a lot of energy in before you start to get anything out. ITER and NIF are planning to attack this problem in different ways. ITER, which will be finished in 2019 or 2020, will attempt fusion by containing a plasma with enormous magnetic fields and heating it with particle beams and radio waves. NIF, in contrast, takes a tiny capsule filled with hydrogen fuel and crushes it with a powerful laser pulse. NIF has been operating for a few years but has yet to achieve break-even.

Sandia’s MagLIF technique is similar to NIF’s in that it rapidly crushes its fuel—a process known as inertial confinement fusion. But to do it, MagLIF uses a magnetic pulse rather than lasers. The target in MagLIF is a tiny cylinder about 7 millimeters in diameter; it’s made of beryllium and filled with deuterium and tritium. The cylinder, known as a liner, is connected to Sandia’s vast electrical pulse generator (called the Z machine), which can deliver 26 million amps in a pulse lasting milliseconds or less. That much current passing down the walls of the cylinder creates a magnetic field that exerts an inward force on the liner’s walls, instantly crushing it—and compressing and heating the fusion fuel.
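To get a feel for the scale of that inward force, here is a back-of-envelope sketch (my own illustration, not a calculation from the article) that treats the liner as a long straight conductor: the field at its surface is B = μ0·I/(2πr), and the corresponding magnetic pressure is B²/(2μ0). The 26 million amps and roughly 7-millimeter diameter are the figures quoted above; the straight-wire geometry is a simplifying assumption.

```python
import math

MU_0 = 4 * math.pi * 1e-7  # vacuum permeability, in T*m/A

def surface_field_and_pressure(current_amps, radius_m):
    """Field at the surface of a long straight conductor, and the resulting
    magnetic pressure (a crude stand-in for the real MagLIF liner geometry)."""
    b_field = MU_0 * current_amps / (2 * math.pi * radius_m)  # tesla
    pressure = b_field ** 2 / (2 * MU_0)                      # pascals
    return b_field, pressure

# Figures quoted above: 26 million amps down a liner about 7 mm across.
b, p = surface_field_and_pressure(26e6, 0.0035)
print(f"surface field     ~ {b:,.0f} tesla")      # on the order of 1,500 T
print(f"magnetic pressure ~ {p / 1e9:,.0f} GPa")  # hundreds of gigapascals
```

Pressures that exceed the strength of any solid by several orders of magnitude are what crush the liner almost instantly.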

Researchers have known about this technique of crushing a liner to heat the fusion fuel for some time. But the MagLIF-Z machine setup on its own didn’t produce quite enough heat; something extra was needed to make the process capable of reaching break-even. Sandia researcher Steve Slutz led a team that investigated various enhancements through computer simulations of the process. In a paper published in Physics of Plasmas in 2010, the team predicted that break-even could be reached with three enhancements.

First, they needed to apply the current pulse much more quickly, in just 100 nanoseconds, to increase the implosion velocity. They would also preheat the hydrogen fuel inside the liner with a laser pulse just before the Z machine kicks in. And finally, they would position two electrical coils around the liner, one at each end. These coils produce a magnetic field that links the two coils, wrapping the liner in a magnetic blanket. The magnetic blanket prevents charged particles, such as electrons and helium nuclei, from escaping and cooling the plasma—so the temperature stays hot.

Sandia plasma physicist Ryan McBride is leading the effort to see if the simulations are correct. The first item on the list is testing the rapid compression of the liner. One critical parameter is the thickness of the liner wall: The thinner the wall, the faster it will be accelerated by the magnetic pulse. But the wall material also starts to evaporate away during the pulse, and if it breaks up too early, it will spoil the compression. On the other hand, if the wall is too thick, it won’t reach a high enough velocity. “There’s a sweet spot in the middle where it stays intact and you still get a pretty good implosion velocity,” McBride says.

To test the predicted sweet spot, McBride and his team set up an elaborate imaging system that involved blasting a sample of manganese with a high-powered laser (actually a NIF prototype moved to Sandia) to produce x-rays. By shining the x-rays through the liner at various stages in its implosion, the researchers could image what was going on. They found that at the sweet-spot thickness, the liner held its shape right through the implosion. “It performed as predicted,” McBride says. The team aims to test the other two enhancements—the laser preheating and the magnetic blanket—in the coming year, and then put it all together to take a shot at break-even before the end of 2013.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Z Pulsed Power Facility produces tremendous energy when it fires. Courtesy of Sandia National Laboratory.[end-div]

GDP of States Versus Countries

A nifty or neat (depending upon your location) map courtesy of Frank Jacobs over at Strange Maps. This one replaces each U.S. state with a country whose GDP (Gross Domestic Product) is similar. For instance, Canada replaces Texas, since Canada’s entire GDP roughly matches the economy of Texas. The map is based on data for 2007.

[div class=attrib]Read the entire article after the jump.[end-div]

A Link Between BPA and Obesity

You have probably heard of BPA. It’s a compound used in the manufacture of many plastics, especially hard, polycarbonate plastics. Interestingly, it has hormone-like characteristics, mimicking estrogen. As a result, BPA crops up in many studies that show adverse health effects. As a precaution, the U.S. Food and Drug Administration (FDA) recently banned the use of BPA in products aimed at young children, such as baby bottles and sippy cups. But the evidence remains inconsistent, so BPA is still found in many products today. Now comes another study linking BPA to obesity.

[div class=attrib]From Smithsonian:[end-div]

Since the 1960s, manufacturers have widely used the chemical bisphenol-A (BPA) in plastics and food packaging. Only recently, though, have scientists begun thoroughly looking into how the compound might affect human health—and what they’ve found has been a cause for concern.

Starting in 2006, a series of studies, mostly in mice, indicated that the chemical might act as an endocrine disruptor (by mimicking the hormone estrogen), cause problems during development and potentially affect the reproductive system, reducing fertility. After a 2010 Food and Drug Administration report warned that the compound could pose an especially hazardous risk for fetuses, infants and young children, BPA-free water bottles and food containers started flying off the shelves. In July, the FDA banned the use of BPA in baby bottles and sippy cups, but the chemical is still present in aluminum cans, containers of baby formula and other packaging materials.

Now comes another piece of data on a potential risk from BPA but in an area of health in which it has largely been overlooked: obesity. A study by researchers from New York University, published today in the Journal of the American Medical Association, looked at a sample of nearly 3,000 children and teens across the country and found a “significant” link between the amount of BPA in their urine and the prevalence of obesity.

“This is the first association of an environmental chemical in childhood obesity in a large, nationally representative sample,” said lead investigator Leonardo Trasande, who studies the role of environmental factors in childhood disease at NYU. “We note the recent FDA ban of BPA in baby bottles and sippy cups, yet our findings raise questions about exposure to BPA in consumer products used by older children.”

The researchers pulled data from the 2003 to 2008 National Health and Nutrition Examination Surveys, and after controlling for differences in ethnicity, age, caregiver education, income level, sex, caloric intake, television viewing habits and other factors, they found that children and adolescents with the highest levels of BPA in their urine had a 2.6 times greater chance of being obese than those with the lowest levels. Overall, 22.3 percent of those in the quartile with the highest levels of BPA were obese, compared with just 10.3 percent of those in the quartile with the lowest levels of BPA.
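As a rough cross-check on those headline figures (my arithmetic, not the study's analysis), the raw quartile percentages quoted above imply an unadjusted odds ratio close to the adjusted 2.6 reported by the authors:

```python
# Raw figures quoted above: 22.3% obese in the highest-BPA quartile,
# 10.3% in the lowest. The study's 2.6 is adjusted for covariates
# (ethnicity, age, income and so on); this is only the crude ratio.
p_high, p_low = 0.223, 0.103

risk_ratio = p_high / p_low
odds_ratio = (p_high / (1 - p_high)) / (p_low / (1 - p_low))

print(f"unadjusted risk ratio: {risk_ratio:.2f}")   # ~2.2
print(f"unadjusted odds ratio: {odds_ratio:.2f}")   # ~2.5, near the reported 2.6
```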

The vast majority of BPA in our bodies comes from ingestion of contaminated food and water. The compound is often used as an internal barrier in food packaging, so that the product we eat or drink does not come into direct contact with a metal can or plastic container. When heated or washed, though, plastics containing BPA can break down and release the chemical into the food or liquid they hold. As a result, roughly 93 percent of the U.S. population has detectable levels of BPA in their urine.

The researchers point specifically to the continuing presence of BPA in aluminum cans as a major problem. “Most people agree the majority of BPA exposure in the United States comes from aluminum cans,” Trasande said. “Removing it from aluminum cans is probably one of the best ways we can limit exposure. There are alternatives that manufacturers can use to line aluminum cans.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bisphenol A. Courtesy of Wikipedia.[end-div]

As Simple as abc; As Difficult as ABC

As children we all learn our abc’s; as adults very few ponder the ABC Conjecture in mathematics. The first is often a simple task of rote memorization; the second is a troublesome mathematical problem with a fiendishly complex solution (maybe).

[div class=attrib]From the New Scientist:[end-div]

Whole numbers, addition and multiplication are among the first things schoolchildren learn, but a new mathematical proof shows that even the world’s best minds have plenty more to learn about these seemingly simple concepts.

Shinichi Mochizuki of Kyoto University in Japan has torn up these most basic of mathematical concepts and reconstructed them as never before. The result is a fiendishly complicated proof for the decades-old “ABC conjecture” – and an alternative mathematical universe that should prise open many other outstanding enigmas.

To boot, Mochizuki’s proof also offers an alternative explanation for Fermat’s last theorem, one of the most famous results in the history of mathematics but not fully proven until 1995 (see “Fermat’s last theorem made easy”, below).

The ABC conjecture starts with the most basic equation in algebra, adding two whole numbers, or integers, to get another: a + b = c. First posed in 1985 by Joseph Oesterlé and David Masser, it places constraints on the interactions of the prime factors of these numbers, primes being the indivisible building blocks that can be multiplied together to produce all integers.

Dense logic

Take 81 + 64 = 145, which breaks down into the prime building blocks 3 × 3 × 3 × 3 + 2 × 2 × 2 × 2 × 2 × 2 = 5 × 29. Simplified, the conjecture says that the large amount of smaller primes on the equation’s left-hand side is always balanced by a small amount of larger primes on the right – the addition restricts the multiplication, and vice versa.
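To make the example concrete, the quantity the conjecture actually constrains is the “radical” of a·b·c, the product of its distinct prime factors; for most coprime triples a + b = c, the radical is not much smaller than c. A minimal sketch (mine, not from the article):

```python
def radical(n):
    """Product of the distinct prime factors of n, by trial division."""
    rad, p = 1, 2
    while p * p <= n:
        if n % p == 0:
            rad *= p
            while n % p == 0:
                n //= p
        p += 1
    return rad * n if n > 1 else rad

# The article's example: 81 + 64 = 145, i.e. 3^4 + 2^6 = 5 * 29.
a, b, c = 81, 64, 145
r = radical(a * b * c)   # distinct primes are 2, 3, 5 and 29, so r = 870
print(r, r > c)          # 870 True: a typical, non-exceptional triple
```

Roughly speaking, the conjecture asserts that for any exponent greater than 1, only finitely many coprime triples have c larger than the radical raised to that exponent.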

“The ABC conjecture in some sense exposes the relationship between addition and multiplication,” says Jordan Ellenberg of the University of Wisconsin-Madison. “To learn something really new about them at this late date is quite startling.”

Though rumours of Mochizuki’s proof started spreading on mathematics blogs earlier this year, it was only last week that he posted a series of papers on his website detailing what he calls “inter-universal geometry”, one of which claims to prove the ABC conjecture. Only now are mathematicians attempting to decipher its dense logic, which spreads over 500 pages.

So far the responses are cautious, but positive. “It will be fabulously exciting if it pans out, but experience suggests that that’s quite a big ‘if’,” wrote University of Cambridge mathematician Timothy Gowers on Google+.

Alien reasoning

“It is going to be a while before people have a clear idea of what Mochizuki has done,” Ellenberg told New Scientist. “Looking at it, you feel a bit like you might be reading a paper from the future, or from outer space,” he added on his blog.

Mochizuki’s reasoning is alien even to other mathematicians because it probes deep philosophical questions about the foundations of mathematics, such as what we really mean by a number, says Minhyong Kim at the University of Oxford. The early 20th century saw a crisis emerge as mathematicians realised they actually had no formal way to define a number – we can talk about “three apples” or “three squares”, but what exactly is the mathematical object we call “three”? No one could say.

Eventually numbers were redefined in terms of sets, rigorously specified collections of objects, and mathematicians now know that the true essence of the number zero is a set which contains no objects – the empty set – while the number one is a set which contains one empty set. From there, it is possible to derive the rest of the integers.

But this was not the end of the story, says Kim. “People are aware that many natural mathematical constructions might not really fall into the universe of sets.”

Terrible deformation

Rather than using sets, Mochizuki has figured out how to translate fundamental mathematical ideas into objects that only exist in new, conceptual universes. This allowed him to “deform” basic whole numbers and push their innate relationships – such as multiplication and addition – to the limit. “He is literally taking apart conventional objects in terrible ways and reconstructing them in new universes,” says Kim.

These new insights led him to a proof of the ABC conjecture. “How he manages to come back to the usual universe in a way that yields concrete consequences for number theory, I really have no idea as yet,” says Kim.

Because of its fundamental nature, a verified proof of ABC would set off a chain reaction, in one swoop proving many other open problems and deepening our understanding of the relationships between integers, fractions, decimals, primes and more.

Ellenberg compares proving the conjecture to the discovery of the Higgs boson, which particle physicists hope will reveal a path to new physics. But while the Higgs emerged from the particle detritus of a machine specifically designed to find it, Mochizuki’s methods are completely unexpected, providing new tools for mathematical exploration.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Clare College Cambridge.[end-div]

What is the True Power of Photography?

Hint. The answer is not shameless self-promotion or exploitative voyeurism; images used in this way may scratch a personal itch, but rarely influence fundamental societal or political behavior. Importantly, photography has given us a rich, nuanced and lasting medium for artistic expression since cameras and film were first invented. However, the principal answer lies in photography’s ability to tell the truth about, and to, power.

Michael Glover reminds us of this critical role through the works of a dozen of the most influential photographers from the 1960s and 1970s. Their collective works are on display at a new exhibit at the Barbican Art Gallery, London, which runs until mid-January 2013.

[div class=attrib]From the Independent:[end-div]

Photography has become so thoroughly prostituted as a means of visual exchange, available to all or none for every purpose under the sun (or none worthy of the name), that it is easy to forget that until relatively recently one of the most important consequences of fearless photographic practice was to tell the truth about power.

This group show at the Barbican focuses on the work of 12 photographers from around the world, including Vietnam, India, the US, Mexico, Japan, China, Ukraine, Germany, Mali and South Africa, examining their photographic practice in relation to the particular historical moments through which they lived. The covert eye of the camera often shows us what the authorities do not want us to see: the bleak injustice of life lived under apartheid; the scarring aftermath of the allied bombing and occupation of Japan; the brutish day-to-day realities of the Vietnam war.

Photography, it has often been said, documents the world. This suggests that the photographer might be a dispassionate observer of neutral spaces, more machine than emotive being. Nonsense. Using a camera is the photographer’s own way of discovering his or her own particular angle of view. It is a point of intersection between self and world. There is no such thing as a neutral landscape; there is only ever a personal landscape, cropped by the ever quizzical human eye. The good photographer, in the words of Bruce Davidson, the man (well represented in this show) who tirelessly and fearlessly chronicled the fight for civil rights in America in the early 1960s, seeks out the “emotional truth” of a situation.

For more than half a century, David Goldblatt, born in the mining town of Randfontein of Lithuanian Jewish parentage, has been chronicling the social divisions of South Africa. Goldblatt’s images are stark, forensic and pitiless, from the matchbox houses in the dusty, treeless streets of 1970s Soweto, to the lean man in the hat who is caught wearily and systematically butchering the coal-merchant’s dead horse for food in a bleak scrubland of wrecked cars. Goldblatt captures the day-to-day life of the Afrikaners: their narrowness of view; that tenacious conviction of rightness; the visceral bond with the soil. There is nothing demonstrative or rhetorical about his work. It is utterly, monochromatically sober, and quite subtly focused on the job in hand, as if he wishes to say to the onlooker that reality is quite stark enough.

Boris Mikhailov, wild, impish and contrarian in spirit, turns photography into a self-consciously subversive art form. Born in Kharkov in Ukraine under communism, he builds photographic montages that represent a ferociously energetic fight-back against the grinding dullness, drabness and tedium of accepted notions of conformity. He frames a sugary image of a Kremlin tower in a circlet of slabs of raw meat. He reduces accepted ideas of beauty to kitsch. Underwear swings gaily in the air beside a receding railway track. He mercilessly lampoons the fact that the authorities forbade the photographing of nudity. This is the not-so-gentle art of blowing red raspberries.

Shomei Tomatsu has been preoccupied all his life by a single theme that he circles around obsessively: the American occupation of Japan in the aftermath of its humiliating military capitulation. Born in 1930, he still lives in Okinawa, the island from which the Americans launched their B52s during the Vietnam war. His angle of view suggests a mixture of abhorrence with the invasion of an utterly alien culture and a fascination with its practical consequences: a Japanese child blows a huge chewing gum bubble beside a street sign that reads “Bar Oasis”. The image of the child is distorted in the bubble.

But this show is not all about cocking a snook at authority. It is also about aesthetic issues: the use of colour as a way of shaping a different kind of reality, for example. William Eggleston made his series of photographic portraits of ordinary people from Memphis, Tennessee, often at night, in the 1970s. These are seemingly casual and immediate moments of intimate engagement between photographer and subject. Until this moment, colour had often been used by the camera (and especially the movie camera), not to particularise but to glamorise. Not so here. Eggleston is especially good at registering the lonely decrepitude of objects – a jukebox on a Memphis wall; the reptilian patina of a rusting street light; the resonance of an empty room in Las Vegas.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of “Everything Was Moving: Photography from the 60s and 70s”, Barbican Art Gallery. Copyright Bruce Davidson / Magnum Photos.[end-div]

Social Outcast = Creative Wunderkind

A recent study published in the Journal of Experimental Psychology correlates social ostracism and rejection with creativity. Businesses seeking creative individuals take note: perhaps your next great hire is a social misfit.

[div class=attrib]From Fast Company:[end-div]

Are you a recovering high school geek who still can’t get the girl? Are you always the last person picked for your company’s softball team? When you watched Office Space, did you feel a special kinship to the stapler-obsessed Milton Waddams? If you answered yes to any of these questions, do not despair. Researchers at Johns Hopkins and Cornell have recently found that the socially rejected might also be society’s most creatively powerful people.

The study, which is forthcoming in the Journal of Experimental Psychology, is called “Outside Advantage: Can Social Rejection Fuel Creative Thought?” It found that people who already have a strong “self-concept”–i.e. are independently minded–become creatively fecund in the face of rejection. “We were inspired by the stories of highly creative individuals like Steve Jobs and Lady Gaga,” says the study’s lead author, Hopkins professor Sharon Kim. “And we wanted to find a silver lining in all the popular press about bullying. There are benefits to being different.”

The study consisted of 200 Cornell students and set out to identify the relationship between the strength of an individual’s self-concept and their level of creativity. First, Kim tested the strength of each student’s self-concept by assessing his or her “need for uniqueness.” In other words, how important it is for each individual to feel separate from the crowd. Next, students were told that they’d either been included in or rejected from a hypothetical group project. Finally, they were given a simple, but creatively demanding, task: Draw an alien from a planet unlike earth.

If you’re curious about your own general creativity level (at least by the standards of Kim’s study), go ahead and sketch an alien right now…Okay, got your alien? Now give yourself a point for every non-human characteristic you’ve included in the drawing. If your alien has two eyes between the nose and forehead, you don’t get any points. If your alien has two eyes below the mouth, or three eyes that breathe fire, you get a point. If your alien doesn’t even have eyes or a mouth, give yourself a bunch of points. In short, the more dissimilar your alien is to a human, the higher your creativity score.

Kim found that people with a strong self-concept who were rejected produced more creative aliens than people from any other group, including people with a strong self-concept who were accepted. “If you’re in a mindset where you don’t care what others think,” she explained, “you’re open to ideas that you may not be open to if you’re concerned about what other people are thinking.”

This may seem like an obvious conclusion, but Kim pointed out that most companies don’t encourage the kind of freedom and independence that readers of Fast Company probably expect. “The benefits of being different is not a message everyone is getting,” she said.

But Kim also discovered something unexpected. People with a weak self-concept could be influenced toward a stronger one and, thus, toward a more creative mindset. In one part of the study, students were asked to read a short story in which all the pronouns were either singular (I/me) or plural (we/us) and then to circle all the pronouns. They were then “accepted” or “rejected” and asked to draw their aliens.

Kim found that all of the students who read stories with singular pronouns and were rejected produced more creative aliens. Even the students who originally had a weaker self-concept. Once these group-oriented individuals focused on individual-centric prose, they became more individualized themselves. And that made them more creative.

This finding doesn’t prove that you can teach someone to have a strong self-concept but it suggests that you can create a professional environment that facilitates independent and creative thought.

[div class=attrib]Read the entire article after the jump.[end-div]

Work as Punishment (and For the Sake of Leisure)

Gary Gutting, professor of philosophy at the University of Notre Dame, reminds us that work is punishment for Adam’s sin, according to the Book of Genesis. No doubt, many who hold other faiths, as well as those who don’t, may tend to agree with this basic notion.

So, what on earth is work for?

Gutting goes on to remind us that Aristotle and Bertrand Russell had it right: that work is for the sake of leisure.

[div class=attrib]From the New York Times:[end-div]

Is work good or bad?  A fatuous question, it may seem, with unemployment such a pressing national concern.  (Apart from the names of the two candidates, “jobs” was the politically relevant word most used by speakers at the Republican and Democratic conventions.) Even apart from current worries, the goodness of work is deep in our culture. We applaud people for their work ethic, judge our economy by its productivity and even honor work with a national holiday.

But there’s an underlying ambivalence: we celebrate Labor Day by not working, the Book of Genesis says work is punishment for Adam’s sin, and many of us count the days to the next vacation and see a contented retirement as the only reason for working.

We’re ambivalent about work because in our capitalist system it means work-for-pay (wage-labor), not for its own sake.  It is what philosophers call an instrumental good, something valuable not in itself but for what we can use it to achieve.  For most of us, a paying job is still utterly essential — as masses of unemployed people know all too well.  But in our economic system, most of us inevitably see our work as a means to something else: it makes a living, but it doesn’t make a life.

What, then, is work for? Aristotle has a striking answer: “we work to have leisure, on which happiness depends.” This may at first seem absurd. How can we be happy just doing nothing, however sweetly (dolce far niente)?  Doesn’t idleness lead to boredom, the life-destroying ennui portrayed in so many novels, at least since “Madame Bovary”?

Everything depends on how we understand leisure. Is it mere idleness, simply doing nothing?  Then a life of leisure is at best boring (a lesson of Voltaire’s “Candide”), and at worst terrifying (leaving us, as Pascal says, with nothing to distract from the thought of death).  No, the leisure Aristotle has in mind is productive activity enjoyed for its own sake, while work is done for something else.

We can pass by for now the question of just what activities are truly enjoyable for their own sake — perhaps eating and drinking, sports, love, adventure, art, contemplation? The point is that engaging in such activities — and sharing them with others — is what makes a good life. Leisure, not work, should be our primary goal.

Bertrand Russell, in his classic essay “In Praise of Idleness,” agrees. “A great deal of harm,” he says, “is being done in the modern world by belief in the virtuousness of work.” Instead, “the road to happiness and prosperity lies in an organized diminution of work.” Before the technological breakthroughs of the last two centuries, leisure could be only “the prerogative of small privileged classes,” supported by slave labor or a near equivalent. But this is no longer necessary: “The morality of work is the morality of slaves, and the modern world has no need of slavery.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Bust of Aristotle. Marble, Roman copy after a Greek bronze original by Lysippos from 330 BC; the alabaster mantle is a modern addition. Courtesy of Wikipedia.[end-div]

Innovation Before Its Time

Product-driven companies, inventors from all backgrounds and market researchers have long studied how some innovations take off while others fizzle. So, why do some innovations gain traction? Given two similar but competing inventions, what factors lead to one eclipsing the other? Why do some pioneering ideas and inventions fail, only to succeed in the hands of a different instigator years, sometimes decades, later? Answers to these questions would undoubtedly make many inventors household names, but as is the case in most human endeavors, the process of innovation is murky and more of an art than a science.

Author and columnist Matt Ridley offers some possible answers to the conundrum.

[div class=attrib]From the Wall Street Journal:[end-div]

Bill Moggridge, who invented the laptop computer in 1982, died last week. His idea of using a hinge to attach a screen to a keyboard certainly caught on big, even if the first model was heavy, pricey and equipped with just 340 kilobytes of memory. But if Mr. Moggridge had never lived, there is little doubt that somebody else would have come up with the idea.

The phenomenon of multiple discovery is well known in science. Innovations famously occur to different people in different places at the same time. Whether it is calculus (Newton and Leibniz), or the planet Neptune (Adams and Le Verrier), or the theory of natural selection (Darwin and Wallace), or the light bulb (Edison, Swan and others), the history of science is littered with disputes over bragging rights caused by acts of simultaneous discovery.

As Kevin Kelly argues in his book “What Technology Wants,” there is an inexorability about technological evolution, expressed in multiple discovery, that makes it look as if technological innovation is an autonomous process with us as its victims rather than its directors.

Yet some inventions seem to have occurred to nobody until very late. The wheeled suitcase is arguably such a, well, case. Bernard Sadow applied for a patent on wheeled baggage in 1970, after a Eureka moment when he was lugging his heavy bags through an airport while a local worker effortlessly pushed a large cart past. You might conclude that Mr. Sadow was decades late. There was little to stop his father or grandfather from putting wheels on bags.

Mr. Sadow’s bags ran on four wheels, dragged on a lead like a dog. Seventeen years later a Northwest Airlines pilot, Robert Plath, invented the idea of two wheels on a suitcase held vertically, plus a telescopic handle to pull it with. This “Rollaboard,” now ubiquitous, also feels as if it could have been invented much earlier.

Or take the can opener, invented in the 1850s, decades after the can. Early 19th-century soldiers and explorers had to make do with stabbing bayonets into food cans. “Why doesn’t somebody come up with a wheeled cutter?” they must have muttered (or not) as they wrenched open the cans.

Perhaps there’s something that could be around today but hasn’t been invented and that will seem obvious to future generations. Or perhaps not. It’s highly unlikely that brilliant inventions are lying on the sidewalk ignored by the millions of entrepreneurs falling over each other to innovate. Plenty of terrible ideas are tried every day.

Understanding why inventions take so long may require mentally revisiting a long-ago time. For a poorly paid Napoleonic soldier who already carried a decent bayonet, adding a can opener to his limited kitbag was probably a waste of money and space. Indeed, going back to wheeled bags, if you consider the abundance of luggage porters with carts in the 1960s, the ease of curbside drop-offs at much smaller airports and the heavy iron casters then available, 1970 seems about the right date for the first invention of rolling luggage.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Joseph Swan, inventor of the incandescent light bulb, which was first publicly demonstrated on 18 December 1878. Courtesy of Wikipedia.[end-div]

An Answer is Blowing in the Wind

Two recent studies report that the world (i.e., humans) could meet its entire electrical energy needs from several million wind turbines.

[div class=attrib]From Ars Technica:[end-div]

Is there enough wind blowing across the planet to satiate our demands for electricity? If there is, would harnessing that much of it begin to actually affect the climate?

Two studies published this week tried to answer these questions. Long story short: we could supply all our power needs for the foreseeable future from wind, all without affecting the climate in a significant way.

The first study, published in this week’s Nature Climate Change, was performed by Kate Marvel of Lawrence Livermore National Laboratory with Ben Kravitz and Ken Caldeira of the Carnegie Institution for Science. Their goal was to determine a maximum geophysical limit to wind power—in other words, if we extracted all the kinetic energy from wind all over the world, how much power could we generate?

In order to calculate this power limit, the team used the Community Atmosphere Model (CAM), developed by the National Center for Atmospheric Research. Turbines were represented as drag forces removing momentum from the atmosphere, and the wind power was calculated as the rate of kinetic energy transferred from the wind to these momentum sinks. By increasing the drag forces, a power limit was reached where no more energy could be extracted from the wind.

The authors found that at least 400 terawatts could be extracted by ground-based turbines—represented by drag forces on the ground—and 1,800 terawatts by high-altitude turbines—represented by drag forces throughout the atmosphere. For some perspective, the current global power demand is around 18 terawatts.

The second study, published in the Proceedings of the National Academy of Sciences by Mark Jacobson at Stanford and Cristina Archer at the University of Delaware, asked some more practical questions about the limits of wind power. For example, rather than some theoretical physical limit, what is the maximum amount of power that could actually be extracted by real turbines?

For one thing, turbines can’t extract all the kinetic energy from wind—no matter the design, 59.3 percent, the Betz limit, is the absolute maximum. Less-than-perfect efficiencies based on the specific turbine design reduce the extracted power further.
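To anchor those percentages, the power carried by the wind through a rotor's swept area is ½·ρ·A·v³, and the Betz limit of 16/27 (about 59.3 percent) caps the fraction any turbine can extract. The sketch below is my own illustration under assumed conditions, not a calculation from either study; the rotor size and wind speed are hypothetical.

```python
import math

RHO_AIR = 1.225        # air density in kg/m^3 at sea level (assumed)
BETZ_LIMIT = 16 / 27   # ~0.593, the theoretical maximum extraction fraction

def extracted_power_watts(rotor_diameter_m, wind_speed_ms, efficiency=BETZ_LIMIT):
    """Extracted power = efficiency * 1/2 * rho * A * v^3."""
    swept_area = math.pi * (rotor_diameter_m / 2) ** 2
    return efficiency * 0.5 * RHO_AIR * swept_area * wind_speed_ms ** 3

# A hypothetical 5 MW-class machine: 125 m rotor in a steady 12 m/s wind.
ideal_mw = extracted_power_watts(125, 12) / 1e6
print(f"Betz-limited output: {ideal_mw:.1f} MW")
# Real turbines capture less than the Betz figure (blade and drivetrain losses),
# which is part of why practical estimates sit well below the geophysical limit.
```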

Another important consideration is that, for a given area, you can only add so many turbines before hitting a limit on power extraction—the area is “saturated,” and any power increase you get by adding more turbines ends up matched by a drop in power from existing ones. This happens because the wakes from turbines near each other interact and reduce the ambient wind speed. Jacobson and Archer expanded this concept to a global level, calculating the saturation wind power potential for both the entire globe and all land except Antarctica.

Like the first study, this one considered both surface turbines and high-altitude turbines located in the jet stream. Unlike the model used in the first study, though, these were placed at specific altitudes: 100 meters, the hub height of most modern turbines, and 10 kilometers. The authors argue improper placement will lead to incorrect reductions in wind speed.

Jacobson and Archer found that, with turbines placed all over the planet, including the oceans, wind power saturates at about 250 terawatts, corresponding to nearly three thousand terawatts of installed capacity. If turbines are just placed on land and shallow offshore locations, the saturation point is 80 terawatts for 1,500 terawatts of installed power.

For turbines at the jet-stream height, they calculated a maximum power of nearly 400 terawatts—about 150 percent of that at 100 meters.

These results show that, even at the saturation point, we could extract enough wind power to supply global demands many times over. Unfortunately, the numbers of turbines required aren’t plausible—300 million five-megawatt turbines in the smallest case (land plus shallow offshore).
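A quick sanity check on those closing numbers (my arithmetic, not the paper's): 300 million turbines at 5 MW apiece is exactly the 1,500 TW of installed capacity mentioned above, of which only about 80 TW would be extracted at saturation, still several times the quoted 18 TW of global demand.

```python
TURBINES = 300e6     # turbine count quoted for the land + shallow-offshore case
RATING_W = 5e6       # 5 MW nameplate rating each
EXTRACTED_TW = 80    # saturation extraction quoted for that case
DEMAND_TW = 18       # current global power demand quoted earlier in the article

installed_tw = TURBINES * RATING_W / 1e12
print(f"installed capacity: {installed_tw:,.0f} TW")             # 1,500 TW
print(f"extracted fraction: {EXTRACTED_TW / installed_tw:.1%}")  # ~5%
print(f"multiple of demand: {EXTRACTED_TW / DEMAND_TW:.1f}x")    # ~4.4x
```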

[div class=attrib]Read the entire article after the jump.[end-div]

Let the Wealthy Fund Innovation?

Nathan Myhrvold, former CTO of Microsoft, suggests that the wealthy should “think big” by funding large-scale and long-term innovation. Arguably, this would be a much preferred alternative to the wealthy using their millions to gain (more) political influence in much of the West, especially the United States. Myhrvold is now a backer of TerraPower, a nuclear energy startup.

[div class=attrib]From Technology Review:[end-div]

For some technologists, it’s enough to build something that makes them financially successful. They retire happily. Others stay with the company they founded for years and years, enthralled with the platform it gives them. Think how different the work Steve Jobs did at Apple in 2010 was from the innovative ride he took in the 1970s.

A different kind of challenge is to start something new. Once you’ve made it, a new venture carries some disadvantages. It will be smaller than your last company, and more frustrating. Startups require a level of commitment not everyone is ready for after tasting success. On the other hand, there’s no better time than that to be an entrepreneur. You’re not gambling your family’s entire future on what happens next. That is why many accomplished technologists are out in the trenches, leading and funding startups in unprecedented areas.

Jeff Bezos has Blue Origin, a company that builds spaceships. Elon Musk has Tesla, an electric-car company, and SpaceX, another rocket-ship company. Bill Gates took on big challenges in the developing world—combating malaria, HIV, and poverty. He is also funding inventive new companies at the cutting edge of technology. I’m involved in some of them, including TerraPower, which we formed to commercialize a promising new kind of nuclear reactor.

There are few technologies more daunting to inventors (and investors) than nuclear power. On top of the logistics, science, and engineering, you have to deal with the regulations and politics. In the 1970s, much of the world became afraid of nuclear energy, and last year’s events in Fukushima haven’t exactly assuaged those fears.

So why would any rational group of people create a nuclear power company? Part of the reason is that Bill and I have been primed to think long-term. We have the experience and resources to look for game-changing ideas—and the confidence to act when we think we’ve found one. Other technologists who fund ambitious projects have similar motivations. Elon Musk and Jeff Bezos are literally reaching for the stars because they believe NASA and its traditional suppliers can’t innovate at the same rate they can.

In the next few decades, we need more technology leaders to reach for some very big advances. If 20 of us were to try to solve energy problems—with carbon capture and storage, or perhaps some other crazy idea—maybe one or two of us would actually succeed. If nobody tries, we’ll all certainly fail.

I believe the world will need to rely on nuclear energy. A looming energy crisis will force us to rework the underpinnings of our energy economy. That happened last in the 19th century, when we moved at unprecedented scale toward gas and oil. The 20th century didn’t require a big switcheroo, but looking into the 21st century, it’s clear that we have a much bigger challenge.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Nathan Myhrvold. Courtesy of AllThingsD.[end-div]

What’s All the Fuss About Big Data?

We excerpt an interview with big data pioneer and computer scientist Alex Pentland, via the Edge. Pentland is a leading thinker in computational social science and currently directs the Human Dynamics Laboratory at MIT.

While there is no exact definition of “big data”, it tends to be characterized quantitatively and qualitatively differently from data commonly used by most organizations. Whereas regular data can be stored, processed and analyzed using common database tools and analytical engines, big data refers to vast collections of data that often lie beyond the realm of regular computation. Big data therefore often requires vast, specialized storage and enormous processing capabilities. Data sets that fall into the big data category cover such areas as climate science, genomics, particle physics, and computational social science.

Big data holds true promise. However, while storage and processing power now enable quick and efficient crunching of tera- and even petabytes of data, tools for comprehensive analysis and visualization lag behind.

[div class=attrib]Alex Pentland via the Edge:[end-div]

Recently I seem to have become MIT’s Big Data guy, with people like Tim O’Reilly and “Forbes” calling me one of the seven most powerful data scientists in the world. I’m not sure what all of that means, but I have a distinctive view about Big Data, so maybe it is something that people want to hear.

I believe that the power of Big Data is that it is information about people’s behavior instead of information about their beliefs. It’s about the behavior of customers, employees, and prospects for your new business. It’s not about the things you post on Facebook, and it’s not about your searches on Google, which is what most people think about, and it’s not data from internal company processes and RFIDs. This sort of Big Data comes from things like location data off of your cell phone or credit card, it’s the little data breadcrumbs that you leave behind you as you move around in the world.

What those breadcrumbs tell is the story of your life. It tells what you’ve chosen to do. That’s very different than what you put on Facebook. What you put on Facebook is what you would like to tell people, edited according to the standards of the day. Who you actually are is determined by where you spend time, and which things you buy. Big data is increasingly about real behavior, and by analyzing this sort of data, scientists can tell an enormous amount about you. They can tell whether you are the sort of person who will pay back loans. They can tell you if you’re likely to get diabetes.

They can do this because the sort of person you are is largely determined by your social context, so if I can see some of your behaviors, I can infer the rest, just by comparing you to the people in your crowd. You can tell all sorts of things about a person, even though it’s not explicitly in the data, because people are so enmeshed in the surrounding social fabric that it determines the sorts of things that they think are normal, and what behaviors they will learn from each other.

As a consequence, analysis of Big Data is increasingly about finding connections, connections with the people around you, and connections between people’s behavior and outcomes. You can see this in all sorts of places. For instance, one type of Big Data and connection analysis concerns financial data. Not just the flash crash or the Great Recession, but also all the other sorts of bubbles that occur. These are systems of people, communications, and decisions that go badly awry. Big Data shows us the connections that cause these events. Big data gives us the possibility of understanding how these systems of people and machines work, and whether they’re stable.

The notion that it is the connections between people that really matter is key, because researchers have mostly been trying to understand things like financial bubbles using what is called Complexity Science or Web Science. But these older ways of thinking about Big Data leave the humans out of the equation. What actually matters is how the people are connected together by the machines and how, as a whole, they create a financial market, a government, a company, and other social structures.

Because it is so important to understand these connections, Asu Ozdaglar and I have recently created the MIT Center for Connection Science and Engineering, which spans all of the different MIT departments and schools. It’s one of the very first MIT-wide Centers, because people from all sorts of specialties are coming to understand that it is the connections between people that are actually the core problem in making transportation systems work well, in making energy grids work efficiently, and in making financial systems stable. Markets are not just about rules or algorithms; they’re about people and algorithms together.

Understanding these human-machine systems is what’s going to make our future social systems stable and safe. We are getting beyond complexity, data science and web science, because we are including people as a key part of these systems. That’s the promise of Big Data, to really understand the systems that make our technological society. As you begin to understand them, then you can build systems that are better. The promise is for financial systems that don’t melt down, governments that don’t get mired in inaction, health systems that actually work, and so on, and so forth.

The barriers to better societal systems are not about the size or speed of data. They’re not about most of the things that people are focusing on when they talk about Big Data. Instead, the challenge is to figure out how to analyze the connections in this deluge of data and come to a new way of building systems based on understanding these connections.

Changing The Way We Design Systems

With Big Data traditional methods of system building are of limited use. The data is so big that any question you ask about it will usually have a statistically significant answer. This means, strangely, that the scientific method as we normally use it no longer works, because almost everything is significant!  As a consequence the normal laboratory-based question-and-answering process, the method that we have used to build systems for centuries, begins to fall apart.
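A quick way to see Pentland's point about everything becoming significant (my sketch, not his): give two groups a difference far too small to matter, and the p-value still collapses once the sample is large enough.

```python
import math
import numpy as np

rng = np.random.default_rng(0)

def two_sample_p(a, b):
    """Two-sided p-value of a large-sample z-test for equal means."""
    se = math.sqrt(a.var(ddof=1) / len(a) + b.var(ddof=1) / len(b))
    z = (a.mean() - b.mean()) / se
    return math.erfc(abs(z) / math.sqrt(2))   # = 2 * (1 - Phi(|z|))

# Two populations differing by a trivial 0.01 standard deviations.
for n in (200, 10_000_000):
    a = rng.normal(0.00, 1.0, n)
    b = rng.normal(0.01, 1.0, n)
    print(f"n = {n:>10,}   p-value = {two_sample_p(a, b):.3g}")
# At a few hundred samples the difference is invisible; at ten million it is
# "highly significant" even though it is still practically meaningless.
```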

Big data and the notion of Connection Science are outside of our normal way of managing things. We live in an era that builds on centuries of science, and our methods of building systems, governments, organizations, and so on are pretty well defined. There are not a lot of things that are really novel. But with the coming of Big Data, we are going to be operating very much outside our old, familiar ballpark.

With Big Data you can easily get false correlations, for instance, “On Mondays, people who drive to work are more likely to get the flu.” If you look at the data using traditional methods, that may actually be true, but the problem is why is it true? Is it causal? Is it just an accident? You don’t know. Normal analysis methods won’t suffice to answer those questions. What we have to come up with is new ways to test the causality of connections in the real world far more than we have ever had to do before. We can no longer rely on laboratory experiments; we need to actually do the experiments in the real world.

The other problem with Big Data is human understanding. When you find a connection that works, you’d like to be able to use it to build new systems, and that requires having human understanding of the connection. The managers and the owners have to understand what this new connection means. There needs to be a dialogue between our human intuition and the Big Data statistics, and that’s not something that’s built into most of our management systems today. Our managers have little concept of how to use big data analytics, what they mean, and what to believe.

In fact, the data scientists themselves don’t have much intuition either…and that is a problem. I saw an estimate recently that said 70 to 80 percent of the results that are found in the machine learning literature, which is a key Big Data scientific field, are probably wrong because the researchers didn’t understand that they were overfitting the data. They didn’t have that dialogue between intuition and the causal processes that generated the data. They just fit the model and got a good number and published it, and the reviewers didn’t catch it either. That’s pretty bad because if we start building our world on results like that, we’re going to end up with trains that crash into walls and other bad things. Management using Big Data is actually a radically new thing.
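The failure mode Pentland describes is easy to reproduce in a few lines (a sketch of the general phenomenon, not of any specific paper): fit a flexible model to pure noise and score it on the data it was fit to, and a “good number” appears; score it on held-out data and the number evaporates.

```python
import numpy as np

rng = np.random.default_rng(1)

# 100 samples, 80 purely random features, a purely random target:
# there is nothing real here to learn.
X = rng.normal(size=(100, 80))
y = rng.normal(size=100)
X_train, X_test, y_train, y_test = X[:50], X[50:], y[:50], y[50:]

# Ordinary least squares on the training half (more features than samples,
# so it can fit the training targets essentially perfectly).
coef, *_ = np.linalg.lstsq(X_train, y_train, rcond=None)

def r_squared(X, y):
    residuals = y - X @ coef
    return 1 - np.sum(residuals ** 2) / np.sum((y - y.mean()) ** 2)

print(f"train R^2: {r_squared(X_train, y_train):.2f}")  # ~1.0: looks like a discovery
print(f"test  R^2: {r_squared(X_test, y_test):.2f}")    # typically negative: it fit noise
```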

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Techcrunch.[end-div]

Scientifiction

Science fiction stories and illustrations from our past provide a wonderful opportunity for us to test the predictive and prescient capabilities of their creators. Some, like Arthur C. Clarke, we are often reminded, foresaw the communications satellite and the space elevator. Others, such as science fiction great Isaac Asimov, fared less well in predicting future technology; while he is considered to have coined the term “robotics”, he famously imagined future computers and robots still using punched cards.

Illustrations of our future from the past are even more fascinating. One of the leading proponents of the science fiction illustration genre, or scientifiction, as it was titled in the mid-1920s, was Frank R. Paul. Paul illustrated many of the now classic U.S. pulp science fiction magazines beginning in the 1920s with vivid visuals of aliens, spaceships, destroyed worlds and bizarre technologies. One of his less apocalyptic, though perhaps prescient, works showed a web-footed alien smoking a cigarette through a lengthy proboscis.

Of Frank R. Paul, Ray Bradbury is quoted as saying, “Paul’s fantastic covers for Amazing Stories changed my life forever.”

See more of Paul’s classic illustrations after the jump.

[div class=attrib]Image courtesy of 50Watts / Frank R. Paul.[end-div]

How Apple, With the Help of Others, Invented the iPhone

Apple’s invention of the iPhone is a story of insight, collaboration, cannibalization and dogged persistence over the course of a decade.

[div class=attrib]From Slate:[end-div]

Like many of Apple’s inventions, the iPhone began not with a vision, but with a problem. By 2005, the iPod had eclipsed the Mac as Apple’s largest source of revenue, but the music player that rescued Apple from the brink now faced a looming threat: The cellphone. Everyone carried a phone, and if phone companies figured out a way to make playing music easy and fun, “that could render the iPod unnecessary,” Steve Jobs once warned Apple’s board, according to Walter Isaacson’s biography.

Fortunately for Apple, most phones on the market sucked. Jobs and other Apple executives would grouse about their phones all the time. The simplest phones didn’t do much other than make calls, and the more functions you added to phones, the more complicated they were to use. In particular, phones “weren’t any good as entertainment devices,” Phil Schiller, Apple’s longtime marketing chief, testified during the company’s patent trial with Samsung. Getting music and video on 2005-era phones was too difficult, and if you managed that, getting the device to actually play your stuff was a joyless trudge through numerous screens and menus.

That was because most phones were hobbled by a basic problem—they didn’t have a good method for input. Hard keys (like the ones on the BlackBerry) worked for typing, but they were terrible for navigation. In theory, phones with touchscreens could do a lot more, but in reality they were also a pain to use. Touchscreens of the era couldn’t detect finger presses—they needed a stylus, and the only way to use a stylus was with two hands (one to hold the phone and one to hold the stylus). Nobody wanted a music player that required two-handed operation.

This is the story of how Apple reinvented the phone. The general outlines of this tale have been told before, most thoroughly in Isaacson’s biography. But the Samsung case—which ended last month with a resounding victory for Apple—revealed a trove of details about the invention, the sort of details that Apple is ordinarily loath to make public. We got pictures of dozens of prototypes of the iPhone and iPad. We got internal email that explained how executives and designers solved key problems in the iPhone’s design. We got testimony from Apple’s top brass explaining why the iPhone was a gamble.

Put it all together and you get remarkable story about a device that, under the normal rules of business, should not have been invented. Given the popularity of the iPod and its centrality to Apple’s bottom line, Apple should have been the last company on the planet to try to build something whose explicit purpose was to kill music players. Yet Apple’s inner circle knew that one day, a phone maker would solve the interface problem, creating a universal device that could make calls, play music and videos, and do everything else, too—a device that would eat the iPod’s lunch. Apple’s only chance at staving off that future was to invent the iPod killer itself. More than this simple business calculation, though, Apple’s brass saw the phone as an opportunity for real innovation. “We wanted to build a phone for ourselves,” Scott Forstall, who heads the team that built the phone’s operating system, said at the trial. “We wanted to build a phone that we loved.”

The problem was how to do it. When Jobs unveiled the iPhone in 2007, he showed off a picture of an iPod with a rotary-phone dialer instead of a click wheel. That was a joke, but it wasn’t far from Apple’s initial thoughts about phones. The click wheel—the brilliant interface that powered the iPod (which was invented for Apple by a firm called Synaptics)—was a simple, widely understood way to navigate through menus in order to play music. So why not use it to make calls, too?

In 2005, Tony Fadell, the engineer who’s credited with inventing the first iPod, got hold of a high-end desk phone made by Samsung and Bang & Olufsen that you navigated using a set of numerical keys placed around a rotating wheel. A Samsung cell phone, the X810, used a similar rotating wheel for input. Fadell didn’t seem to like the idea. “Weird way to hold the cellphone,” he wrote in an email to others at Apple. But Jobs thought it could work. “This may be our answer—we could put the number pad around our clickwheel,” he wrote. (Samsung pointed to this thread as evidence for its claim that Apple’s designs were inspired by other companies, including Samsung itself.)

Around the same time, Jonathan Ive, Apple’s chief designer, had been investigating a technology that he thought could do wonderful things someday—a touch display that could understand taps from multiple fingers at once. (Note that Apple did not invent multitouch interfaces; it was one of several companies investigating the technology at the time.) According to Isaacson’s biography, the company’s initial plan was to use the new touch system to build a tablet computer. Apple’s tablet project began in 2003—seven years before the iPad went on sale—but as it progressed, it dawned on executives that multitouch might work on phones. At one meeting in 2004, Jobs and his team looked at a prototype tablet that displayed a list of contacts. “You could tap on the contact and it would slide over and show you the information,” Forstall testified. “It was just amazing.”

Jobs himself was particularly taken by two features that Bas Ording, a talented user-interface designer, had built into the tablet prototype. One was “inertial scrolling”—when you flick at a list of items on the screen, the list moves as a function of how fast you swipe, and then it comes to rest slowly, as if being affected by real-world inertia. Another was the “rubber-band effect,” which causes a list to bounce against the edge of the screen when there are no more items to display. When Jobs saw the prototype, he thought, “My god, we can build a phone out of this,” he told the D Conference in 2010.
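
For readers curious about the mechanics, here is a minimal, back-of-the-envelope sketch of those two behaviors in Python. It is purely illustrative: the constants and numbers are invented for this example, and it is not Apple’s implementation, which has never been published. Still, it shows the basic idea. A flicked list coasts with a velocity that decays a little each frame, and any overshoot past the list’s edge is damped hard and eased back toward the boundary.

```python
# A toy sketch of "inertial scrolling" and the "rubber-band effect".
# Purely illustrative: the constants below are invented, not Apple's values.

CONTENT_HEIGHT = 1000.0             # total height of the scrollable list (points)
VIEW_HEIGHT = 400.0                 # height of the visible window (points)
MAX_OFFSET = CONTENT_HEIGHT - VIEW_HEIGHT

FRICTION = 0.95                     # fraction of velocity kept each frame (inertia)
EDGE_DRAG = 0.8                     # extra damping applied while past an edge
SPRING = 0.15                       # how quickly an overshoot eases back per frame
DT = 1.0 / 60.0                     # one frame at 60 frames per second


def simulate_flick(initial_velocity, frames=240):
    """Return the scroll offset at each frame after a single flick."""
    offset, velocity = 0.0, initial_velocity
    path = []
    for _ in range(frames):
        offset += velocity * DT     # coast under the flick's momentum
        velocity *= FRICTION        # inertial decay: slow down gradually
        # Rubber band: if we are past either edge, damp hard and ease back.
        nearest_valid = min(max(offset, 0.0), MAX_OFFSET)
        if nearest_valid != offset:
            velocity *= EDGE_DRAG
            offset += (nearest_valid - offset) * SPRING
        path.append(offset)
    return path


if __name__ == "__main__":
    path = simulate_flick(initial_velocity=3000.0)   # a brisk flick
    print(f"peak overshoot: {max(path):.1f} points (list ends at {MAX_OFFSET:.0f})")
    print(f"comes to rest near: {path[-1]:.1f}")
```

In a real UI the same loop would update the on-screen position of the list every frame; here it simply prints how far the list overshoots its end and where it settles.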

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Retro design iPhone courtesy of Ubergizmo.[end-div]

Building Character in Kids

Many parents have known this for a long time: it takes more than a stellar IQ, SAT or ACT score to make a well-rounded kid. Arguably there are many more important traits that never feature on these quantitative tests. Such qualities as leadership, curiosity, initiative, perseverance, motivation, courage and empathy come to mind.

Below is an excerpt from Paul Tough’s book, “How Children Succeed: Grit, Curiosity and the Hidden Power of Character”.

[div class=attrib]From the Wall Street Journal:[end-div]

We are living through a particularly anxious moment in the history of American parenting. In the nation’s big cities these days, the competition among affluent parents over slots in favored preschools verges on the gladiatorial. A pair of economists from the University of California recently dubbed this contest for early academic achievement the “Rug Rat Race,” and each year, the race seems to be starting earlier and growing more intense.

At the root of this parental anxiety is an idea you might call the cognitive hypothesis. It is the belief, rarely spoken aloud but commonly held nonetheless, that success in the U.S. today depends more than anything else on cognitive skill—the kind of intelligence that gets measured on IQ tests—and that the best way to develop those skills is to practice them as much as possible, beginning as early as possible.

There is something undeniably compelling about the cognitive hypothesis. The world it describes is so reassuringly linear, such a clear case of inputs here leading to outputs there. Fewer books in the home means less reading ability; fewer words spoken by your parents means a smaller vocabulary; more math work sheets for your 3-year-old means better math scores in elementary school. But in the past decade, and especially in the past few years, a disparate group of economists, educators, psychologists and neuroscientists has begun to produce evidence that calls into question many of the assumptions behind the cognitive hypothesis.

What matters most in a child’s development, they say, is not how much information we can stuff into her brain in the first few years of life. What matters, instead, is whether we are able to help her develop a very different set of qualities, a list that includes persistence, self-control, curiosity, conscientiousness, grit and self-confidence. Economists refer to these as noncognitive skills, psychologists call them personality traits, and the rest of us often think of them as character.

If there is one person at the hub of this new interdisciplinary network, it is James Heckman, an economist at the University of Chicago who in 2000 won the Nobel Prize in economics. In recent years, Mr. Heckman has been convening regular invitation-only conferences of economists and psychologists, all engaged in one form or another with the same questions: Which skills and traits lead to success? How do they develop in childhood? And what kind of interventions might help children do better?

The transformation of Mr. Heckman’s career has its roots in a study he undertook in the late 1990s on the General Educational Development program, better known as the GED, which was at the time becoming an increasingly popular way for high-school dropouts to earn the equivalent of high-school diplomas. The GED’s growth was founded on a version of the cognitive hypothesis, on the belief that what schools develop, and what a high-school diploma certifies, is cognitive skill. If a teenager already has the knowledge and the smarts to graduate from high school, according to this logic, he doesn’t need to waste his time actually finishing high school. He can just take a test that measures that knowledge and those skills, and the state will certify that he is, legally, a high-school graduate, as well-prepared as any other high-school graduate to go on to college or other postsecondary pursuits.

Mr. Heckman wanted to examine this idea more closely, so he analyzed a few large national databases of student performance. He found that in many important ways, the premise behind the GED was entirely valid. According to their scores on achievement tests, GED recipients were every bit as smart as high-school graduates. But when Mr. Heckman looked at their path through higher education, he found that GED recipients weren’t anything like high-school graduates. At age 22, Mr. Heckman found, just 3% of GED recipients were either enrolled in a four-year university or had completed some kind of postsecondary degree, compared with 46% of high-school graduates. In fact, Heckman discovered that when you consider all kinds of important future outcomes—annual income, unemployment rate, divorce rate, use of illegal drugs—GED recipients look exactly like high-school dropouts, despite the fact that they have earned this supposedly valuable extra credential, and despite the fact that they are, on average, considerably more intelligent than high-school dropouts.

These results posed, for Mr. Heckman, a confounding intellectual puzzle. Like most economists, he had always believed that cognitive ability was the single most reliable determinant of how a person’s life would turn out. Now he had discovered a group—GED holders—whose good test scores didn’t seem to have any positive effect on their eventual outcomes. What was missing from the equation, Mr. Heckman concluded, were the psychological traits, or noncognitive skills, that had allowed the high-school graduates to make it through school.

So what can parents do to help their children develop skills like motivation and perseverance? The reality is that when it comes to noncognitive skills, the traditional calculus of the cognitive hypothesis—start earlier and work harder—falls apart. Children can’t get better at overcoming disappointment just by working at it for more hours. And they don’t lag behind in curiosity simply because they didn’t start doing curiosity work sheets at an early enough age.

[div class=attrib]Read the entire article after the jump.[end-div]

Sign First; Lie Less

A recent paper published in the Proceedings of the National Academy of Sciences (PNAS) shows that we are more likely to be honest if we sign a form before, rather than after, completing it. So, over the coming years, look out for Uncle Sam to revise the ubiquitous IRS 1040 by adding a signature line at the top of the form rather than at the bottom of the last page.

[div class=attrib]From Ars Technica:[end-div]

What’s the purpose of signing a form? On the simplest level, a signature is simply a way to make someone legally responsible for the content of the form. But in addition to the legal aspect, the signature is an appeal to personal integrity, forcing people to consider whether they’re comfortable attaching their identity to something that may not be completely true.

Based on some figures in a new PNAS paper, the signatures on most forms are miserable failures, at least from the latter perspective. The IRS estimates that it misses out on about $175 billion because people misrepresent their income or deductions. And the insurance industry calculates that it loses about $80 billion annually due to fraudulent claims. But the same paper suggests a fix that is as simple as tweaking the form. Forcing people to sign before they complete the form greatly increases their honesty.

It shouldn’t be a surprise that signing at the end of a form does not promote accurate reporting, given what we know about human psychology. “Immediately after lying,” the paper’s authors write, “individuals quickly engage in various mental justifications, reinterpretations, and other ‘tricks’ such as suppressing thoughts about their moral standards that allow them to maintain a positive self-image despite having lied.” By the time they get to the actual request for a signature, they’ve already made their peace with lying: “When signing comes after reporting, the morality train has already left the station.”

The problem isn’t with the signature itself. Lots of studies have shown that focusing the attention on one’s self, which a signature does successfully, can cause people to behave more ethically. The problem comes from its placement after the lying has already happened. So, the authors posited a quick fix: stick the signature at the start. Their hypothesis was that “signing one’s name before reporting information (rather than at the end) makes morality accessible right before it is most needed, which will consequently promote honest reporting.”

To test this proposal, they designed a series of forms that required self-reporting of personal information, either involving performance on a math quiz where higher scores meant higher rewards, or the reimbursable travel expenses involved in getting to the study’s location. The only difference among the forms? Some did not ask for a signature, some put the signature on top, and some placed it in its traditional location, at the end.

In the case of the math quiz, the researchers actually tracked how well the participants had performed. With the signature at the end, a full 79 percent of the participants cheated. Somewhat fewer cheated when no signature was required, though the difference was not statistically significant. But when the signature was required on top, only 37 percent cheated—less than half the rate seen in the signature-at-bottom group. A similar pattern was seen when the authors analyzed the extent of the cheating involved.

Although they didn’t have complete information on travel expenses, the same pattern prevailed: people who were given the signature-on-top form reported fewer expenses than either of the other two groups.

The authors then repeated this experiment, but added a word completion task, where participants were given a series of blanks, some filled in with letters, and asked to complete the word. These completion tasks were set up so that they could be answered with neutral words or with those associated with personal ethics, like “virtue.” They got the same results as in the earlier tests of cheating, and the word completion task showed that the people who had signed on top were more likely to fill in the blanks to form ethics-focused words. This supported the contention that the early signature put people in an ethical state of mind prior to completion of the form.

But the really impressive part of the study came from its real-world demonstration of this effect. The authors got an unnamed auto insurance company to send out two versions of its annual renewal forms to over 13,000 policy holders, identical except for the location of the signature. One part of this form included a request for odometer readings, which the insurance companies use to calculate typical miles travelled, which are proportional to accident risk. These are used to calculate insurance cost—the more you drive, the more expensive it is.

Those who signed at the top reported nearly 2,500 miles more than the ones who signed at the end.

[div class=attrib]Read the entire article after the jump, or follow the article at PNAS, here.[end-div]

[div class=attrib]Image courtesy of University of Illinois at Urbana-Champaign.[end-div]

Scandinavian Killer on Ice

The title could be mistaken for a dark and violent crime novel from the likes of (Stieg) Larsson, Nesbø, Sjöwall-Wahlöö, or Henning Mankell. But, this story is somewhat more mundane, though much more consequential. It’s a story about a Swedish cancer killer.

[div class=attrib]From the Telegraph:[end-div]

On the snow-clotted plains of central Sweden where Wotan and Thor, the clamorous gods of magic and death, once held sway, a young, self-deprecating gene therapist has invented a virus that eliminates the type of cancer that killed Steve Jobs.

‘Not “eliminates”! Not “invented”, no!’ interrupts Professor Magnus Essand, panicked, when I Skype him to ask about this explosive achievement.

‘Our results are only in the lab so far, not in humans, and many treatments that work in the lab can turn out to be not so effective in humans. However, adenovirus serotype 5 is a common virus in which we have achieved transcriptional targeting by replacing an endogenous viral promoter sequence by…’

It sounds too kindly of the gods to be true: a virus that eats cancer.

‘I sometimes use the phrase “an assassin who kills all the bad guys”,’ Prof Essand agrees contentedly.

Cheap to produce, the virus is exquisitely precise, with only mild, flu-like side-effects in humans. Photographs in research reports show tumours in test mice melting away.

‘It is amazing,’ Prof Essand gleams in wonder. ‘It’s better than anything else. Tumour cell lines that are resistant to every other drug, it kills them in these animals.’

Yet as things stand, Ad5[CgA-E1A-miR122]PTD – to give it the full gush of its most up-to-date scientific name – is never going to be tested to see if it might also save humans. Since 2010 it has been kept in a bedsit-sized mini freezer in a busy lobby outside Prof Essand’s office, gathering frost. (‘Would you like to see?’ He raises his laptop computer and turns, so its camera picks out a table-top Electrolux next to the lab’s main corridor.)

Two hundred metres away is the Uppsala University Hospital, a European Centre of Excellence in Neuroendocrine Tumours. Patients fly in from all over the world to be seen here, especially from America, where treatment for certain types of cancer lags five years behind Europe. Yet even when these sufferers have nothing else to hope for, have only months left to live, wave platinum credit cards and are prepared to sign papers agreeing to try anything, to hell with the side-effects, the oncologists are not permitted – would find themselves behind bars if they tried – to race down the corridors and snatch the solution out of Prof Essand’s freezer.

I found out about Prof Magnus Essand by stalking him. Two and a half years ago the friend who edits all my work – the biographer and genius transformer of rotten sentences and misdirected ideas, Dido Davies – was diagnosed with neuroendocrine tumours, the exact type of cancer that Steve Jobs had. Every three weeks she would emerge from the hospital after eight hours of chemotherapy infusion, as pale as ice but nevertheless chortling and optimistic, whereas I (having spent the day battling Dido’s brutal edits to my work, among drip tubes) would stumble back home, crack open whisky and cigarettes, and slump by the computer. Although chemotherapy shrank the tumour, it did not cure it. There had to be something better.

It was on one of those evenings that I came across a blog about a quack in Mexico who had an idea about using sub-molecular particles – nanotechnology. Quacks provide a very useful service to medical tyros such as myself, because they read all the best journals the day they appear and by the end of the week have turned the results into potions and tinctures. It’s like Tommy Lee Jones in Men in Black reading the National Enquirer to find out what aliens are up to, because that’s the only paper trashy enough to print the truth. Keep an eye on what the quacks are saying, and you have an idea of what might be promising at the Wild West frontier of medicine. This particular quack was in prison awaiting trial for the manslaughter (by quackery) of one of his patients, but his nanotechnology website led, via a chain of links, to a YouTube lecture about an astounding new therapy for neuroendocrine cancer based on pig microbes, which is currently being put through a variety of clinical trials in America.

I stopped the video and took a snapshot of the poster behind the lecturer’s podium listing useful research company addresses; on the website of one of these organisations was a reference to a scholarly article that, when I checked through the footnotes, led, via a doctoral thesis, to a Skype address – which I dialled.

‘Hey! Hey!’ Prof Magnus Essand answered.

To geneticists, the science makes perfect sense. It is a fact of human biology that healthy cells are programmed to die when they become infected by a virus, because this prevents the virus spreading to other parts of the body. But a cancerous cell is immortal; through its mutations it has somehow managed to turn off the bits of its genetic programme that enforce cell suicide. This means that, if a suitable virus infects a cancer cell, it could continue to replicate inside it uncontrollably, causing the cell to ‘lyse’ – or, in non-technical language, tear apart. The progeny viruses then spread to cancer cells nearby and repeat the process. A virus becomes, in effect, a cancer of cancer. In Prof Essand’s laboratory studies, his virus surges through the bloodstreams of test animals, rupturing cancerous cells with Viking rapacity.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]The Snowman by Jo Nesbø. Image courtesy of Barnes and Noble.[end-div]

Corporate R&D Meets Public Innovation

As corporate purse strings have tightened, some companies have looked for innovation beyond the office cubicle.

[div class=attrib]From Technology Review:[end-div]

Where does innovation come from? For one answer, consider the work of MIT professor Eric von Hippel, who has calculated that ordinary U.S. consumers spend $20 billion in time and money trying to improve on household products—for example, modifying a dog-food bowl so it doesn’t slide on the floor. Von Hippel estimates that these backyard Edisons collectively invest more in their efforts than the largest corporation anywhere does in R&D.

The low-tech kludges of consumers might once have had little impact. But one company, Procter & Gamble, has actually found a way to tap into them; it now gets many of its ideas for new Swiffers and toothpaste tubes from the general public. One way it has managed to do so is with the help of InnoCentive, a company in Waltham, Massachusetts, that specializes in organizing prize competitions over the Internet. Volunteer “solvers” can try to earn $500 to $1 million by coming up with answers to a company’s problems.

We like Procter & Gamble’s story because the company has discovered a creative, systematic way to pay for ideas originating far outside of its own development labs. It’s made an innovation in funding innovation, which is the subject of this month’s Technology Review business report.

How we pay for innovation is a question prompted, in part, by the beleaguered state of the venture capital industry. Over the long term, it’s the system that’s most often gotten the economic incentives right. Consider that although fewer than two of every 1,000 new American businesses are venture backed, these account for 11 percent of public companies and 6 percent of U.S. employment, according to Harvard Business School professor Josh Lerner. (Many of those companies, although not all, have succeeded because they’ve brought new technology to market.)

Yet losses since the dot-com boom in the late 1990s have taken a toll. In August, the nation’s largest public pension fund, the California Public Employees Retirement System, said it would basically stop investing with the state’s venture funds, citing returns of 0.0 percent over a decade.

The crisis has partly to do with the size of venture funds—$1 billion isn’t uncommon. That means they need big money plays at a time when entrepreneurs are headed on exactly the opposite course. On the Web, it’s never been cheaper to start a company. You can outsource software development, rent a thousand servers, and order hardware designs from China. That is significant because company founders can often get the money they need from seed accelerators, angel investors, or Internet-based funding mechanisms such as Kickstarter.

“We’re in a period of incredible change in how you fund innovation, especially entrepreneurial innovation,” says Ethan Mollick, a professor of management science at the Wharton School. He sees what’s happening as a kind of democratization—the bets are getting smaller, but also more spread out and numerous. He thinks this could be a good thing. “One of the ways we get more innovation is by taking more draws,” he says.

In an example of the changes ahead, Mollick cites plans by the U.S. Securities and Exchange Commission to allow “crowdfunding”—it will let companies raise $1 million or so directly from the public, every year, over the Internet. (This activity had previously been outlawed as a hazard to gullible investors.) Crowdfunding may lead to a major upset in the way inventions get financed, especially those with popular appeal and modest funding requirements, like new gadget designs.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Louisiana Department of Education.[end-div]

The Power of Lists

Where would you be without lists? Surely your life would be much less organized were it not for the shopping list, gift list, re-gifting list, reading list, items to fix list, resolutions list, medications list, vacation list, work action items list, spouse to-do list, movies to see list, greeting card list, gift wish list, allergies list, school supplies list, and of course the places to visit before you die list. The lists just go on and on.

[div class=attrib]From the New York Times:[end-div]

WITH school starting and vacations ending, this is the month, the season of the list. But face it. We’re living in the era of the list, maybe even its golden age. The Web click has led to the wholesale repackaging of information into lists, which can be complex and wonderful pieces of information architecture. Our technology has imperceptibly infected us with “list thinking.”

Lists are the simplest way to organize information. They are also a symptom of our short attention spans.

The crudest of online lists are galaxies of buttons, replacing real stories. “Listicles,” you might say. They are just one step beyond magazine cover lines like “37 Ways to Drive Your Man Wild in Bed.” Bucket lists have produced competitive list making online. Like competitive birders, people check off books read or travel destinations visited.

But lists can also tell a story. Even the humble shopping list says something about the shopper — and the Netflix queue, a “smart list” built on experience and suggestion algorithms, says much about the subscriber.

Lists can reveal personal dramas. An exhibit of lists at the Morgan Library and Museum showed a passive-aggressive Picasso omitting his bosom buddy, Georges Braque, from a list of recommended artists.

We’ve come a long way from the primitive best-seller lists and hit parade lists, “crowd sourced,” if you will, from sales. We all have our “to-do” lists, and there is a modern, sophisticated form of the list that is as serious as the “best of…” list is frivolous. That is the checklist.

The surgeon Atul Gawande, in his book “The Checklist Manifesto,” explains the utility of the list in assuring orderly procedures and removing error. For all that society has accomplished in such fields as medicine and aviation, he argues, the know-how is often unmanageable — without a checklist.

A 70-page checklist put together by James Lovell, the commander of Apollo 13, helped him navigate the spacecraft back to Earth after an oxygen tank exploded. Capt. Chesley B. Sullenberger safely ditched his Airbus A-320 in the Hudson River after consulting the “engine out” checklist, which advised “Land ASAP” if the engines fail to restart.

At a local fast-food joint, I see checklists for cleanliness, one list for the front of the store and one for restrooms — a set of inspections and cleanups to be done every 30 minutes. The list is mapped on photo views, with numbers of the tasks over the areas in question. A checklist is a kind of story or narrative and has a long history in literature. The heroic list or catalog is a feature of epic poetry, from Homer to Milton. There is the famed catalog of ships and heroes in “The Iliad.”

Homer’s ships are also echoed in a list in Lewis Carroll’s “The Walrus and the Carpenter”: “‘The time has come,’ the walrus said, ‘to talk of many things: Of shoes — and ships — and sealing-wax — of cabbages — and kings.’” This is the prototype of the surrealist list.

There are other sorts of lists in literature. Vladimir Nabokov said he spent a long time working out the list (he called it a poem) of Lolita’s classmates in his famous novel; the names reflect the flavor of suburban America in the 1950s and give sly clues to the plot as well. There are hopeful names like Grace Angel and ominous ones like Aubrey McFate.

[div class=attrib]Read the entire article after the jump.[end-div]

Happy Birthday :-)

Thirty years ago today Professor Scott Fahlman of Carnegie Mellon University sent what is believed to be the first emoticon embedded in an email. The symbol, :-), which he proposed as a joke marker, spread rapidly, morphed and evolved into a universe of symbolic nods, winks, and cyber-emotions.

For a lengthy list of popular emoticons, including some very interesting Eastern ones, jump here.

[div class=attrib]From the Independent:[end-div]

To some, an email isn’t complete without the inclusion of :-) or :-(. To others, the very idea of using “emoticons” – communicative graphics – makes the blood boil and represents all that has gone wrong with the English language.

Regardless of your view, as emoticons celebrate their 30th anniversary this month, it is accepted that they are here to stay. Their birth can be traced to the precise minute: 11:44am on 19 September 1982. At that moment, Professor Scott Fahlman, of Carnegie Mellon University in Pittsburgh, sent an email on an online electronic bulletin board that included the first use of the sideways smiley face: “I propose the following character sequence for joke markers: :-) Read it sideways.” More than anyone, he must take the credit – or the blame.

The aim was simple: to allow those who posted on the university’s bulletin board to distinguish between those attempting to write humorous emails and those who weren’t. Professor Fahlman had seen how simple jokes were often misunderstood and attempted to find a way around the problem.

This weekend, the professor, a computer science researcher who still works at the university, says he is amazed his smiley face took off: “This was a little bit of silliness that I tossed into a discussion about physics,” he says. “It was ten minutes of my life. I expected my note might amuse a few of my friends, and that would be the end of it.”

But once his initial email had been sent, it wasn’t long before it spread to other universities and research labs via the primitive computer networks of the day. Within months, it had gone global.

Nowadays dozens of variations are available, mainly as little yellow, computer graphics. There are emoticons that wear sunglasses; some cry, while others don Santa hats. But Professor Fahlman isn’t a fan.

“I think they are ugly, and they ruin the challenge of trying to come up with a clever way to express emotions using standard keyboard characters. But perhaps that’s just because I invented the other kind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

The Pleasure of Writing Long Sentences

Author Pico Iyer distances himself from the short bursts of broken language of the Twitterscape and the exclamatory sound-bites of our modern-day lives, and revels in the lush beauty of the long and winding sentence.

[div class=attrib]From the LA Times:[end-div]

“Your sentences are so long,” said a friend who teaches English at a local college, and I could tell she didn’t quite mean it as a compliment. The copy editor who painstakingly went through my most recent book often put yellow dashes on-screen around my multiplying clauses, to ask if I didn’t want to break up my sentences or put less material in every one. Both responses couldn’t have been kinder or more considered, but what my friend and my colleague may not have sensed was this: I’m using longer and longer sentences as a small protest against — and attempt to rescue any readers I might have from — the bombardment of the moment.

When I began writing for a living, my feeling was that my job was to give the reader something vivid, quick and concrete that she couldn’t get in any other form; a writer was an information-gathering machine, I thought, and especially as a journalist, my job was to go out into the world and gather details, moments, impressions as visual and immediate as TV. Facts were what we needed most. And if you watched the world closely enough, I believed (and still do), you could begin to see what it would do next, just as you can with a sibling or a friend; Don DeLillo or Salman Rushdie aren’t mystics, but they can tell us what the world is going to do tomorrow because they follow it so attentively.

Yet nowadays the planet is moving too fast for even a Rushdie or DeLillo to keep up, and many of us in the privileged world have access to more information than we know what to do with. What we crave is something that will free us from the overcrowded moment and allow us to see it in a larger light. No writer can compete, for speed and urgency, with texts or CNN news flashes or RSS feeds, but any writer can try to give us the depth, the nuances — the “gaps,” as Annie Dillard calls them — that don’t show up on many screens. Not everyone wants to be reduced to a sound bite or a bumper sticker.

Enter (I hope) the long sentence: the collection of clauses that is so many-chambered and lavish and abundant in tones and suggestions, that has so much room for near-contradiction and ambiguity and those places in memory or imagination that can’t be simplified, or put into easy words, that it allows the reader to keep many things in her head and heart at the same time, and to descend, as by a spiral staircase, deeper into herself and those things that won’t be squeezed into an either/or. With each clause, we’re taken further and further from trite conclusions — or that at least is the hope — and away from reductionism, as if the writer were a dentist, saying “Open wider” so that he can probe the tender, neglected spaces in the reader (though in this case it’s not the mouth that he’s attending to but the mind).

“There was a little stoop of humility,” Alan Hollinghurst writes in a sentence I’ve chosen almost at random from his recent novel “The Stranger’s Child,” “as she passed through the door, into the larger but darker library beyond, a hint of frailty, an affectation of bearing more than her fifty-nine years, a slight bewildered totter among the grandeur that her daughter now had to pretend to take for granted.” You may notice — though you don’t have to — that “humility” has rather quickly elided into “affectation,” and the point of view has shifted by the end of the sentence, and the physical movement through the rooms accompanies a gradual inner movement that progresses through four parallel clauses, each of which, though legato, suggests a slightly different take on things.

Many a reader will have no time for this; William Gass or Sir Thomas Browne may seem long-winded, the equivalent of driving from L.A. to San Francisco by way of Death Valley, Tijuana and the Sierras. And a highly skilled writer, a Hemingway or James Salter, can get plenty of shading and suggestion into even the shortest and straightest of sentences. But too often nowadays our writing is telegraphic as a way of keeping our thinking simplistic, our feeling slogan-crude. The short sentence is the domain of uninflected talk-radio rants and shouting heads on TV who feel that qualification or subtlety is an assault on their integrity (and not, as it truly is, integrity’s greatest adornment).

If we continue along this road, whole areas of feeling and cognition and experience will be lost to us. We will not be able to read one another very well if we can’t read Proust’s labyrinthine sentences, admitting us to those half-lighted realms where memory blurs into imagination, and we hide from the person we care for or punish the thing that we love. And how can we feel the layers, the sprawl, the many-sidedness of Istanbul in all its crowding amplitude without the 700-word sentence, transcribing its features, that Orhan Pamuk offered in tribute to his lifelong love?

[div class=attrib]Read the entire article after the jump.[end-div]

Old Concepts Die Hard

Regardless of how flawed old scientific concepts may be, researchers have found that it is remarkably difficult for people to give these up and accept sound, new reasoning. Even scientists are creatures of habit.

[div class=attrib]From Scientific American:[end-div]

In one sense, science educators have it easy. The things they describe are so intrinsically odd and interesting — invisible fields, molecular machines, principles explaining the unity of life and origins of the cosmos — that much of the pedagogical attention-getting is built right in.  Where they have it tough, though, is in having to combat an especially resilient form of higher ed’s nemesis: the aptly named (if irredeemably clichéd) ‘preconceived idea.’ Worse than simple ignorance, naïve ideas about science lead people to make bad decisions with confidence. And in a world where many high-stakes issues fundamentally boil down to science, this is clearly a problem.

Naturally, the solution to the problem lies in good schooling — emptying minds of their youthful hunches and intuitions about how the world works, and repopulating them with sound scientific principles that have been repeatedly tested and verified. Wipe out the old operating system, and install the new. According to a recent paper by Andrew Shtulman and Joshua Valcarcel, however, we may not be able to replace old ideas with new ones so cleanly. Although science as a field discards theories that are wrong or lacking, Shtulman and Valcarcel’s work suggests that individuals —even scientifically literate ones — tend to hang on to their early, unschooled, and often wrong theories about the natural world. Even long after we learn that these intuitions have no scientific support, they can still subtly persist and influence our thought process. Like old habits, old concepts seem to die hard.

Testing for the persistence of old concepts can’t be done directly. Instead, one has to set up a situation in which old concepts, if present, measurably interfere with mental performance. To do this, Shtulman and Valcarcel designed a task that tested how quickly and accurately subjects verified short scientific statements (for example: “air is composed of matter.”). In a clever twist, the authors interleaved two kinds of statements — “consistent” ones that had the same truth-value under a naive theory and a proper scientific theory, and “inconsistent” ones. For example, the statement “air is composed of matter”  is inconsistent: it’s false under a naive theory (air just seems like empty space, right?), but is scientifically true. By contrast, the statement “people turn food into energy” is consistent: anyone who’s ever eaten a meal knows it’s true, and science affirms this by filling in the details about digestion, respiration and metabolism.

Shtulman and Valcarcel tested 150 college students on a battery of 200 such statements that included an equal and random mix of consistent and inconsistent statements from several domains, including astronomy, evolution, physiology, genetics, waves, and others. The scientists measured participants’ response speed and accuracy, and looked for systematic differences in how consistent vs. inconsistent statements were evaluated.

If scientific concepts, once learned, are fully internalized and don’t conflict with our earlier naive concepts, one would expect consistent and inconsistent statements to be processed similarly. On the other hand, if naive concepts are never fully supplanted, and are quietly threaded into our thought process, it should take longer to evaluate inconsistent statements. In other words, it should take a bit of extra mental work (and time) to go against the grain of a naive theory we once held.

This is exactly what Shtulman and Valcarcel found. While there was some variability between the different domains tested, inconsistent statements took almost a half second longer to verify, on average. Granted, there’s a significant wrinkle in interpreting this result. Specifically, it may simply be the case that scientific concepts that conflict with naive intuition are simply learned more tenuously than concepts that are consistent with our intuition. Under this view, differences in response times aren’t necessarily evidence of ongoing inner conflict between old and new concepts in our brains — it’s just a matter of some concepts being more accessible than others, depending on how well they were learned.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of New Scientist.[end-div]

Mobile Phone as Survival Gear

So, here’s the premise. You have hiked alone for days and now find yourself isolated and lost in a dense forest half-way up a mountain. Yes! You have a cell phone. But, oh no, there is no service in this remote part of the world. So, no call for help and no GPS. And, it gets worse: you have no emergency supplies and no food. What can you do? The neat infographic below offers some tips.

[div class=attrib]Infographic courtesy of Natalie Bracco / AnsonAlex.com.[end-div]

Death Cafe

“Death Cafe” sounds like the name of a group of alternative musicians from Denmark. But it’s not. Its rather more literal definition is a coffee shop where customers go to talk about death over a cup of Earl Grey tea or a double-shot espresso. And, while it’s not displacing Starbucks (yet), death cafes are a growing trend in Europe, first inspired by the pop-up cafes mortels of Switzerland.

[div class=attrib]From the Independent:[end-div]

“Do you have a death wish?” is not a question normally bandied about in seriousness. But have you ever actually asked whether a parent, partner or friend has a wish, or wishes, concerning their death? Burial or cremation? Where would they like to die? It’s not easy to do.

Stiff-upper-lipped Brits have a particular problem talking about death. Anyone who tries invariably gets shouted down with “Don’t talk like that!” or “If you say it, you’ll make it happen.” A survey by the charity Dying Matters reveals that more than 70 per cent of us are uncomfortable talking about death and that less than a third of us have spoken to family members about end-of-life wishes.

But despite this ingrained reluctance there are signs of burgeoning interest in exploring death. I attended my first death cafe recently and was surprised to discover that the gathering of goths, emos and the terminally ill that I’d imagined, turned out to be a collection of fascinating, normal individuals united by a wish to discuss mortality.

At a trendy coffee shop called Cakey Muto in Hackney, east London, taking tea (and scones!) with death turned out to be rather a lot of fun. What is believed to be the first official British death cafe took place in September last year, organised by former council worker Jon Underwood. Since then, around 150 people have attended death cafes in London and the one I visited was the 17th such happening.

“We don’t want to shove death down people’s throats,” Underwood says. “We just want to create an environment where talking about death is natural and comfortable.” He got the idea from the Swiss model (cafe mortel) invented by sociologist Bernard Crettaz, the popularity of which gained momentum in the Noughties and has since spread to France.

Underwood is keen to start a death cafe movement in English-speaking countries and his website (deathcafe.com) includes instructions for setting up your own. He has already inspired the first death cafe in America and groups have sprung up in Northern England too. Last month, he arranged the first death cafe targeting issues around dying for a specific group, the LGBT community, which he says was extremely positive and had 22 attendees.

Back in Cakey Muto, 10 fellow attendees and I eye each other nervously as the cafe door is locked and we seat ourselves in a makeshift circle. Conversation is kicked off by our facilitator, grief specialist Kristie West, who sets some ground rules. “This is a place for people to talk about death,” she says. “I want to make it clear that it is not about grief, even though I’m a grief specialist. It’s also not a debate platform. We don’t want you to air all your views and pick each other apart.”

A number of our party are directly involved in the “death industry”: a humanist-funeral celebrant, an undertaker and a lady who works in a funeral home. Going around the circle explaining our decision to come to a death cafe, what came across from this trio, none of whom knew each other, was their satisfaction in their work.

“I feel more alive than ever since working in a funeral home,” one of the women remarked. “It has helped me recognise that it isn’t a circle between life and death, it is more like a cosmic soup. The dead and the living are sort of floating about together.”

Others in the group include a documentary maker, a young woman whose mother died 18 months ago, a lady who doesn’t say much but was persuaded by her neighbour to come, and a woman who has attended three previous death cafes but still hasn’t managed to admit this new interest to her family or get them to talk about death.

The funeral celebrant tells the circle she’s been thinking a lot about what makes a good or bad death. She describes “the roaring corrosiveness of stepping into a household” where a “bad death” has taken place and the group meditates on what a bad death entails: suddenness, suffering and a difficult relationship between the deceased and bereaved?

“I have seen people have funerals which I don’t think they would have wanted,” says the undertaker, who has 17 years of experience. “It is possible to provide funerals more cheaply, more sensitively and with greater respect for the dead.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Death cafe menu courtesy of Death Cafe.[end-div]

Art That Makes You Scratch Your Head

Some works of art are visceral or grotesque; others evoke soaring and enlightening emotions. Some art just makes you think deeply about a specific event or about fundamental philosophical questions. Then, every once in a while, along comes a work that requires serious head-scratching.

[div class=attrib]From NPR:[end-div]

You are standing in a park in New Zealand. You look up at the top of a hill, and there, balanced on the ground, looking like it might catch a breeze and blow away, is a gigantic, rumpled piece of paper.

Except … one side of it, the underside, is … not there. You can see the sky, clouds, birds where there should be paper, so what is this?

As you approach, you realize it is made of metal. It’s a sculpture, made of welded and painted steel that looks like a two dimensional cartoon drawing of a three dimensional piece of paper … that is three dimensional if you get close, but looks two dimensional if you stay at the bottom of the hill…

[div class=attrib]Read the entire article and catch more images after the jump, and see more of Neil Dawson’s work here.[end-div]

[div class=attrib]Image: Horizons at Gibbs Farm by sculptor Neil Dawson, private art park, New Zealand. Courtesy of NPR / Gibbs Farm / Neil Dawson.[end-div]

Instagram: Confusing Mediocrity with Artistry

Professional photographers take note: there will always be room for high-quality images that tell a story or capture a timeless event or exude artistic elegance. But, your domain is under attack, again — and the results are not particularly pretty. This time courtesy of Instagram.

Just over a hundred years ago, being a good photographer required the skills of an alchemist; the chemical processing of plates and prints was more complex and far more time-consuming than capturing the shot itself, and sometimes dangerous. A good print required constant attention, lengthy cajoling and considerable patience, and of course a darkroom and some interesting chemicals.

Then Kodak came along; it commoditized film and processing, expanding photography to the masses. More recently, as technology has improved and hardware prices have continued to drop, more cameras have found their way into the hands of more people. However, until recently, access to good-quality (yet still expensive) photographic equipment played an important role in allowing photographers to maintain the superiority of their means and ends over everyday amateurs.

Even as photography has become a primarily digital process, with camera prices continuing to plummet, many photographers have continued to distinguish their finished images from the burgeoning mainstream. After all, it still takes considerable skill and time to post-process an image in Photoshop or other imaging software.

Nowadays, anyone armed with a $99 smartphone is a photographer with a high-resolution camera. And, through the power of blogs and social networks every photographer is also a publisher. Technology has considerably democratized and shortened the process. So, now an image can find its way from the hands of the photographer to the eyes of a vast audience almost instantaneously. The numbers speak for themselves — by most estimates, around 4.2 million images are uploaded daily to Flickr and 4.5 million to Instagram.

And, as the smartphone is to a high-end medium or large format camera, so is Instagram to Photoshop. Now, armed with both smartphone and Instagram, a photographer — applying the term loosely — can touch up an image of their last meal with digital sepia or apply a duotone filter to a landscape of their bedroom, or, most importantly, snap a soft-focus, angled self-portrait. All this, and the photographer can still deliver the finished work to a horde of followers for instant, gratuitous “likes”.

But, here’s why Instagram may not be such a threat to photography after all, despite the vast ocean of images washing across the internet.

[div class=attrib]From the Atlantic Wire:[end-div]

While the Internet has had a good time making fun of these rich kid Instagram photos, haters should be careful. These postings are emblematic of the entire medium we all use. To be certain, these wealthy kid pix are particularly funny (and also sad) because they showcase a gross variant of entitlement. Preteens posing with helicopters they did nothing to earn and posting the pictures online for others to ogle provides an easy in for commentary on the state of the American dream. (Dead.) While we don’t disagree with that reading, it’s par for the course on Instagram, a shallow medium all about promoting superficiality that photo takers did little to nothing to earn.

The very basis of Instagram is not just to show off, but to feign talent we don’t have, starting with the filters themselves. The reason we associate the look with “cool” in the first place is that many of these pretty hazes originated from processes coveted either for their artistic or unique merits, as photographer and blogger Ming Thein explains: “Originally, these styles were either conscious artistic decisions, or the consequences of not enough money and using expired film. They were chosen precisely because they looked unique—either because it was a difficult thing to execute well (using tilt-shift lenses, for instance) or because nobody else did it (cross-processing),” he writes. Instagram, however, has made such techniques easy and available, taking away that original value. “It takes the skill out of actually having to do any of these things (learn to process B&W properly, either chemically or in Photoshop, for instance),” he continues.

Yet we apply them to make ourselves look like we’ve got something special. Everything becomes “amaaazzing,” to put it in the words of graphic design blogger Jack Mancer, who has his own screed about the site. But actually, nothing about it is truly amazing. Some might call the process democratizing—everyone is a professional!—but really, it’s a big hoax. Everyone is just pressing buttons to add computer-generated veneers to our mostly mundane lives. There is nothing artsy about that. But we still do it. Is that really better than the rich kids? Sure, we’re not embarrassing ourselves by posting extreme wealth we happened into. But what are we posting? And why? At the very least, we’re doing it to look artsy; if not that, there is some other, deeper, more sinister thing we’re trying to prove, which means we’re right up there with the rich kids.

Here are some examples of how we see this playing out on the network:

The Food Pic

Why you post this: This says my food looks cool, therefore it is yummy. Look how well I eat, or how well I cook, or what a foodie I am.

Why this is just like the rich kids: Putting an artsy filter on a pretty photo can make the grossest slosh look like gourmet eats. It does not prove culinary or photographic skill, it proves that you can press a button.

The Look How much Fun I’m Having Pic

Why you post this: To prove you have the best, most social, coolest life, and friends. To prove you are happy and fun.

Why this is just like the rich kids: This also has an underlying tone of flaunting wealth. Fun usually costs money, and it’s something not everybody else has.

The Picture of Thing Pic

Why you post this: This proves your fantastic, enviable artistic eye: “I turned a mundane object into art!”

Why this is just like the rich kids: See above. Essentially, you’re bragging, but without the skills to support it.

Instagram and photo apps like it are shallow mediums that will generate shallow results. They are there for people to showcase something that doesn’t deserve a platform. The rich kids are a particularly salient example of how the entire network operates, but those who live in glass houses shot by Instagram shouldn’t throw beautifully if artfully filtered stones.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Tumblr: Rich Kids of Instagram.[end-div]

Air Conditioning in a Warming World

[div class=attrib]From the New York Times:[end-div]

THE blackouts that left hundreds of millions of Indians sweltering in the dark last month underscored the status of air-conditioning as one of the world’s most vexing environmental quandaries.

Fact 1: Nearly all of the world’s booming cities are in the tropics and will be home to an estimated one billion new consumers by 2025. As temperatures rise, they — and we — will use more air-conditioning.

Fact 2: Air-conditioners draw copious electricity, and deliver a double whammy in terms of climate change, since both the electricity they use and the coolants they contain result in planet-warming emissions.

Fact 3: Scientific studies increasingly show that health and productivity rise significantly if indoor temperature is cooled in hot weather. So cooling is not just about comfort.

Sum up these facts and it’s hard to escape: Today’s humans probably need air-conditioning if they want to thrive and prosper. Yet if all those new city dwellers use air-conditioning the way Americans do, life could be one stuttering series of massive blackouts, accompanied by disastrous planet-warming emissions.

We can’t live with air-conditioning, but we can’t live without it.

“It is true that air-conditioning made the economy happen for Singapore and is doing so for other emerging economies,” said Pawel Wargocki, an expert on indoor air quality at the International Center for Indoor Environment and Energy at the Technical University of Denmark. “On the other hand, it poses a huge threat to global climate and energy use. The current pace is very dangerous.”

Projections of air-conditioning use are daunting. In 2007, only 11 percent of households in Brazil and 2 percent in India had air-conditioning, compared with 87 percent in the United States, which has a more temperate climate, said Michael Sivak, a research professor in energy at the University of Michigan. “There is huge latent demand,” Mr. Sivak said. “Current energy demand does not yet reflect what will happen when these countries have more money and more people can afford air-conditioning.” He has estimated that, based on its climate and the size of the population, the cooling needs of Mumbai alone could be about a quarter of those of the entire United States, which he calls “one scary statistic.”

It is easy to decry the problem but far harder to know what to do, especially in a warming world where people in the United States are using our existing air-conditioners more often. The number of cooling degree days — a measure of how often cooling is needed — was 17 percent above normal in the United States in 2010, according to the Environmental Protection Agency, leading to “an increase in electricity demand.” This July was the hottest ever in the United States.
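
As an aside, the “cooling degree days” measure cited above is easy to compute: for each day, take how far the mean temperature exceeds a fixed base, floor it at zero, and sum over the period. The short Python sketch below uses the common U.S. convention of a 65°F base and a week of made-up temperatures, simply to show the arithmetic.

```python
# Cooling degree days (CDD): for each day, count how far the mean temperature
# exceeds a base temperature, then sum. 65°F is the common U.S. base; the
# week of temperatures below is invented purely for illustration.

BASE_TEMP_F = 65.0


def cooling_degree_days(daily_mean_temps_f):
    """Sum of each day's excess over the base temperature, floored at zero."""
    return sum(max(0.0, temp - BASE_TEMP_F) for temp in daily_mean_temps_f)


if __name__ == "__main__":
    week = [88.0, 91.0, 85.0, 79.0, 72.0, 64.0, 90.0]   # hypothetical July week
    print(f"{cooling_degree_days(week):.0f} cooling degree days this week")
    # 23 + 26 + 20 + 14 + 7 + 0 + 25 = 115
```

Heating degree days work the same way in the other direction, counting how far temperatures fall below the base.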

Likewise, the blackouts in India were almost certainly related to the rising use of air-conditioning and cooling, experts say, even if the immediate culprit was a grid that did not properly balance supply and demand.

The late arrival of this year’s monsoons, which normally put an end to India’s hottest season, may have devastated the incomes of farmers who needed the rain. But it “put smiles on the faces of those who sell white goods — like air-conditioners and refrigerators — because it meant lots more sales,” said Rajendra Shende, chairman of the Terre Policy Center in Pune, India.

“Cooling is the craze in India — everyone loves cool temperatures and getting to cool temperatures as quickly as possible,” Mr. Shende said. He said that cooling has become such a cultural priority that rather than advertise a car’s acceleration, salesmen in India now emphasize how fast its air-conditioner can cool.

Scientists are scrambling to invent more efficient air-conditioners and better coolant gases to minimize electricity use and emissions. But so far the improvements have been dwarfed by humanity’s rising demands.

And recent efforts to curb the use of air-conditioning, by fiat or persuasion, have produced sobering lessons.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Parkland Air Conditioning.[end-div]

Watch Out, Corporate America: Gen-Y is Coming

Social scientists have had Generation Y, also known as “millennials”, under their microscopes for a while. Born between 1982 and 1999, Gen-Y is now coming of age and becoming a force in the workplace, displacing aging “boomers” as they retire to the hills. So researchers are now looking at how Gen-Y is faring inside corporate America. Remember, Gen-Y is the “it’s all about me” generation; its members are characterized as lazy and spoiled, with a grandiose sense of entitlement, inflated self-esteem and deep emotional fragility. Their predecessors, the baby boomers, on the other hand, are often seen as overbearing, work-obsessed, competitive and narrow-minded. A clash of cultures is taking shape in office cubicles across the country as these groups, with such differing personalities and philosophies, tussle within the workplace. However, it may not be all bad: as columnist Emily Matchar argues below, corporate America needs the kind of shake-up that Gen-Y promises.

[div class=attrib]From the Washington Post:[end-div]

Have you heard the one about the kid who got his mom to call his boss and ask for a raise? Or about the college student who quit her summer internship because it forbade Facebook in the office?

Yep, we’re talking about Generation Y — loosely defined as those born between 1982 and 1999 — also known as millennials. Perhaps you know them by their other media-generated nicknames: teacup kids, for their supposed emotional fragility; boomerang kids, who always wind up back home; trophy kids (everyone’s a winner!); the Peter Pan generation, who’ll never grow up.

Now this pampered, over-praised, relentlessly self-confident generation (at age 30, I consider myself a sort of older sister to them) is flooding the workplace. They’ll make up 75 percent of the American workforce by 2025 — and they’re trying to change everything.

These are the kids, after all, who text their dads from meetings. They think “business casual” includes skinny jeans. And they expect the company president to listen to their “brilliant idea.”

When will they adapt?

They won’t. Ever. Instead, through their sense of entitlement and inflated self-esteem, they’ll make the modern workplace adapt to them. And we should thank them for it. Because the modern workplace frankly stinks, and the changes wrought by Gen Y will be good for everybody.

Few developed countries demand as much from their workers as the United States. Americans spend more time at the office than citizens of most other developed nations. Annually, we work 408 hours more than the Dutch, 374 hours more than the Germans and 311 hours more than the French. We even work 59 hours more than the stereotypically nose-to-the-grindstone Japanese. Though women make up half of the American workforce, the United States is the only country in the developed world without guaranteed paid maternity leave.

All this hard work is done for less and less reward. Wages have been stagnant for years, benefits shorn, opportunities for advancement blocked. While the richest Americans get richer, middle-class workers are left to do more with less. Because jobs are scarce and we’re used to a hierarchical workforce, we accept things the way they are. Worse, we’ve taken our overwork as a badge of pride. Who hasn’t flushed with a touch of self-importance when turning down social plans because we’re “too busy with work”?

Into this sorry situation strolls the self-esteem generation, printer-fresh diplomas in hand. And they’re not interested in business as usual.

The current corporate culture simply doesn’t make sense to much of middle-class Gen Y. Since the cradle, these privileged kids have been offered autonomy, control and choices (“Green pants or blue pants today, sweetie?”). They’ve been encouraged to show their creativity and to take their extracurricular interests seriously. Raised by parents who wanted to be friends with their kids, they’re used to seeing their elders as peers rather than authority figures. When they want something, they’re not afraid to say so.

[div class=attrib]Read the entire article after the jump.[end-div]

Subjective Objectivism: The Paradox that is Ayn Rand

Ayn Rand: anti-collectivist ideologue and standard-bearer for unapologetic individualism and rugged self-reliance, or selfish fantasist and elitist hypocrite?

Political conservatives and libertarians increasingly flock to her writings and support her philosophy of individualism and unfettered capitalism, which she dubbed “objectivism”. Liberals, on the other hand, see her as a selfish zealot: elitist, narcissistic, even psychopathic.

The truth, of course, is more nuanced and complex, especially where the private Ayn Rand diverges from her very public persona. Those who fail to delve into Rand’s traumatic and colorful history fail to grasp the many paradoxes and contradictions she embodied.

Rand was firmly and vociferously pro-choice, yet she believed that women should submit to the will of great men. She was a devout atheist who opposed the military draft and America’s war in Vietnam, yet she believed Native Americans fully deserved their cultural genocide for not grasping capitalism. She viewed homosexuality as disgusting and immoral, but supported non-discrimination protection for homosexuals in the public sphere while opposing such protections in private life, all the while leading an extremely colorful private life herself. She was a vehement opponent of government and federal regulation in all forms. Publicly, she viewed Social Security, Medicare and other “big government” programs with utter disdain, dismissing their dependents as nothing more than weak-minded loafers and “takers”. Privately, later in life, she accepted payments from Social Security and Medicare. Perhaps most paradoxically, Rand derided those who would fake their own reality while being chronically dependent on mind-distorting amphetamines, popping speed as she wrote the keystones of objectivism: The Fountainhead and Atlas Shrugged.

[div class=attrib]From the Guardian:[end-div]

As an atheist Ayn Rand did not approve of shrines but the hushed, air-conditioned headquarters which bears her name acts as a secular version. Her walnut desk occupies a position of honour. She smiles from a gallery of black and white photos, young in some, old in others. A bronze bust, larger than life, tilts her head upward, jaw clenched, expression resolute.

The Ayn Rand Institute in Irvine, California, venerates the late philosopher as a prophet of unfettered capitalism who showed America the way. A decade ago it struggled to have its voice heard. Today its message booms all the way to Washington DC.

It was a transformation which counted Paul Ryan, chairman of the House budget committee, as a devotee. He gave Rand’s novel, Atlas Shrugged, as Christmas presents and hailed her as “the reason I got into public service”.

Then, last week, he was selected as the Republican vice-presidential nominee and his enthusiasm seemed to evaporate. In fact, the backtracking began earlier this year when Ryan said as a Catholic his inspiration was not Rand’s “objectivism” philosophy but Thomas Aquinas’.

The flap has illustrated an acute dilemma for the institute. Once peripheral, it has veered close to mainstream, garnering unprecedented influence. The Tea Party has adopted Rand as a seer and waves placards saying “We should shrug” and “Going Galt”, a reference to an Atlas Shrugged character named John Galt.

Prominent Republicans channel Rand’s arguments in promises to slash taxes and spending and to roll back government. But, like Ryan, many publicly renounce the controversial Russian émigré as a serious influence. Where, then, does that leave the institute, the keeper of her flame?

Given Rand’s association with plutocrats – she depicted captains of industry as “producers” besieged by parasitic “moochers” – the headquarters are unexpectedly modest. Founded in 1985, three years after Rand’s death, the institution moved in 2002 from Marina del Rey, west of Los Angeles, to a drab industrial park in Irvine, 90 minutes south, largely to save money. It shares a nondescript two-storey building with financial services and engineering companies.

There is little hint of Galt, the character who symbolises the power and glory of the human mind, in the bland corporate furnishings. But the quotations and excerpts adorning the walls echo a mission which drove Rand and continues to inspire followers as an urgent injunction.

“The demonstration of a new moral philosophy: the morality of rational self-interest.”

These, said Onkar Ghate, the institute’s vice-president, are relatively good times for Randians. “Our primary mission is to advance awareness of her ideas and promote her philosophy. I must say, it’s going very well.”

On that point, if none other, conservatives and progressives may agree. Thirty years after her death Rand, as a radical intellectual and political force, is going very well indeed. Her novel Atlas Shrugged, a 1,000-page assault on big government, social welfare and altruism first published in 1957, is reportedly selling more than 400,000 copies per year and is being made into a movie trilogy. Its radical author, who also penned The Fountainhead and other novels and essays, is the subject of a recent documentary and a spate of books.

To critics who consider Rand’s philosophy that “of the psychopath, a misanthropic fantasy of cruelty, revenge and greed”, her posthumous success is alarming.

Relatively little attention, however, has been paid to the institute which bears her name and works, often behind the scenes, to direct her legacy and shape right-wing debate.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ayn Rand in 1957. Courtesy of Wikipedia.[end-div]