All posts by Mike

Two Nations Divided by Book Covers

“England and America are two countries separated by the same language”. This oft-used quote is usually attributed to Oscar Wilde or George Bernard Shaw. Regardless of who originated the phrase, neither author would be surprised to see that book covers are divided by the Atlantic Ocean as well. The Millions continues its fascinating annual comparative analysis.

American book covers on the left, British book covers on the right.

[div class=attrib]From The Millions:[end-div]

As we’ve done for several years now, we thought it might be fun to compare the U.S. and U.K. book cover designs of this year’s Morning News Tournament of Books contenders. Book cover art is an interesting element of the literary world — sometimes fixated upon, sometimes ignored — but, as readers, we are undoubtedly swayed by the little billboard that is the cover of every book we read. And, while many of us no longer do most of our reading on physical books with physical covers, those same cover images now beckon us from their grids in the various online bookstores. From my days as a bookseller, when import titles would sometimes find their way into our store, I’ve always found it especially interesting that the U.K. and U.S. covers often differ from one another. This would seem to suggest that certain layouts and imagery will better appeal to readers on one side of the Atlantic rather than the other. These differences are especially striking when we look at the covers side by side. The American covers are on the left, and the UK are on the right. Your equally inexpert analysis is encouraged in the comments.

[div class=attrib]Read the entire article and see more book covers after the jump.[end-div]

[div class=attrib]Book cover images courtesy of The Millions and their respective authors and publishers.[end-div]

Chocolate for the Soul and Mind (But Not Body)

Hot on the heels of the recent research finding that the Mediterranean diet improves heart health comes news that chocoholics the world over have been anxiously awaiting — chocolate improves brain function.

Researchers have found that chocolate rich in compounds known as flavanols can improve cognitive function. Now, before you rush out the door to the local grocery store to purchase a mountain of Mars bars (perhaps not coincidentally, Mars, Inc., partly funded the research study), Godiva pralines, Cadbury Flakes or a slab of Dove, take note that not all chocolate is created equal. Flavanols are found in their highest concentrations in raw cocoa. In fact, during the process of making most chocolate, including the dark kind, most flavanols are removed or destroyed. Perhaps the silver lining here is that to replicate the dose of flavanols found to have a positive effect on brain function, you would have to eat around 20 bars of chocolate per day for several months. This may be good news for your brain, but not for your waistline!
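
For a rough sense of the trade-off, here is a minimal back-of-the-envelope sketch. The ~50 mg of flavanols per 1.5-ounce bar comes from the article excerpted below; the ~220 calories per bar is our own rough assumption, not a figure from the study.

```python
# Back-of-the-envelope check: flavanols vs. calories from eating chocolate bars.
# Assumptions: ~50 mg flavanols per 1.5 oz bar (cited in the excerpt below);
# ~220 kcal per 1.5 oz bar is our own rough estimate, not from the study.
FLAVANOLS_PER_BAR_MG = 50
CALORIES_PER_BAR_KCAL = 220

for bars_per_day in (10, 20):
    flavanols_mg = bars_per_day * FLAVANOLS_PER_BAR_MG
    calories_kcal = bars_per_day * CALORIES_PER_BAR_KCAL
    print(f"{bars_per_day} bars/day -> ~{flavanols_mg} mg flavanols, ~{calories_kcal} kcal")
```

In other words, good news for the brain, and roughly two days’ worth of calories for the waistline.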

[div class=attrib]From Scientific American:[end-div]

It’s news chocolate lovers have been craving: raw cocoa may be packed with brain-boosting compounds. Researchers at the University of L’Aquila in Italy, with scientists from Mars, Inc., and their colleagues published findings last September that suggest cognitive function in the elderly is improved by ingesting high levels of natural compounds found in cocoa called flavanols. The study included 90 individuals with mild cognitive impairment, a precursor to Alzheimer’s disease. Subjects who drank a cocoa beverage containing either moderate or high levels of flavanols daily for eight weeks demonstrated greater cognitive function than those who consumed low levels of flavanols on three separate tests that measured factors that included verbal fluency, visual searching and attention.

Exactly how cocoa causes these changes is still unknown, but emerging research points to one flavanol in particular: (-)-epicatechin, pronounced “minus epicatechin.” Its name signifies its structure, differentiating it from other catechins, organic compounds highly abundant in cocoa and present in apples, wine and tea. The graph below shows how (-)-epicatechin fits into the world of brain-altering food molecules. Other studies suggest that the compound supports increased circulation and the growth of blood vessels, which could explain improvements in cognition, because better blood flow would bring the brain more oxygen and improve its function.

Animal research has already demonstrated how pure (-)-epicatechin enhances memory. Findings published last October in the Journal of Experimental Biology note that snails can remember a trained task—such as holding their breath in deoxygenated water—for more than a day when given (-)-epicatechin but for less than three hours without the flavanol. Salk Institute neuroscientist Fred Gage and his colleagues found previously that (-)-epicatechin improves spatial memory and increases vasculature in mice. “It’s amazing that a single dietary change could have such profound effects on behavior,” Gage says. If further research confirms the compound’s cognitive effects, flavanol supplements—or raw cocoa beans—could be just what the doctor ordered.

So, Can We Binge on Chocolate Now?

Nope, sorry. A food’s origin, processing, storage and preparation can each alter its chemical composition. As a result, it is nearly impossible to predict which flavanols—and how many—remain in your bonbon or cup of tea. Tragically for chocoholics, most methods of processing cocoa remove many of the flavanols found in the raw plant. Even dark chocolate, touted as the “healthy” option, can be treated such that the cocoa darkens while flavanols are stripped.

Researchers are only beginning to establish standards for measuring flavanol content in chocolate. A typical one and a half ounce chocolate bar might contain about 50 milligrams of flavanols, which means you would need to consume 10 to 20 bars daily to approach the flavanol levels used in the University of L’Aquila study. At that point, the sugars and fats in these sweet confections would probably outweigh any possible brain benefits. Mars Botanical nutritionist and toxicologist Catherine Kwik-Uribe, an author on the University of L’Aquila study, says, “There’s now even more reasons to enjoy tea, apples and chocolate. But diversity and variety in your diet remain key.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Video Game. But is it Art?

Only yesterday we posted a linguist’s claim that text-speak is an emerging language. You know, text-speak is that cryptic communication process that most teenagers engage in with their smartphones. Leaving aside the merits of including text-speak in the catalog of around 6,600 formal human languages, one thing is clear — text-speak is not Shakespearean English. So, don’t expect a novel written in it to win the Nobel Prize in Literature just yet.

Strangely, though, the same cannot be said for another recent phenomenon, the video game. Increasingly, some video games are being described in the same language that critics would normally reserve for a contemporary painting on canvas. Yes, welcome to the world of the video game as art. If you have ever played the immersive game Myst, or its sequel Riven (the original games came on CD-ROM), you will see why many classify their beautifully designed and rendered aesthetics as art. MoMA (the Museum of Modern Art) in New York thinks so too.

[div class=attrib]From the Guardian:[end-div]

New York’s Museum of Modern Art will be home to something more often associated with pasty teens and bar scenes when it opens an exhibit on video games on Friday.

Tetris, Pac-Man and the Sims are just a few of the classic games that will be housed inside a building that also displays works by Vincent Van Gogh, Claude Monet and Frida Kahlo. And though some may question whether video games are even art, the museum is incorporating the games into its Applied Design installation.

MoMA consulted scholars, digital experts, historians and critics to select games for the gallery based on their aesthetic quality – including the programming language used to create them. MoMA’s senior curator for architecture and design, Paola Antonelli, said the material used to create games is important in the same way the wood used to create a stool is.

With that as the focus, games are presented in their original formats, absent the consoles that often define them. Some will be playable with controllers, and more complex, long-running games like SimCity 2000 are presented as specially designed walkthroughs and demos.

MoMA’s curatorial team tailored controls especially for each of the playable games, including a customized joystick created for the Tetris game.

Some of the older games, which might have fragile or rare cartridges, will be displayed as “interactive emulation”, with a programmer translating the game code to something that will work on a newer computer system.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Myst, Cyan Inc. Courtesy of Cyan, Inc / Wikipedia.[end-div]

Txt-Speak: Linguistic Scourge or Beautiful New Language?

OMG! DYK wot Ur Teen is txtng?

[tube]yoF2vdLxsVQ[/tube]

Most parents of teenagers would undoubtedly side with the first characterization: texting is a disaster for the English language — and any other texted language for that matter. At first glance it would seem that most linguists and scholars of language would agree. After all, with seemingly non-existent grammar, poor syntax, complete disregard for spelling, substitution of symbols for words, and emphasis on childish phonetics, how can texting be considered anything more than a regression to a crude form of proto-human language?

Well, linguist John McWhorter holds that texting is actually a new form of speech, and for that matter, it’s rather special and evolving in real time. LOL? Read on and you will be 😮 (surprised). Oh, and if you still need help with texting translation, check out dtxtr.

[div class=attrib]From ars technica:[end-div]

Is texting shorthand a convenience, a catastrophe for the English language, or actually something new and special? John McWhorter, a linguist at Columbia University, sides with the latter. According to McWhorter, texting is actually a new form of speech, and he outlined the reasons why today at the TED2013 conference in Southern California.

We often hear that “texting is a scourge,” damaging the literacy of the young. But it’s “actually a miraculous thing,” McWhorter said. Texting, he argued, is not really writing at all—not in the way we have historically thought about writing. To explain this, he drew an important distinction between speech and writing as functions of language. Language was born in speech some 80,000 years ago (at least). Writing, on the other hand, is relatively new (5,000 or 6,000 years old). So humanity has been talking for longer than it has been writing, and this is especially true when you consider that writing skills have hardly been ubiquitous in human societies.

Furthermore, writing is typically not a reflection of casual speech. “We speak in word packets of seven to 10 words. It’s much more loose, much more telegraphic,” McWhorter said. Of course, speech can imitate writing, particularly in formal contexts like speechmaking. He pointed out that in those cases you might speak like you write, but it’s clearly not a natural way of speaking.

But what about writing like you speak? Historically this has been difficult. Speed is a key issue. “[Texting is] fingered-speech. Now we can write the way we talk,” McWhorter said. Yet we view this as some kind of decline. We don’t capitalize words, obey grammar or spelling rules, and the like. Yet there is an “emerging complexity…with new structure” at play. To McWhorter, this structure facilitates the speed and packeted nature of real speech.

Take “LOL,” for instance. It used to mean “laughing out loud,” but its meaning has changed. People aren’t guffawing every time they write it. Now “it’s a marker of empathy, a pragmatic particle,” he said. “It’s a way of using the language between actual people.”

This is just one example of a new battery of conventions McWhorter sees in texting. They are conventions that enable writing like we speak. Consider the rules of grammar. When you talk, you don’t think about capitalizing names or putting commas and question marks where they belong. You produce sounds, not written language. Texting leaves out many of these conventions, particularly among the young, who make extensive use of electronic communication tools.

McWhorter thinks what we are experiencing is a whole new way of writing that young people are using alongside their normal writing skills. It is a “balancing act… an expansion of their linguistic repertoire,” he argued.

The result is a whole new language, one that wouldn’t be intelligible to people in the year 1993 or 1973. And where it’s headed, it will likely be unintelligible to us were we to jump ahead 20 years in time. Nevertheless, McWhorter wants us to appreciate it now: “It’s a linguistic miracle happening right under our noses,” he said.

Forget the “death of writing” talk. Txt-speak is a new, rapidly evolving form of speech.

[div class=attrib]Follow the entire article after the jump.[end-div]

[div class=attrib]Video: John McWhorter courtesy of TED.[end-div]

Your Tax Dollars at Work

Naysayers would say that government, and hence taxpayer dollars, should not be used to fund science initiatives. After all, academia and business seem to do a fairly good job of discovery and innovation without a helping hand pilfering from the public purse. And, money aside, government-funded projects do raise a number of thorny questions: On what should our hard-earned income tax be spent? Who decides on the priorities? How is progress to be measured? Do taxpayers get any benefit in return? After all, many of us cringe at the thought of an unelected bureaucrat, or a committee of them, spending millions if not billions of our dollars. Why not just spend the money on fixing our national potholes?

But despite our many human flaws and foibles we are at heart explorers. We seek to know more about ourselves, our world and our universe. Those who seek answers to fundamental questions of consciousness, aging, and life are pioneers in this quest to expand our domain of understanding and knowledge. These answers increasingly aid our daily lives through continuous improvements in medical science and innovations in materials science. And our collective lives are enriched as we learn more about the how and the why of our own and our universe’s existence.

So, some of our dollars have gone towards big science at the Large Hadron Collider (LHC) beneath Switzerland looking for the constituents of matter, the wild laser experiment at the National Ignition Facility designed to enable controlled fusion reactions, and the Curiosity rover exploring Mars. Yet more of our dollars have gone to research and development into enhanced radar, graphene for next-generation circuitry, online courseware, stress in coral reefs, sensors to aid the elderly, ultra-high-speed internet for emergency response, erosion mitigation, self-cleaning surfaces, and flexible solar panels.

Now comes word that the U.S. government wants to spend $3 billion — over 10 years — on building a comprehensive map of the human brain. The media has dubbed this the “connectome,” following similar efforts to map our human DNA, the genome. While this is the type of big science that may yield tangible results and benefits only decades from now, it ignites the passion and curiosity of our children to continue to seek and to find answers. So, this is good news for science and for the explorer who lurks within us all.

[div class=attrib]From ars technica:[end-div]

Over the weekend, The New York Times reported that the Obama administration is preparing to launch biology into its first big project post-genome: mapping the activity and processes that power the human brain. The initial report suggested that the project would get roughly $3 billion over 10 years to fund projects that would provide an unprecedented understanding of how the brain operates.

But the report was remarkably short on the scientific details of what the studies would actually accomplish or where the money would actually go. To get a better sense, we talked with Brown University’s John Donoghue, who is one of the academic researchers who has been helping to provide the rationale and direction for the project. Although he couldn’t speak for the administration’s plans, he did describe the outlines of what’s being proposed and why, and he provided a glimpse into what he sees as the project’s benefits.

What are we talking about doing?

We’ve already made great progress in understanding the behavior of individual neurons, and scientists have done some excellent work in studying small populations of them. On the other end of the spectrum, decades of anatomical studies have provided us with a good picture of how different regions of the brain are connected. “There’s a big gap in our knowledge because we don’t know the intermediate scale,” Donoghue told Ars. The goal, he said, “is not a wiring diagram—it’s a functional map, an understanding.”

This would involve a combination of things, including looking at how larger populations of neurons within a single structure coordinate their activity, as well as trying to get a better understanding of how different structures within the brain coordinate their activity. What scale of neuron will we need to study? Donoghue answered that question with one of his own: “At what point does the emergent property come out?” Things like memory and consciousness emerge from the actions of lots of neurons, and we need to capture enough of those to understand the processes that let them emerge. Right now, we don’t really know what that level is. It’s certainly “above 10,” according to Donoghue. “I don’t think we need to study every neuron,” he said. Beyond that, part of the project will focus on what Donoghue called “the big question”—what emerges in the brain at these various scales?

While he may have called emergence “the big question,” it quickly became clear he had a number of big questions in mind. Neural activity clearly encodes information, and we can record it, but we don’t always understand the code well enough to understand the meaning of our recordings. When I asked Donoghue about this, he said, “This is it! One of the big goals is cracking the code.”

Donoghue was enthused about the idea that the different aspects of the project would feed into each other. “They go hand in hand,” he said. “As we gain more functional information, it’ll inform the connectional map and vice versa.” In the same way, knowing more about neural coding will help us interpret the activity we see, while more detailed recordings of neural activity will make it easier to infer the code.

As we build on these feedbacks to understand more complex examples of the brain’s emergent behaviors, the big picture will emerge. Donoghue hoped that the work will ultimately provide “a way of understanding how you turn thought into action, how you perceive, the nature of the mind, cognition.”

How will we actually do this?

Perception and the nature of the mind have bothered scientists and philosophers for centuries—why should we think we can tackle them now? Donoghue cited three fields that had given him and his collaborators cause for optimism: nanotechnology, synthetic biology, and optical tracers. We’ve now reached the point where, thanks to advances in nanotechnology, we’re able to produce much larger arrays of electrodes with fine control over their shape, allowing us to monitor much larger populations of neurons at the same time. On a larger scale, chemical tracers can now register the activity of large populations of neurons through flashes of fluorescence, giving us a way of monitoring huge populations of cells. And Donoghue suggested that it might be possible to use synthetic biology to translate neural activity into a permanent record of a cell’s activity (perhaps stored in DNA itself) for later retrieval.

Right now, in Donoghue’s view, the problem is that the people developing these technologies and the neuroscience community aren’t talking enough. Biologists don’t know enough about the tools already out there, and the materials scientists aren’t getting feedback from them on ways to make their tools more useful.

Since the problem is understanding the activity of the brain at the level of large populations of neurons, the goal will be to develop the tools needed to do so and to make sure they are widely adopted by the bioscience community. Each of these approaches is limited in various ways, so it will be important to use all of them and to continue the technology development.

Assuming the information can be recorded, it will generate huge amounts of data, which will need to be shared in order to have the intended impact. And we’ll need to be able to perform pattern recognition across these vast datasets in order to identify correlations in activity among different populations of neurons. So there will be a heavy computational component as well.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: White matter fiber architecture of the human brain. Courtesy of the Human Connectome Project.[end-div]

Yourself, The Illusion

A growing body of evidence suggests that our brains live in the future and construct explanations for the past, and that our notion of the present is an entirely fictitious concoction. On the surface this makes our lives seem like nothing more than a construction taken right out of The Matrix movies. However, while we may not be pawns in an illusion constructed by malevolent aliens, our perception of “self” does appear to be illusory. As researchers delve deeper into the inner workings of the brain it becomes clearer that our conscious selves are a beautifully derived narrative, built by the brain to make sense of the past and prepare for our future actions.

[div class=attrib]From the New Scientist:[end-div]

It seems obvious that we exist in the present. The past is gone and the future has not yet happened, so where else could we be? But perhaps we should not be so certain.

Sensory information reaches us at different speeds, yet appears unified as one moment. Nerve signals need time to be transmitted and time to be processed by the brain. And there are events – such as a light flashing, or someone snapping their fingers – that take less time to occur than our system needs to process them. By the time we become aware of the flash or the finger-snap, it is already history.

Our experience of the world resembles a television broadcast with a time lag; conscious perception is not “live”. This on its own might not be too much cause for concern, but in the same way the TV time lag makes last-minute censorship possible, our brain, rather than showing us what happened a moment ago, sometimes constructs a present that has never actually happened.

Evidence for this can be found in the “flash-lag” illusion. In one version, a screen displays a rotating disc with an arrow on it, pointing outwards (see “Now you see it…”). Next to the disc is a spot of light that is programmed to flash at the exact moment the spinning arrow passes it. Yet this is not what we perceive. Instead, the flash lags behind, apparently occurring after the arrow has passed.

One explanation is that our brain extrapolates into the future. Visual stimuli take time to process, so the brain compensates by predicting where the arrow will be. The static flash – which it can’t anticipate – seems to lag behind.

Neat as this explanation is, it cannot be right, as was shown by a variant of the illusion designed by David Eagleman of the Baylor College of Medicine in Houston, Texas, and Terrence Sejnowski of the Salk Institute for Biological Studies in La Jolla, California.

If the brain were predicting the spinning arrow’s trajectory, people would see the lag even if the arrow stopped at the exact moment it was pointing at the spot. But in this case the lag does not occur. What’s more, if the arrow starts stationary and moves in either direction immediately after the flash, the movement is perceived before the flash. How can the brain predict the direction of movement if it doesn’t start until after the flash?

The explanation is that rather than extrapolating into the future, our brain is interpolating events in the past, assembling a story of what happened retrospectively (Science, vol 287, p 2036). The perception of what is happening at the moment of the flash is determined by what happens to the disc after it. This seems paradoxical, but other tests have confirmed that what is perceived to have occurred at a certain time can be influenced by what happens later.

All of this is slightly worrying if we hold on to the common-sense view that our selves are placed in the present. If the moment in time we are supposed to be inhabiting turns out to be a mere construction, the same is likely to be true of the self existing in that present.

[div class=attrib]Read the entire article after the jump.[end-div]

Engineering Your Food Addiction

Fast food, snack foods and all manner of processed foods are a multi-billion-dollar global industry. So, it’s no surprise that companies collectively spend hundreds of millions of dollars each year perfecting the perfect bite. Importantly, part of this perfection (for the businesses) is to ensure that you keep coming back for more.

By all accounts the “Cheeto” is as close to processed-food-addiction heaven as we can get — so far. It has just the right amounts of salt (too much) and fat (too much), the right crunchiness, and something known as vanishing caloric density (it melts in the mouth at the optimum rate). Aesthetically sad, but scientifically true.

[div class=attrib]From the New York Times:[end-div]

On the evening of April 8, 1999, a long line of Town Cars and taxis pulled up to the Minneapolis headquarters of Pillsbury and discharged 11 men who controlled America’s largest food companies. Nestlé was in attendance, as were Kraft and Nabisco, General Mills and Procter & Gamble, Coca-Cola and Mars. Rivals any other day, the C.E.O.’s and company presidents had come together for a rare, private meeting. On the agenda was one item: the emerging obesity epidemic and how to deal with it. While the atmosphere was cordial, the men assembled were hardly friends. Their stature was defined by their skill in fighting one another for what they called “stomach share” — the amount of digestive space that any one company’s brand can grab from the competition.

James Behnke, a 55-year-old executive at Pillsbury, greeted the men as they arrived. He was anxious but also hopeful about the plan that he and a few other food-company executives had devised to engage the C.E.O.’s on America’s growing weight problem. “We were very concerned, and rightfully so, that obesity was becoming a major issue,” Behnke recalled. “People were starting to talk about sugar taxes, and there was a lot of pressure on food companies.” Getting the company chiefs in the same room to talk about anything, much less a sensitive issue like this, was a tricky business, so Behnke and his fellow organizers had scripted the meeting carefully, honing the message to its barest essentials. “C.E.O.’s in the food industry are typically not technical guys, and they’re uncomfortable going to meetings where technical people talk in technical terms about technical things,” Behnke said. “They don’t want to be embarrassed. They don’t want to make commitments. They want to maintain their aloofness and autonomy.”

A chemist by training with a doctoral degree in food science, Behnke became Pillsbury’s chief technical officer in 1979 and was instrumental in creating a long line of hit products, including microwaveable popcorn. He deeply admired Pillsbury but in recent years had grown troubled by pictures of obese children suffering from diabetes and the earliest signs of hypertension and heart disease. In the months leading up to the C.E.O. meeting, he was engaged in conversation with a group of food-science experts who were painting an increasingly grim picture of the public’s ability to cope with the industry’s formulations — from the body’s fragile controls on overeating to the hidden power of some processed foods to make people feel hungrier still. It was time, he and a handful of others felt, to warn the C.E.O.’s that their companies may have gone too far in creating and marketing products that posed the greatest health concerns.


The discussion took place in Pillsbury’s auditorium. The first speaker was a vice president of Kraft named Michael Mudd. “I very much appreciate this opportunity to talk to you about childhood obesity and the growing challenge it presents for us all,” Mudd began. “Let me say right at the start, this is not an easy subject. There are no easy answers — for what the public health community must do to bring this problem under control or for what the industry should do as others seek to hold it accountable for what has happened. But this much is clear: For those of us who’ve looked hard at this issue, whether they’re public health professionals or staff specialists in your own companies, we feel sure that the one thing we shouldn’t do is nothing.”

As he spoke, Mudd clicked through a deck of slides — 114 in all — projected on a large screen behind him. The figures were staggering. More than half of American adults were now considered overweight, with nearly one-quarter of the adult population — 40 million people — clinically defined as obese. Among children, the rates had more than doubled since 1980, and the number of kids considered obese had shot past 12 million. (This was still only 1999; the nation’s obesity rates would climb much higher.) Food manufacturers were now being blamed for the problem from all sides — academia, the Centers for Disease Control and Prevention, the American Heart Association and the American Cancer Society. The secretary of agriculture, over whom the industry had long held sway, had recently called obesity a “national epidemic.”

Mudd then did the unthinkable. He drew a connection to the last thing in the world the C.E.O.’s wanted linked to their products: cigarettes. First came a quote from a Yale University professor of psychology and public health, Kelly Brownell, who was an especially vocal proponent of the view that the processed-food industry should be seen as a public health menace: “As a culture, we’ve become upset by the tobacco companies advertising to children, but we sit idly by while the food companies do the very same thing. And we could make a claim that the toll taken on the public health by a poor diet rivals that taken by tobacco.”

“If anyone in the food industry ever doubted there was a slippery slope out there,” Mudd said, “I imagine they are beginning to experience a distinct sliding sensation right about now.”

Mudd then presented the plan he and others had devised to address the obesity problem. Merely getting the executives to acknowledge some culpability was an important first step, he knew, so his plan would start off with a small but crucial move: the industry should use the expertise of scientists — its own and others — to gain a deeper understanding of what was driving Americans to overeat. Once this was achieved, the effort could unfold on several fronts. To be sure, there would be no getting around the role that packaged foods and drinks play in overconsumption. They would have to pull back on their use of salt, sugar and fat, perhaps by imposing industrywide limits. But it wasn’t just a matter of these three ingredients; the schemes they used to advertise and market their products were critical, too. Mudd proposed creating a “code to guide the nutritional aspects of food marketing, especially to children.”

“We are saying that the industry should make a sincere effort to be part of the solution,” Mudd concluded. “And that by doing so, we can help to defuse the criticism that’s building against us.”

What happened next was not written down. But according to three participants, when Mudd stopped talking, the one C.E.O. whose recent exploits in the grocery store had awed the rest of the industry stood up to speak. His name was Stephen Sanger, and he was also the person — as head of General Mills — who had the most to lose when it came to dealing with obesity. Under his leadership, General Mills had overtaken not just the cereal aisle but other sections of the grocery store. The company’s Yoplait brand had transformed traditional unsweetened breakfast yogurt into a veritable dessert. It now had twice as much sugar per serving as General Mills’ marshmallow cereal Lucky Charms. And yet, because of yogurt’s well-tended image as a wholesome snack, sales of Yoplait were soaring, with annual revenue topping $500 million. Emboldened by the success, the company’s development wing pushed even harder, inventing a Yoplait variation that came in a squeezable tube — perfect for kids. They called it Go-Gurt and rolled it out nationally in the weeks before the C.E.O. meeting. (By year’s end, it would hit $100 million in sales.)

According to the sources I spoke with, Sanger began by reminding the group that consumers were “fickle.” (Sanger declined to be interviewed.) Sometimes they worried about sugar, other times fat. General Mills, he said, acted responsibly to both the public and shareholders by offering products to satisfy dieters and other concerned shoppers, from low sugar to added whole grains. But most often, he said, people bought what they liked, and they liked what tasted good. “Don’t talk to me about nutrition,” he reportedly said, taking on the voice of the typical consumer. “Talk to me about taste, and if this stuff tastes better, don’t run around trying to sell stuff that doesn’t taste good.”

To react to the critics, Sanger said, would jeopardize the sanctity of the recipes that had made his products so successful. General Mills would not pull back. He would push his people onward, and he urged his peers to do the same. Sanger’s response effectively ended the meeting.

“What can I say?” James Behnke told me years later. “It didn’t work. These guys weren’t as receptive as we thought they would be.” Behnke chose his words deliberately. He wanted to be fair. “Sanger was trying to say, ‘Look, we’re not going to screw around with the company jewels here and change the formulations because a bunch of guys in white coats are worried about obesity.’ ”

The meeting was remarkable, first, for the insider admissions of guilt. But I was also struck by how prescient the organizers of the sit-down had been. Today, one in three adults is considered clinically obese, along with one in five kids, and 24 million Americans are afflicted by type 2 diabetes, often caused by poor diet, with another 79 million people having pre-diabetes. Even gout, a painful form of arthritis once known as “the rich man’s disease” for its associations with gluttony, now afflicts eight million Americans.

The public and the food companies have known for decades now — or at the very least since this meeting — that sugary, salty, fatty foods are not good for us in the quantities that we consume them. So why are the diabetes and obesity and hypertension numbers still spiraling out of control? It’s not just a matter of poor willpower on the part of the consumer and a give-the-people-what-they-want attitude on the part of the food manufacturers. What I found, over four years of research and reporting, was a conscious effort — taking place in labs and marketing meetings and grocery-store aisles — to get people hooked on foods that are convenient and inexpensive. I talked to more than 300 people in or formerly employed by the processed-food industry, from scientists to marketers to C.E.O.’s. Some were willing whistle-blowers, while others spoke reluctantly when presented with some of the thousands of pages of secret memos that I obtained from inside the food industry’s operations. What follows is a series of small case studies of a handful of characters whose work then, and perspective now, sheds light on how the foods are created and sold to people who, while not powerless, are extremely vulnerable to the intensity of these companies’ industrial formulations and selling campaigns.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Cheeto puffs. Courtesy of tumblr.[end-div]

2013: Mississippi Officially Abolishes Slavery

The 13th Amendment to the United States Constitution was ratified in December 1865. It abolished slavery.

But, it seems that someone in Mississippi did not follow the formal process. So, the state’s ratification became official only a couple of weeks ago — 147 years late. Thanks go to two enterprising scholars and the movie Lincoln.

[div class=attrib]From the Guardian:[end-div]

Mississippi has officially ratified the 13th amendment to the US constitution, which abolishes slavery and which was officially noted in the constitution on 6 December 1865. All 50 states have now ratified the amendment.

Mississippi’s tardiness has been put down to an oversight that was only corrected after two academics embarked on research prompted by watching Lincoln, Steven Spielberg’s Oscar-nominated film about president Abraham Lincoln’s efforts to secure the amendment.

Dr Ranjan Batra, a professor in the department of neurobiology and anatomical sciences at the University of Mississippi Medical Center, saw Spielberg’s film and wondered about the implementation of the 13th amendment after the Civil War. He discussed the issue with Ken Sullivan, an anatomical material specialist at UMC, who began to research the matter.

Sullivan, a longtime resident of Mississippi, remembered that a 1995 move to ratify the 13th amendment had passed the state Senate and House. He tracked down a copy of the bill and learned that its last paragraph required the secretary of state to send a copy to the office of the federal register, to officially sign it into law. That copy was never sent.

Sullivan contacted the current Mississippi secretary of state, Delbert Hosemann, who filed the paperwork for the passage of the bill on 30 January. The bill passed on 7 February. Hosemann said the passage of the bill “was long overdue”.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Seal of the State of Mississippi. Courtesy of Wikipedia.[end-div]

Introverts: Misunderstood, Oppressed

It’s time for Occupy Extroverts. Finally, this would give the introverts of the world the opportunity to be understood and valued. Now, will the introverts rise up to challenge the extroverts, insert one or two words into a conversation and take their rightful place? Hmm, perhaps not; it may require too much attention and/or talking. What a loss — the world could learn so much from us.

[div class=attrib]From the Atlantic:[end-div]

Do you know someone who needs hours alone every day? Who loves quiet conversations about feelings or ideas, and can give a dynamite presentation to a big audience, but seems awkward in groups and maladroit at small talk? Who has to be dragged to parties and then needs the rest of the day to recuperate? Who growls or scowls or grunts or winces when accosted with pleasantries by people who are just trying to be nice?

If so, do you tell this person he is “too serious,” or ask if he is okay? Regard him as aloof, arrogant, rude? Redouble your efforts to draw him out?

If you answered yes to these questions, chances are that you have an introvert on your hands—and that you aren’t caring for him properly. Science has learned a good deal in recent years about the habits and requirements of introverts. It has even learned, by means of brain scans, that introverts process information differently from other people (I am not making this up). If you are behind the curve on this important matter, be reassured that you are not alone. Introverts may be common, but they are also among the most misunderstood and aggrieved groups in America, possibly the world.

I know. My name is Jonathan, and I am an introvert.

Oh, for years I denied it. After all, I have good social skills. I am not morose or misanthropic. Usually. I am far from shy. I love long conversations that explore intimate thoughts or passionate interests. But at last I have self-identified and come out to my friends and colleagues. In doing so, I have found myself liberated from any number of damaging misconceptions and stereotypes. Now I am here to tell you what you need to know in order to respond sensitively and supportively to your own introverted family members, friends, and colleagues. Remember, someone you know, respect, and interact with every day is an introvert, and you are probably driving this person nuts. It pays to learn the warning signs.

What is introversion? In its modern sense, the concept goes back to the 1920s and the psychologist Carl Jung. Today it is a mainstay of personality tests, including the widely used Myers-Briggs Type Indicator. Introverts are not necessarily shy. Shy people are anxious or frightened or self-excoriating in social settings; introverts generally are not. Introverts are also not misanthropic, though some of us do go along with Sartre as far as to say “Hell is other people at breakfast.” Rather, introverts are people who find other people tiring.

Extroverts are energized by people, and wilt or fade when alone. They often seem bored by themselves, in both senses of the expression. Leave an extrovert alone for two minutes and he will reach for his cell phone. In contrast, after an hour or two of being socially “on,” we introverts need to turn off and recharge. My own formula is roughly two hours alone for every hour of socializing. This isn’t antisocial. It isn’t a sign of depression. It does not call for medication. For introverts, to be alone with our thoughts is as restorative as sleeping, as nourishing as eating. Our motto: “I’m okay, you’re okay—in small doses.”

How many people are introverts? I performed exhaustive research on this question, in the form of a quick Google search. The answer: About 25 percent. Or: Just under half. Or—my favorite—”a minority in the regular population but a majority in the gifted population.”

Are introverts misunderstood? Wildly. That, it appears, is our lot in life. “It is very difficult for an extrovert to understand an introvert,” write the education experts Jill D. Burruss and Lisa Kaenzig. (They are also the source of the quotation in the previous paragraph.) Extroverts are easy for introverts to understand, because extroverts spend so much of their time working out who they are in voluble, and frequently inescapable, interaction with other people. They are as inscrutable as puppy dogs. But the street does not run both ways. Extroverts have little or no grasp of introversion. They assume that company, especially their own, is always welcome. They cannot imagine why someone would need to be alone; indeed, they often take umbrage at the suggestion. As often as I have tried to explain the matter to extroverts, I have never sensed that any of them really understood. They listen for a moment and then go back to barking and yipping.

Are introverts oppressed? I would have to say so. For one thing, extroverts are overrepresented in politics, a profession in which only the garrulous are really comfortable. Look at George W. Bush. Look at Bill Clinton. They seem to come fully to life only around other people. To think of the few introverts who did rise to the top in politics—Calvin Coolidge, Richard Nixon—is merely to drive home the point. With the possible exception of Ronald Reagan, whose fabled aloofness and privateness were probably signs of a deep introverted streak (many actors, I’ve read, are introverts, and many introverts, when socializing, feel like actors), introverts are not considered “naturals” in politics.

Extroverts therefore dominate public life. This is a pity. If we introverts ran the world, it would no doubt be a calmer, saner, more peaceful sort of place. As Coolidge is supposed to have said, “Don’t you know that four fifths of all our troubles in this life would disappear if we would just sit down and keep still?” (He is also supposed to have said, “If you don’t say anything, you won’t be called on to repeat it.” The only thing a true introvert dislikes more than talking about himself is repeating himself.)

With their endless appetite for talk and attention, extroverts also dominate social life, so they tend to set expectations. In our extrovertist society, being outgoing is considered normal and therefore desirable, a mark of happiness, confidence, leadership. Extroverts are seen as bighearted, vibrant, warm, empathic. “People person” is a compliment. Introverts are described with words like “guarded,” “loner,” “reserved,” “taciturn,” “self-contained,” “private”—narrow, ungenerous words, words that suggest emotional parsimony and smallness of personality. Female introverts, I suspect, must suffer especially. In certain circles, particularly in the Midwest, a man can still sometimes get away with being what they used to call a strong and silent type; introverted women, lacking that alternative, are even more likely than men to be perceived as timid, withdrawn, haughty.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: This Man Was Talked to Death. Artist: John Cameron / Currier & Ives, c1883. Library of Congress.[end-div]

Intelligenetics

Intelligenetics isn’t recognized as a real word by Webster’s or the Oxford English Dictionary. We just coined a term that might best represent the growing field of research examining the genetic basis of human intelligence. Of course, it’s not a new subject, and it comes with many cautionary tales. Past research into the genetic foundations of intelligence has often been misused by one group seeking racial, ethnic or political power over another. However, with strong and appropriate safeguards in place, science does have a legitimate role in uncovering what makes some brains excel while others do not.

[div class=attrib]From the Wall Street Journal:[end-div]

At a former paper-printing factory in Hong Kong, a 20-year-old wunderkind named Zhao Bowen has embarked on a challenging and potentially controversial quest: uncovering the genetics of intelligence.

Mr. Zhao is a high-school dropout who has been described as China’s Bill Gates. He oversees the cognitive genomics lab at BGI, a private company that is partly funded by the Chinese government.

At the Hong Kong facility, more than 100 powerful gene-sequencing machines are deciphering about 2,200 DNA samples, reading off their 3.2 billion chemical base pairs one letter at a time. These are no ordinary DNA samples. Most come from some of America’s brightest people—extreme outliers in the intelligence sweepstakes.

The majority of the DNA samples come from people with IQs of 160 or higher. By comparison, average IQ in any population is set at 100. The average Nobel laureate registers at around 145. Only one in every 30,000 people is as smart as most of the participants in the Hong Kong project—and finding them was a quest of its own.

“People have chosen to ignore the genetics of intelligence for a long time,” said Mr. Zhao, who hopes to publish his team’s initial findings this summer. “People believe it’s a controversial topic, especially in the West. That’s not the case in China,” where IQ studies are regarded more as a scientific challenge and therefore are easier to fund.

The roots of intelligence are a mystery. Studies show that at least half of the variation in intelligence quotient, or IQ, is inherited. But while scientists have identified some genes that can significantly lower IQ—in people afflicted with mental retardation, for example—truly important genes that affect normal IQ variation have yet to be pinned down.

The Hong Kong researchers hope to crack the problem by comparing the genomes of super-high-IQ individuals with the genomes of people drawn from the general population. By studying the variation in the two groups, they hope to isolate some of the hereditary factors behind IQ.

Their conclusions could lay the groundwork for a genetic test to predict a person’s inherited cognitive ability. Such a tool could be useful, but it also might be divisive.

“If you can identify kids who are going to have trouble learning, you can intervene” early on in their lives, through special schooling or other programs, says Robert Plomin, a professor of behavioral genetics at King’s College, London, who is involved in the BGI project.

[div class=attrib]Read the entire article following the jump.[end-div]

The Police Drones Next Door

You might expect to find police drones in the pages of a science fiction novel by Philip K. Dick or Iain M. Banks. But by 2015, citizens of the United States may well see these unmanned flying machines patrolling the skies over the homeland. The U.S. government recently pledged to loosen Federal Aviation Administration (FAA) restrictions, which would allow local law enforcement agencies to use drones in just a few short years. So, soon the least of your worries will be traffic signal cameras and the local police officer armed with a radar gun. Our home-grown drones are likely to be deployed first for surveillance, but, undoubtedly, armaments will follow. Hellfire missiles over Helena, Montana, anyone?

[div class=attrib]From National Geographic:[end-div]

At the edge of a stubbly, dried-out alfalfa field outside Grand Junction, Colorado, Deputy Sheriff Derek Johnson, a stocky young man with a buzz cut, squints at a speck crawling across the brilliant, hazy sky. It’s not a vulture or crow but a Falcon—a new brand of unmanned aerial vehicle, or drone, and Johnson is flying it. The sheriff’s office here in Mesa County, a plateau of farms and ranches corralled by bone-hued mountains, is weighing the Falcon’s potential for spotting lost hikers and criminals on the lam. A laptop on a table in front of Johnson shows the drone’s flickering images of a nearby highway.

Standing behind Johnson, watching him watch the Falcon, is its designer, Chris Miser. Rock-jawed, arms crossed, sunglasses pushed atop his shaved head, Miser is a former Air Force captain who worked on military drones before quitting in 2007 to found his own company in Aurora, Colorado. The Falcon has an eight-foot wingspan but weighs just 9.5 pounds. Powered by an electric motor, it carries two swiveling cameras, visible and infrared, and a GPS-guided autopilot. Sophisticated enough that it can’t be exported without a U.S. government license, the Falcon is roughly comparable, Miser says, to the Raven, a hand-launched military drone—but much cheaper. He plans to sell two drones and support equipment for about the price of a squad car.

A law signed by President Barack Obama in February 2012 directs the Federal Aviation Administration (FAA) to throw American airspace wide open to drones by September 30, 2015. But for now Mesa County, with its empty skies, is one of only a few jurisdictions with an FAA permit to fly one. The sheriff’s office has a three-foot-wide helicopter drone called a Draganflyer, which stays aloft for just 20 minutes.

The Falcon can fly for an hour, and it’s easy to operate. “You just put in the coordinates, and it flies itself,” says Benjamin Miller, who manages the unmanned aircraft program for the sheriff’s office. To navigate, Johnson types the desired altitude and airspeed into the laptop and clicks targets on a digital map; the autopilot does the rest. To launch the Falcon, you simply hurl it into the air. An accelerometer switches on the propeller only after the bird has taken flight, so it won’t slice the hand that launches it.

The stench from a nearby chicken-processing plant wafts over the alfalfa field. “Let’s go ahead and tell it to land,” Miser says to Johnson. After the deputy sheriff clicks on the laptop, the Falcon swoops lower, releases a neon orange parachute, and drifts gently to the ground, just yards from the spot Johnson clicked on. “The Raven can’t do that,” Miser says proudly.

Offspring of 9/11

A dozen years ago only two communities cared much about drones. One was hobbyists who flew radio-controlled planes and choppers for fun. The other was the military, which carried out surveillance missions with unmanned aircraft like the General Atomics Predator.

Then came 9/11, followed by the U.S. invasions of Afghanistan and Iraq, and drones rapidly became an essential tool of the U.S. armed forces. The Pentagon armed the Predator and a larger unmanned surveillance plane, the Reaper, with missiles, so that their operators—sitting in offices in places like Nevada or New York—could destroy as well as spy on targets thousands of miles away. Aerospace firms churned out a host of smaller drones with increasingly clever computer chips and keen sensors—cameras but also instruments that measure airborne chemicals, pathogens, radioactive materials.

The U.S. has deployed more than 11,000 military drones, up from fewer than 200 in 2002. They carry out a wide variety of missions while saving money and American lives. Within a generation they could replace most manned military aircraft, says John Pike, a defense expert at the think tank GlobalSecurity.org. Pike suspects that the F-35 Lightning II, now under development by Lockheed Martin, might be “the last fighter with an ejector seat, and might get converted into a drone itself.”

At least 50 other countries have drones, and some, notably China, Israel, and Iran, have their own manufacturers. Aviation firms—as well as university and government researchers—are designing a flock of next-generation aircraft, ranging in size from robotic moths and hummingbirds to Boeing’s Phantom Eye, a hydrogen-fueled behemoth with a 150-foot wingspan that can cruise at 65,000 feet for up to four days.

More than a thousand companies, from tiny start-ups like Miser’s to major defense contractors, are now in the drone business—and some are trying to steer drones into the civilian world. Predators already help Customs and Border Protection agents spot smugglers and illegal immigrants sneaking into the U.S. NASA-operated Global Hawks record atmospheric data and peer into hurricanes. Drones have helped scientists gather data on volcanoes in Costa Rica, archaeological sites in Russia and Peru, and flooding in North Dakota.

So far only a dozen police departments, including ones in Miami and Seattle, have applied to the FAA for permits to fly drones. But drone advocates—who generally prefer the term UAV, for unmanned aerial vehicle—say all 18,000 law enforcement agencies in the U.S. are potential customers. They hope UAVs will soon become essential too for agriculture (checking and spraying crops, finding lost cattle), journalism (scoping out public events or celebrity backyards), weather forecasting, traffic control. “The sky’s the limit, pun intended,” says Bill Borgia, an engineer at Lockheed Martin. “Once we get UAVs in the hands of potential users, they’ll think of lots of cool applications.”

The biggest obstacle, advocates say, is current FAA rules, which tightly restrict drone flights by private companies and government agencies (though not by individual hobbyists). Even with an FAA permit, operators can’t fly UAVs above 400 feet or near airports or other zones with heavy air traffic, and they must maintain visual contact with the drones. All that may change, though, under the new law, which requires the FAA to allow the “safe integration” of UAVs into U.S. airspace.

If the FAA relaxes its rules, says Mark Brown, the civilian market for drones—and especially small, low-cost, tactical drones—could soon dwarf military sales, which in 2011 totaled more than three billion dollars. Brown, a former astronaut who is now an aerospace consultant in Dayton, Ohio, helps bring drone manufacturers and potential customers together. The success of military UAVs, he contends, has created “an appetite for more, more, more!” Brown’s PowerPoint presentation is called “On the Threshold of a Dream.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Unmanned drone used to patrol the U.S.-Canadian border. (U.S. Customs and Border Protection/AP).[end-div]

Measuring Antifragility

Nassim Nicholas Taleb, one of our favorite thinkers and writers here at theDiagonal, recently published Antifragile, the follow-up to his bestselling The Black Swan. In Antifragile Taleb argues that some things thrive when subjected to volatility, disorder and uncertainty. He labels this positive response to external stressors antifragility. (Ironically, the book was published by Random House.)

In his essay, excerpted below, Taleb summarizes the basic tenets of antifragility and the payoff we would gain from measuring it empirically. That would certainly be a leap forward from our persistent and misguided focus on luck in research, relationships and business.

[div class=attrib]From Edge.org:[end-div]

Something central, very central, is missing in historical accounts of scientific and technological discovery. The discourse and controversies focus on the role of luck as opposed to teleological programs (from telos, “aim”), that is, ones that rely on pre-set direction from formal science. This is a faux-debate: luck cannot lead to formal research policies; one cannot systematize, formalize, and program randomness. The driver is neither luck nor direction, but must be in the asymmetry (or convexity) of payoffs, a simple mathematical property that has lain hidden from the discourse, and the understanding of which can lead to precise research principles and protocols.

MISSING THE ASYMMETRY

The luck versus knowledge story is as follows. Ironically, we have vastly more evidence for results linked to luck than to those coming from the teleological, outside physics—even after discounting for the sensationalism. In some opaque and nonlinear fields, like medicine or engineering, the teleological exceptions are in the minority, such as a small number of designer drugs. This makes us live in the contradiction that we largely got here to where we are thanks to undirected chance, but we build research programs going forward based on direction and narratives. And, what is worse, we are fully conscious of the inconsistency.

The point we will be making here is that logically, neither trial and error nor “chance” and serendipity can be behind the gains in technology and empirical science attributed to them. By definition chance cannot lead to long term gains (it would no longer be chance); trial and error cannot be unconditionally effective: errors cause planes to crash, buildings to collapse, and knowledge to regress.

The beneficial properties have to reside in the type of exposure, that is, the payoff function and not in the “luck” part: there needs to be a significant asymmetry between the gains (as they need to be large) and the errors (small or harmless), and it is from such asymmetry that luck and trial and error can produce results. The general mathematical property of this asymmetry is convexity (which is explained in Figure 1); functions with larger gains than losses are nonlinear-convex and resemble financial options. Critically, convex payoffs benefit from uncertainty and disorder. The nonlinear properties of the payoff function, that is, convexity, allow us to formulate rational and rigorous research policies, and ones that allow the harvesting of randomness.
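In standard notation, the convexity property being invoked here is Jensen's inequality: for a convex payoff function f and a random outcome X,

    \mathbb{E}[f(X)] \;\ge\; f(\mathbb{E}[X]) \qquad \text{(Jensen's inequality, convex } f\text{)}

and a mean-preserving increase in the spread of X cannot decrease, and generally increases, the expected payoff. In words: when the downside is capped and the upside is open, added volatility helps on average.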

OPAQUE SYSTEMS AND OPTIONALITY

Further, it is in complex systems, ones in which we have little visibility of the chains of cause-consequences, that tinkering, bricolage, or similar variations of trial and error have been shown to vastly outperform the teleological—it is nature’s modus operandi. But tinkering needs to be convex; it is imperative. Take the most opaque of all, cooking, which relies entirely on the heuristics of trial and error, as it has not been possible for us to design a dish directly from chemical equations or reverse-engineer a taste from nutritional labels. We take hummus, add an ingredient, say a spice, taste to see if there is an improvement from the complex interaction, and retain if we like the addition or discard the rest. Critically we have the option, not the obligation to keep the result, which allows us to retain the upper bound and be unaffected by adverse outcomes.

This “optionality” is what is behind the convexity of research outcomes. An option allows its user to get more upside than downside as he can select among the results what fits him and forget about the rest (he has the option, not the obligation). Hence our understanding of optionality can be extended to research programs — this discussion is motivated by the fact that the author spent most of his adult life as an option trader. If we translate François Jacob’s idea into these terms, evolution is a convex function of stressors and errors — genetic mutations come at no cost and are retained only if they are an improvement. So are the ancestral heuristics and rules of thumb embedded in society; formed like recipes by continuously taking the upper-bound of “what works”. But unlike nature, where choices are made in an automatic way via survival, human optionality requires the exercise of rational choice to ratchet up to something better than what precedes it — and, alas, humans have mental biases and cultural hindrances that nature doesn’t have. Optionality frees us from the straitjacket of direction, predictions, plans, and narratives. (To use a metaphor from information theory, if you are going to a vacation resort offering you more options, you can predict your activities by asking a smaller number of questions ahead of time.)
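A minimal simulation, our own sketch rather than anything from the essay, makes the optionality point concrete: each tinkering trial is a random draw, and the experimenter keeps a new result only if it beats the status quo, so losses are capped at zero while gains are not.

    # Our own sketch: trial and error with the option to keep only the best
    # result benefits from volatility, because bad draws are simply discarded.
    import random
    import statistics

    def expected_payoff(n_trials, volatility, runs=20000):
        """Average payoff when only the best of n_trials random draws is kept."""
        payoffs = []
        for _ in range(runs):
            draws = [random.gauss(0.0, volatility) for _ in range(n_trials)]
            payoffs.append(max(0.0, max(draws)))  # never accept a result worse than the status quo
        return statistics.mean(payoffs)

    for vol in (0.5, 1.0, 2.0):
        print(f"volatility {vol}: expected payoff {expected_payoff(20, vol):.3f}")

With the number of trials held fixed, the printed expected payoff rises with the volatility of the draws, which is the sense in which an option-like exposure gains from disorder.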

While getting a better recipe for hummus will not change the world, some results offer abnormally large benefits from discovery; consider penicillin or chemotherapy or potential clean technologies and similar high impact events (“Black Swans”). The discovery of the first antimicrobial drugs came on the heels of hundreds of systematic (convex) trials in the 1920s by such people as Domagk, whose research program consisted in trying out dyes without much understanding of the biological process behind the results. And unlike an explicit financial option, for which the buyer pays a fee to a seller and which therefore tends to trade in a way that prevents undue profits, benefits from research are not zero-sum.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Antifragile by Nassim Nicholas Taleb, book cover. Courtesy of the author / Random House / Barnes & Noble.[end-div]

Distance to Europa: $2 billion and 14 years

Europa is Jupiter’s gravitationally tortured moon. It harbors a liquid ocean beneath an icy crust, which makes it a very interesting target for future missions in the outer solar system — missions looking for life beyond our planet. Unfortunately, NASA’s planned mission has yet to be funded. But should the agency (and taxpayers) come up with the estimated $2 billion for a spacecraft, we could well have a probe surveying Europa by 2027.

[div class=attrib]From the Guardian:[end-div]

Nasa scientists have drawn up plans for a mission that could look for life on Europa, a moon of Jupiter that is covered in vast oceans of water under a thick layer of ice.

The Europa Clipper would be the first dedicated mission to the waterworld moon, if it gets approval for funding from Nasa. The project is set to cost $2bn.

“On Earth, everywhere where there’s liquid water, we find life,” said Robert Pappalardo, a senior research scientist at Nasa’s jet propulsion laboratory in California, who led the design of the Europa Clipper.

“The search for life in our solar system somewhat equates to the search for liquid water. When we ask the question where are the water worlds, we have to look to the outer solar system because there are oceans beneath the icy shells of the moons.”

Jupiter’s biggest moons such as Ganymede, Callisto and Europa are too far from the sun to gain much warmth from it, but have liquid oceans beneath their blankets of ice because the moons are squeezed and warmed up as they orbit the planet.

“We generally focus down on Europa as the most promising in terms of potential habitability because of its relatively thick ice shell, an ocean that is in contact with rock below, and that it’s probably geologically active today,” Pappalardo said at the annual meeting of the American Association for the Advancement of Science in Boston.

In addition, because Europa is bombarded by extreme levels of radiation, the moon is likely to be covered in oxidants at its surface. These molecules are created when water is ripped apart by energetic radiation and could be used by lifeforms as a type of fuel.

For several years scientists have been considering plans for a spacecraft that could orbit Europa, but this turned out to be too expensive for Nasa’s budgets. Over the past year Pappalardo has worked with colleagues at the applied physics lab at Johns Hopkins University to come up with the Europa Clipper.

The spacecraft would orbit Jupiter and make several flybys of Europa, in the same way that the successful Cassini probe did for Saturn’s moon Titan.

“That way we can get effectively global coverage of Europa – not quite as good as an orbiter but not bad for half the cost. We have a validated cost of $2bn over the lifetime of the mission, excluding the launch,” Pappalardo said.

A probe could be readied in time for launch around 2021 and would take between three and six years to arrive at Europa, depending on the rockets used.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Complex and beautiful patterns adorn the icy surface of Jupiter’s moon Europa, as seen in this color image intended to approximate how the satellite might appear to the human eye. Image Credit: NASA/JPL/Ted Stryk.[end-div]

RIP: Chief Innovation Officer

“Innovate or die” goes the business mantra. Embrace creativity or you and your company will fall by the wayside and wither into insignificance.

A leisurely skim through a couple of dozen TV commercials, print ads and online banners will reinforce the notion — we are surrounded by innovators.

Absolutely everyone is innovating: Subway innovates with a new type of sandwich; Campbell Soup innovates by bringing a new blend to market more quickly; Skyy vodka innovates by adding a splash of lemon flavoring; Mercedes innovates by adding blind spot technology in its car door mirrors; Delta Airlines innovates by adding an inch more legroom for weary fliers; Bank of America innovates by communicating with customers via Twitter; L’Oreal innovates by boosting lashes. Innovation, it seems, is everywhere, all the time.

Or is it?

There was a time when innovation meant radical, disruptive change: think movable type, printing, telegraphy, light bulb, mass production, photographic film, transistor, frozen food processing, television.

Now the word innovation is applied liberally to just about anything. Marketers and advertisers have co-opted it in the service of coolness and an entrepreneurial halo. But attaching the label to most new products and services has greatly diminished its value. Rather than connoting disruptive change, innovation in business has become little more than a corporate cliché for marketing the coolness of an incremental improvement. So who needs a Chief Innovation Officer anymore? After all, we are all innovators now.

[div class=attrib]From the Wall Street Journal:[end-div]

Got innovation? Just about every company says it does.

Businesses throw around the term to show they’re on the cutting edge of everything from technology and medicine to snacks and cosmetics. Companies are touting chief innovation officers, innovation teams, innovation strategies and even innovation days.

But that doesn’t mean the companies are actually doing any innovating. Instead they are using the word to convey monumental change when the progress they’re describing is quite ordinary.

Like the once ubiquitous buzzwords “synergy” and “optimization,” innovation is in danger of becoming a cliché—if it isn’t one already.

“Most companies say they’re innovative in the hope they can somehow con investors into thinking there is growth when there isn’t,” says Clayton Christensen, a professor at Harvard Business School and the author of the 1997 book, “The Innovator’s Dilemma.”

A search of annual and quarterly reports filed with the Securities and Exchange Commission shows companies mentioned some form of the word “innovation” 33,528 times last year, which was a 64% increase from five years before that.

More than 250 books with “innovation” in the title have been published in the last three months, most of them dealing with business, according to a search of Amazon.com.

The definition of the term varies widely depending on whom you ask. To Bill Hickey, chief executive of Bubble Wrap’s maker, Sealed Air Corp., it means inventing a product that has never existed, such as packing material that inflates on delivery.

To Ocean Spray Cranberries Inc. CEO Randy Papadellis, it is turning an overlooked commodity, such as leftover cranberry skins, into a consumer snack like Craisins.

To Pfizer Inc.’s research and development head, Mikael Dolsten, it is extending a product’s scope and application, such as expanding the use of a vaccine for infants that is also effective in older adults.

Scott Berkun, the author of the 2007 book “The Myths of Innovation,” which warns about the dilution of the word, says that what most people call an innovation is usually just a “very good product.”

He prefers to reserve the word for civilization-changing inventions like electricity, the printing press and the telephone—and, more recently, perhaps the iPhone.

Mr. Berkun, now an innovation consultant, advises clients to ban the word at their companies.

“It is a chameleon-like word to hide the lack of substance,” he says.

Mr. Berkun tracks innovation’s popularity as a buzzword back to the 1990s, amid the dot-com bubble and the release of James M. Utterback’s “Mastering the Dynamics of Innovation” and Mr. Christensen’s “Dilemma.”

The word appeals to large companies because it has connotations of being agile and “cool,” like start-ups and entrepreneurs, he says.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Draisine, also called Laufmaschine (“running machine”), from around 1820. The Laufmaschine was invented by the German Baron Karl von Drais in Mannheim in 1817. Being the first means of transport to make use of the two-wheeler principle, the Laufmaschine is regarded as the archetype of the bicycle. Courtesy of Wikipedia.[end-div]

Anxiety, Fear and Wisdom

In a recent essay, author Jana Richman weaves her personal stories of anxiety together with Bertrand Russell’s salient observations on fear, with the desert Southwest as her colorful backdrop.

[div class=attrib]From the New York Times:[end-div]

On a cold, sunny day in early March, my husband, Steve, and I layered up and took ourselves out to our backyard: Grand Staircase Escalante National Monument. For a few days we had been spiraling downward through a series of miscommunications and tensions — the culmination of my rigorous dedication to fear, or what Bertrand Russell aptly coined “the tyranny of the habit of fear.”  A fresh storm had dropped 10 inches of snow with little moisture giving it an airy, crystallized texture that sprayed out in an arc with each footstep and made a shushing sound, as if it were speaking directly to me. Shush. Shush. Shush.

Moving into the elegant world of white-draped red rock is usually enough to strip our minds of the qualms that harass us, but on this particular day, Steve and I both stomped into the desert bearing a commitment to hang onto the somber roles we had adopted. Solemnity is difficult, however, when one is tumbling down hills of snow-covered, deep sand and slipping off steep angles of slickrock on one’s backside. Still, it took a good half-mile before we were convinced of our absurdity.

Such is the nature of the desert. If you persist in your gravity, the desert will take full advantage — it will have you falling over yourself as you trudge along carrying your blame and angst and fear; it will mock you until you literally and figuratively lighten up and conform to the place. The place will never conform to you. We knew that; that’s why we went. That’s why we always go to the desert when we’re stuck in a cycle of self-induced wretchedness.

“Fear,” Russell writes, “makes man unwise in the three great departments of human conduct: his dealings with nature, his dealings with other men, and his dealings with himself.”

I can attest to the truth of Russell’s words. I’ve spent many lifetime hours processing fear, and I’ve brought fear’s oppression into my marriage. Because fear is the natural state of my mind, I often don’t realize I’m spewing it into the atmosphere with my words and actions. The incident that drove us into the desert on that particular day was, in my mind, a simple expression of concern, a few “what will happen ifs”; in Steve’s mind, a paranoid rant. Upon reflection, I have to agree with his version.

A few months prior, Steve and I had decided upon a change in our lives: certainty in the form of a bi-weekly paycheck was traded for joy in the form of writing time. It wasn’t a rash decision; it was five years in the making. Yet, from the moment the last check was cashed, my fear began roiling, slowly at first, but soon popping and splashing out of its shallow container. My voiced concerns regarding homelessness and insolvency went considerably beyond probable, falling to the far side of remotely possible. In my world, that’s enough for worry, discussion, obsession, more discussion, and several nights of insomnia.

We had parked the truck at the “head of the rocks,” an understated description of a spot that allows a 360-degree view of red and white slickrock cut with deep gulches and painted with the sweeping wear of wind and water. The Grand Staircase Escalante National Monument is 1.9 million acres of land, much of it devoid of human intrusion on any given day. Before we moved to the small town of Escalante on the Monument’s border, we came here from our city home five hours away — alone or together — whenever life threatened to shut us down.

From the head of the rocks, we followed the old cream cellar road, a wagon trail of switchbacks carved into stone in the early 1900s. We could see our destination about two miles out — a smooth, jutting wall with a level run of sand at its base that would allow us to sit with our faces to the sun and our backs against the wall — a fitting spot.

Steve walked behind me in silence, but I knew his thoughts. My fear perplexes and disparages him. His acts of heroism should dispel my anxiety, but it persists beyond the reach of his love.  Yet, his love, too, persists.

Knowing I’ll pick up and read anything placed in my path, Steve had left on the butcher block where I eat breakfast Russell’s timeless collection of essays, “New Hopes for a Changing World,” published in 1951, five years before I was born. I skimmed the table of contents until I reached three essays entitled, “Fear,” “Fortitude,” and “Life Without Fear,” in which Russell writes about the pervasive and destructive nature of fear. One of the significant fears Russell writes about — a fear close to his own heart — is the fear of being unlovable, which, he writes, is self-fulfilling unless one gets out from under fear’s dominion.  I’ve been testing Russell’s theory for the past eight years.

I’ve heard it said that all fear stems from the knowledge of our own mortality, and indeed, many of our social systems thrive by exploiting our fear of death and our desire to thwart it. But fear of death has never been my problem. To me, life, not death, holds the promise of misery.  When life is lived as a problem to be solved, death offers the ultimate resolution, the release of all fears, the moment of pure peace.

[div class=attrib]Read the entire article following the jump.[end-div]

Cluttered Desk, Cluttered Mind

Life coach Jayne Morris suggests that de-cluttering your desk, attic or garage can add positive energy to your personal and business life. Morris has coached numerous business leaders and celebrities in the art of clearing clutter.

[div class=attrib]From the Telegraph:[end-div]

According to a leading expert, having a cluttered environment reflects a cluttered mind and the act of tidying up can help you be more successful.

The advice comes from Jayne Morris, the resident “life coach” for NHS Online, who said it is no good just moving the mess around.

In order to clear the mind, unwanted items must be thrown away to free your “internal world”, she said.

Ms Morris, who claims to have coached everyone from celebrities to major business figures, said: “Clearing clutter from your desk has the power to transform your business.

“How? Because clutter in your outer environment is the physical manifestation of all the clutter going on inside of you.

“Clearing clutter has a ripple effect across your entire life, including your work.

“Having an untidy desk covered in clutter could be stopping you achieving the business success you want.”

She is adamant cleaning up will be a boon even though some of history’s biggest achievers lived and worked in notoriously messy conditions.

Churchill was considered untidy from a boy throughout his life, from his office to his artist’s studio, and the lab where Alexander Fleming discovered penicillin was famously dishevelled.

Among the recommendations is that simply tidying a desk at work and clearing an overflowing filing cabinet will instantly have a positive impact on “your internal world.”

Anything that is no longer used should not be put into storage but thrown away completely.

Keeping something in the loft, garage or other part of the house, does not help because it is still connected to the person “by tiny energetic cords” she claims.

She said: “The things in your life that are useful to you, that add value to your life, that serve a current purpose are charged with positive energy that replenishes you and enriches your life.

“But the things that you are holding on to that you don’t really like, don’t ever use and don’t need anymore have the opposite effect on your energy. Things that no longer fit or serve you, drain your energy.”

Britain has long been a nation of hoarders, and a survey showed that more than a million people are compulsive about keeping their stuff.

Brain scans have also confirmed that victims of hoarding disorder have abnormal activity in regions of the brain involved in decision making – particularly in deciding what to do with objects that belong to them.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Still from Buried Alive Season 3, TLC.[end-div]

Psst! AIDS Was Created by the U.S. Government

Some believe that AIDS was created by the U.S. Government or bestowed by a malevolent god. Some believe that Neil Armstrong never set foot on the moon, while others believe that the Nazis established a moon base in 1942. Some believe that recent tsunamis were caused by the U.S. military, and that said military is hiding evidence of alien visits at Area 51, Nevada. The latest, of course, is the great conspiracy of climate change, which apparently was invented by socialists seeking to destroy the United States. This conspiratorial thinking makes for good reality TV and presents wonderful opportunities for psychological research. Why, after all, in the face of seemingly insurmountable evidence, widespread consensus and fundamental scientific reasoning, do such ideas and their believers persist?

[div class=attrib]From Skeptical Science:[end-div]

There is growing evidence that conspiratorial thinking, also known as conspiracist ideation, is often involved in the rejection of scientific propositions. Conspiracist ideations tend to invoke alternative explanations for the nature or source of the scientific evidence. For example, among people who reject the link between HIV and AIDS, common ideations involve the belief that AIDS was created by the U.S. Government.

My colleagues and I published a paper recently that found evidence for the involvement of conspiracist ideation in the rejection of scientific propositions—from climate change to the link between tobacco and lung cancer, and between HIV and AIDS—among visitors to climate blogs. This was a fairly unsurprising result because it meshed well with previous research and the existing literature on the rejection of science. Indeed, it would have been far more surprising, from a scientific perspective, if the article had not found a link between conspiracist ideation and rejection of science.

Nonetheless, as some readers of this blog may remember, this article engendered considerable controversy.

The article also generated data.

Data, because for social scientists, public statements and publicly expressed ideas constitute data for further research. Cognitive scientists sometimes apply something called “narrative analysis” to understand how people, groups, or societies are organized and how they think.

In the case of the response to our earlier paper, we were struck by the way in which some of the accusations leveled against our paper were, well, somewhat conspiratorial in nature. We therefore decided to analyze the public response to our first paper with the hypothesis in mind that this response might also involve conspiracist ideation. We systematically collected utterances by bloggers and commenters, and we sought to classify them into various hypotheses leveled against our earlier paper. For each hypothesis, we then compared the public statements against a list of criteria for conspiracist ideation that was taken from the previous literature.

This follow-up paper was accepted a few days ago by Frontiers in Psychology, and a preliminary version of the paper is already available, for open access, here.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Area 51 – Warning sign near secret Area 51 base in Nevada. Courtesy of Wikipedia.[end-div]

First, Build A Blue Box; Second, Build Apple

Edward Tufte built the first little blue box in 1962. The blue box contained home-made circuitry and a tone generator that could place free calls over the phone network to anywhere in the world.

This electronic revelation spawned groups of “phone phreaks” (hackers) who would build their own blue boxes to fight Ma Bell (AT&T), illegally of course. The phreaks assumed suitably disguised names, such as Captain Crunch and the Cheshire Cat, to hide from the long arm of the FBI.

This later caught the attention of a pair of new recruits to the subversive cause, Berkeley Blue and Oaf Tobar, better known by their real names, Steve Wozniak and Steve Jobs, who would go on to found Apple. The rest, as the saying goes, is history.

Put it down to curiosity, an anti-authoritarian streak and a quest to keep improving.

[div class=attrib]From Slate:[end-div]

One of the most heartfelt—and unexpected—remembrances of Aaron Swartz, who committed suicide last month at the age of 26, came from Yale professor Edward Tufte. During a speech at a recent memorial service for Swartz in New York City, Tufte reflected on his secret past as a hacker—50 years ago.

“In 1962, my housemate and I invented the first blue box,” Tufte said to the crowd. “That’s a device that allows for undetectable, unbillable long distance telephone calls. We played around with it and the end of our research came when we completed what we thought was the longest long-distance phone call ever made, which was from Palo Alto to New York … via Hawaii.”

Tufte was never busted for his youthful forays into phone hacking, also known as phone phreaking. He rose to become one of Yale’s most famous professors, a world authority on data visualization and information design. One can’t help but think that Swartz might have followed in the distinguished footsteps of a professor like Tufte, had he lived.

Swartz faced 13 felony charges and up to 35 years in prison for downloading 4.8 million academic articles from the digital repository JSTOR, using MIT’s network. In the face of the impending trial, Swartz—a brilliant young hacker and activist who was a key force behind many worthy projects, including the RSS 1.0 specification and Creative Commons—killed himself on Jan. 11.

“Aaron’s unique quality was that he was marvelously and vigorously different,” Tufte said, a tear in his eye, as he closed his speech. “There is a scarcity of that. Perhaps we can all be a little more different, too.”

Swartz was too young to be a phone phreak like Tufte. In our present era of Skype and smartphones, the old days of outsmarting Ma Bell with 2600 Hertz sine wave tones and homemade “blue boxes” seems quaint, charmingly retro. But there is a thread that connects these old-school phone hackers to Swartz—common traits that Tufte recognized. It’s not just that, like Swartz, many phone phreaks faced trumped-up charges (wire fraud, in their cases). The best of these proto-computer hackers possessed Swartz’s enterprising spirit, his penchant for questioning authority, and his drive to figure out how a complicated system works from the inside. They were nerds, they were misfits; like Swartz, they were a little more different.

In his new history of phone phreaking, Exploding the Phone, engineer and consultant Phil Lapsley details the story of the 1960s and 1970s culture of hackers who, like Tufte, devised numerous ways to outwit the phone system. The foreword of the book is by Steve Wozniak, co-founder of Apple—and, as it happens, an old-school hacker himself. Before Wozniak and Steve Jobs built Apple in the 1970s, they were phone phreaks. (Wozniak’s hacker name was Berkeley Blue; Jobs’ handle was Oaf Tobar.)

In 1971, Esquire published an article about phone phreaking called “Secrets of the Little Blue Box,” by Ron Rosenbaum (a Slate columnist). It chronicled a ragtag crew sporting names like Captain Crunch and the Cheshire Cat, who prided themselves on using ingenuity and rudimentary electronics to outsmart the many-tentacled monstrosities of Ma Bell and the FBI. A blind 22-year-old named Joe Engressia was one of the scene’s heroes; according to Rosenbaum, Engressia could whistle at exactly the right frequency to place a free phone call.

Wozniak, age 20 in ’71, devoured the now-legendary article. “You know how some articles just grab you from the first paragraph?” he wrote in his 2006 memoir, iWoz, quoted in Lapsley’s book. “Well, it was one of those articles. It was the most amazing article I’d ever read!” Wozniak was entranced by the way these hackers seemed so much like himself. “I could tell that the characters being described were really tech people, much like me, people who liked to design things just to see what was possible, and for no other reason, really.” Building a blue box—a device that could generate the same tones that the phone system used to route phone calls, in a certain sequence—required technical smarts, and Wozniak loved nerdy challenges. Plus, the payoff—and the potential for epic pranks—was irresistible. (Wozniak once used a blue box to call the Vatican; impersonating Henry Kissinger he asked to talk to the pope.)

Wozniak immediately called Jobs, who was then a 17-year-old senior in high school. The friends drove to the technical library at Stanford’s Linear Accelerator Center to find a phone manual that listed tone frequencies. That same day, as Lapsley details in the book, Wozniak and Jobs bought analog tone generator kits, but were soon frustrated that the generators weren’t good enough for really high-quality phone phreaking.

Wozniak had a better, geekier idea: They needed to build their own blue boxes, but make them with digital circuits, which were more precise and easier to control than the usual analog ones. Wozniak and Jobs didn’t just build one blue box—they went on to build dozens of them, which they sold for about $170 apiece. In a way, their sophisticated, compact design foreshadowed the Apple products to come. Their digital circuitry incorporated several smart tricks, including a method to make the battery last longer. “I have never designed a circuit I was prouder of,” Wozniak says.
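The signaling trick a blue box exploited is easy to sketch in software. The old network routed long-distance calls with audible, in-band tones: roughly, a 2600 Hz tone to seize a trunk, then pairs of multi-frequency (MF) tones for the routing digits, bracketed by KP and ST signals. The frequency pairs below are the commonly cited values and the timings are guesses; the sketch is a historical illustration only (in-band signaling is long gone, so the output does nothing on a modern network) and simply writes the tone sequence to a WAV file.

    # Illustrative sketch of blue-box style in-band signaling tones.
    # Frequencies are the commonly cited MF pairs; treat them as historical
    # curiosities, not a verified Bell System specification.
    import math
    import struct
    import wave

    RATE = 8000  # samples per second

    MF_PAIRS = {
        "1": (700, 900),  "2": (700, 1100), "3": (900, 1100),
        "4": (700, 1300), "5": (900, 1300), "6": (1100, 1300),
        "7": (700, 1500), "8": (900, 1500), "9": (1100, 1500),
        "0": (1300, 1500), "KP": (1100, 1700), "ST": (1500, 1700),
    }

    def tone(freqs, seconds):
        """Return samples for a sum of sine waves (silence if freqs is empty)."""
        n = int(RATE * seconds)
        if not freqs:
            return [0.0] * n
        return [sum(math.sin(2 * math.pi * f * t / RATE) for f in freqs) / len(freqs)
                for t in range(n)]

    def blue_box_sequence(digits):
        """A 2600 Hz 'seize' tone, then KP, the digits, and ST as MF tone bursts."""
        samples = tone([2600], 1.0) + tone([], 0.5)
        for symbol in ["KP"] + list(digits) + ["ST"]:
            samples += tone(MF_PAIRS[symbol], 0.075) + tone([], 0.06)
        return samples

    def write_wav(path, samples):
        with wave.open(path, "w") as w:
            w.setnchannels(1)
            w.setsampwidth(2)   # 16-bit samples
            w.setframerate(RATE)
            w.writeframes(b"".join(struct.pack("<h", int(s * 0.8 * 32767)) for s in samples))

    write_wav("bluebox_demo.wav", blue_box_sequence("5551212"))

Running it produces bluebox_demo.wav: a one-second seize tone, a pause, then the KP, digit and ST bursts.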

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Exploding the Phone by Phil Lapsley, book cover. Courtesy of Barnes & Noble.[end-div]

Nordic Noir and Scandinavian Cool

Apparently the world once thought of the countries that make up the Scandinavian region as dull and boring. Nothing much happened in Norway, Sweden, Finland and Denmark besides endless winters, ABBA, Volvo and utopian socialist experiments. Not any longer. Over the last couple of decades this region has become a hotbed of artistic, literary and business creativity.

[div class=attrib]From the Economist:[end-div]

TWENTY YEARS AGO the Nordic region was a cultural backwater. Even the biggest cities were dead after 8pm. The restaurants offered meatballs or pale versions of Italian or French favourites. The region did come up with a few cultural icons such as Ingmar Bergman and Abba, and managed to produce world-class architects and designers even at the height of post-war brutalism. But the few successes served only to emphasise the general dullness.

The backwater has now turned into an entrepot. Stockholm relishes its reputation as one of the liveliest cities in Europe (and infuriates its neighbours by billing itself as “the capital of Scandinavia”). Scandinavian crime novels have become a genre in their own right. Danish television shows such as “The Killing” and “Borgen” are syndicated across the world. Swedish music producers are fixtures in Hollywood. Copenhagen’s Noma is one of the world’s most highly rated restaurants and has brought about a food renaissance across the region.

Why has the land of the bland become a cultural powerhouse? Jonas Bonnier, CEO of the Bonnier Group, Sweden’s largest media company, thinks that it is partly because new technologies are levelling the playing field. Popular music was once dominated by British and American artists who were able to use all sorts of informal barriers to protect their position. Today, thanks to the internet, somebody sitting in a Stockholm attic can reach the world. Rovio’s Mikael Hed suggests that network effects are much more powerful in small countries: as soon as one writer cracks the global detective market, dozens of others quickly follow.

All true. But there is no point in giving people microphones if they have nothing to say. The bigger reason why the region’s writers and artists—and indeed chefs and game designers—are catching the world’s attention is that they are so full of vim. They are reinventing old forms such as the detective story or the evening meal but also coming up with entirely new forms such as video games for iPads.

The cultural renaissance is thus part of the other changes that have taken place in the region. A closed society that was dominated by a single political orthodoxy (social democracy) and by a narrow definition of national identity (say, Swedishness or Finnishness) is being shaken up by powerful forces such as globalisation and immigration. All the Nordics are engaged in a huge debate about their identity in a post-social democratic world. Think-tanks such as Denmark’s Cepos flaunt pictures of Milton Friedman in the same way that student radicals once flaunted pictures of Che Guevara. Writers expose the dark underbelly of the old social democratic regime. Chefs will prepare anything under the sun as long as it is not meatballs.

The region’s identity crisis is creating a multicultural explosion. The Nordics are scavenging the world for ideas. They continue to enjoy a love-hate relationship with America. They are discovering inspiration from their growing ethnic minorities but are also reaching back into their own cultural traditions. Swedish crime writers revel in the peculiarities of their culture. Danish chefs refuse to use foreign ingredients. A region that has often felt the need to apologise for its culture—those bloodthirsty Vikings! Those toe-curling Abba lyrics! Those naff fishermen’s jumpers!—is enjoying a surge of regional pride.

Blood and snow

Over the past decade Scandinavia has become the world’s leading producer of crime novels. The two Swedes who did more than anyone else to establish Nordic noir—Stieg Larsson and Henning Mankell—have both left the scene of crime. Larsson died of a heart attack in 2004 before his three books about a girl with a dragon tattoo became a global sensation. Mr Mankell consigned his hero, Kurt Wallander, to Alzheimer’s after a dozen bestsellers. But their books continue to be bought in their millions: “Dragon Tattoo” has sold more than 50m, and the Wallander books collectively even more.

A group of new writers, such as Jo Nesbo in Norway and Camilla Lackberg in Sweden, are determined to keep the flame burning. And the crime wave is spreading beyond adult fiction and the written word. Sweden’s Martin Widmark writes detective stories for children. Swedish and British television producers compete to make the best version of Wallander. “The Killing” established a new standard for televised crime drama.

The region has a long tradition of crime writing. Per Wahloo and Maj Sjowall, a Swedish husband-and-wife team, earned a dedicated following among aficionados with their police novels in the 1960s and 1970s. They also established two of Nordic noir’s most appealing memes. Martin Beck is an illness-prone depressive who gets to the truth by dint of relentless plodding. The ten Martin Beck novels present Sweden as a capitalist hellhole that can be saved only by embracing Soviet-style communism (the crime at the heart of the novels is the social democratic system’s betrayal of its promise).

Today’s crime writers continue to profit from these conventions. Larsson’s Sweden, for example, is a crypto-fascist state run by a conspiracy of psychopathic businessmen and secret-service agents. But today’s Nordic crime writers have two advantages over their predecessors. The first is that their hitherto homogenous culture is becoming more variegated and their peaceful society has suffered inexplicable bouts of violence (such as the assassination in 1986 of Sweden’s prime minister, Olof Palme, and in 2003 of its foreign minister, Anna Lindh, and Anders Breivik’s murderous rampage in Norway in 2011). Nordic noir is in part an extended meditation on the tension between the old Scandinavia, with its low crime rate and monochrome culture, and the new one, with all its threats and possibilities. Mr Mankell is obsessed by the disruption of small-town life by global forces such as immigration and foreign criminal gangs. Each series of “The Killing” focuses as much on the fears—particularly of immigrant minorities—that the killing exposes as it does on the crime itself.

The second advantage is something that Wahloo and Sjowall would have found repulsive: a huge industry complete with support systems and the promise of big prizes. Ms Lackberg began her career in an all-female crime-writing class. Mr Mankell wrote unremunerative novels and plays before turning to a life of crime. Thanks in part to Larsson, crime fiction is one of the region’s biggest exports: a brand that comes with a guarantee of quality and a distribution system that stretches from Stockholm to Hollywood.

Dinner in Copenhagen can come as a surprise to even the most jaded foodie. The dishes are more likely to be served on slabs of rock or pieces of wood than on plates. The garnish often takes the form of leaves or twigs. Many ingredients, such as sea cabbage or wild flowers, are unfamiliar, and the more familiar sort, such as pike, are often teamed with less familiar ones, such as unripe elderberries.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: ABBA, Eurovision, 1974. Courtesy of Time.[end-div]

MondayPoem: Wild Nights – Wild Nights!

Emily Dickinson has been much written about, but still remains enigmatic. Many of her peers thought her to be eccentric and withdrawn. Only after her death did the full extent of her prolific writing become apparent. To this day, her unique poetry is regarded as having ushered in a new era of personal observation and expression.

By Emily Dickinson

Wild Nights – Wild Nights!

Wild nights – Wild nights!
Were I with thee
Wild nights should be
Our luxury!

Futile – the winds –
To a Heart in port –
Done with the Compass –
Done with the Chart!

Rowing in Eden –
Ah – the Sea!
Might I but moor – tonight –
In thee!

[div class=attrib]Image: Daguerreotype of the poet Emily Dickinson, taken circa 1848. Courtesy of the Todd-Bingham Picture Collection and Family Papers, Yale University.[end-div]

Your Brain and Politics

New research out of the University of Exeter in Britain and the University of California, San Diego, shows that liberals and conservatives really do have different brains. In fact, activity in specific areas of the brain can be used to predict whether a person leans to the left or to the right with an accuracy of just under 83 percent. That means a brain scan can predict your politics more accurately than your parents’ party affiliation can, which gets it right only around 70 percent of the time.

[div class=attrib]From Smithsonian:[end-div]

If you want to know people’s politics, tradition said to study their parents. In fact, the party affiliation of someone’s parents can predict the child’s political leanings around 70 percent of the time.

But new research, published yesterday in the journal PLOS ONE, suggests what mom and dad think isn’t the endgame when it comes to shaping a person’s political identity. Ideological differences between partisans may reflect distinct neural processes, and they can predict who’s right and who’s left of center with 82.9 percent accuracy, outperforming the “your parents pick your party” model. It also out-predicts another neural model based on differences in brain structure, which distinguishes liberals from conservatives with 71.6 percent accuracy.

The study matched publicly available party registration records with the names of 82 American participants whose risk-taking behavior during a gambling experiment was monitored by brain scans. The researchers found that liberals and conservatives don’t differ in the risks they do or don’t take, but their brain activity does vary while they’re making decisions.

The idea that the brains of Democrats and Republicans may be hard-wired to their beliefs is not new. Previous research has shown that during MRI scans, areas linked to broad social connectedness, which involves friends and the world at large, light up in Democrats’ brains. Republicans, on the other hand, show more neural activity in parts of the brain associated with tight social connectedness, which focuses on family and country.

Other scans have shown that brain regions associated with risk and uncertainty, such as the fear-processing amygdala, differ in structure in liberals and conservatives. And different architecture means different behavior. Liberals tend to seek out novelty and uncertainty, while conservatives exhibit strong changes in attitude to threatening situations. The former are more willing to accept risk, while the latter tend to have more intense physical reactions to threatening stimuli.

Building on this, the new research shows that Democrats exhibited significantly greater activity in the left insula, a region associated with social and self-awareness, during the task. Republicans, however, showed significantly greater activity in the right amygdala, a region involved in our fight-or-flight response system.
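As a rough sketch of how an accuracy figure like 82.9 percent is typically produced, consider cross-validating a simple classifier on per-subject activation scores. Everything below is hypothetical: the data are synthetic, the effect sizes and feature names are invented, and the study’s actual pipeline is certainly more involved.

    # Hypothetical sketch: cross-validated party classification from two
    # synthetic brain-activation features (invented names and effect sizes).
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    n = 82  # the study scanned 82 participants
    party = rng.integers(0, 2, size=n)                      # 0 = Democrat, 1 = Republican
    left_insula = rng.normal(0.6 - 0.8 * party, 1.0, n)     # higher in Democrats (synthetic effect)
    right_amygdala = rng.normal(0.0 + 0.8 * party, 1.0, n)  # higher in Republicans (synthetic effect)
    X = np.column_stack([left_insula, right_amygdala])

    accuracy = cross_val_score(LogisticRegression(), X, party, cv=10).mean()
    print(f"mean cross-validated accuracy: {accuracy:.3f}")

With real activation scores in place of the synthetic columns, the same cross-validation loop is how an out-of-sample accuracy of the kind reported would be estimated.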

“If you went to Vegas, you won’t be able to tell who’s a Democrat or who’s a Republican, but the fact that being a Republican changes how your brain processes risk and gambling is really fascinating,” says lead researcher Darren Schreiber, a University of Exeter professor who’s currently teaching at Central European University in Budapest. “It suggests that politics alters our worldview and alters the way our brains process.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Sagittal brain MRI. Courtesy of Wikipedia.[end-div]

Pseudo-Science in Missouri and 2+2=5

Hot on the heels of recent successes by the Texas State Board of Education (SBOE) in revising history and science curricula, legislators in Missouri are planning to redefine commonly accepted scientific principles. Much like in Texas, the Missouri House would mandate that intelligent design be taught alongside evolution, in equal measure, in all the state’s schools. But in a bid to take the lead in reversing thousands of years of scientific progress, Missouri also plans to redefine the scientific framework itself. So, if you can’t make “intelligent design” fit the principles of accepted science, just change the principles themselves — first up, change the meanings of the terms “scientific hypothesis” and “scientific theory”.

We suspect that a couple of years from now, in Missouri, 2+2 will be redefined to equal 5, and that logic, deductive reasoning and experimentation will be replaced with mushy green peas.

[div class=attrib]From ars technica:[end-div]

Each year, state legislatures play host to a variety of bills that would interfere with science education. Most of these are variations on a boilerplate intended to get supplementary materials into classrooms criticizing evolution and climate change (or to protect teachers who do). They generally don’t mention creationism, but the clear intent is to sneak religious content into the science classrooms, as evidenced by previous bills introduced by the same lawmakers. Most of them die in the legislature (although the opponents of evolution have seen two successes).

The efforts are common enough that we don’t generally report on them. But, every now and then, a bill comes along that veers off this script. And late last month, the Missouri House started considering one that deviates in staggering ways. Instead of being quiet about its intent, it redefines science, provides a clearer definition of intelligent design than any of the idea’s advocates ever have, and mandates equal treatment of the two. In the process, it mangles things so badly that teachers would be prohibited from discussing Mendel’s Laws.

Although even the Wikipedia entry for scientific theory includes definitions provided by the world’s most prestigious organizations of scientists, the bill’s sponsor Rick Brattin has seen fit to invent his own definition. And it’s a head-scratcher: “‘Scientific theory,’ an inferred explanation of incompletely understood phenomena about the physical universe based on limited knowledge, whose components are data, logic, and faith-based philosophy.” The faith or philosophy involved remain unspecified.

Brattin also mentions philosophy when he redefines hypothesis as, “a scientific theory reflecting a minority of scientific opinion which may lack acceptance because it is a new idea, contains faulty logic, lacks supporting data, has significant amounts of conflicting data, or is philosophically unpopular.” The reason for that becomes obvious when he turns to intelligent design, which he defines as a hypothesis. Presumably, he thinks it’s only a hypothesis because it’s philosophically unpopular, since his bill would ensure it ends up in the classrooms.

Intelligent design is roughly the concept that life is so complex that it requires a designer, but even its most prominent advocates have often been a bit wary about defining its arguments all that precisely. Not so with Brattin—he lists 11 concepts that are part of ID. Some of these are old-fashioned creationist claims, like the suggestion that mutations lead to “species degradation” and a lack of transitional fossils. But it also has some distinctive twists like the claim that common features, usually used to infer evolutionary relatedness, are actually a sign of parts re-use by a designer.

Eventually, the bill defines “standard science” as “knowledge disclosed in a truthful and objective manner and the physical universe without any preconceived philosophical demands concerning origin or destiny.” It then demands that all science taught in Missouri classrooms be standard science. But there are some problems with this that become apparent immediately. The bill demands anything taught as scientific law have “no known exceptions.” That would rule out teaching Mendel’s laws, which have a huge variety of exceptions, such as when two genes are linked together on the same chromosome.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Seal of Missouri. Courtesy of Wikipedia.[end-div]

Grow Your Own… Heart

A timely article for Valentine’s Day. Researchers continue to make astonishing progress in cell biology and human genomics. So it should come as no surprise that growing a customized replacement heart in a lab from reprogrammed cells may one day be within reach.

[div class=attrib]From the Guardian:[end-div]

Every two minutes someone in the UK has a heart attack. Every six minutes, someone dies from heart failure. During an attack, the heart remodels itself and dilates around the site of the injury to try to compensate, but these repairs are rarely effective. If the attack does not kill you, heart failure later frequently will.

“No matter what other clinical interventions are available, heart transplantation is the only genuine cure for this,” says Paul Riley, professor of regenerative medicine at Oxford University. “The problem is there is a dearth of heart donors.”

Transplants have their own problems – successful operations require patients to remain on toxic, immune-suppressing drugs for life and their subsequent life expectancies are not usually longer than 20 years.

The solution, emerging from the laboratories of several groups of scientists around the world, is to work out how to rebuild damaged hearts. Their weapons of choice are reprogrammed stem cells.

These researchers have rejected the more traditional path of cell therapy that you may have read about over the past decade of hope around stem cells – the idea that stem cells could be used to create batches of functioning tissue (heart or brain or whatever else) for transplant into the damaged part of the body. Instead, these scientists are trying to understand what the chemical and genetic switches are that turn something into a heart cell or muscle cell. Using that information, they hope to programme cells at will, and help the body make repairs.

It is an exciting time for a technology that no one thought possible a few years ago. In 2007, Shinya Yamanaka showed it was possible to turn adult skin cells into embryonic-like stem cells, called induced pluripotent stem cells (iPSCs), using just a few chemical factors. His technique radically advanced stem cell biology, sweeping aside years of blockages due to the ethical objections about using stem cells from embryos. He won the Nobel prize in physiology or medicine for his work in October. Researchers have taken this a step further – directly turning one mature cell type to another without going through a stem cell phase.

And politicians are taking notice. At the Royal Society in November, in his first major speech on the Treasury’s ambitions for science and technology, the chancellor, George Osborne, identified regenerative medicine as one of eight areas of technology in which he wanted the UK to become a world leader. Earlier last year, the Lords science and technology committee launched an inquiry into the potential of regenerative medicine in the UK – not only the science but what regulatory obstacles there might be to turning the knowledge into medical applications.

At Oxford, Riley has spent almost a year setting up a £2.5m lab, funded as part of the British Heart Foundation’s Mending Broken Hearts appeal, to work out how to get heart muscle to repair itself. The idea is to expand the scope of the work that got Riley into the headlines last year after a high-profile paper published in the journal Nature in which he showed a means of repairing cells damaged during a heart attack in mice. That work involved in effect turning the clock back in a layer of cells on the outside of the heart, called the epicardium, making adult cells think they were embryos again and thereby restarting their ability to repair.

During the development of the embryo, the epicardium turns into the many types of cells seen in the heart and surrounding blood vessels. After the baby is born this layer of cells loses its ability to transform. By infusing the epicardium with the protein thymosin β4 (Tβ4), Riley’s team found the once-dormant layer of cells was able to produce new, functioning heart cells. Overall, the treatment led to a 25% improvement in the mouse heart’s ability to pump blood after a month compared with mice that had not received the treatment.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google Search.[end-div]

Vaccinia – Prototype Viral Cancer Killer

The illustrious Vaccinia virus may well have an Act Two in its future.

For Act One, over the last 150 years or so, it has been used to vaccinate much of the world’s population against smallpox, a campaign so successful that routine smallpox vaccination ended in the United States in the early 1970s.

Now, researchers are using it to target cancer.

First, take the Vaccinia virus — a relative of the smallpox virus. Second, re-engineer the virus so that it cannot grow in normal cells. Third, add a gene to the virus that stimulates the immune system. Fourth, set it loose on tumor cells and watch. Such research has been going on for a couple of decades, but this enhanced approach, pairing a cancer-selective virus with an immune-system stimulant, shows early promise.

[div class=attrib]From ars technica:[end-div]

For roughly 20 years, scientists have been working to engineer a virus that will attack cancer. The basic idea is sound, and every few years there have been some promising-looking results, with tumors shrinking dramatically in response to an infection. But the viruses never seem to go beyond small trials, and the companies making them always seem to focus on different things.

Over the weekend, Nature Medicine described some further promising results, this time with a somewhat different approach to ensuring that the virus leads to the death of cancer cells: if the virus doesn’t kill the cells directly, it revs up the immune system to attack them. It’s not clear this result will make it to a clinic, but it provides a good opportunity to review the general approach of treating cancer with viruses.

The basic idea is to leverage decades of work on some common viruses. This research has identified a variety of mutations keeping viruses from growing in normal cells. It means that if you inject the virus into a healthy individual, it won’t be able to infect any of their cells.

But cancer cells are different, as they carry a series of mutations of their own. In some cases, these mutations compensate for the problems in the virus. To give one example, the p53 protein normally induces aberrant cells to undergo an orderly death called apoptosis. It also helps shut down the growth of viruses in a cell, which is why some viruses encode a protein that inhibits p53. Cancer cells tend to damage or eliminate their copies of p53 so that it doesn’t cause them to undergo apoptosis.

So imagine a virus with its p53 inhibitor deleted. It can’t grow in normal cells since they have p53 around, but it can grow in cancer cells, which have eliminated their p53. The net result should be a cancer-killing virus. (A great idea, but this is one of the viruses that got dropped after preliminary trials.)

In the new trial, the virus in question takes a similar approach. The virus, vaccinia (a relative of smallpox used for vaccines), carries a gene that is essential for it to make copies of itself. Researchers have engineered a version without that gene, ensuring it can’t grow in normal cells (which have their equivalent of the gene shut down). Cancer cells need to reactivate the gene, meaning they present a hospitable environment for the mutant virus.

But the researchers added another trick by inserting a gene for a molecule that helps recruit immune cells (the awkwardly named granulocyte-macrophage colony-stimulating factor, or GM-CSF). The immune system plays an important role in controlling cancer, but it doesn’t always generate a full-scale response to cancer. By adding GM-CSF, the virus should help bring immune cells to the site of the cancer and activate them, creating a more aggressive immune response to any cells that survive viral infection.

The study here was simply checking the tolerance for two different doses of the virus. In general, the virus was tolerated well. Most subjects reported a short bout of flu-like symptoms, but only one subject out of 30 had a more severe response.

However, the tumors did respond. Based on placebo-controlled trials, the average survival time of patients like the ones in the trial would have been expected to be about two to four months. Instead, the low-dose group had a survival time of nearly seven months; for the higher dose group, that number went up to over a year. Two of those treated were still alive after more than two years. Imaging of tumors showed lots of dead cells, and tests of the immune system indicate the virus had generated a robust response.

[div class=attrib]Read the entire article after the leap.[end-div]

[div class=attrib]Image: An electron micrograph of a Vaccinia virus. Courtesy of Wikipedia.[end-div]

Do Corporations Go to Heaven When They Die?

Perhaps heaven is littered with the disembodied, collective consciousness of Woolworth, Circuit City, Borders and Blockbuster. Similarly, it may be possible that Enron and Lehman Brothers, a little less fortunate due to the indiscretions of their leaders, have found their corporate souls to be forever tormented in business hell. And, what of the high tech start-ups that come and go in the beat of a hummingbird’s wing? Where are Webvan, Flooz, Gowalla, Beenz, Loopt, Kozmo, eToys and Pets.com? Are they spinning endlessly somewhere between the gluttons (third circle) and the heretics (sixth circle) in Dante’s concentric hell? And where are the venture capitalists, and where will Burger King and Apple find themselves when they eventually pass to the other side?

This may all seem rather absurd. It is. Yet evangelical corporate crusaders such as Hobby Lobby and Chick-fil-A would have us treat their corporations just as we do mere (im)mortals. Where is all this nonsense heading? Well, the Supreme Court of the United States, of course.

[div class=attrib]From the New York Times:[end-div]

David Green, who built a family picture-framing business into a 42-state chain of arts and crafts stores, prides himself on being the model of a conscientious Christian capitalist. His 525 Hobby Lobby stores forsake Sunday profits to give employees their biblical day of rest. The company donates to Christian counseling services and buys holiday ads that promote the faith in all its markets. Hobby Lobby has been known to stick decals over Botticelli’s naked Venus in art books it sells.

And the company’s in-house health insurance does not cover morning-after contraceptives, which Green, like many of his fellow evangelical Christians, regards as chemical abortions.

“We’re Christians,” he says, “and we run our business on Christian principles.”

This has put Hobby Lobby at the leading edge of a legal battle that poses the intriguing question: Can a corporation have a conscience? And if so, is it protected by the First Amendment?

The Affordable Care Act, a k a Obamacare, requires that companies with more than 50 full-time employees offer health insurance, including coverage for birth control. Churches and other purely religious organizations are exempt. The Obama administration, in an unrequited search for compromise, has also proposed to excuse nonprofit organizations such as hospitals and universities if they are affiliated with religions that preach the evil of contraception. You might ask why a clerk at Notre Dame or an orderly at a Catholic hospital should be denied the same birth control coverage provided to employees of secular institutions. You might ask why institutions that insist they are like everyone else when it comes to applying for federal grants get away with being special when it comes to federal health law. Good questions. You will find the unsatisfying answers in the Obama handbook of political expediency.

But these concessions are not enough to satisfy the religious lobbies. Evangelicals and Catholics, cheered on by anti-abortion groups and conservative Obamacare-haters, now want the First Amendment freedom of religion to be stretched to cover an array of for-profit commercial ventures, Hobby Lobby being the largest litigant. They are suing to be exempted on the grounds that corporations sometimes embody the faith of the individuals who own them.

“The legal case” for the religious freedom of corporations “does not start with, ‘Does the corporation pray?’ or ‘Does the corporation go to heaven?’ ” said Kyle Duncan, general counsel of the Becket Fund for Religious Liberty, which is representing Hobby Lobby. “It starts with the owner.” For owners who have woven religious practice into their operations, he told me, “an exercise of religion in the context of a business” is still an exercise of religion, and thus constitutionally protected.

The issue is almost certain to end up in the Supreme Court, where the betting is made a little more interesting by a couple of factors: six of the nine justices are Catholic, and this court has already ruled, in the Citizens United case, that corporations are protected by the First Amendment, at least when it comes to freedom of speech. Also, we know that at least four members of the court don’t think much of Obamacare.

In lower courts, advocates of the corporate religious exemption have won a few and lost a few. (Hobby Lobby has lost so far, and could eventually face fines of more than $1 million a day for defying the law. The company’s case is now before the Court of Appeals for the 10th Circuit.)

You can feel some sympathy for David Green’s moral dilemma, and even admire him for practicing what he preaches, without buying the idea that la corporation, c’est moi. Despite the Supreme Court’s expansive view of the First Amendment, Hobby Lobby has a high bar to get over — as it should.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Gluttony: The circle itself is a living abomination, a hellish digestive system revealing horrific faces with mouths ready to devour the gluttons over and over for eternity. Picture: Mihai Marius Mihu / Rex Features / Telegraph. To see more of the nine circles of hell from Dante’s Inferno recreated in Lego by artist Mihai Mihu jump here.[end-div]

Better Relaxation Equals Higher Productivity

A growing body of research shows that employees who are well rested and relaxed are generally more productive. Isn’t this just common sense? Yet the notion that employees who are happier and less stressed outside the workplace can be more effective within it still seems to elude most employers.

[div class=attrib]From the New York Times:[end-div]

THINK for a moment about your typical workday. Do you wake up tired? Check your e-mail before you get out of bed? Skip breakfast or grab something on the run that’s not particularly nutritious? Rarely get away from your desk for lunch? Run from meeting to meeting with no time in between? Find it nearly impossible to keep up with the volume of e-mail you receive? Leave work later than you’d like, and still feel compelled to check e-mail in the evenings?

More and more of us find ourselves unable to juggle overwhelming demands and maintain a seemingly unsustainable pace. Paradoxically, the best way to get more done may be to spend more time doing less. A new and growing body of multidisciplinary research shows that strategic renewal — including daytime workouts, short afternoon naps, longer sleep hours, more time away from the office and longer, more frequent vacations — boosts productivity, job performance and, of course, health.

“More, bigger, faster.” This, the ethos of the market economies since the Industrial Revolution, is grounded in a mythical and misguided assumption — that our resources are infinite.

Time is the resource on which we’ve relied to get more accomplished. When there’s more to do, we invest more hours. But time is finite, and many of us feel we’re running out, that we’re investing as many hours as we can while trying to retain some semblance of a life outside work.

Although many of us can’t increase the working hours in the day, we can measurably increase our energy. Science supplies a useful way to understand the forces at play here. Physicists understand energy as the capacity to do work. Like time, energy is finite; but unlike time, it is renewable. Taking more time off is counterintuitive for most of us. The idea is also at odds with the prevailing work ethic in most companies, where downtime is typically viewed as time wasted. More than one-third of employees, for example, eat lunch at their desks on a regular basis. More than 50 percent assume they’ll work during their vacations.

In most workplaces, rewards still accrue to those who push the hardest and most continuously over time. But that doesn’t mean they’re the most productive.

Spending more hours at work often leads to less time for sleep, and insufficient sleep takes a substantial toll on performance. In a study of nearly 400 employees, published last year, researchers found that sleeping too little — defined as less than six hours each night — was one of the best predictors of on-the-job burn-out. A recent Harvard study estimated that sleep deprivation costs American companies $63.2 billion a year in lost productivity.

The Stanford researcher Cheri D. Mah found that when she got male basketball players to sleep 10 hours a night, their performances in practice dramatically improved: free-throw and three-point shooting each increased by an average of 9 percent.

Daytime naps have a similar effect on performance. When night shift air traffic controllers were given 40 minutes to nap — and slept an average of 19 minutes — they performed much better on tests that measured vigilance and reaction time.

Longer naps have an even more profound impact than shorter ones. Sara C. Mednick, a sleep researcher at the University of California, Riverside, found that a 60- to 90-minute nap improved memory test results as fully as did eight hours of sleep.

MORE vacations are similarly beneficial. In 2006, the accounting firm Ernst & Young did an internal study of its employees and found that for each additional 10 hours of vacation employees took, their year-end performance ratings from supervisors (on a scale of one to five) improved by 8 percent. Frequent vacationers were also significantly less likely to leave the firm.

As athletes understand especially well, the greater the performance demand, the greater the need for renewal. When we’re under pressure, however, most of us experience the opposite impulse: to push harder rather than rest. This may explain why a recent survey by Harris Interactive found that Americans left an average of 9.2 vacation days unused in 2012 — up from 6.2 days in 2011.

The importance of restoration is rooted in our physiology. Human beings aren’t designed to expend energy continuously. Rather, we’re meant to pulse between spending and recovering energy.

[div class=attrib]Read the entire article following the jump.[end-div]

Geoengineering As a Solution to Climate Change

Experimental physicist David Keith has a plan: dump hundreds of thousands of tons of atomized sulfuric acid into the upper atmosphere; watch the acid particles reflect additional sunlight; wait for global temperature to drop. Many of Keith’s peers think this geoengineering scheme is crazy, not least because of its possible unknown and unmeasured side effects, but that hasn’t stopped a healthy debate. One thing is becoming increasingly clear: humans need to take collective action.

[div class=attrib]From Technology Review:[end-div]

Here is the plan. Customize several Gulfstream business jets with military engines and with equipment to produce and disperse fine droplets of sulfuric acid. Fly the jets up around 20 kilometers—significantly higher than the cruising altitude for a commercial jetliner but still well within their range. At that altitude in the tropics, the aircraft are in the lower stratosphere. The planes spray the sulfuric acid, carefully controlling the rate of its release. The sulfur combines with water vapor to form sulfate aerosols, fine particles less than a micrometer in diameter. These get swept upward by natural wind patterns and are dispersed over the globe, including the poles. Once spread across the stratosphere, the aerosols will reflect about 1 percent of the sunlight hitting Earth back into space. Increasing what scientists call the planet’s albedo, or reflective power, will partially offset the warming effects caused by rising levels of greenhouse gases.
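To put that 1 percent figure in context, here is a rough, zero-dimensional energy-balance sketch. It is an editorial illustration rather than part of the article, and it uses standard textbook constants (a solar constant of about 1,361 W/m² and a canonical CO2-doubling forcing of about 3.7 W/m²):

```python
# Back-of-envelope energy balance: what is reflecting ~1% of incoming
# sunlight worth, in watts per square metre? Illustrative only; the
# constants below are standard textbook values, not figures from the article.

S0 = 1361.0                  # solar constant at top of atmosphere, W/m^2
incoming = S0 / 4            # averaged over the whole sphere, ~340 W/m^2

extra_reflection = 0.01      # aerosols reflect ~1% of incident sunlight
forcing_offset = extra_reflection * incoming      # ~3.4 W/m^2 less absorbed

co2_doubling_forcing = 3.7   # canonical radiative forcing for doubled CO2, W/m^2

print(f"Reduction in absorbed sunlight: {forcing_offset:.1f} W/m^2")
print(f"Share of a CO2-doubling forcing offset: "
      f"{forcing_offset / co2_doubling_forcing:.0%}")
```

On this crude estimate, an extra 1 percent of reflected sunlight removes roughly 3.4 W/m² of absorbed solar energy, which is in the same ballpark as the roughly 3.7 W/m² of warming attributed to a doubling of atmospheric CO2, consistent with the claim that the aerosols would partially offset greenhouse warming.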

The author of this so-called geoengineering scheme, David Keith, doesn’t want to implement it anytime soon, if ever. Much more research is needed to determine whether injecting sulfur into the stratosphere would have dangerous consequences such as disrupting precipitation patterns or further eating away the ozone layer that protects us from damaging ultraviolet radiation. Even thornier, in some ways, are the ethical and governance issues that surround geoengineering—questions about who should be allowed to do what and when. Still, Keith, a professor of applied physics at Harvard University and a leading expert on energy technology, has done enough analysis to suspect it could be a cheap and easy way to head off some of the worst effects of climate change.

According to Keith’s calculations, if operations were begun in 2020, it would take 25,000 metric tons of sulfuric acid to cut global warming in half after one year. Once under way, the injection of sulfuric acid would proceed continuously. By 2040, 11 or so jets delivering roughly 250,000 metric tons of it each year, at an annual cost of $700 million, would be required to compensate for the increased warming caused by rising levels of carbon dioxide. By 2070, he estimates, the program would need to be injecting a bit more than a million tons per year using a fleet of a hundred aircraft.
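Taking the article’s 2040 figures at face value, a quick bit of arithmetic gives a feel for the scale of the operation. This is an editorial back-of-envelope breakdown, not a calculation from Keith’s own analysis:

```python
# Scale of the proposed 2040 operation, using only the figures quoted
# in the article (a back-of-envelope breakdown, nothing more).

tons_per_year = 250_000      # metric tons of sulfuric acid per year by 2040
jets = 11                    # modified Gulfstream-class aircraft
annual_cost = 700e6          # reported annual cost, US dollars

per_jet_per_year = tons_per_year / jets        # ~22,700 t per aircraft
per_jet_per_day = per_jet_per_year / 365       # ~62 t per aircraft per day
cost_per_ton = annual_cost / tons_per_year     # ~$2,800 per metric ton

print(f"Per jet, per year: {per_jet_per_year:,.0f} t")
print(f"Per jet, per day:  {per_jet_per_day:.0f} t")
print(f"Delivered cost:    ${cost_per_ton:,.0f} per metric ton")
```

In other words, each aircraft would need to loft on the order of 60 tons of material a day, every day, at a delivered cost of a few thousand dollars per ton.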

One of the startling things about Keith’s proposal is just how little sulfur would be required. A few grams of it in the stratosphere will offset the warming caused by a ton of carbon dioxide, according to his estimate. And even the amount that would be needed by 2070 is dwarfed by the roughly 50 million metric tons of sulfur emitted by the burning of fossil fuels every year. Most of that pollution stays in the lower atmosphere, and the sulfur molecules are washed out in a matter of days. In contrast, sulfate particles remain in the stratosphere for a few years, making them more effective at reflecting sunlight.
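The article’s own numbers also let you check that last comparison directly. Note that it sets tons of injected material against tons of elemental sulfur, so treat this as an order-of-magnitude comparison only (again, an editorial aside, not the author’s calculation):

```python
# How the projected 2070 injection compares with present-day sulfur
# pollution, using only the figures quoted in the article. The units
# differ slightly (injected material vs. elemental sulfur), so this is
# an order-of-magnitude comparison, nothing more.

injection_2070 = 1e6        # roughly a million metric tons per year by 2070
fossil_fuel_sulfur = 50e6   # ~50 million metric tons of sulfur per year

print(f"Geoengineering share: {injection_2070 / fossil_fuel_sulfur:.0%}")
# -> about 2%, i.e. a small fraction of what fossil fuels already emit
```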

The idea of using sulfate aerosols to offset climate warming is not new. Crude versions of the concept have been around at least since a Russian climate scientist named Mikhail Budyko proposed the idea in the mid-1970s, and more refined descriptions of how it might work have been discussed for decades. These days the idea of using sulfur particles to counteract warming—often known as solar radiation management, or SRM—is the subject of hundreds of papers in academic journals by scientists who use computer models to try to predict its consequences.

But Keith, who has published on geoengineering since the early 1990s, has emerged as a leading figure in the field because of his aggressive public advocacy for more research on the technology—and his willingness to talk unflinchingly about how it might work. Add to that his impeccable academic credentials—last year Harvard lured him away from the University of Calgary with a joint appointment in the school of engineering and the Kennedy School of Government—and Keith is one of the world’s most influential voices on solar geoengineering. He is one of the few who have done detailed engineering studies and logistical calculations on just how SRM might be carried out. And if he and his collaborator James Anderson, a prominent atmospheric chemist at Harvard, gain public funding, they plan to conduct some of the first field experiments to assess the risks of the technique.

Leaning forward from the edge of his chair in a small, sparse Harvard office on an unusually warm day this winter, he explains his urgency. Whether or not greenhouse-gas emissions are cut sharply—and there is little evidence that such reductions are coming—”there is a realistic chance that [solar geoengineering] technologies could actually reduce climate risk significantly, and we would be negligent if we didn’t look at that,” he says. “I’m not saying it will work, and I’m not saying we should do it.” But “it would be reckless not to begin serious research on it,” he adds. “The sooner we find out whether it works or not, the better.”

The overriding reason why Keith and other scientists are exploring solar geoengineering is simple and well documented, though often overlooked: the warming caused by atmospheric carbon dioxide buildup is for all practical purposes irreversible, because the climate change is directly related to the total cumulative emissions. Even if we halt carbon dioxide emissions entirely, the elevated concentrations of the gas in the atmosphere will persist for decades. And according to recent studies, the warming itself will continue largely unabated for at least 1,000 years. If we find in, say, 2030 or 2040 that climate change has become intolerable, cutting emissions alone won’t solve the problem.

“That’s the key insight,” says Keith. While he strongly supports cutting carbon dioxide emissions as rapidly as possible, he says that if the climate “dice” roll against us, that won’t be enough: “The only thing that we think might actually help [reverse the warming] in our lifetime is in fact geoengineering.”

[div class=attrib]Read the entire article following the jump.[end-div]

From Sea to Shining Sea – By Rail

Now that air travel has become well and truly commoditized, and for most of us, a nightmare, it’s time, again, to revisit the romance of rail. After all, the elitist romance of air travel passed away about 40-50 years ago. Now all we are left with is parking trauma at the airport; endless lines at check-in, security, the gate and while boarding and disembarking; inane airport announcements and beeping golf carts; coughing, tweeting passengers crammed shoulder to shoulder in far-too-small seats; poor quality air and poor quality service in the cabin. It’s even dangerous to open the shade and look out of the aircraft window for fear of waking a cranky neighbor or, more calamitous still, of washing out the in-seat displays showing the latest reality TV videos.

Some of you, surely, still pine for a quiet and calming ride across the country, taking in the local sights at a more leisurely pace. Alfred Twu, who helped define the 2008 high speed rail proposal for California, would have us zooming across the entire United States in trains again. So it may not be a leisurely ride — think more like 200-300 miles per hour — but it may well bring us closer to what we truly miss when suspended at 30,000 ft. We can’t wait.

[div class=attrib]From the Guardian:[end-div]

I created this US High Speed Rail Map as a composite of several proposed maps from 2009, when government agencies and advocacy groups were talking big about rebuilding America’s train system.

Having worked on getting California’s high speed rail approved in the 2008 elections, I’ve long sung the economic and environmental benefits of fast trains.

This latest map comes more from the heart. It speaks more to bridging regional and urban-rural divides than about reducing airport congestion or even creating jobs, although it would likely do that as well.

Instead of detailing construction phases and service speeds, I took a little artistic license and chose colors and linked lines to celebrate America’s many distinct but interwoven regional cultures.

The response to my map this week went above and beyond my wildest expectations, sparking vigorous political discussion between thousands of Americans ranging from off-color jokes about rival cities to poignant reflections on how this kind of rail network could change long-distance relationships and the lives of faraway family members.

Commenters from New York and Nebraska talked about “wanting to ride the red line”. Journalists from Chattanooga, Tennessee (population 167,000) asked to reprint the map because they were excited to be on the map. Hundreds more shouted “this should have been built yesterday”.

It’s clear that high speed rail is more than just a way to save energy or extend economic development to smaller cities.

More than mere steel wheels on tracks, high speed rail shrinks space and brings farflung families back together. It keeps couples in touch when distant career or educational opportunities beckon. It calls to adventure and travel. It is duct tape and string to reconnect politically divided regions. Its colorful threads weave new American Dreams.

That said, while trains still live large in the popular imagination, decades of limited service have left some blind spots in the collective consciousness. I’ll address a few here:

Myth: High speed rail is just for big city people.
Fact: Unlike airplanes or buses which must make detours to drop off passengers at intermediate points, trains glide into and out of stations with little delay, pausing for under a minute to unload passengers from multiple doors. Trains can, have, and continue to effectively serve small towns and suburbs, whereas bus service increasingly bypasses them.

I do hear the complaint: “But it doesn’t stop in my town!” In the words of one commenter, “the train doesn’t need to stop on your front porch.” Local transit, rental cars, taxis, biking, and walking provide access to and from stations.

Myth: High speed rail is only useful for short distances.
Fact: Express trains that skip stops allow lines to serve many intermediate cities while still providing some fast end-to-end service. Overnight sleepers with lie-flat beds where one boards around dinner and arrives after breakfast have been successful in the US before and are in use on China’s newest 2,300km high speed line.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: U.S. High Speed Rail System proposal. Alfred Twu created this map to showcase what could be possible.[end-div]

The Death of Scientific Genius

There is a certain school of thought that asserts that scientific genius is a thing of the past. After all, we haven’t seen the recent emergence of pivotal talents such as Galileo, Newton, Darwin or Einstein. Could it be that fundamentally new ways of looking at our world, such as a new mathematics or a new physics, are no longer possible?

In a recent essay in Nature, Dean Keith Simonton, professor of psychology at UC Davis, argues that such fundamental and singular originality is a thing of the past.

[div class=attrib]From ars technica:[end-div]

Einstein, Darwin, Galileo, Mendeleev: the names of the great scientific minds throughout history inspire awe in those of us who love science. However, according to Dean Keith Simonton, a psychology professor at UC Davis, the era of the scientific genius may be over. In a comment paper published in Nature last week, he explains why.

The “scientific genius” Simonton refers to is a particular type of scientist; their contributions “are not just extensions of already-established, domain-specific expertise.” Instead, “the scientific genius conceives of a novel expertise.” Simonton uses words like “groundbreaking” and “overthrow” to illustrate the work of these individuals, explaining that they each contributed to science in one of two major ways: either by founding an entirely new field or by revolutionizing an already-existing discipline.

Today, according to Simonton, there just isn’t room to create new disciplines or overthrow the old ones. “It is difficult to imagine that scientists have overlooked some phenomenon worthy of its own discipline,” he writes. Furthermore, most scientific fields aren’t in the type of crisis that would enable paradigm shifts, according to Thomas Kuhn’s classic view of scientific revolutions. Simonton argues that instead of finding big new ideas, scientists currently work on the details in increasingly specialized and precise ways.

And to some extent, this argument is demonstrably correct. Science is becoming more and more specialized. The largest scientific fields are currently being split into smaller sub-disciplines: microbiology, astrophysics, neuroscience, and paleogeography, to name a few. Furthermore, researchers have more tools and the knowledge to hone in on increasingly precise issues and questions than they did a century—or even a decade—ago.

But other aspects of Simonton’s argument are a matter of opinion. To me, separating scientists who “build on what’s already known” from those who “alter the foundations of knowledge” is a false dichotomy. Not only is it possible to do both, but it’s impossible to establish—or even make a novel contribution to—a scientific field without piggybacking on the work of others to some extent. After all, it’s really hard to solve the problems that require new solutions if other people haven’t done the work to identify them. Plate tectonics, for example, was built on observations that were already widely known.

And scientists aren’t done altering the foundations of knowledge, either. In science, as in many other walks of life, we don’t yet know everything we don’t know. Twenty years ago, exoplanets were hypothetical. Dark energy, as far as we knew, didn’t exist.

Simonton points out that “cutting-edge work these days tends to emerge from large, well-funded collaborative teams involving many contributors” rather than a single great mind. This is almost certainly true, especially in genomics and physics. However, it’s this collaboration and cooperation between scientists, and between fields, that has helped science progress past where we ever thought possible. While Simonton uses “hybrid” fields like astrophysics and biochemistry to illustrate his argument that there is no room for completely new scientific disciplines, I see these fields as having room for growth. Here, diverse sets of ideas and methodologies can mix and lead to innovation.

Simonton is quick to assert that the end of scientific genius doesn’t mean science is at a standstill or that scientists are no longer smart. In fact, he argues the opposite: scientists are probably more intelligent now, since they must master more theoretical work, more complicated methods, and more diverse disciplines. In fact, Simonton himself would like to be wrong; “I hope that my thesis is incorrect. I would hate to think that genius in science has become extinct,” he writes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Einstein 1921 by F. Schmutzer. Courtesy of Wikipedia.[end-div]