Subjective Objectivism: The Paradox that is Ayn Rand

Ayn Rand: anti-collectivist ideologue and standard-bearer for unapologetic individualism and rugged self-reliance, or selfish fantasist and elitist hypocrite?

Political conservatives and libertarians increasingly flock to her writings and embrace her philosophy of individualism and unfettered capitalism, which she dubbed “objectivism”. Liberals, on the other hand, see her as a selfish zealot: elitist, narcissistic, even psychopathic.

The truth, of course, is more nuanced and complex, especially when the private Ayn Rand is set against her very public persona. Those who fail to delve into Rand’s traumatic and colorful history fail to grasp the many paradoxes and contradictions that she embodied.

Rand was firmly and vociferously pro-choice, yet she believed that women should submit to the will of great men. She was a devout atheist and outspoken pacifist, yet she believed Native Americans fully deserved their cultural genocide for not grasping capitalism. She viewed homosexuality as disgusting and immoral, but supported non-discrimination protections for homosexuals in the public domain while opposing such rights in private, all the while leading an extremely colorful private life herself. She was a valiant opponent of government and federal regulation in all forms. Publicly, she viewed Social Security, Medicare and other “big government” programs with utter disdain, their dependents nothing more than weak-minded loafers and “takers”. Privately, later in life, she accepted payments from Social Security and Medicare. Perhaps most paradoxically, Rand derided those who would fake their own reality while herself being chronically dependent on mind-distorting amphetamines, popping speed even as she wrote the keystones of objectivism: The Fountainhead and Atlas Shrugged.

[div class=attrib]From the Guardian:[end-div]

As an atheist Ayn Rand did not approve of shrines but the hushed, air-conditioned headquarters which bears her name acts as a secular version. Her walnut desk occupies a position of honour. She smiles from a gallery of black and white photos, young in some, old in others. A bronze bust, larger than life, tilts her head upward, jaw clenched, expression resolute.

The Ayn Rand Institute in Irvine, California, venerates the late philosopher as a prophet of unfettered capitalism who showed America the way. A decade ago it struggled to have its voice heard. Today its message booms all the way to Washington DC.

It was a transformation which counted Paul Ryan, chairman of the House budget committee, as a devotee. He gave Rand’s novel, Atlas Shrugged, as Christmas presents and hailed her as “the reason I got into public service”.

Then, last week, he was selected as the Republican vice-presidential nominee and his enthusiasm seemed to evaporate. In fact, the backtracking began earlier this year when Ryan said as a Catholic his inspiration was not Rand’s “objectivism” philosophy but Thomas Aquinas’.

The flap has illustrated an acute dilemma for the institute. Once peripheral, it has veered close to mainstream, garnering unprecedented influence. The Tea Party has adopted Rand as a seer and waves placards saying “We should shrug” and “Going Galt”, a reference to an Atlas Shrugged character named John Galt.

Prominent Republicans channel Rand’s arguments in promises to slash taxes and spending and to roll back government. But, like Ryan, many publicly renounce the controversial Russian emigre as a serious influence. Where, then, does that leave the institute, the keeper of her flame?

Given Rand’s association with plutocrats – she depicted captains of industry as “producers” besieged by parasitic “moochers” – the headquarters are unexpectedly modest. Founded in 1985, three years after Rand’s death, the institute moved in 2002 from Marina del Rey, west of Los Angeles, to a drab industrial park in Irvine, 90 minutes south, largely to save money. It shares a nondescript two-storey building with financial services and engineering companies.

There is little hint of Galt, the character who symbolises the power and glory of the human mind, in the bland corporate furnishings. But the quotations and excerpts adorning the walls echo a mission which drove Rand and continues to inspire followers as an urgent injunction.

“The demonstration of a new moral philosophy: the morality of rational self-interest.”

These, said Onkar Ghate, the institute’s vice-president, are relatively good times for Randians. “Our primary mission is to advance awareness of her ideas and promote her philosophy. I must say, it’s going very well.”

On that point, if none other, conservatives and progressives may agree. Thirty years after her death Rand, as a radical intellectual and political force, is going very well indeed. Her novel Atlas Shrugged, a 1,000-page assault on big government, social welfare and altruism first published in 1957, is reportedly selling more than 400,000 copies per year and is being made into a movie trilogy. Its radical author, who also penned The Fountainhead and other novels and essays, is the subject of a recent documentary and a spate of books.

To critics who consider Rand’s philosophy that “of the psychopath, a misanthropic fantasy of cruelty, revenge and greed”, her posthumous success is alarming.

Relatively little attention however has been paid to the institute which bears her name and works, often behind the scenes, to direct her legacy and shape right-wing debate.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ayn Rand in 1957. Courtesy of Wikipedia.[end-div]

Philosophy and Science Fiction

We excerpt a fascinating article from io9 on the connection between science fiction and philosophical inquiry. It’s quite remarkable that this genre of literature can provide such a rich vein for philosophers to mine, often more so than reality itself. Then again, it is no coincidence that our greatest authors of science fiction were, and are, amateur philosophers at heart.

[div class=attrib]From io9:[end-div]

People use science fiction to illustrate philosophy all the time. From ethical quandaries to the very nature of existence, science fiction’s most famous texts are tailor-made for exploring philosophical ideas. In fact, many college campuses now offer courses in the philosophy of science fiction.

But science fiction doesn’t just illuminate philosophy — in fact, the genre grew out of philosophy, and the earliest works of science fiction were philosophical texts. Here’s why science fiction has its roots in philosophy, and why it’s the genre of thought experiments about the universe.

Philosophical Thought Experiments As Science Fiction
Science fiction is a genre that uses strange worlds and inventions to illuminate our reality — sort of the opposite of a lot of other writing, which uses the familiar to build a portrait that cumulatively shows how insane our world actually is. People, especially early twenty-first century people, live in a world where strangeness lurks just beyond our frame of vision — but we can’t see it by looking straight at it. When we try to turn and confront the weird and unthinkable that’s always in the corner of our eye, it vanishes. In a sense, science fiction is like a prosthetic sense of peripheral vision.

We’re sort of like the people chained up in the cave, watching shadows on the wall but never seeing the full picture.

Plato is probably the best-known user of allegories — a form of writing which has a lot in common with science fiction. A lot of allegories are really thought experiments, trying out a set of strange facts to see what principles you derive from them. As plenty of people have pointed out, Plato’s Allegory of the Cave is the template for a million “what is reality” stories, from the works of Philip K. Dick to The Matrix. But you could almost see the cave allegory in itself as a proto-science fiction story, because of the strange worldbuilding that goes into these people who have never seen the “real” world. (Plato also gave us an allegory about the Ring of Gyges, which turns its wearer invisible — sound familiar?).

Later philosophers who ponder the nature of existence also seem to stray into weird science fiction territory — like Descartes, raising the notion that he, Descartes, could have existed since the beginning of the universe (as an alternative to God as a cause for Descartes’ existence). Sitting in his bread oven, Descartes tries to cut himself off from sensory input to see what he can deduce of the universe.

And by the same token, the philosophy of human nature often seems to depend on conjuring imaginary worlds, whether it be Hobbes’ “nasty, brutish and short” world without laws, or Rousseau’s “state of nature.” A great believer in the importance of science, Hobbes sees humans as essentially mechanistic beings who are programmed to behave in a selfish fashion — and the state is a kind of artificial human that can contain us and give us better programming, in a sense.

So not only can you use something like Star Trek’s Holodeck to point out philosophical notions of the fallibility of the senses, and the possible falseness of reality — philosophy’s own explorations of those sorts of topics are frequently kind of other-worldly. Philosophical thought experiments, like the oft-cited “state of nature,” are also close kin to science fiction world building. As Susan Schneider writes in the book Science Fiction and Philosophy, “if you read science fiction writers like Stanislaw Lem, Isaac Asimov, Arthur C. Clarke and Robert Sawyer, you are already aware that some of the best science fiction tales are in fact long versions of philosophical thought experiments.”

But meanwhile, when people come to list the earliest known works that could be considered “real” science fiction, they always wind up listing philosophical works, written by philosophers.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Front cover art for the book Nineteen Eighty-Four by George Orwell. Courtesy of Secker and Warburg (London) / Wikipedia.[end-div]

Is It Good That Money Can Buy (Almost) Anything?

Money is a curious invention. It enables efficient and almost frictionless commerce and it allows us to assign tangible value to our time. Yet it poses enormous societal challenges and ethical dilemmas. For instance, should we bribe our children with money in return for better grades? Should we allow a chronically ill kidney patient to purchase a replacement organ from a donor?

Raghuram Rajan, professor of finance at the University of Chicago, reviews a fascinating new book that attempts to answer some of these questions. The book, “What Money Can’t Buy: The Moral Limits of the Market”, is written by noted Harvard philosopher Michael Sandel.

[div class=attrib]From Project Syndicate:[end-div]

In an interesting recent book, What Money Can’t Buy: The Moral Limits of the Market, the Harvard philosopher Michael Sandel points to the range of things that money can buy in modern societies and gently tries to stoke our outrage at the market’s growing dominance. Is he right that we should be alarmed?

While Sandel worries about the corrupting nature of some monetized transactions (do kids really develop a love of reading if they are bribed to read books?), he is also concerned about unequal access to money, which makes trades using money inherently unequal. More generally, he fears that the expansion of anonymous monetary exchange erodes social cohesion, and argues for reducing money’s role in society.

Sandel’s concerns are not entirely new, but his examples are worth reflecting upon. In the United States, some companies pay the unemployed to stand in line for free public tickets to congressional hearings. They then sell the tickets to lobbyists and corporate lawyers who have a business interest in the hearing but are too busy to stand in line.

Clearly, public hearings are an important element of participatory democracy. All citizens should have equal access. So selling access seems to be a perversion of democratic principles.

The fundamental problem, though, is scarcity. We cannot accommodate everyone in the room who might have an interest in a particularly important hearing. So we have to “sell” entry. We can either allow people to use their time (standing in line) to bid for seats, or we can auction seats for money. The former seems fairer, because all citizens seemingly start with equal endowments of time. But is a single mother with a high-pressure job and three young children as well endowed with spare time as a student on summer vacation? And is society better off if she, the chief legal counsel for a large corporation, spends much of her time standing in line?

Whether it is better to sell entry tickets for time or for money thus depends on what we hope to achieve. If we want to increase society’s productive efficiency, people’s willingness to pay with money is a reasonable indicator of how much they will gain if they have access to the hearing. Auctioning seats for money makes sense – the lawyer contributes more to society by preparing briefs than by standing in line.

On the other hand, if it is important that young, impressionable citizens see how their democracy works, and that we build social solidarity by making corporate executives stand in line with jobless teenagers, it makes sense to force people to bid with their time and to make entry tickets non-transferable. But if we think that both objectives – efficiency and solidarity – should play some role, perhaps we should turn a blind eye to hiring the unemployed to stand in line in lieu of busy lawyers, so long as they do not corner all of the seats.

What about the sale of human organs, another example Sandel worries about? Something seems wrong when a lung or a kidney is sold for money. Yet we celebrate the kindness of a stranger who donates a kidney to a young child. So, clearly, it is not the transfer of the organ that outrages us – we do not think that the donor is misinformed about the value of a kidney or is being fooled into parting with it. Nor, I think, do we have concerns about the scruples of the person selling the organ – after all, they are parting irreversibly with something that is dear to them for a price that few of us would accept.

I think part of our discomfort has to do with the circumstances in which the transaction takes place. What kind of society do we live in if people have to sell their organs to survive?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google.[end-div]

The Pros and Cons of Online Reviews

There is no doubt that online reviews for products and services, from books to new cars to vacation spots, have revolutionized shopping behavior. Internet and mobile technologies have made gathering, reviewing and publishing open and honest crowdsourced opinion simple, efficient and ubiquitous.

However, the same tools that allow frank online discussion empower those wishing to cheat and manipulate the system. Cyberspace is rife with fake reviews, fake reviewers, inflated ratings, edited opinion, and paid insertions.

So, just as in any purchase transaction since the time when buyers and sellers first met, caveat emptor still applies.

[div class=attrib]From Slate:[end-div]

The Internet has fundamentally changed the way that buyers and sellers meet and interact in the marketplace. Online retailers make it cheap and easy to browse, comparison shop, and make purchases with the click of a mouse. The Web can also, in theory, make for better-informed purchases—both online and off—thanks to sites that offer crowdsourced reviews of everything from dog walkers to dentists.

In a Web-enabled world, it should be harder for careless or unscrupulous businesses to exploit consumers. Yet recent studies suggest that online reviewing is hardly a perfect consumer defense system. Researchers at Yale, Dartmouth, and USC have found evidence that hotel owners post fake reviews to boost their ratings on the site—and might even be posting negative reviews of nearby competitors.

The preponderance of online reviews speaks to their basic weakness: Because it’s essentially free to post a review, it’s all too easy to dash off thoughtless praise or criticism, or, worse, to construct deliberately misleading reviews without facing any consequences. It’s what economists (and others) refer to as the cheap-talk problem. The obvious solution is to make it more costly to post a review, but that eliminates one of the main virtues of crowdsourcing: There is much more wisdom in a crowd of millions than in select opinions of a few dozen.

Of course, that wisdom depends on reviewers giving honest feedback. A few well-publicized incidents suggest that’s not always the case. For example, when Amazon’s Canadian site accidentally revealed the identities of anonymous book reviewers in 2004, it became apparent that many reviews came from publishers and from the authors themselves.

Technological idealists, perhaps not surprisingly, see a solution to this problem in cutting-edge computer science. One widely reported study last year showed that a text-analysis algorithm proved remarkably adept at detecting made-up reviews. The researchers instructed freelance writers to put themselves in the role of a hotel marketer who has been tasked by his boss with writing a fake customer review that is flattering to the hotel. They also compiled a set of comparison TripAdvisor reviews that the study’s authors felt were likely to be genuine. Human judges could not distinguish between the real ones and the fakes. But the algorithm correctly identified the reviews as real or phony with 90 percent accuracy by picking up on subtle differences, like whether the review described specific aspects of the hotel room layout (the real ones do) or mentioned matters that were unrelated to the hotel itself, like whether the reviewer was there on vacation or business (a marker of fakes). Great, but in the cat-and-mouse game of fraud vs. fraud detection, phony reviewers can now design feedback that won’t set off any alarm bells.
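
For readers curious about the mechanics, this kind of text classification is easy to sketch. The snippet below is a toy illustration only, not the researchers’ model: the four labeled reviews are invented, and scikit-learn’s stock TF-IDF features and logistic regression stand in for the study’s actual classifier.

```python
# Toy sketch of fake-review detection as text classification.
# The labeled reviews below are invented; the published study trained on
# hundreds of solicited fake reviews and likely-genuine TripAdvisor ones.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

reviews = [
    "The bathroom was cramped and the bed sagged toward the window.",      # genuine-style: concrete room details
    "Check-in took twenty minutes and the hallway smelled of paint.",      # genuine-style: concrete details
    "My husband and I adored this luxurious hotel on our business trip!",  # fake-style: about the trip, not the room
    "A truly wonderful experience, perfect for a romantic getaway.",       # fake-style: vague superlatives
]
labels = ["real", "real", "fake", "fake"]

# Word and bigram TF-IDF features feed a simple linear classifier.
model = make_pipeline(TfidfVectorizer(ngram_range=(1, 2)), LogisticRegression())
model.fit(reviews, labels)

print(model.predict(["The desk lamp by the window was broken."]))
```
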
Just how prevalent are fake reviews? A trio of business school professors, Yale’s Judith Chevalier, Yaniv Dover of Dartmouth, and USC’s Dina Mayzlin, have taken a clever approach to inferring an answer by comparing the reviews on two travel sites, TripAdvisor and Expedia. In order to post an Expedia review, a traveler needs to have made her hotel booking through the site. Hence, a hotel looking to inflate its rating or malign a competitor would have to incur the cost of paying itself through the site, accumulating transaction fees and tax liabilities in the process. On TripAdvisor, all you need to post fake reviews are a few phony login names and email addresses.

Differences in the overall ratings on TripAdvisor versus Expedia could simply be the result of a more sympathetic community of reviewers. (In practice, TripAdvisor’s ratings are actually lower on average.) So Mayzlin and her co-authors focus on the places where the gaps between TripAdvisor and Expedia reviews are widest. In their analysis, they looked at hotels that probably appear identical to the average traveler but have different underlying ownership or management. There are, for example, companies that own scores of franchises from hotel chains like Marriott and Hilton. Other hotels operate under these same nameplates but are independently owned. Similarly, many hotels are run on behalf of their owners by large management companies, while others are owner-managed. The average traveler is unlikely to know the difference between a Fairfield Inn owned by, say, the Pillar Hotel Group and one owned and operated by Ray Fisman. The study’s authors argue that the small owners and independents have less to lose by trying to goose their online ratings (or torpedo the ratings of their neighbors), reasoning that larger companies would be more vulnerable to punishment, censure, and loss of business if their shenanigans were uncovered. (The authors give the example of a recent case in which a manager at Ireland’s Clare Inn was caught posting fake reviews. The hotel is part of the Lynch Hotel Group, and in the wake of the fake postings, TripAdvisor removed suspicious reviews from other Lynch hotels, and unflattering media accounts of the episode generated negative PR that was shared across all Lynch properties.)

The researchers find that, even comparing hotels under the same brand, small owners are around 10 percent more likely to get five-star reviews on TripAdvisor than they are on Expedia (relative to hotels owned by large corporations). The study also examines whether these small owners might be targeting the competition with bad reviews. The authors look at negative reviews for hotels that have competitors within half a kilometer. Hotels where the nearby competition comes from small owners have 16 percent more one- and two-star ratings than those with neighboring hotels that are owned by big companies like Pillar.

This isn’t to say that consumers are making a mistake by using TripAdvisor to guide them in their hotel reservations. Despite the fraudulent posts, there is still a high degree of concordance between the ratings assigned by TripAdvisor and Expedia. And across the Web, there are scores of posters who seem passionate about their reviews.

Consumers, in turn, do seem to take online reviews seriously. By comparing restaurants that fall just above and just below the threshold for an extra half-star on Yelp, Harvard Business School’s Michael Luca estimates that an extra star is worth an extra 5 to 9 percent in revenue. Luca’s intent isn’t to examine whether restaurants are gaming Yelp’s system, but his findings certainly indicate that they’d profit from trying. (Ironically, Luca also finds that independent restaurants—the establishments that Mayzlin et al. would predict are most likely to put up fake postings—benefit the most from an extra star. You don’t need to check out Yelp to know what to expect when you walk into McDonald’s or Pizza Hut.)
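
Luca’s approach exploits Yelp’s rounding rule: two restaurants with nearly identical underlying averages can end up displaying different half-star ratings, so comparing revenue just either side of a rounding cutoff isolates the effect of the displayed stars. The sketch below mimics that comparison on entirely synthetic data; the 3.25 cutoff, the revenue numbers and the 7 percent effect baked into the simulation are assumptions made for illustration.

```python
# Synthetic illustration of comparing outcomes just above and just below a
# rating-rounding threshold (here, an assumed 3.25 cutoff between 3.0 and 3.5 stars).
import numpy as np

rng = np.random.default_rng(0)
n = 2000
raw_score = rng.uniform(3.0, 3.5, n)               # underlying average rating
displayed = np.where(raw_score >= 3.25, 3.5, 3.0)  # rounded to the displayed half star

# Pretend revenue responds only to the *displayed* rating, plus noise.
revenue = 100_000 * (1 + 0.07 * (displayed == 3.5)) + rng.normal(0, 5_000, n)

# Compare restaurants in a narrow band around the cutoff, where underlying
# quality is essentially identical and only the displayed rating differs.
band = np.abs(raw_score - 3.25) < 0.05
above = revenue[band & (displayed == 3.5)].mean()
below = revenue[band & (displayed == 3.0)].mean()
print(f"Estimated bump from the extra half star: {above / below - 1:.1%}")
```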

[div class=attrib]Read the entire article following the jump:[end-div]

[div class=attrib]Image courtesy of Mashable.[end-div]

When to Eat Your Fruit and Veg

It’s time to jettison the $1.99 hyper-burger and super-sized fries and try some real fruits and vegetables. You know — the kind of produce that comes directly from the soil. But when is the best time to suck on a juicy peach or chomp some crispy radicchio?

A great chart, below, summarizes which fruits and vegetables are generally in season in the Northern Hemisphere.

[div class=attrib]Infographic courtesy of Visual News, designed by Column Five.[end-div]

Extreme Weather as the New Norm

Melting glaciers at the poles, wildfires in the western United States, severe flooding across Europe and parts of Asia, hurricanes in northern Australia, warmer temperatures across the globe. According to many climatologists, including a growing number of ex-climate change skeptics, this is the new normal for the foreseeable future. Welcome to the changed climate.

[div class=attrib]From the New York Times:[end-div]

By many measurements, this summer’s drought is one for the record books. But so was last year’s drought in the South Central states. And it has been only a decade since an extreme five-year drought hit the American West. Widespread annual droughts, once a rare calamity, have become more frequent and are set to become the “new normal.”

Until recently, many scientists spoke of climate change mainly as a “threat,” sometime in the future. But it is increasingly clear that we already live in the era of human-induced climate change, with a growing frequency of weather and climate extremes like heat waves, droughts, floods and fires.

Future precipitation trends, based on climate model projections for the coming fifth assessment from the Intergovernmental Panel on Climate Change, indicate that droughts of this length and severity will be commonplace through the end of the century unless human-induced carbon emissions are significantly reduced. Indeed, assuming business as usual, each of the next 80 years in the American West is expected to see less rainfall than the average of the five years of the drought that hit the region from 2000 to 2004.

That extreme drought (which we have analyzed in a new study in the journal Nature Geoscience) had profound consequences for carbon sequestration, agricultural productivity and water resources: plants, for example, took in only half the carbon dioxide they do normally, thanks to a drought-induced drop in photosynthesis.

In the drought’s worst year, Western crop yields were down by 13 percent, with many local cases of complete crop failure. Major river basins showed 5 percent to 50 percent reductions in flow. These reductions persisted up to three years after the drought ended, because the lakes and reservoirs that feed them needed several years of average rainfall to return to predrought levels.

In terms of severity and geographic extent, the 2000-4 drought in the West exceeded such legendary events as the Dust Bowl of the 1930s. While that drought saw intervening years of normal rainfall, the years of the turn-of-the-century drought were consecutive. More seriously still, long-term climate records from tree-ring chronologies show that this drought was the most severe event of its kind in the western United States in the past 800 years. Though there have been many extreme droughts over the last 1,200 years, only three other events have been of similar magnitude, all during periods of “megadroughts.”

Most frightening is that this extreme event could become the new normal: climate models point to a warmer planet, largely because of greenhouse gas emissions. Planetary warming, in turn, is expected to create drier conditions across western North America, because of the way global-wind and atmospheric-pressure patterns shift in response.

Indeed, scientists see signs of the relationship between warming and drought in western North America by analyzing trends over the last 100 years; evidence suggests that the more frequent drought and low precipitation events observed for the West during the 20th century are associated with increasing temperatures across the Northern Hemisphere.

These climate-model projections suggest that what we consider today to be an episode of severe drought might even be classified as a period of abnormal wetness by the end of the century and that a coming megadrought — a prolonged, multidecade period of significantly below-average precipitation — is possible and likely in the American West.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Sun.[end-div]

How Great Companies Fail

A fascinating case study shows how Microsoft failed its employees through misguided HR (human resources) policies that pitted colleague against colleague.

[div class=attrib]From the Guardian:[end-div]

The idea for today’s off-topic note came to me when I read “Microsoft’s lost decade”, an aptly titled Vanity Fair story. In the piece, Kurt Eichenwald tracks Microsoft’s decline as he revisits a decade of technical missteps and bad business decisions. Predictably, the piece has generated strong retorts from Microsoft’s Ministry of Truth and from Ballmer himself (“It’s not been a lost decade for me!” he barked from the tumbrel).

But I don’t come to bury Caesar – not yet; I’ll wait until actual numbers for Windows 8 and the Surface tablets emerge. Instead, let’s consider the centerpiece of Eichenwald’s article, his depiction of the cultural degeneracy and intramural paranoia that comes of a badly implemented performance review system.

Performance assessments are, of course, an important aspect of a healthy company. In order to maintain fighting weight, an organisation must honestly assay its employees’ contributions and cull the dead wood. This is tournament play, after all, and the coach must “release” players who can’t help get the team to the finals.

But Microsoft’s implementation – “stack ranking”, a bell curve that pits employees and groups against one another like rats in a cage – plunged the company into internecine fights, horse trading, and backstabbing.

…every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor…For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings.

Employees quickly realised that it was more important to focus on organisation politics than actual performance:

Every current and former Microsoft employee I interviewed – every one – cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees.

This brought back bad memories of my corpocrat days working for a noted Valley company. When I landed here in 1985, I was dismayed by the pervasive presence of human resources, an éminence grise that cast a shadow across the entire organisation. Humor being the courtesy of despair, engineers referred to HR as the KGB or, for a more literary reference, the Bene Gesserit, monikers that knowingly imputed an efficiency to a department that offered anything but. Granted, there was no bell curve grading, no obligation to sacrifice the bottom 5%, but the politics were stifling nonetheless, the review process a painful charade.

In memory of those shenanigans, I’ve come up with a possible antidote to manipulative reviews, an attempt to deal honestly and pleasantly with the imperfections of life at work. (Someday I’ll write a Note about an equally important task: How to let go of people with decency – and without lawyers.)

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Telegraph / Microsoft.[end-div]

The Demise of Upward Mobility

Robert J. Samuelson paints a sobering picture of the once credible and seemingly attainable American Dream — the generational progress of upward mobility is no longer a given. He is the author of “The Great Inflation and Its Aftermath: The Past and Future of American Affluence”.

[div class=attrib]From Wilson Quarterly:[end-div]

The future of affluence is not what it used to be. Americans have long believed—it’s part of our national character—that our economic well-being will constantly increase. We see ourselves as a striving, inventive, and pragmatic people destined for higher living standards. History is a continuum of progress, from Robert Fulton’s steamboat to Henry Ford’s assembly line to Bill Gates’ software. Every generation will live better than its predecessors.

Well, maybe not.

For millions of younger Americans—say, those 40 and under—living better than their parents is a pipe dream. They won’t. The threat to their hopes does not arise from an impending collapse of technological gains of the sort epitomized by the creations of Fulton, Ford, and Gates. These advances will almost certainly continue, and per capita income—the average for all Americans and a conventional indicator of living standards—will climb. Statistically, American progress will resume. The Great Recession will be a bump, not a dead end.

The trouble is that many of these gains will bypass the young. The increases that might have fattened their paychecks will be siphoned off to satisfy other groups and other needs. Today’s young workers will have to finance Social Security and Medicare for a rapidly growing cohort of older Americans. Through higher premiums for employer-provided health insurance, they will subsidize care for others. Through higher taxes and fees, they will pay to repair aging infrastructure (roads, bridges, water systems) and to support squeezed public services, from schools to police.

The hit to their disposable incomes would matter less if the young were major beneficiaries of the resultant spending. In some cases—outlays for infrastructure and local services—they may be. But these are exceptions. By 2025 Social Security and Medicare will simply reroute income from the nearly four-fifths of the population that will be under 65 to the older one-fifth. And health care spending at all age levels is notoriously skewed: Ten percent of patients account for 65 percent of medical costs, reports the Kaiser Family Foundation. Although insurance provides peace of mind, the money still goes from young to old: Average health spending for those 45 to 64 is triple that for those 18 to 24.

The living standards of younger Americans will almost certainly suffer in comparison to those of their parents in a second crucial way. Our notion of economic progress is tied to financial security, but the young will have less of it. What good are higher incomes if they’re abruptly revoked? Though it wasn’t a second Great Depression, the Great Recession was a close call, shattering faith that modern economic policies made broad collapses impossible. Except for the savage 1980-82 slump, post-World War II recessions had been modest. Only minorities of Americans had suffered. By contrast, the Great Recession hurt almost everyone, through high unemployment, widespread home foreclosures, huge wealth losses in stocks and real estate—and fears of worse. A 2012 Gallup poll found that 68 percent of Americans knew someone who had lost a job.

The prospect of downward mobility is not just dispiriting. It assails the whole post–World War II faith in prosperity. Beginning in the 1950s, commentators celebrated the onrush of abundance as marking a new era in human progress. In his 1958 bestseller The Affluent Society, Harvard economist John Kenneth Galbraith announced the arrival of a “great and unprecedented affluence” that had eradicated the historical “poverty of the masses.”

Economic growth became a secular religion that was its own reward. Perhaps its chief virtue was that it dampened class conflict. In The Great Leap: The Past Twenty-Five Years in America (1966), John Brooks observed, “The middle class was enlarging itself and ever encroaching on the two extremes”—the very rich and the very poor. Business and labor could afford to reconcile because both could now share the fruits of expanding production. We could afford more spending on public services (education, health, environmental protection, culture) without depressing private incomes. Indeed, that was Galbraith’s main theme: Our prosperity could and should support both.

To be sure, there were crises of faith, moments when economic progress seemed delayed or doomed. The longest lapse occurred in the 1970s, when double-digit inflation spawned pessimism and frequent recessions, culminating in the 1980-82 downturn. Monthly unemployment peaked at 10.8 percent. But after Federal Reserve chairman Paul Volcker and President Ronald Reagan took steps to suppress high inflation, faith returned.

Now, it’s again imperiled. A 2011 Gallup poll found that 55 percent of Americans didn’t think their children would live as well as they did, the highest rate ever. We may face a crimped and contentious future.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ascending and Descending by M.C. Escher. Courtesy of M.C. Escher.[end-div]

Are You Cold or Hot? Depends on Your Politics

The United States is gripped by political deadlock. The Do-Nothing Congress consistently gets lower approval ratings than our banks, Paris Hilton, lawyers and BP during the catastrophe in the Gulf of Mexico. This stasis is driven by seemingly intractable ideological beliefs and a no-compromise attitude from both the left and right sides of the aisle.

So, it should come as no surprise that even your opinion of the weather and temperature is colored by your political persuasion.

Daniel Engber over at Slate sifts through some fascinating studies that highlight how our ingrained ideologies determine our worldview, down even to our basic view of the weather and our home thermostat setting.

[div class=attrib]From Slate:[end-div]

A few weeks ago, an academic journal called Weather, Climate and Society posted a curious finding about how Americans perceive the heat and cold. A team of researchers at the University of Oklahoma asked 8,000 adults living across the country to state both their political leanings and their impressions of the local weather. Are you a liberal or a conservative? Have average temperatures where you live been rising, falling, or staying about the same as previous years? Then they compared the answers to actual thermometer readings from each respondent’s ZIP code. Would their sense of how it feels outside be colored by the way they think?

Yes it would, the study found. So much so, in fact, that the people surveyed all but ignored their actual experience. No matter what the weather records showed for a given neighborhood (despite the global trend, it had gotten colder in some places and warmer in others), conservatives and liberals fell into the same two camps. The former said that temperatures were decreasing or had stayed the same, and the latter claimed they were going up. “Actual temperature deviations proved to be a relatively weak predictor of perceptions,” wrote the authors. (Hat tip to Ars Technica for finding the study.)

People’s opinions, then, seem to have an effect on how they feel the air around them. If you believe in climate change and think the world is getting warmer, you’ll be more inclined to sense that warmth on a walk around the block. And if you tend to think instead in terms of crooked scientists and climate conspiracies, then the local weather will seem a little cooler. Either way, the Oklahoma study suggests that the experience of heat and cold derives from “a complex mix of direct observation, ideology, and cultural cognitions.”

It’s easy to see how these factors might play out when people make grand assessments of the weather that rely on several years’ worth of noisy data. But another complex mix of ideology and culture affects how we experience the weather from moment to moment—and how we choose to cope with it. In yesterday’s column, I discussed the environmental case against air conditioning, and the belief that it’s worse to be hypothermic than overheated. But there are other concerns, too, that make their rounds among the anti-A/C brrr-geoisie. Some view air conditioning itself as a threat to their comfort and their health.

The notion that stale, recycled air might be sickening or dangerous has been circulating for as long as we’ve had home cooling. According to historian Marsha E. Ackermann’s Cool Comfort: America’s Romance With Air-Conditioning, the invention of the air conditioner set off a series of debates among high-profile scholars over whether it was better to fill a building with fresh air or to close it off from the elements altogether. One side argued for ventilation even in the most miserable summer weather; the other claimed that a hot, damp breeze could be a hazard to your health. (The precursor to the modern air conditioner, invented by a Floridian named John Gorrie, was designed according to the latter theory. Gorrie thought his device would stave off malaria and yellow fever.)

The cooling industry worked hard to promote the idea that A/C makes us more healthy and productive, and in the years after World War II it gained acceptance as a standard home appliance. Still, marketers worried about a lingering belief in the importance of fresh air, and especially the notion that the “shock effect” of moving too quickly from warm to cold would make you sick. Some of these fears would be realized in a new and deadly form of pneumonia known as Legionnaires’ disease. In the summer of 1976, around 4,000 members of the Pennsylvania State American Legion met for a conference at the fancy, air-conditioned Bellevue Stratford Hotel in Philadelphia, and over the next month, more than 180 Legionnaires took ill. The bacteria responsible for their condition were found to be propagating in the hotel’s cooling tower. Twenty-nine people died from the disease, and we finally had proof that air conditioning posed a mortal danger to America.

A few years later, a new diagnosis began to spread around the country, based on a nebulous array of symptoms including sore throats and headache that seemed to be associated with indoor air. Epidemiologists called the illness “Sick Building Syndrome,” and looked for its source in large-scale heating and cooling ducts. Even today, the particulars of the condition—and the question of whether or not it really exists—have not been resolved. But there is some good evidence for the idea that climate-control systems can breed allergenic mold or other micro-organisms. For a study published in 2004, researchers in France checked the medical records of 920 middle-aged women, and found that the ones who worked in air-conditioned offices (about 15 percent of the total pool) were almost twice as likely to take sick days or make a visit to an ear-nose-throat doctor.

This will come as no surprise to those who already shun the air conditioner and worship in the cult of fresh air. Like the opponents of A/C from a hundred years ago, they blame the sealed environment for creating a miasma of illness and disease. Well, of course it’s unhealthy to keep the windows closed; you need a natural breeze to blow all those spores and germs away. But their old-fashioned plea invites a response that’s just as antique. Why should the air be any fresher in summer than winter (when so few would let it in)? And what about the dangers that “fresh air” might pose in cities where the breeze swirls with soot and dust? A 2009 study in the journal Epidemiology confirmed that air conditioning can help stave off the effects of particulate matter in the environment. Researchers checked the health records of senior citizens who did or didn’t have air conditioners installed in their homes and found that those who were forced to leave their windows open in the summer—and suck down the dirty air outside—were more likely to end up in the hospital for pollution-related cardiovascular disease. Other studies have found similar correlations between a lack of A/C on sooty days and hospitalization for chronic obstructive pulmonary disease and pneumonia.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Crosley Air Conditioning / Treehugger.[end-div]

The Benefits of Self-Deception

Psychologists have long studied the causes and characteristics of deception. In recent times they have had a huge pool of talented liars from which to draw — bankers, mortgage lenders, Enron executives, borrowers, and of course politicians. Now, researchers have begun to look at the art of self-deception, with some interesting results. Self-deception may be a useful tool in influencing others.

[div class=attrib]From the Wall Street Journal:[end-div]

Lying to yourself—or self-deception, as psychologists call it—can actually have benefits. And nearly everybody does it, based on a growing body of research using new experimental techniques.

Self-deception isn’t just lying or faking, but is deeper and more complicated, says Del Paulhus, a psychology professor at the University of British Columbia and author of a widely used scale to measure self-deceptive tendencies. It involves strong psychological forces that keep us from acknowledging a threatening truth about ourselves, he says.

Believing we are more talented or intelligent than we really are can help us influence and win over others, says Robert Trivers, an anthropology professor at Rutgers University and author of “The Folly of Fools,” a 2011 book on the subject. An executive who talks himself into believing he is a great public speaker may not only feel better as he performs, but increase “how much he fools people, by having a confident style that persuades them that he’s good,” he says.

Researchers haven’t studied large population samples to compare rates of self-deception or compared men and women, but they know based on smaller studies that it is very common. And scientists in many different disciplines are drawn to studying it, says Michael I. Norton, an associate professor at Harvard Business School. “It’s also one of the most puzzling things that humans do.”

Researchers disagree over what exactly happens in the brain during self-deception. Social psychologists say people deceive themselves in an unconscious effort to boost self-esteem or feel better. Evolutionary psychologists, who say different parts of the brain can harbor conflicting beliefs at the same time, say self-deception is a way of fooling others to our own advantage.

In some people, the tendency seems to be an inborn personality trait. Others may develop a habit of self-deception as a way of coping with problems and challenges.

Behavioral scientists in recent years have begun using new techniques in the laboratory to predict when and why people are likely to deceive themselves. For example, they may give subjects opportunities to inflate their own attractiveness, skill or intelligence. Then, they manipulate such variables as subjects’ mood, promises of rewards or opportunities to cheat. They measure how the prevalence of self-deception changes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Truth or Consequences. Courtesy of CBS 1950-51 / Wikia.[end-div]

Shirking Life-As-Performance of a Social Network

Ex-Facebook employee number 51 gives us a glimpse from inside the social network giant. It’s a tale of social isolation, shallow relationships, voyeurism, and narcissistic performance art. It’s also a tale of the rediscovery of life prior to “likes”, “status updates”, “tweets” and “followers”.

[div class=attrib]From the Washington Post:[end-div]

Not long after Katherine Losse left her Silicon Valley career and moved to this West Texas town for its artsy vibe and crisp desert air, she decided to make friends the old-fashioned way, in person. So she went to her Facebook page and, with a series of keystrokes, shut it off.

The move carried extra import because Losse had been the social network’s 51st employee and rose to become founder Mark Zuckerberg’s personal ghostwriter. But Losse gradually soured on the revolution in human relations she witnessed from within.

The explosion of social media, she believed, left hundreds of millions of users with connections that were more plentiful but also narrower and less satisfying, with intimacy losing out to efficiency. It was time, Losse thought, for people to renegotiate their relationships with technology.

“It’s okay to feel weird about this because I feel weird about this, and I was in the center of it,” said Losse, 36, who has long, dark hair and sky-blue eyes. “We all know there is an anxiety, there’s an unease, there’s a worry that our lives are changing.”

Her response was to quit her job — something made easier by the vested stock she cashed in — and to embrace the ancient toil of writing something in her own words, at book length, about her experiences and the philosophical questions they inspired.

That brought her to Marfa, a town of 2,000 people in an area so remote that astronomers long have come here for its famously dark night sky, beyond the light pollution that’s a byproduct of modern life.

Losse’s mission was oddly parallel. She wanted to live, at least for a time, as far as practical from the world’s relentless digital glow.

Losse was a graduate student in English at Johns Hopkins University in 2004 when Facebook began its spread, first at Harvard, then other elite schools and beyond. It provided a digital commons, a way of sharing personal lives that to her felt safer than the rest of the Internet.

The mix has proved powerful. More than 900 million people have joined; if they were citizens of a single country, Facebook Nation would be the world’s third largest.

At first, Losse was among those smitten. In 2005, after moving to Northern California in search of work, she responded to a query on the Facebook home page seeking résumés. Losse soon became one of the company’s first customer-service reps, replying to questions from users and helping to police abuses.

She was firmly on the wrong side of the Silicon Valley divide, which prizes the (mostly male) engineers over those, like Losse, with liberal arts degrees. Yet she had the sense of being on the ground floor of something exciting that might also yield a life-altering financial jackpot.

In her first days, she was given a master password that she said allowed her to see any information users typed into their Facebook pages. She could go into pages to fix technical problems and police content. Losse recounted sparring with a user who created a succession of pages devoted to anti-gay messages and imagery. In one exchange, she noticed the man’s password, “Ilovejason,” and was startled by the painful irony.

Another time, Losse cringed when she learned that a team of Facebook engineers was developing what they called “dark profiles” — pages for people who had not signed up for the service but who had been identified in posts by Facebook users. The dark profiles were not to be visible to ordinary users, Losse said, but if the person eventually signed up, Facebook would activate those latent links to other users.

All the world a stage

Losse’s unease sharpened when a celebrated Facebook engineer was developing the capacity for users to upload video to their pages. He started videotaping friends, including Losse, almost compulsively. On one road trip together, the engineer made a video of her napping in a car and uploaded it remotely to an internal Facebook page. Comments noting her siesta soon began appearing — only moments after it happened.

“The day before, I could just be in a car being in a car. Now my being in a car is a performance that is visible to everyone,” Losse said, exasperation creeping into her voice. “It’s almost like there is no middle of nowhere anymore.”

Losse began comparing Facebook to the iconic 1976 Eagles song “Hotel California,” with its haunting coda, “You can check out anytime you want, but you can never leave.” She put a copy of the record jacket on prominent display in a house she and several other employees shared not far from the headquarters (then in Palo Alto, Calif.; it’s now in Menlo Park).

As Facebook grew, Losse’s career blossomed. She helped introduce Facebook to new countries, pushing for quick, clean translations into new languages. Later, she moved to the heart of the company as Zuckerberg’s ghostwriter, mimicking his upbeat yet efficient style of communicating in blog posts he issued.

But her concerns continued to grow. When Zuckerberg, apparently sensing this, said to Losse, “I don’t know if I trust you,” she decided she needed to either be entirely committed to Facebook or leave. She soon sold some of her vested stock. She won’t say how much; the proceeds provided enough of a financial boon for her to go a couple of years without a salary, though not enough to stop working altogether, as some former colleagues have.

‘Touchy, private territory’

Among Losse’s concerns were the vast amount of personal data Facebook gathers. “They are playing on very touchy, private territory. They really are,” she said. “To not be conscious of that seems really dangerous.”

It wasn’t just Facebook. Losse developed a skepticism for many social technologies and the trade-offs they require.

Facebook and some others have portrayed proliferating digital connections as inherently good, bringing a sprawling world closer together and easing personal isolation.

Moira Burke, a researcher who trained at the Human-Computer Interaction Institute at Carnegie Mellon University and has since joined Facebook’s Data Team, tracked the moods of 1,200 volunteer users. She found that simply scanning the postings of others had little effect on well-being; actively participating in exchanges with friends, however, relieved loneliness.

Summing up her findings, she wrote on Facebook’s official blog, “The more people use Facebook, the better they feel.”

But Losse’s concerns about online socializing track with the findings of Sherry Turkle, a Massachusetts Institute of Technology psychologist who says users of social media have little understanding of the personal information they are giving away. Nor, she said, do many understand the potentially distorting consequences when they put their lives on public display, as what amounts to an ongoing performance on social media.

“In our online lives, we edit, we retouch, we clean up,” said Turkle, author of “Alone Together: Why We Expect More From Technology and Less From Each Other,” published in 2011. “We substitute what I call ‘connection’ for real conversation.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Boy Kings by Katherine Losse.[end-div]

A Climate Change Skeptic Recants

A climate change skeptic recants. Of course, disbelievers in human-influenced climate change will point to the fact that physicist Richard Muller announced his change of heart in a New York Times op-ed, and will cite that as evidence of flagrant falsehood and unmitigated bias.

Several years ago Muller set up the Berkeley Earth project to collect and analyze land-surface temperature records from sources independent of NASA and NOAA. Convinced at the time that climate change researchers had the numbers all wrong, Muller and his team set out to find the proof.

[div class=attrib]From the New York Times:[end-div]

CALL me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.

My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.

These findings are stronger than those of the Intergovernmental Panel on Climate Change, the United Nations group that defines the scientific and diplomatic consensus on global warming. In its 2007 report, the I.P.C.C. concluded only that most of the warming of the prior 50 years could be attributed to humans. It was possible, according to the I.P.C.C. consensus statement, that the warming before 1956 could be because of changes in solar activity, and that even a substantial part of the more recent warming could be natural.

Our Berkeley Earth approach used sophisticated statistical methods developed largely by our lead scientist, Robert Rohde, which allowed us to determine earth land temperature much further back in time. We carefully studied issues raised by skeptics: biases from urban heating (we duplicated our results using rural data alone), from data selection (prior groups selected fewer than 20 percent of the available temperature stations; we used virtually 100 percent), from poor station quality (we separately analyzed good stations and poor ones) and from human intervention and data adjustment (our work is completely automated and hands-off). In our papers we demonstrate that none of these potentially troublesome effects unduly biased our conclusions.

The historic temperature pattern we observed has abrupt dips that match the emissions of known explosive volcanic eruptions; the particulates from such events reflect sunlight, make for beautiful sunsets and cool the earth’s surface for a few years. There are small, rapid variations attributable to El Niño and other ocean currents such as the Gulf Stream; because of such oscillations, the “flattening” of the recent temperature rise that some people claim is not, in our view, statistically significant. What has caused the gradual but systematic rise of two and a half degrees? We tried fitting the shape to simple math functions (exponentials, polynomials), to solar activity and even to rising functions like world population. By far the best match was to the record of atmospheric carbon dioxide, measured from atmospheric samples and air trapped in polar ice.
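
What Muller describes is, at bottom, a model-comparison exercise: fit several candidate curves to the temperature record and see which leaves the smallest residuals. Here is a minimal sketch of that kind of comparison, using invented data; the CO2-like curve, the scaling factor and the noise level are assumptions for illustration, not Berkeley Earth’s series or code.

```python
# Toy comparison of candidate fits to a synthetic temperature-anomaly series.
import numpy as np

rng = np.random.default_rng(1)
years = np.arange(1850, 2011)
co2 = 285 + 110 / (1 + np.exp(-(years - 1960) / 20))              # invented CO2-like curve (ppm)
temps = 3.0 * np.log(co2 / 285) + rng.normal(0, 0.1, years.size)  # "observed" anomalies

def rss(predicted):
    """Residual sum of squares against the observed series."""
    return float(np.sum((temps - predicted) ** 2))

# Candidate 1: a quadratic trend in (centred) time.
t = years - years.mean()
poly_pred = np.polyval(np.polyfit(t, temps, deg=2), t)

# Candidate 2: least-squares fit of the anomalies to log(CO2), the functional
# form expected from the greenhouse effect.
design = np.column_stack([np.ones(years.size), np.log(co2)])
coeffs, *_ = np.linalg.lstsq(design, temps, rcond=None)
co2_pred = design @ coeffs

# The candidate with the smaller residual sum of squares tracks the series better.
print("RSS, quadratic-in-time fit:", round(rss(poly_pred), 3))
print("RSS, log-CO2 fit          :", round(rss(co2_pred), 3))
```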

Just as important, our record is long enough that we could search for the fingerprint of solar variability, based on the historical record of sunspots. That fingerprint is absent. Although the I.P.C.C. allowed for the possibility that variations in sunlight could have ended the “Little Ice Age,” a period of cooling from the 14th century to about 1850, our data argues strongly that the temperature rise of the past 250 years cannot be attributed to solar changes. This conclusion is, in retrospect, not too surprising; we’ve learned from satellite measurements that solar activity changes the brightness of the sun very little.

How definite is the attribution to humans? The carbon dioxide curve gives a better match than anything else we’ve tried. Its magnitude is consistent with the calculated greenhouse effect — extra warming from trapped heat radiation. These facts don’t prove causality and they shouldn’t end skepticism, but they raise the bar: to be considered seriously, an alternative explanation must match the data at least as well as carbon dioxide does. Adding methane, a second greenhouse gas, to our analysis doesn’t change the results. Moreover, our analysis does not depend on large, complex global climate models, the huge computer programs that are notorious for their hidden assumptions and adjustable parameters. Our result is based simply on the close agreement between the shape of the observed temperature rise and the known greenhouse gas increase.

It’s a scientist’s duty to be properly skeptical. I still find that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I’ve analyzed some of the most alarmist claims, and my skepticism about them hasn’t changed.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Global land-surface temperature with a 10-year moving average. Courtesy of Berkeley Earth.[end-div]

The Exceptionalism of American Violence

The United States is often cited as the most generous nation on Earth. Unfortunately, it is also one of the most violent, having one of the highest murder rates of any industrialized country. Why this tragic paradox?

In an absorbing article excerpted below, backed by sound research, anthropologist Eric Michael Johnson points to a lack of social capital at both the local and national scale. Here, social capital is defined as interpersonal trust that promotes cooperation between citizens and groups for mutual benefit.

So, combine a culture that allows convenient access to very effective weapons with broad inequality, social isolation and distrust, and you get a very sobering picture — a country where around 35 people are killed each day by others wielding guns (25,423 firearm homicides in 2006 and 2007 combined, based on Centers for Disease Control statistics).

[div class=attrib]From Scientific American:[end-div]

The United States is the deadliest wealthy country in the world. Can science help us explain, or even solve, our national crisis?

His tortured and sadistic grin beamed like a full moon on that dark night. “Madness, as you know, is like gravity,” he cackled. “All it takes is a little push.” But once the house lights rose, the terror was lifted for most of us. Few imagined that the fictive evil on screen back in 2008 would later inspire a depraved act of mass murder by a young man sitting with us in the audience, a student of neuroscience whose mind was teetering on the edge. What was it that pushed him over?

In the wake of the tragedy that struck Aurora, Colorado last Friday there remain more questions than answers. Just like last time–in January, 2011 when Congresswoman Gabrielle Giffords and 18 others were shot in Tucson, Arizona or before that in April, 2007 when a deranged gunman attacked students and staff at Virginia Tech–this senseless mass shooting has given rise to a national conversation as we struggle to find meaning in the madness.

While everyone agrees the blame should ultimately be placed on the perpetrator of this violence, the fact remains that the United States has one of the highest murder rates in the industrialized world. Of the 34 countries in the Organisation for Economic Co-operation and Development (OECD), the U.S. ranks fifth in homicides just behind Brazil (highest), Mexico, Russia, and Estonia. Our nation also holds the dubious honor of being responsible for half of the worst mass shootings in the last 30 years. How can we explain why the United States has nearly three times more murders per capita than neighboring Canada and ten times more than Japan? What makes the land of the free such a dangerous place to live?

Diagnosing a Murder

There have been hundreds of thoughtful explorations of this problem in the last week, though three in particular have encapsulated the major issues. Could it be, as science writer David Dobbs argues at Wired, that “an American culture that fetishizes violence,” such as the Batman franchise itself, has contributed to our fall? “Culture shapes the expression of mental dysfunction,” Dobbs writes, “just as it does other traits.”

Perhaps the push arrived with the collision of other factors, as veteran journalist Bill Moyers maintains, when the dark side of human nature encountered political allies who nurture our destructive impulses? “Violence is our alter ego, wired into our Stone Age brains,” he says. “The NRA is the best friend a killer’s instinct ever had.”

But then again maybe there is an economic explanation, as my Scientific American colleague John Horgan believes, citing a hypothesis by McMaster University evolutionary psychologists Martin Daly and his late wife Margo Wilson. “Daly and Wilson found a strong correlation between high Gini scores [a measure of inequality] and high homicide rates in Canadian provinces and U.S. counties,” Horgan writes, “blaming homicides not on poverty per se but on the collision of poverty and affluence, the ancient tug-of-war between haves and have-nots.”

In all three cases, as it was with other culprits such as the lack of religion in public schools or the popularity of violent video games (both of which are found in other wealthy countries and can be dismissed), commentators are looking at our society as a whole rather than specific details of the murderer’s background. The hope is that, if we can isolate the factor which pushes some people to murder their fellow citizens, perhaps we can alter our social environment and reduce the likelihood that these terrible acts will be repeated in the future. The only problem is, which one could it be?

The Exceptionalism of American Violence

As it turns out, the “social capital” that Sapolsky found made the Forest Troop baboons so peaceful is an important missing factor that can explain our high homicide rate in the United States. In 1999 Ichiro Kawachi at the Harvard School of Public Health led a study investigating the factors in American homicide for the journal Social Science and Medicine. His diagnosis was dire.

“If the level of crime is an indicator of the health of society,” Kawachi wrote, “then the US provides an illustrative case study as one of the most unhealthy of modern industrialized nations.” The paper outlined what the most significant causal factors were for this exaggerated level of violence by developing what was called “an ecological theory of crime.” Whereas many other analyses of homicide take a criminal justice approach to the problem–such as the number of cops on the beat, harshness of prison sentences, or adoption of the death penalty–Kawachi used a public health perspective that emphasized social relations.

In all 50 states and the District of Columbia data were collected using the General Social Survey that measured social capital (defined as interpersonal trust that promotes cooperation between citizens for mutual benefit), along with measures of poverty and relative income inequality, homicide rates, incidence of other crimes–rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft–unemployment, percentage of high school graduates, and average alcohol consumption. By using a statistical method known as principal component analysis Kawachi was then able to identify which ecologic variables were most associated with particular types of crime.
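For readers curious about the method, principal component analysis simply re-expresses a table of correlated indicators as a small number of uncorrelated components ranked by how much variance they carry. The sketch below is a minimal illustration with invented state-level numbers; it is not Kawachi's data or code.

```python
# Minimal PCA sketch with made-up state-level numbers -- not Kawachi's data.
# Columns: social capital (trust), Gini inequality, poverty rate, unemployment.
import numpy as np

X = np.array([
    [0.42, 0.38, 11.0, 4.1],
    [0.31, 0.45, 15.5, 6.0],
    [0.55, 0.33,  9.0, 3.5],
    [0.28, 0.47, 17.2, 6.8],
    [0.47, 0.36, 10.4, 4.4],
    [0.35, 0.43, 14.1, 5.5],
])

# Standardize each column, then diagonalize the covariance matrix.
Z = (X - X.mean(axis=0)) / X.std(axis=0)
cov = np.cov(Z, rowvar=False)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigh returns ascending order
order = np.argsort(eigvals)[::-1]
eigvals, eigvecs = eigvals[order], eigvecs[:, order]

explained = eigvals / eigvals.sum()
print("variance explained by each component:", np.round(explained, 3))
print("loadings of the first component:     ", np.round(eigvecs[:, 0], 3))
# A dominant first component with opposite-signed loadings on trust versus
# inequality/poverty is the kind of structure an ecological analysis looks for.
```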

The results were unambiguous: when income inequality was higher, so was the rate of homicide. Income inequality alone explained 74% of the variance in murder rates and half of the aggravated assaults. However, social capital had an even stronger association and, by itself, accounted for 82% of homicides and 61% of assaults. Other factors such as unemployment, poverty, or number of high school graduates were only weakly associated, and alcohol consumption had no connection to violent crime at all. A World Bank-sponsored study subsequently confirmed these results on income inequality, concluding that, worldwide, homicide and the unequal distribution of resources are inextricably tied (see Figure 2). However, the World Bank study didn’t measure social capital. According to Kawachi, it is this factor that should be considered primary; when the ties that bind a community together are severed, inequality is allowed to run free, and with deadly consequences.

But what about guns? Multiple studies have shown a direct correlation between the number of guns and the number of homicides. The United States is the most heavily armed country in the world with 90 guns for every 100 citizens. Doesn’t this over-saturation of American firepower explain our exaggerated homicide rate? Maybe not. In a follow-up study in 2001 Kawachi looked specifically at firearm prevalence and social capital among U.S. states. The results showed that when social capital and community involvement declined, gun ownership increased (see Figure 3).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Smith & Wesson M&P Victory model revolver. Courtesy of Oleg Volk / Wikipedia.[end-div]

The Emperor Has Transparent Clothes

Hot from the Technosensual exhibition in Vienna, Austria, come clothes that can be made transparent or opaque, and clothes that can detect a wearer telling a lie. While the value of the former may seem dubious outside of the home, the latter invention should be a mandatory garment for all politicians and bankers. Or, for less adventurous millinery fashionistas, how about a hat that reacts to ambient radio waves?

All these innovations seem to arrive straight from the pages of a Philip K. Dick science fiction novel, courtesy of the confluence of new technologies and innovative textile design.

[div class=attrib]From New Scientist:[end-div]

WHAT if the world could see your innermost emotions? For the wearer of the Bubelle dress created by Philips Design, it’s not simply a thought experiment.

Aptly nicknamed “the blushing dress”, the futuristic garment has an inner layer fitted with sensors that measure heart rate, respiration and galvanic skin response. The measurements are fed to 18 miniature projectors that shine corresponding colours, shapes, and intensities onto an outer layer of fabric – turning the dress into something like a giant, high-tech mood ring. As a natural blusher, I feel like I already know what it would be like to wear this dress – like going emotionally, instead of physically, naked.
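Philips has not published how the dress maps readings to light, so the following is a purely hypothetical sketch of the general idea: normalise a couple of biometric signals and blend them into a colour for the projectors. The thresholds and the function name are invented.

```python
# Purely illustrative: one way sensor readings *could* be mapped to a colour,
# in the spirit of the Bubelle dress. Philips' actual mapping is not public.
def mood_colour(heart_rate_bpm, skin_conductance_us):
    """Return an (R, G, B) tuple: calm reads blue-ish, aroused reads red-ish."""
    # Normalise each reading into 0..1 against rough resting/excited bounds.
    hr = min(max((heart_rate_bpm - 50) / 70.0, 0.0), 1.0)       # 50-120 bpm
    gsr = min(max((skin_conductance_us - 1) / 14.0, 0.0), 1.0)  # 1-15 microsiemens
    arousal = 0.6 * hr + 0.4 * gsr
    return (int(255 * arousal), 60, int(255 * (1 - arousal)))

print(mood_colour(62, 2.5))    # relaxed wearer -> mostly blue
print(mood_colour(110, 12.0))  # flustered wearer -> mostly red
```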

The Bubelle dress is just one of the technologically enhanced items of clothing on show at the Technosensual exhibition in Vienna, Austria, which celebrates the overlapping worlds of technology, fashion and design.

Other garments are even more revealing. Holy Dress, created by Melissa Coleman and Leonie Smelt, is a wearable lie detector – that also metes out punishment. Using voice-stress analysis, the garment is designed to catch the wearer out in a lie, whereupon it twinkles conspicuously and gives her a small shock. Though the garment is beautiful, a slim white dress under a geometric structure of copper tubes, I’d rather try it on a politician than myself. “You can become a martyr for truth,” says Coleman. To make it, she hacked a 1990s lie detector and added a novelty shocking pen.

Laying the wearer bare in a less metaphorical way, a dress that alternates between opaque and transparent is also on show. Designed by the exhibition’s curator, Anouk Wipprecht with interactive design laboratory Studio Roosegaarde, Intimacy 2.0 was made using conductive liquid crystal foil. When a very low electrical current is applied to the foil, the liquid crystals stand to attention in parallel, making the material transparent. Wipprecht expects the next iteration could be available commercially. It’s time to take the dresses “out of the museum and get them on the streets”, she says.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Taiknam Hat, a hat sensitive to ambient radio waves. Courtesy of Ricardo O’Nascimento, Ebru Kurbak, Fabiana Shizue / New Scientist.[end-div]

Crony Capitalism

We excerpt below a fascinating article from the WSJ on the increasingly incestuous and damaging relationship between the finance industry and our political institutions.

[div class=attrib]From the Wall Street Journal:[end-div]

Mitt Romney’s résumé at Bain should be a slam dunk. He has been a successful capitalist, and capitalism is the best thing that has ever happened to the material condition of the human race. From the dawn of history until the 18th century, every society in the world was impoverished, with only the thinnest film of wealth on top. Then came capitalism and the Industrial Revolution. Everywhere that capitalism subsequently took hold, national wealth began to increase and poverty began to fall. Everywhere that capitalism didn’t take hold, people remained impoverished. Everywhere that capitalism has been rejected since then, poverty has increased.

Capitalism has lifted the world out of poverty because it gives people a chance to get rich by creating value and reaping the rewards. Who better to be president of the greatest of all capitalist nations than a man who got rich by being a brilliant capitalist?

Yet it hasn’t worked out that way for Mr. Romney. “Capitalist” has become an accusation. The creative destruction that is at the heart of a growing economy is now seen as evil. Americans increasingly appear to accept the mind-set that kept the world in poverty for millennia: If you’ve gotten rich, it is because you made someone else poorer.

What happened to turn the mood of the country so far from our historic celebration of economic success?

Two important changes in objective conditions have contributed to this change in mood. One is the rise of collusive capitalism. Part of that phenomenon involves crony capitalism, whereby the people on top take care of each other at shareholder expense (search on “golden parachutes”).

But the problem of crony capitalism is trivial compared with the collusion engendered by government. In today’s world, every business’s operations and bottom line are affected by rules set by legislators and bureaucrats. The result has been corruption on a massive scale. Sometimes the corruption is retail, whereby a single corporation creates a competitive advantage through the cooperation of regulators or politicians (search on “earmarks”). Sometimes the corruption is wholesale, creating an industrywide potential for profit that would not exist in the absence of government subsidies or regulations (like ethanol used to fuel cars and low-interest mortgages for people who are unlikely to pay them back). Collusive capitalism has become visible to the public and increasingly defines capitalism in the public mind.

Another change in objective conditions has been the emergence of great fortunes made quickly in the financial markets. It has always been easy for Americans to applaud people who get rich by creating products and services that people want to buy. That is why Thomas Edison and Henry Ford were American heroes a century ago, and Steve Jobs was one when he died last year.

When great wealth is generated instead by making smart buy and sell decisions in the markets, it smacks of inside knowledge, arcane financial instruments, opportunities that aren’t accessible to ordinary people, and hocus-pocus. The good that these rich people have done in the process of getting rich is obscure. The benefits of more efficient allocation of capital are huge, but they are really, really hard to explain simply and persuasively. It looks to a large proportion of the public as if we’ve got some fabulously wealthy people who haven’t done anything to deserve their wealth.

The objective changes in capitalism as it is practiced plausibly account for much of the hostility toward capitalism. But they don’t account for the unwillingness of capitalists who are getting rich the old-fashioned way—earning it—to defend themselves.

I assign that timidity to two other causes. First, large numbers of today’s successful capitalists are people of the political left who may think their own work is legitimate but feel no allegiance to capitalism as a system or kinship with capitalists on the other side of the political fence. Furthermore, these capitalists of the left are concentrated where it counts most. The most visible entrepreneurs of the high-tech industry are predominantly liberal. So are most of the people who run the entertainment and news industries. Even leaders of the financial industry increasingly share the politics of George Soros. Whether measured by fundraising data or by the members of Congress elected from the ZIP Codes where they live, the elite centers with the most clout in the culture are filled with people who are embarrassed to identify themselves as capitalists, and it shows in the cultural effect of their work.

Another factor is the segregation of capitalism from virtue. Historically, the merits of free enterprise and the obligations of success were intertwined in the national catechism. McGuffey’s Readers, the books on which generations of American children were raised, have plenty of stories treating initiative, hard work and entrepreneurialism as virtues, but just as many stories praising the virtues of self-restraint, personal integrity and concern for those who depend on you. The freedom to act and a stern moral obligation to act in certain ways were seen as two sides of the same American coin. Little of that has survived.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Industrial Revolution brought about the end of true capitalism. Courtesy: Time Life Pictures/Mansell/Time Life Pictures/Getty Images.[end-div]


Modern Music Versus The Oldies

When it comes to music, a generational gap has always separated young from old. Thus, without fail, parents will remark that the music their kids listen to is loud and monotonous, nothing like the varied and much better music that they consumed in their younger days.

Well, this common, and perhaps universal, observation is now backed by some ground-breaking and objective research. So, adults over the age of 40, take heart — your music really is better than what’s playing today! And, if you are a parent, you may bask in the knowledge that your music really is better than that of your kids. That said, the comparative merits of your 1980s “Hi-Fi” system versus your kids’ docking stations with 5.1 surround and subwoofer earbuds remain thoroughly unsettled.

[div class=attrib]From the Telegraph:[end-div]

The scepticism about modern music shared by many middle-aged fans has been vindicated by a study of half a century’s worth of pop music, which found that today’s hits really do all sound the same.

Parents who find their children’s thumping stereos too much to bear will also be comforted to know that it isn’t just the effect of age: modern songs have also grown progressively louder over the past 50 years.

The study, by Spanish researchers, analysed an archive known as the Million Song Dataset to discover how the course of music changed between 1955 and 2010.

While loudness has steadily increased since the 1950s, the team found that the variety of chords, melodies and types of sound being used by musicians has become ever smaller.

Joan Serra of the Spanish National Research Council, who led the study published in the Scientific Reports journal, said: “We found evidence of a progressive homogenisation of the musical discourse.

“The diversity of transitions between note combinations – roughly speaking chords plus melodies – has consistently diminished in the past 50 years.”

The “timbre” of songs – the number of different tones they include, for example from different instruments – has also become narrower, he added.
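To get a feel for what “diversity of transitions” means, here is a toy calculation, not the paper's method and not Million Song Dataset code: count chord-to-chord transitions in a song and measure their Shannon entropy. A progression that cycles through the same four chords scores far lower than one that wanders.

```python
# Toy illustration of "transition diversity" -- not the study's actual method
# or data. We count chord-to-chord bigrams and compute their Shannon entropy:
# fewer distinct transitions, or a few dominating ones, means lower entropy.
from collections import Counter
from math import log2

def transition_entropy(chords):
    bigrams = Counter(zip(chords, chords[1:]))
    total = sum(bigrams.values())
    return -sum((n / total) * log2(n / total) for n in bigrams.values())

old_song = ["C", "Am", "F", "G", "Em", "A7", "Dm", "G7", "C", "E7", "Am", "D7", "G", "C"]
new_song = ["C", "G", "Am", "F", "C", "G", "Am", "F", "C", "G", "Am", "F", "C", "G"]

print("older-style progression:", round(transition_entropy(old_song), 2), "bits")
print("four-chord loop:        ", round(transition_entropy(new_song), 2), "bits")
```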

The study was the first to conduct a large-scale measurement of “intrinsic loudness”, or the volume a song is recorded at, which determines how loud it will sound compared with other songs at a particular setting on an amplifier.

It appeared to support long-standing claims that the music industry is engaged in a “loudness war” in which volumes are gradually being increased.

Although older songs may be more varied and rich, the researchers advised that they could be made to sound more “fashionable and groundbreaking” if they were re-recorded and made blander and louder.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of HomeTheatre.[end-div]

Women See Bodies; Men See Body Parts

Yet another research study of gender differences shows some fascinating variation in the way men and women see and process their perceptions of others. Men tend to be perceived as a whole; women, on the other hand, are more likely to be perceived as parts.

[div class=attrib]From Scientific American:[end-div]

A glimpse at the magazine rack in any supermarket checkout line will tell you that women are frequently the focus of sexual objectification. Now, new research finds that the brain actually processes images of women differently than those of men, contributing to this trend.

Women are more likely to be picked apart by the brain and seen as parts rather than a whole, according to research published online June 29 in the European Journal of Social Psychology. Men, on the other hand, are processed as a whole rather than the sum of their parts.

“Everyday, ordinary women are being reduced to their sexual body parts,” said study author Sarah Gervais, a psychologist at the University of Nebraska, Lincoln. “This isn’t just something that supermodels or porn stars have to deal with.”

Objectification hurts
Numerous studies have found that feeling objectified is bad for women. Being ogled can make women do worse on math tests, and self-sexualization, or scrutiny of one’s own shape, is linked to body shame, eating disorders and poor mood.

But those findings have all focused on the perception of being sexualized or objectified, Gervais told LiveScience. She and her colleagues wondered about the eye of the beholder: Are people really objectifying women more than men?

To find out, the researchers focused on two types of mental processing, global and local. Global processing is how the brain identifies objects as a whole. It tends to be used when recognizing people, where it’s not just important to know the shape of the nose, for example, but also how the nose sits in relation to the eyes and mouth. Local processing focuses more on the individual parts of an object. You might recognize a house by its door alone, for instance, while you’re less likely to recognize a person’s arm without the benefit of seeing the rest of their body.

If women are sexually objectified, people should process their bodies in a more local way, focusing on individual body parts like breasts. To test the idea, Gervais and her colleagues carried out two nearly identical experiments with a total of 227 undergraduate participants. Each person was shown non-sexualized photographs, each of either a young man or young woman, 48 in total. After seeing each original full-body image, the participants saw two side-by-side photographs. One was the original image, while the other was the original with a slight alteration to the chest or waist (chosen because these are sexualized body parts). Participants had to pick which image they’d seen before.

In some cases, the second set of photos zoomed in on the chest or waist only, asking participants to pick the body part they’d seen previously versus the one that had been altered.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: People focus on the parts of a woman’s body when processing her image, according to research published in June in the European Journal of Social Psychology. Courtesy of LiveScience / Yuri Arcurs, Shutterstock.[end-div]

Best Countries for Women

If you’re female and value lengthy life expectancy, comprehensive reproductive health services, sound education and equality with males, where should you live? In short, Scandinavia, Australia and New Zealand, and Northern Europe. In a list of the 44 most well-developed nations, the United States ranks towards the middle, just below Canada and Estonia, but above Greece, Italy, Russia and most of Central and Eastern Europe.

The fascinating infographic from the National Post does a great job of summarizing the current state of women’s affairs, using data gathered from 165 countries.

[div class=attrib]Read the entire article and find a higher quality infographic after the jump.[end-div]

Living Organism as Software

For the first time, scientists have built a software model of an entire organism from its molecular building blocks, allowing the model to predict previously unobserved cellular processes and behaviors. While the organism in question is a simple bacterium, this represents another huge advance in computational biology.

[div class=attrib]From the New York Times:[end-div]

Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.

The scientists and other experts said the work was a giant step toward developing computerized laboratories that could carry out complete experiments without the need for traditional instruments.

For medical researchers and drug designers, cellular models will be able to supplant experiments during the early stages of screening for new compounds. And for molecular biologists, models that are of sufficient accuracy will yield new understanding of basic biological principles.

The simulation of the complete life cycle of the pathogen, Mycoplasma genitalium, was presented on Friday in the journal Cell. The scientists called it a “first draft” but added that the effort was the first time an entire organism had been modeled in such detail — in this case, all of its 525 genes.

“Where I think our work is different is that we explicitly include all of the genes and every known gene function,” the team’s leader, Markus W. Covert, an assistant professor of bioengineering at Stanford, wrote in an e-mail. “There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”

The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites that are generated by cell processes.

“The model presented by the authors is the first truly integrated effort to simulate the workings of a free-living microbe, and it should be commended for its audacity alone,” wrote the Columbia scientists Peter L. Freddolino and Saeed Tavazoie in a commentary that accompanied the article. “This is a tremendous task, involving the interpretation and integration of a massive amount of data.”

They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.

For their computer simulation, the researchers had the advantage of extensive scientific literature on the bacterium. They were able to use data taken from more than 900 scientific papers to validate the accuracy of their software model.

Still, they said that the model of the simplest biological system was pushing the limits of their computers.

“Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”

In designing their model, the scientists chose an approach that parallels the design of modern software systems, known as object-oriented programming. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.

Similarly, the simulated bacterium is a series of modules that mimic the different functions of the cell.

“The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups which we could model individually, each with its own mathematics, and then to integrate these sub-models together into a whole,” Dr. Covert said. “It turned out to be a very exciting idea.”
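To make the modular idea concrete, here is a minimal, entirely hypothetical sketch of that architecture: each sub-process is its own object with its own update rule, and a simulation loop integrates them against a shared cell state. The real model's modules and mathematics are, of course, vastly more sophisticated.

```python
# Minimal sketch of the modular design described above -- not the Covert lab's
# actual model. Each sub-process reads and updates a shared cell state once per
# time step; real modules would each carry their own detailed mathematics.
class Metabolism:
    def step(self, cell, dt):
        cell["atp"] += 5.0 * dt                      # toy energy production

class Transcription:
    def step(self, cell, dt):
        if cell["atp"] >= 1.0:
            cell["atp"] -= 1.0
            cell["rna"] += 1                         # toy RNA synthesis

class Translation:
    def step(self, cell, dt):
        cell["protein"] += min(cell["rna"], 2)       # toy protein synthesis

def simulate(modules, cell, steps, dt=1.0):
    for _ in range(steps):
        for module in modules:                       # integrate the sub-models
            module.step(cell, dt)
    return cell

state = {"atp": 0.0, "rna": 0, "protein": 0}
print(simulate([Metabolism(), Transcription(), Translation()], state, steps=10))
```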

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A Whole-Cell Computational Model Predicts Phenotype from Genotype. Courtesy of Cell / Elsevier Inc.[end-div]

Die Zombie, Die Zombie

Helen Sword cuts through (pun intended) the corporate-speak that continues to encroach upon our literature, particularly in business and academia, with a plea to kill our “zombie nouns”. Her latest book is “Stylish Academic Writing”.

[div class=attrib]From the New York Times:[end-div]

Take an adjective (implacable) or a verb (calibrate) or even another noun (crony) and add a suffix like ity, tion or ism. You’ve created a new noun: implacability, calibration, cronyism. Sounds impressive, right?

Nouns formed from other parts of speech are called nominalizations. Academics love them; so do lawyers, bureaucrats and business writers. I call them “zombie nouns” because they cannibalize active verbs, suck the lifeblood from adjectives and substitute abstract entities for human beings:

The proliferation of nominalizations in a discursive formation may be an indication of a tendency toward pomposity and abstraction.

The sentence above contains no fewer than seven nominalizations, each formed from a verb or an adjective. Yet it fails to tell us who is doing what. When we eliminate or reanimate most of the zombie nouns (tendency becomes tend, abstraction becomes abstract) and add a human subject and some active verbs, the sentence springs back to life:

Writers who overload their sentences with nominalizations tend to sound pompous and abstract.

Only one zombie noun – the key word nominalizations – has been allowed to remain standing.

At their best, nominalizations help us express complex ideas: perception, intelligence, epistemology. At their worst, they impede clear communication. I have seen academic colleagues become so enchanted by zombie nouns like heteronormativity and interpellation that they forget how ordinary people speak. Their students, in turn, absorb the dangerous message that people who use big words are smarter – or at least appear to be – than those who don’t.

In fact, the more abstract your subject matter, the more your readers will appreciate stories, anecdotes, examples and other handholds to help them stay on track. In her book “Darwin’s Plots,” the literary historian Gillian Beer supplements abstract nouns like evidence, relationships and beliefs with vivid verbs (rebuff, overturn, exhilarate) and concrete nouns that appeal to sensory experience (earth, sun, eyes):

Most major scientific theories rebuff common sense. They call on evidence beyond the reach of our senses and overturn the observable world. They disturb assumed relationships and shift what has been substantial into metaphor. The earth now only seems immovable. Such major theories tax, affront, and exhilarate those who first encounter them, although in fifty years or so they will be taken for granted, part of the apparently common-sense set of beliefs which instructs us that the earth revolves around the sun whatever our eyes may suggest.

Her subject matter – scientific theories – could hardly be more cerebral, yet her language remains firmly anchored in the physical world.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of PLOS (The Public Library of Science).[end-div]

Two Degrees

Author and environmentalist Bill McKibben has been writing about climate change and environmental issues for over 20 years. His first book, The End of Nature, was published in 1989, and is considered to be the first book aimed at the general public on the subject of climate change.

In his latest essay in Rolling Stone, which we excerpt below, McKibben offers a sobering assessment based on our current lack of action on a global scale. He argues that in the face of governmental torpor, and with individual action almost inconsequential at this late stage, only a radical reinvention of our fossil-fuel companies — into energy companies in the broad sense — can bring significant and lasting change.

Learn more about Bill McKibben, here.

[div class=attrib]From Rolling Stone:[end-div]

If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were 3.7 x 10-99, a number considerably larger than the number of stars in the universe.
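The quoted odds are what you get from a simple coin-flip model, in which each month independently has a one-in-two chance of coming in above the 20th-century average; a one-line calculation confirms the order of magnitude.

```python
# The quoted odds match a simple coin-flip model: 327 consecutive months each
# independently having a 1-in-2 chance of beating the 20th-century average.
p = 0.5 ** 327
print(f"{p:.2e}")   # ~3.66e-99, i.e. roughly 3.7 x 10^-99
```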

Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.

Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.

When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.

The First Number: 2° Celsius

If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”

In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.

The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.

Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”

Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Emissions from industry have helped increase the levels of carbon dioxide in the atmosphere, driving climate change. Courtesy of New Scientist / Eye Ubiquitous / Rex Features.[end-div]

Beware, Big Telecomm is Watching You

Facebook trawls your profile, status and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some companies are taking this to a whole new and very personal level.

Here’s an example from Germany. Politician Malte Spitz gathered 6 months of his personal geolocation data from his mobile phone company. Then, he combined this data with his activity online, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded.
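Conceptually, the Zeit visualization is a merge of two time-stamped datasets, retained location records and public online activity, into a single sorted timeline. The sketch below illustrates the idea with invented records and field names; it is not Zeit's code.

```python
# Illustrative only: merging retained phone-location records with public online
# activity into a single timeline, as the Zeit visualization does. The records
# and field names below are invented.
from datetime import datetime

location_pings = [
    {"t": datetime(2009, 8, 31, 8, 15), "kind": "cell", "detail": "tower near Berlin Hbf"},
    {"t": datetime(2009, 8, 31, 12, 40), "kind": "cell", "detail": "tower near Erlangen"},
]
online_activity = [
    {"t": datetime(2009, 8, 31, 9, 2), "kind": "tweet", "detail": "on the train south"},
    {"t": datetime(2009, 8, 31, 13, 5), "kind": "blog", "detail": "post published"},
]

timeline = sorted(location_pings + online_activity, key=lambda e: e["t"])
for event in timeline:
    print(event["t"].isoformat(), event["kind"], "-", event["detail"])
```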

[div class=attrib]From Zeit Online:[end-div]

By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.

Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data.

[div class=attrib]From TED:[end-div]

On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.

How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.

In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.

“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”

[div class=attrib]Read the entire article after the jump.[end-div]

Your Life Expectancy Mapped

Your life expectancy mapped, that is, if you live in London, U.K. So, take the iconic London tube (subway) map, then overlay it with figures for average life expectancy. Voila, you get to see how your neighbors on the Piccadilly Line fare in their longevity compared with, say, you, who happen to live near a Central Line station. It turns out that in some cases adjacent areas — as depicted by nearby but different subway stations — show an astounding gap of more than 20 years in projected life span.

So, what is at work? And, more importantly, should you move to Bond Street where the average life expectancy is 96 years, versus only 79 in Kennington, South London?

[div class=attrib]From the Atlantic:[end-div]

Last year’s dystopian action flick In Time has Justin Timberlake playing a street rat who suddenly comes into a great deal of money — only the currency isn’t cash, it’s time. Hours and minutes of Timberlake’s life that can be traded just like dollars and cents in our world. Moving from poor districts to rich ones, and vice versa, requires Timberlake to pay a toll, each time shaving off a portion of his life savings.

Literally paying with your life just to get around town seems like — you guessed it — pure science fiction. It’s absolute baloney to think that driving or taking a crosstown bus could result in a shorter life (unless you count this). But a project by University College London researchers called Lives on the Line echoes something similar with a map that plots local differences in life expectancy based on the nearest Tube stop.

The trends are largely unsurprising, and correlate mostly with wealth. Britons living in the ritzier West London tend to have longer expected lifespans compared to those who live in the east or the south. Those residing near the Oxford Circus Tube stop have it the easiest, with an average life expectancy of 96 years. Going into less wealthy neighborhoods in south and east London, life expectancy begins to drop — though it still hovers in the respectable range of 78-79.

Meanwhile, differences in life expectancy between even adjacent stations can be stark. Britons living near Pimlico are predicted to live six years longer than those just across the Thames near Vauxhall. There’s about a two-decade difference between those living in central London compared to those near some stations on the Docklands Light Railway, according to the BBC. Similarly, moving from Tottenham Court Road to Holborn will also shave six years off the Londoner’s average life expectancy.

Michael Marmot, a UCL professor who wasn’t involved in the project, put the numbers in international perspective.

“The difference between Hackney and the West End,” Marmot told the BBC, “is the same as the difference between England and Guatemala in terms of life expectancy.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Atlantic / MappingLondon.co.uk.[end-div]

Curiosity in Flight

NASA pulled off another tremendous and daring feat of engineering when it successfully landed the Mars Science Laboratory (MSL) on the surface of Mars on August 5, 2012, at 10:32 PM Pacific Time.

The MSL mission is built around the Curiosity rover, a 2,000-pound, car-size robot. Not only did NASA land Curiosity a mere 1 second behind schedule following a journey of over 576 million kilometers (358 million miles) lasting around 8 months, it went one better. NASA had one of its Mars orbiters — the Mars Reconnaissance Orbiter — snap an image of MSL from around 300 miles away as it descended through the Martian atmosphere with its supersonic parachute unfurled.

Another historic day for science, engineering and exploration.

[div class=attrib]From NASA / JPL:[end-div]

NASA’s Curiosity rover and its parachute were spotted by NASA’s Mars Reconnaissance Orbiter as Curiosity descended to the surface on Aug. 5 PDT (Aug. 6 EDT). The High-Resolution Imaging Science Experiment (HiRISE) camera captured this image of Curiosity while the orbiter was listening to transmissions from the rover. Curiosity and its parachute are in the center of the white box; the inset image is a cutout of the rover stretched to avoid saturation. The rover is descending toward the etched plains just north of the sand dunes that fringe “Mt. Sharp.” From the perspective of the orbiter, the parachute and Curiosity are flying at an angle relative to the surface, so the landing site does not appear directly below the rover.

The parachute appears fully inflated and performing perfectly. Details in the parachute, such as the band gap at the edges and the central hole, are clearly seen. The cords connecting the parachute to the back shell cannot be seen, although they were seen in the image of NASA’s Phoenix lander descending, perhaps due to the difference in lighting angles. The bright spot on the back shell containing Curiosity might be a specular reflection off of a shiny area. Curiosity was released from the back shell sometime after this image was acquired.

This view is one product from an observation made by HiRISE targeted to the expected location of Curiosity about one minute prior to landing. It was captured in HiRISE CCD RED1, near the eastern edge of the swath width (there is a RED0 at the very edge). This means that the rover was a bit further east or downrange than predicted.

[div class=attrib]Follow the mission after the jump.[end-div]

[div class=attrib]Image courtesy of NASA/JPL-Caltech/Univ. of Arizona.[end-div]

The Radium Girls and the Polonium Assassin

Deborah Blum’s story begins with Marie Curie’s analysis of a “strange energy” released from uranium ore, and ends with the assassination of the Russian dissident Alexander Litvinenko in 2006.

[div class=attrib]From Wired:[end-div]

In the late 19th century, a then-unknown chemistry student named Marie Curie was searching for a thesis subject. With encouragement from her husband, Pierre, she decided to study the strange energy released by uranium ores, a sizzle of power far greater than uranium alone could explain.

The results of that study are today among the most famous in the history of science. The Curies discovered not one but two new radioactive elements in their slurry of material (and Marie invented the word radioactivity to help explain them.) One was the glowing element radium. The other, which burned brighter and briefer, she named after her home country of Poland — Polonium (from the Latin root, polonia). In honor of that discovery, the Curies shared the 1903 Nobel Prize in Physics with their French colleague Henri Becquerel for his work with uranium.

Radium was always Marie Curie’s first love – “radium, my beautiful radium”, she used to call it. Her continued focus gained her a second Nobel Prize in chemistry in 1911. (Her Nobel lecture was titled Radium and New Concepts in Chemistry.)  It was also the higher-profile radium — embraced in a host of medical, industrial, and military uses — that first called attention to the health risks of radioactive elements. I’ve told some of that story here before in a look at the deaths and illnesses suffered by the “Radium Girls,” young women who in the 1920s painted watch-dial faces with radium-based luminous paint.

Polonium remained the unstable, mostly ignored step-child element of the story, less famous, less interesting, less useful than Curie’s beautiful radium. Until the last few years, that is. Until the reported 2006 assassination by polonium-210 of the Russian spy turned dissident Alexander Litvinenko. And until the news this week, first reported by Al Jazeera, that surprisingly high levels of polonium-210 were detected by a Swiss laboratory in the clothes and other effects of the late Palestinian leader Yasser Arafat.

Arafat, 75, had been held for almost two years under an Israeli form of house arrest when he died in 2004 of a sudden wasting illness. His rapid deterioration led to a welter of conspiracy theories that he’d been poisoned, some accusing his political rivals and many more accusing Israel, which has steadfastly denied any such plot.

Recently (and for undisclosed reasons) his widow agreed to the forensic analysis of articles including clothes, a toothbrush, bed sheets, and his favorite kaffiyeh. Al Jazeera arranged for the analysis and took the materials to Europe for further study. After the University of Lausanne’s Institute of Radiation Physics released the findings, Suha Arafat asked that her husband’s body be exhumed and tested for polonium. Palestinian authorities have indicated that they may do so within the week.

And at this point, as we anticipate those results, it’s worth asking some questions about the use of a material like polonium as an assassination poison. Why, for instance, pick a poison that leaves such a durable trail of evidence behind? In the case of the Radium Girls, I mentioned earlier, scientists found that their bones were still hissing with radiation years after their deaths. In the case of Litvinenko, public health investigators found that he’d literally left a trail of radioactive residues across London where he was living at the time of his death.

In what we might imagine as the clever world of covert killings  why would a messy element like polonium even be on the assassination list? To answer that, it helps to begin by stepping back to some of the details provided in the Curies’ seminal work. Both radium and polonium are links in a chain of radioactive decay (element changes due to particle emission) that begins with uranium.  Polonium, which eventually decays to an isotope of lead, is one of the more unstable points in this chain, unstable enough that there are  some 33 known variants (isotopes) of the element.

Of these, the best known and most abundant is the energetic isotope polonium-210, with its half-life of 138 days. Half-life refers to the time it takes for a radioactive element to burn through its energy supply, essentially the time it takes for activity to decrease by half. For comparison, the half-life of the uranium isotope U-235, which often features in weapon design, is 700 million years. In other words, polonium is a little blast furnace of radioactive energy. The speed of its decay means that eight years after Arafat’s death, it would probably be identified by its breakdown products. And it’s on that note – its life as a radioactive element – that it becomes interesting as an assassin’s weapon.
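Using the 138-day half-life quoted above, a quick calculation makes the "identified by its breakdown products" point concrete: after eight years only a few parts in ten million of an original polonium-210 dose would remain, so investigators would be looking for its decay products rather than the isotope itself.

```python
# How much polonium-210 survives eight years, given its 138-day half-life?
half_life_days = 138.0
elapsed_days = 8 * 365.25
fraction_left = 0.5 ** (elapsed_days / half_life_days)
print(f"{fraction_left:.1e}")   # ~4e-7: a few parts in ten million remain
```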

Like radium, polonium’s radiation is primarily in the form of alpha rays — the emission of alpha particles. Compared to other subatomic particles, alpha particles tend to be high energy and high mass. Their relatively larger mass means that they don’t penetrate as well as other forms of radiation; in fact, alpha particles barely penetrate the skin. And they can be stopped from even that by a piece of paper or protective clothing.

That may make them sound safe. It shouldn’t. It should just alert us that these are only really dangerous when they are inside the body. If a material emitting alpha radiation is swallowed or inhaled, there’s nothing benign about it. Scientists realized, for instance, that the reason the Radium Girls died of radiation poisoning was because they were lip-pointing their paintbrushes and swallowing radium-laced paint. The radioactive material deposited in their bones — which literally crumbled. Radium, by the way, has a half-life of about 1,600 years. Which means that it’s not in polonium’s league as an alpha emitter. How bad is this? By mass, polonium-210 is considered to be about 250,000 times more poisonous than hydrogen cyanide. Toxicologists estimate that an amount the size of a grain of salt could be fatal to the average adult.

In other words, a victim would never taste a lethal dose in food or drink. In the case of Litvinenko, investigators believed that he received his dose of polonium-210 in a cup of tea, dosed during a meeting with two Russian agents. (Just as an aside, alpha particles tend not to set off radiation detectors, so the material is relatively easy to smuggle from country to country.) Another assassin advantage is that illness comes on gradually, making it hard to pinpoint the event. Yet another advantage is that polonium poisoning is so rare that it’s not part of a standard toxics screen. In Litvinenko’s case, the poison wasn’t identified until shortly after his death. In Arafat’s case — if polonium-210 killed him, and that has not been established — obviously it wasn’t considered at the time. And finally, it gets the job done. “Once absorbed,” notes the U.S. Nuclear Regulatory Commission, “The alpha radiation can rapidly destroy major organs, DNA and the immune system.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Pierre and Marie Curie in the laboratory, Paris c1906. Courtesy of Wikipedia.[end-div]

The North Continues to Melt Away

On July 16, 2012 the Petermann Glacier in Greenland calved another gigantic island of ice, about twice the size of Manhattan in New York, or about 46 square miles. Climatologists armed with NASA satellite imagery have been following the glacier for many years, and first spotted the break-off point around 8 years ago. The Petermann Glacier calved a previous huge iceberg, twice this size, in 2010.

According to NASA, average temperatures in northern Greenland and the Canadian Arctic have increased by about 4 degrees Fahrenheit in the last 30 years.

So, whether driven by climate change or not, whether short-term or long-term, temporary or irreversible, man-made or part of a natural cycle, the trend is clear — the Arctic is warming, the ice cap is shrinking and sea levels are rising.

[div class=attrib]From the Economist:[end-div]

STANDING ON THE Greenland ice cap, it is obvious why restless modern man so reveres wild places. Everywhere you look, ice draws the eye, squeezed and chiselled by a unique coincidence of forces. Gormenghastian ice ridges, silver and lapis blue, ice mounds and other frozen contortions are minutely observable in the clear Arctic air. The great glaciers impose order on the icy sprawl, flowing down to a semi-frozen sea.

The ice cap is still, frozen in perturbation. There is not a breath of wind, no engine’s sound, no bird’s cry, no hubbub at all. Instead of noise, there is its absence. You feel it as a pressure behind the temples and, if you listen hard, as a phantom roar. For generations of frosty-whiskered European explorers, and still today, the ice sheet is synonymous with the power of nature.

The Arctic is one of the world’s least explored and last wild places. Even the names of its seas and rivers are unfamiliar, though many are vast. Siberia’s Yenisey and Lena each carries more water to the sea than the Mississippi or the Nile. Greenland, the world’s biggest island, is six times the size of Germany. Yet it has a population of just 57,000, mostly Inuit scattered in tiny coastal settlements. In the whole of the Arctic—roughly defined as the Arctic Circle and a narrow margin to the south (see map)—there are barely 4m people, around half of whom live in a few cheerless post-Soviet cities such as Murmansk and Magadan. In most of the rest, including much of Siberia, northern Alaska, northern Canada, Greenland and northern Scandinavia, there is hardly anyone. Yet the region is anything but inviolate.

Fast forward

A heat map of the world, colour-coded for temperature change, shows the Arctic in sizzling maroon. Since 1951 it has warmed roughly twice as much as the global average. In that period the temperature in Greenland has gone up by 1.5°C, compared with around 0.7°C globally. This disparity is expected to continue. A 2°C increase in global temperatures—which appears inevitable as greenhouse-gas emissions soar—would mean Arctic warming of 3-6°C.

Almost all Arctic glaciers have receded. The area of Arctic land covered by snow in early summer has shrunk by almost a fifth since 1966. But it is the Arctic Ocean that is most changed. In the 1970s, 80s and 90s the minimum extent of polar pack ice fell by around 8% per decade. Then, in 2007, the sea ice crashed, melting to a summer minimum of 4.3m sq km (1.7m square miles), close to half the average for the 1960s and 24% below the previous minimum, set in 2005. This left the north-west passage, a sea lane through Canada’s 36,000-island Arctic Archipelago, ice-free for the first time in memory.

Scientists, scrambling to explain this, found that in 2007 every natural variation, including warm weather, clear skies and warm currents, had lined up to reinforce the seasonal melt. But last year there was no such remarkable coincidence: it was as normal as the Arctic gets these days. And the sea ice still shrank to almost the same extent.

There is no serious doubt about the basic cause of the warming. It is, in the Arctic as everywhere, the result of an increase in heat-trapping atmospheric gases, mainly carbon dioxide released when fossil fuels are burned. Because the atmosphere is shedding less solar heat, it is warming—a physical effect predicted back in 1896 by Svante Arrhenius, a Swedish scientist. But why is the Arctic warming faster than other places?

Consider, first, how very sensitive to temperature change the Arctic is because of where it is. In both hemispheres the climate system shifts heat from the steamy equator to the frozen pole. But in the north the exchange is much more efficient. This is partly because of the lofty mountain ranges of Europe, Asia and America that help mix warm and cold fronts, much as boulders churn water in a stream. Antarctica, surrounded by the vast southern seas, is subject to much less atmospheric mixing.

The land masses that encircle the Arctic also prevent the polar oceans revolving around it as they do around Antarctica. Instead they surge, north-south, between the Arctic land masses in a gigantic exchange of cold and warm water: the Pacific pours through the Bering Strait, between Siberia and Alaska, and the Atlantic through the Fram Strait, between Greenland and Norway’s Svalbard archipelago.

That keeps the average annual temperature for the high Arctic (the northernmost fringes of land and the sea beyond) at a relatively sultry -15°C; much of the rest is close to melting-point for much of the year. Even modest warming can therefore have a dramatic effect on the region’s ecosystems. The Antarctic is also warming, but with an average annual temperature of -57°C it will take more than a few hot summers for this to become obvious.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Sequence of three images showing the Petermann Glacier sliding toward the sea along the northwestern coast of Greenland, terminating in a huge, new floating ice island. Courtesy: NASA.[end-div]

Procrastination is a Good Thing

Procrastinators have known this for a long time: that success comes from making a decision at the last possible moment.

Procrastinating professor Frank Partnoy expands on this theory in his book, “Wait: The Art and Science of Delay”.

[div class=attrib]From Smithsonian:[end-div]

Sometimes life seems to happen at warp speed. But, decisions, says Frank Partnoy, should not. When the financial market crashed in 2008, the former investment banker and corporate lawyer, now a professor of finance and law and co-director of the Center for Corporate and Securities Law at the University of San Diego, turned his attention to literature on decision-making.

“Much recent research about decisions helps us understand what we should do or how we should do it, but it says little about when,” he says.

In his new book, Wait: The Art and Science of Delay, Partnoy claims that when faced with a decision, we should assess how long we have to make it, and then wait until the last possible moment to do so. Should we take his advice on how to “manage delay,” we will live happier lives.

It is not surprising that the author of a book titled Wait is a self-described procrastinator. In what ways do you procrastinate?

I procrastinate in just about every possible way and always have, since my earliest memories going back to when I first started going to elementary school and had these arguments with my mother about making my bed.

My mom would ask me to make my bed before going to school. I would say, no, because I didn’t see the point of making my bed if I was just going to sleep in it again that night. She would say, well, we have guests coming over at 6 o’clock, and they might come upstairs and look at your room. I said, I would make my bed when we know they are here. I want to see a car in the driveway. I want to hear a knock on the door. I know it will take me about one minute to make my bed so at 5:59, if they are here, I will make my bed.

I procrastinated all through college and law school. When I went to work at Morgan Stanley, I was delighted to find that although the pace of the trading floor is frenetic and people are very fast, there were lots of incredibly successful mentors of procrastination.

Now, I am an academic. As an academic, procrastination is practically a job requirement. If I were to say I would be submitting an academic paper by September 1, and I submitted it in August, people would question my character.

It has certainly been drilled into us that procrastination is a bad thing. Yet, you argue that we should embrace it. Why?

Historically, for human beings, procrastination has not been regarded as a bad thing. The Greeks and Romans generally regarded procrastination very highly. The wisest leaders embraced procrastination and would basically sit around and think and not do anything unless they absolutely had to.

The idea that procrastination is bad really started in the Puritanical era with Jonathan Edwards’s sermon against procrastination and then the American embrace of “a stitch in time saves nine,” and this sort of work ethic that required immediate and diligent action.

But if you look at recent studies, managing delay is an important tool for human beings. People are more successful and happier when they manage delay. Procrastination is just a universal state of being for humans. We will always have more things to do than we can possibly do, so we will always be imposing some sort of unwarranted delay on some tasks. The question is not whether we are procrastinating, it is whether we are procrastinating well.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of eHow.[end-div]

Curiosity: August 5, 2012, 10:31 PM Pacific Time

This is when NASA’s latest foray into space reaches its climax — the landing of the Curiosity rover on Mars. At that moment, NASA’s Mars Science Laboratory (MSL) mission plans to deliver the nearly 2,000-pound, car-size robot rover to the surface of Mars. Curiosity will then embark on two years of exploration of the Red Planet.

For mission scientists and science buffs alike, Curiosity’s descent and landing will be a major event. And, for the first time, NASA will have a visual feed beamed back directly from the spacecraft (though only available after the event). The highly complex and fully automated entry, descent and landing has been dubbed “the Seven Minutes of Terror” by NASA engineers, after the roughly seven minutes it takes the spacecraft to travel from the top of the Martian atmosphere to the surface. And because radio signals from Mars take around 14 minutes to reach Earth, mission scientists (and the rest of us) will not know whether Curiosity descended and landed successfully until well after the fact.
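For a rough sense of that delay, here is a minimal back-of-the-envelope sketch in Python (ours, not NASA’s), assuming an Earth-to-Mars distance of roughly 248 million km at the time of landing:

# Back-of-the-envelope estimate of the one-way radio delay from Mars.
# The 248-million-km Earth-to-Mars distance is an assumed rough figure
# for early August 2012, not taken from the article above.
SPEED_OF_LIGHT_KM_PER_S = 299792.458
EARTH_MARS_DISTANCE_KM = 248e6

delay_seconds = EARTH_MARS_DISTANCE_KM / SPEED_OF_LIGHT_KM_PER_S
print("One-way signal delay: %.1f minutes" % (delay_seconds / 60))
# Prints roughly 13.8 minutes: by the time word of atmospheric entry
# arrives on Earth, the seven-minute automated descent is already over.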

For more on Curiosity and this special event visit NASA’s Jet Propulsion MSL site, here.

[div class=attrib]Image: This artist’s concept features NASA’s Mars Science Laboratory Curiosity rover, a mobile robot for investigating Mars’ past or present ability to sustain microbial life. Courtesy: NASA/JPL-Caltech.[end-div]

Re-resurgence of the United States

Those who have written off the United States in the 21st century may need to think again. A combination of healthy demographics, deep intellectual capital, institutionalized innovation and fracking (yes, fracking) has placed the U.S. on a sound footing for the future, despite its current political and economic woes.

[div class=attrib]From the Wilson Quarterly:[end-div]

If the United States were a person, a plausible diagnosis could be made that it suffers from manic depression. The country’s self-perception is highly volatile, its mood swinging repeatedly from euphoria to near despair and back again. Less than a decade ago, in the wake of the deceptively easy triumph over the wretched legions of Saddam Hussein, the United States was the lonely superpower, the essential nation. Its free markets and free thinking and democratic values had demonstrated their superiority over all other forms of human organization. Today the conventional wisdom speaks of inevitable decline and of equally inevitable Chinese triumph; of an American financial system flawed by greed and debt; of a political system deadlocked and corrupted by campaign contributions, negative ads, and lobbyists; of a social system riven by disparities of income, education, and opportunity.

It was ever thus. The mood of justified triumph and national solidarity after global victory in 1945 gave way swiftly to an era of loyalty oaths, political witch-hunts, and Senator Joseph McCarthy’s obsession with communist moles. The Soviet acquisition of the atom bomb, along with the victory of Mao Zedong’s communist armies in China, had by the end of the 1940s infected America with the fear of existential defeat. That was to become a pattern; at the conclusion of each decade of the Cold War, the United States felt that it was falling behind. The successful launch of the Sputnik satellite in 1957 triggered fears that the Soviet Union was winning the technological race, and the 1960 presidential election was won at least in part by John F. Kennedy’s astute if disingenuous claim that the nation was threatened by a widening “missile gap.”

At the end of the 1960s, with cities burning in race riots, campuses in an uproar, and a miserably unwinnable war grinding through the poisoned jungles of Indochina, an American fear of losing the titanic struggle with communism was perhaps understandable. Only the farsighted saw the importance of the contrast between American elections and the ruthless swagger of the Red Army’s tanks crushing the Prague Spring of 1968. At the end of the 1970s, with American diplomats held hostage in Tehran, a Soviet puppet ruling Afghanistan, and glib talk of Soviet troops soon washing their feet in the Indian Ocean, Americans waiting in line for gasoline hardly felt like winners. Yet at the end of the 1980s, what a surprise! The Cold War was over and the good guys had won.

Naturally, there were many explanations for this, from President Ronald Reagan’s resolve to Mikhail Gorbachev’s decency; from American industrial prowess to Soviet inefficiency. The most cogent reason was that the United States back in the late 1940s had crafted a bipartisan grand strategy for the Cold War that proved to be both durable and successful. It forged a tripartite economic alliance of Europe, North America, and Japan, backed up by various regional treaty organizations such as NATO, and counted on scientists, inventors, business leaders, and a prosperous and educated work force to deliver both guns and butter for itself and its allies. State spending on defense and science would keep unemployment at bay while Social Security would ensure that the siren songs of communism had little to offer the increasingly comfortable workers of the West. And while the West waited for its wealth and technologies to attain overwhelming superiority, its troops, missiles, and nuclear deterrent would contain Soviet and Chinese hopes of expansion.

It worked. The Soviet Union collapsed, and the Chinese leadership drew the appropriate lessons. (The Chinese view was that by starting with glasnost and political reform, and ducking the challenge of economic reform, Gorbachev had gotten the dynamics of change the wrong way round.) But by the end of 1991, the Democrat who would win the next year’s New Hampshire primary (Senator Paul Tsongas of Massachusetts) had a catchy new campaign slogan: “The Cold War is over—and Japan won.” With the country in a mild recession and mega-rich Japanese investors buying up landmarks such as Manhattan’s Rockefeller Center and California’s Pebble Beach golf course, Tsongas’s theme touched a national chord. But the Japanese economy has barely grown since, while America’s gross domestic product has almost doubled.

There are, of course, serious reasons for concern about the state of the American economy, society, and body politic today. But remember, the United States is like the weather in Ireland; if you don’t like it, just wait a few minutes and it’s sure to shift. This is a country that has been defined by its openness to change and innovation, and the search for the latest and the new has transformed the country’s productivity and potential. This openness, in effect, was America’s secret weapon that won both World War II and the Cold War. We tend to forget that the Soviet Union fulfilled Nikita Khrushchev’s pledge in 1961 to outproduce the United States in steel, coal, cement, and fertilizer within 20 years. But by 1981 the United States was pioneering a new kind of economy, based on plastics, silicon, and transistors, while the Soviet Union lumbered on building its mighty edifice of obsolescence.

This is the essence of America that the doom mongers tend to forget. Just as we did after Ezra Cornell built the nationwide telegraph system and after Henry Ford developed the assembly line, we are again all living in a future invented in America. No other country produced, or perhaps even could have produced, the transformative combination of Microsoft, Apple, Google, Amazon, and Facebook. The American combination of universities, research, venture capital, marketing, and avid consumers is easy to envy but tough to emulate. It’s not just free enterprise. The Internet itself might never have been born but for the Pentagon’s Defense Advanced Research Projects Agency, and much of tomorrow’s future is being developed at the nanotechnology labs at the Argonne National Laboratory outside Chicago and through the seed money of Department of Energy research grants.

American research labs are humming with new game-changing technologies. One MIT-based team is using viruses to bind and create new materials to build better batteries, while another is using viruses to create catalysts that can turn natural gas into oil and plastics. A University of Florida team is pioneering a practical way of engineering solar cells from plastics rather than silicon. The Center for Bits and Atoms at MIT was at the forefront of the revolution in fabricators, assembling 3-D printers and laser milling and cutting machines into a factory-in-a-box that just needs data, raw materials, and a power source to turn out an array of products. Now that the latest F-18 fighters are flying with titanium parts that were made by a 3-D printer, you know the technology has taken off. Some 23,000 such printers were sold last year, most of them to the kind of garage tinkerers—many of them loosely grouped in the “maker movement” of freelance inventors—who more than 70 years ago created Hewlett-Packard and 35 years ago produced the first Apple personal computer.

The real game changer for America is the combination of two not-so-new technologies: hydraulic fracturing (“fracking”) of underground rock formations and horizontal drilling, which allows one well to spin off many more deep underground. The result has been a “frack gas” revolution. As recently as 2005, the U.S. government assumed that the country had about a 10-year supply of natural gas remaining. Now it knows that there is enough for at least several decades. In 2009, the United States outpaced Russia to become the world’s top natural gas producer. Just a few years ago, the United States had five terminals receiving imported liquefied natural gas (LNG), and permits had been issued to build 17 more. Today, one of the five plants is being converted to export U.S. gas, and the owners of three others have applied to do the same. (Two applications to build brand new export terminals are also pending.) The first export contract, worth $8 billion, was signed with Britain’s BG Group, a multinational oil and gas company. Sometime between 2025 and 2030, America is likely to become self-sufficient in energy again. And since imported energy accounts for about half of the U.S. trade deficit, fracking will be a game changer in more ways than one.

The supply of cheap and plentiful local gas is already transforming the U.S. chemical industry by making cheap feedstock available—ethylene, a key component of plastics, and other crucial chemicals are derived from natural gas in a process called ethane cracking. Many American companies have announced major projects that will significantly boost U.S. petrochemical capacity. In addition to expansions along the Gulf Coast, Shell Chemical plans to build a new ethane cracking plant in Pennsylvania, near the Appalachian Mountains’ Marcellus Shale geologic formation. LyondellBasell Industries is seeking to increase ethylene output at its Texas plants, and Williams Companies is investing $3 billion in Gulf Coast development. In short, billions of dollars will pour into regions of the United States that desperately need investment. The American Chemistry Council projects that over several years the frack gas revolution will create 400,000 new jobs, adding $130 billion to the economy and more than $4 billion in annual tax revenues. The prospect of cheap power also promises to improve the balance sheets of the U.S. manufacturing industry.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]