All posts by Mike

Mobile Phone as Survival Gear

So, here’s the premise. You have hiked alone for days and now find yourself isolated and lost in a dense forest halfway up a mountain. Yes! You have a cell phone. But, oh no, there is no service in this remote part of the world. So, no call for help and no GPS. And, it gets worse: you have no emergency supplies and no food. What can you do? The neat infographic below offers some tips.

[div class=attrib]Infographic courtesy of Natalie Bracco / AnsonAlex.com.[end-div]

Death Cafe

“Death Cafe” sounds like the name of a group of alternative musicians from Denmark. But it’s not. Its rather more literal definition is a coffee shop where customers go to talk about death over a cup of Earl Grey tea or a double-shot espresso. And, while it’s not displacing Starbucks (yet), death cafes are a growing trend in Europe, first inspired by the pop-up Cafes Mortels of Switzerland.

[div class=attrib]From the Independent:[end-div]

“Do you have a death wish?” is not a question normally bandied about in seriousness. But have you ever actually asked whether a parent, partner or friend has a wish, or wishes, concerning their death? Burial or cremation? Where would they like to die? It’s not easy to do.

Stiff-upper-lipped Brits have a particular problem talking about death. Anyone who tries invariably gets shouted down with “Don’t talk like that!” or “If you say it, you’ll make it happen.” A survey by the charity Dying Matters reveals that more than 70 per cent of us are uncomfortable talking about death and that less than a third of us have spoken to family members about end-of-life wishes.

But despite this ingrained reluctance there are signs of burgeoning interest in exploring death. I attended my first death cafe recently and was surprised to discover that the gathering of goths, emos and the terminally ill that I’d imagined, turned out to be a collection of fascinating, normal individuals united by a wish to discuss mortality.

At a trendy coffee shop called Cakey Muto in Hackney, east London, taking tea (and scones!) with death turned out to be rather a lot of fun. What is believed to be the first official British death cafe took place in September last year, organised by former council worker Jon Underwood. Since then, around 150 people have attended death cafes in London and the one I visited was the 17th such happening.

“We don’t want to shove death down people’s throats,” Underwood says. “We just want to create an environment where talking about death is natural and comfortable.” He got the idea from the Swiss model (cafe mortel) invented by sociologist Bernard Crettaz, the popularity of which gained momentum in the Noughties and has since spread to France.

Underwood is keen to start a death cafe movement in English-speaking countries and his website (deathcafe.com) includes instructions for setting up your own. He has already inspired the first death cafe in America and groups have sprung up in Northern England too. Last month, he arranged the first death cafe targeting issues around dying for a specific group, the LGBT community, which he says was extremely positive and had 22 attendees.

Back in Cakey Muto, 10 fellow attendees and I eye each other nervously as the cafe door is locked and we seat ourselves in a makeshift circle. Conversation is kicked off by our facilitator, grief specialist Kristie West, who sets some ground rules. “This is a place for people to talk about death,” she says. “I want to make it clear that it is not about grief, even though I’m a grief specialist. It’s also not a debate platform. We don’t want you to air all your views and pick each other apart.”

A number of our party are directly involved in the “death industry”: a humanist-funeral celebrant, an undertaker and a lady who works in a funeral home. Going around the circle explaining our decision to come to a death cafe, what came across from this trio, none of whom knew each other, was their satisfaction in their work.

“I feel more alive than ever since working in a funeral home,” one of the women remarked. “It has helped me recognise that it isn’t a circle between life and death, it is more like a cosmic soup. The dead and the living are sort of floating about together.”

Others in the group include a documentary maker, a young woman whose mother died 18 months ago, a lady who doesn’t say much but was persuaded by her neighbour to come, and a woman who has attended three previous death cafes but still hasn’t managed to admit this new interest to her family or get them to talk about death.

The funeral celebrant tells the circle she’s been thinking a lot about what makes a good or bad death. She describes “the roaring corrosiveness of stepping into a household” where a “bad death” has taken place and the group meditates on what a bad death entails: suddenness, suffering and a difficult relationship between the deceased and bereaved?

“I have seen people have funerals which I don’t think they would have wanted,” says the undertaker, who has 17 years of experience. “It is possible to provide funerals more cheaply, more sensitively and with greater respect for the dead.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Death cafe menu courtesy of Death Cafe.[end-div]

Art That Makes You Scratch Your Head

Some works of art are visceral or grotesque; others evoke soaring and enlightening emotions. Some art just makes you think deeply about a specific event or about fundamental philosophical questions. Then, every once in a while, along comes a work that requires serious head-scratching.

[div class=attrib]From NPR:[end-div]

You are standing in a park in New Zealand. You look up at the top of a hill, and there, balanced on the ground, looking like it might catch a breeze and blow away, is a gigantic, rumpled piece of paper.

Except … one side of it, the underside, is … not there. You can see the sky, clouds, birds where there should be paper, so what is this?

As you approach, you realize it is made of metal. It’s a sculpture, made of welded and painted steel that looks like a two dimensional cartoon drawing of a three dimensional piece of paper … that is three dimensional if you get close, but looks two dimensional if you stay at the bottom of the hill…

[div class=attrib]Read the entire article and catch more images after the jump, and see more of Neil Dawson’s work here.[end-div]

[div class=attrib]Image: Horizons at Gibbs Farm by sculptor Neil Dawson, private art park, New Zealand. Courtesy of NPR / Gibbs Farm / Neil Dawson.[end-div]

Instagram: Confusing Mediocrity with Artistry

Professional photographers, take note: there will always be room for high-quality images that tell a story or capture a timeless event or exude artistic elegance. But, your domain is under attack, again — and the results are not particularly pretty. This time courtesy of Instagram.

Just over a hundred years ago, to be a good photographer one required the skills of an alchemist; the chemical processing of plates and prints was more complex and much more time-consuming than capturing the shot itself, and sometimes dangerous. A good print required constant attention, lengthy cajoling and considerable patience, and of course a darkroom and some interesting chemicals.

Then Kodak came along; it commoditized film and processing, expanding photography to the masses. More recently, as technology has improved and hardware prices have continued to drop, more cameras have found their way into the hands of more people. However, until recently, access to good-quality (yet still expensive) photographic equipment played an important role in allowing photographers to maintain the superiority of their means and ends over everyday amateurs.

Even as photography has become a primarily digital process, with camera prices continuing to plummet, many photographers have continued to distinguish their finished images from the burgeoning mainstream. After all, it still takes considerable skill and time to post-process an image in Photoshop or other imaging software.

Nowadays, anyone armed with a $99 smartphone is a photographer with a high-resolution camera. And, through the power of blogs and social networks every photographer is also a publisher. Technology has considerably democratized and shortened the process. So, now an image can find its way from the hands of the photographer to the eyes of a vast audience almost instantaneously. The numbers speak for themselves — by most estimates, around 4.2 million images are uploaded daily to Flickr and 4.5 million to Instagram.

And, as the smartphone is to a high-end medium or large format camera, so is Instagram to Photoshop. Now, armed with both smartphone and Instagram a photographer — applying the term loosely — can touch up an image of their last meal with digital sepia or apply a duo-tone filter to a landscape of their bedroom, or, most importantly, snap a soft-focus, angled self-portrait. All this, and the photographer can still deliver the finished work to a horde of followers for instant, gratuitous “likes”.

But, here’s why Instagram may not be such a threat to photography after all, despite the vast ocean of images washing across the internet.

[div class=attrib]From the Atlantic Wire:[end-div]

While the Internet has had a good time making fun of these rich kid Instagram photos, haters should be careful. These postings are emblematic of the entire medium we all use. To be certain, these wealthy kid pix are particularly funny (and also sad) because they showcase a gross variant of entitlement. Preteens posing with helicopters they did nothing to earn and posting the pictures online for others to ogle provides an easy in for commentary on the state of the American dream. (Dead.) While we don’t disagree with that reading, it’s par for the course on Instagram, a shallow medium all about promoting superficiality that photo takers did little to nothing to earn.

The very basis of Instagram is not just to show off, but to feign talent we don’t have, starting with the filters themselves. The reason we associate the look with “cool” in the first place is that many of these pretty hazes originated from processes coveted either for their artistic or unique merits, as photographer and blogger Ming Thein explains: “Originally, these styles were either conscious artistic decisions, or the consequences of not enough money and using expired film. They were chosen precisely because they looked unique—either because it was a difficult thing to execute well (using tilt-shift lenses, for instance) or because nobody else did it (cross-processing),” he writes. Instagram, however, has made such techniques easy and available, taking away that original value. “It takes the skill out of actually having to do any of these things (learn to process B&W properly, either chemically or in Photoshop, for instance),” he continues.

Yet we apply them to make ourselves look like we’ve got something special. Everything becomes “amaaazzing,” to put it in the words of graphic design blogger Jack Mancer, who has his own screed about the site. But actually, nothing about it is truly amazing. Some might call the process democratizing—everyone is a professional!—but really, it’s a big hoax. Everyone is just pressing buttons to add computer-generated veneers to our mostly mundane lives. There is nothing artsy about that. But we still do it. Is that really better than the rich kids? Sure, we’re not embarrassing ourselves by posting extreme wealth we happened into. But what are we posting? And why? At the very least, we’re doing it to look artsy; if not that, there is some other, deeper, more sinister thing we’re trying to prove, which means we’re right up there with the rich kids.

Here are some examples of how we see this playing out on the network:

The Food Pic

Why you post this: This says my food looks cool, therefore it is yummy. Look how well I eat, or how well I cook, or what a foodie I am.

Why this is just like the rich kids: Putting an artsy filter on a pretty photo can make the grossest slosh look like gourmet eats. It does not prove culinary or photographic skill, it proves that you can press a button.

The Look How Much Fun I’m Having Pic

Why you post this: To prove you have the best, most social, coolest life, and friends. To prove you are happy and fun.

Why this is just like the rich kids: This also has an underlying tone of flaunting wealth. Fun usually costs money, and it’s something not everybody else has.

The Picture of Thing Pic

Why you post this: This proves your fantastic, enviable artistic eye: “I turned a mundane object into art!”

Why this is just like the rich kids: See above. Essentially, you’re bragging, but without the skills to support it.

Instagram and photo apps like it are shallow mediums that will generate shallow results. They are there for people to showcase something that doesn’t deserve a platform. The rich kids are a particularly salient example of how the entire network operates, but those who live in glass houses shot by Instagram shouldn’t throw beautifully, if artfully, filtered stones.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Tumblr: Rich Kids of Instagram.[end-div]

Air Conditioning in a Warming World

[div class=attrib]From the New York Times:[end-div]

The blackouts that left hundreds of millions of Indians sweltering in the dark last month underscored the status of air-conditioning as one of the world’s most vexing environmental quandaries.

Fact 1: Nearly all of the world’s booming cities are in the tropics and will be home to an estimated one billion new consumers by 2025. As temperatures rise, they — and we — will use more air-conditioning.

Fact 2: Air-conditioners draw copious electricity, and deliver a double whammy in terms of climate change, since both the electricity they use and the coolants they contain result in planet-warming emissions.

Fact 3: Scientific studies increasingly show that health and productivity rise significantly if indoor temperature is cooled in hot weather. So cooling is not just about comfort.

Sum up these facts and it’s hard to escape: Today’s humans probably need air-conditioning if they want to thrive and prosper. Yet if all those new city dwellers use air-conditioning the way Americans do, life could be one stuttering series of massive blackouts, accompanied by disastrous planet-warming emissions.

We can’t live with air-conditioning, but we can’t live without it.

“It is true that air-conditioning made the economy happen for Singapore and is doing so for other emerging economies,” said Pawel Wargocki, an expert on indoor air quality at the International Center for Indoor Environment and Energy at the Technical University of Denmark. “On the other hand, it poses a huge threat to global climate and energy use. The current pace is very dangerous.”

Projections of air-conditioning use are daunting. In 2007, only 11 percent of households in Brazil and 2 percent in India had air-conditioning, compared with 87 percent in the United States, which has a more temperate climate, said Michael Sivak, a research professor in energy at the University of Michigan. “There is huge latent demand,” Mr. Sivak said. “Current energy demand does not yet reflect what will happen when these countries have more money and more people can afford air-conditioning.” He has estimated that, based on its climate and the size of the population, the cooling needs of Mumbai alone could be about a quarter of those of the entire United States, which he calls “one scary statistic.”

It is easy to decry the problem but far harder to know what to do, especially in a warming world where people in the United States are using our existing air-conditioners more often. The number of cooling degree days — a measure of how often cooling is needed — was 17 percent above normal in the United States in 2010, according to the Environmental Protection Agency, leading to “an increase in electricity demand.” This July was the hottest ever in the United States.

Likewise, the blackouts in India were almost certainly related to the rising use of air-conditioning and cooling, experts say, even if the immediate culprit was a grid that did not properly balance supply and demand.

The late arrival of this year’s monsoons, which normally put an end to India’s hottest season, may have devastated the incomes of farmers who needed the rain. But it “put smiles on the faces of those who sell white goods — like air-conditioners and refrigerators — because it meant lots more sales,” said Rajendra Shende, chairman of the Terre Policy Center in Pune, India.

“Cooling is the craze in India — everyone loves cool temperatures and getting to cool temperatures as quickly as possible,” Mr. Shende said. He said that cooling has become such a cultural priority that rather than advertise a car’s acceleration, salesmen in India now emphasize how fast its air-conditioner can cool.

Scientists are scrambling to invent more efficient air-conditioners and better coolant gases to minimize electricity use and emissions. But so far the improvements have been dwarfed by humanity’s rising demands.

And recent efforts to curb the use of air-conditioning, by fiat or persuasion, have produced sobering lessons.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Parkland Air Conditioning.[end-div]

Watch Out Corporate America: Gen-Y is Coming

Social scientists have had Generation-Y, also known as “millennials”, under their microscopes for a while. Born between 1982 and 1999, Gen-Y is now coming of age and becoming a force in the workplace, displacing aging “boomers” as they retire to the hills. So, researchers are now looking at how Gen-Y is faring inside corporate America. Remember, Gen-Y is the “it’s all about me” generation; members are characterized as typically lazy and spoiled, with a grandiose sense of entitlement, inflated self-esteem and deep emotional fragility. Their predecessors, the baby boomers, on the other hand, are often seen as overbearing, work-obsessed, competitive and narrow-minded. A clash of cultures is taking shape in office cubes across the country as these groups, with such differing personalities and philosophies, tussle within the workplace. However, it may not be all bad, as columnist Emily Matchar argues below — corporate America needs the kind of shake-up that Gen-Y promises.

[div class=attrib]From the Washington Post:[end-div]

Have you heard the one about the kid who got his mom to call his boss and ask for a raise? Or about the college student who quit her summer internship because it forbade Facebook in the office?

Yep, we’re talking about Generation Y — loosely defined as those born between 1982 and 1999 — also known as millennials. Perhaps you know them by their other media-generated nicknames: teacup kids, for their supposed emotional fragility; boomerang kids, who always wind up back home; trophy kids — everyone’s a winner!; the Peter Pan generation, who’ll never grow up.

Now this pampered, over-praised, relentlessly self-confident generation (at age 30, I consider myself a sort of older sister to them) is flooding the workplace. They’ll make up 75 percent of the American workforce by 2025 — and they’re trying to change everything.

These are the kids, after all, who text their dads from meetings. They think “business casual” includes skinny jeans. And they expect the company president to listen to their “brilliant idea.”

When will they adapt?

They won’t. Ever. Instead, through their sense of entitlement and inflated self-esteem, they’ll make the modern workplace adapt to them. And we should thank them for it. Because the modern workplace frankly stinks, and the changes wrought by Gen Y will be good for everybody.

Few developed countries demand as much from their workers as the United States. Americans spend more time at the office than citizens of most other developed nations. Annually, we work 408 hours more than the Dutch, 374 hours more than the Germans and 311 hours more than the French. We even work 59 hours more than the stereotypically nose-to-the-grindstone Japanese. Though women make up half of the American workforce, the United States is the only country in the developed world without guaranteed paid maternity leave.

All this hard work is done for less and less reward. Wages have been stagnant for years, benefits shorn, opportunities for advancement blocked. While the richest Americans get richer, middle-class workers are left to do more with less. Because jobs are scarce and we’re used to a hierarchical workforce, we accept things the way they are. Worse, we’ve taken our overwork as a badge of pride. Who hasn’t flushed with a touch of self-importance when turning down social plans because we’re “too busy with work”?

Into this sorry situation strolls the self-esteem generation, printer-fresh diplomas in hand. And they’re not interested in business as usual.

The current corporate culture simply doesn’t make sense to much of middle-class Gen Y. Since the cradle, these privileged kids have been offered autonomy, control and choices (“Green pants or blue pants today, sweetie?”). They’ve been encouraged to show their creativity and to take their extracurricular interests seriously. Raised by parents who wanted to be friends with their kids, they’re used to seeing their elders as peers rather than authority figures. When they want something, they’re not afraid to say so.

[div class=attrib]Read the entire article after the jump.[end-div]

Subjective Objectivism: The Paradox that is Ayn Rand

Ayn Rand: anti-collectivist ideologue, standard-bearer for unapologetic individualism and rugged self-reliance, or selfish fantasist and elitist hypocrite?

Political conservatives and libertarians increasingly flock to her writings and support her philosophy of individualism and unfettered capitalism, which she dubbed “objectivism”. On the other hand, liberals see her as a selfish zealot: elitist, narcissistic, even psychopathic.

The truth, of course, is more nuanced and complex, especially when the private Ayn Rand is set against the very public persona. Those who fail to delve into Rand’s traumatic and colorful history will fail to grasp the many paradoxes and contradictions that she embodied.

Rand was firmly and vociferously pro-choice, yet she believed that women should submit to the will of great men. She was a devout atheist and outspoken pacifist, yet she believed Native Americans fully deserved their cultural genocide for not grasping capitalism. She viewed homosexuality as disgusting and immoral, but supported non-discrimination protection for homosexuals in the public domain, yet opposed such rights in private, all the while having an extremely colorful private life herself. She was a valiant opponent of government and federal regulation in all forms. Publicly, she viewed Social Security, Medicare and other “big government” programs with utter disdain, their dependents nothing more than weak-minded loafers and “takers”. Privately, later in life, she accepted payments from Social Security and Medicare. Perhaps most paradoxically, Rand derided those who would fake their own reality, while at the same time being chronically dependent on mind-distorting amphetamines; popping speed while writing her keystones of objectivism: The Fountainhead and Atlas Shrugged.

[div class=attrib]From the Guardian:[end-div]

As an atheist Ayn Rand did not approve of shrines but the hushed, air-conditioned headquarters which bears her name acts as a secular version. Her walnut desk occupies a position of honour. She smiles from a gallery of black and white photos, young in some, old in others. A bronze bust, larger than life, tilts her head upward, jaw clenched, expression resolute.

The Ayn Rand Institute in Irvine, California, venerates the late philosopher as a prophet of unfettered capitalism who showed America the way. A decade ago it struggled to have its voice heard. Today its message booms all the way to Washington DC.

It was a transformation which counted Paul Ryan, chairman of the House budget committee, as a devotee. He gave Rand’s novel, Atlas Shrugged, as Christmas presents and hailed her as “the reason I got into public service”.

Then, last week, he was selected as the Republican vice-presidential nominee and his enthusiasm seemed to evaporate. In fact, the backtracking began earlier this year when Ryan said as a Catholic his inspiration was not Rand’s “objectivism” philosophy but Thomas Aquinas’.

The flap has illustrated an acute dilemma for the institute. Once peripheral, it has veered close to mainstream, garnering unprecedented influence. The Tea Party has adopted Rand as a seer and waves placards saying “We should shrug” and “Going Galt”, a reference to an Atlas Shrugged character named John Galt.

Prominent Republicans channel Rand’s arguments in promises to slash taxes and spending and to roll back government. But, like Ryan, many publicly renounce the controversial Russian emigre as a serious influence. Where, then, does that leave the institute, the keeper of her flame?

Given Rand’s association with plutocrats – she depicted captains of industry as “producers” besieged by parasitic “moochers” – the headquarters are unexpectedly modest. Founded in 1985, three years after Rand’s death, the institution moved in 2002 from Marina del Rey, west of Los Angeles, to a drab industrial park in Irvine, 90 minutes south, largely to save money. It shares a nondescript two-storey building with financial services and engineering companies.

There is little hint of Galt, the character who symbolises the power and glory of the human mind, in the bland corporate furnishings. But the quotations and excerpts adorning the walls echo a mission which drove Rand and continues to inspire followers as an urgent injunction.

“The demonstration of a new moral philosophy: the morality of rational self-interest.”

These, said Onkar Ghate, the institute’s vice-president, are relatively good times for Randians. “Our primary mission is to advance awareness of her ideas and promote her philosophy. I must say, it’s going very well.”

On that point, if none other, conservatives and progressives may agree. Thirty years after her death Rand, as a radical intellectual and political force, is going very well indeed. Her novel Atlas Shrugged, a 1,000 page assault on big government, social welfare and altruism first published in 1957, is reportedly selling more than 400,000 copies per year and is being made into a movie trilogy. Its radical author, who also penned The Fountainhead and other novels and essays, is the subject of a recent documentary and spate of books.

To critics who consider Rand’s philosophy that “of the psychopath, a misanthropic fantasy of cruelty, revenge and greed”, her posthumous success is alarming.

Relatively little attention however has been paid to the institute which bears her name and works, often behind the scenes, to direct her legacy and shape right-wing debate.


[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ayn Rand in 1957. Courtesy of Wikipedia.[end-div]

Philosophy and Science Fiction

We excerpt a fascinating article from io9 on the relationship between science fiction and philosophical inquiry. It’s quite remarkable that this genre of literature can provide such a rich vein for philosophers to mine, often more so than reality itself. Then again, it is no coincidence that our greatest authors of science fiction were, and are, amateur philosophers at heart.

[div class=attrib]From io9:[end-div]

People use science fiction to illustrate philosophy all the time. From ethical quandaries to the very nature of existence, science fiction’s most famous texts are tailor-made for exploring philosophical ideas. In fact, many college campuses now offer courses in the philosophy of science fiction.

But science fiction doesn’t just illuminate philosophy — in fact, the genre grew out of philosophy, and the earliest works of science fiction were philosophical texts. Here’s why science fiction has its roots in philosophy, and why it’s the genre of thought experiments about the universe.

Philosophical Thought Experiments As Science Fiction

Science fiction is a genre that uses strange worlds and inventions to illuminate our reality — sort of the opposite of a lot of other writing, which uses the familiar to build a portrait that cumulatively shows how insane our world actually is. People, especially early twenty-first century people, live in a world where strangeness lurks just beyond our frame of vision — but we can’t see it by looking straight at it. When we try to turn and confront the weird and unthinkable that’s always in the corner of our eye, it vanishes. In a sense, science fiction is like a prosthetic sense of peripheral vision.

We’re sort of like the people chained up in Plato’s cave, seeing only the shadows on the cave wall but never the full picture.

Plato is probably the best-known user of allegories — a form of writing which has a lot in common with science fiction. A lot of allegories are really thought experiments, trying out a set of strange facts to see what principles you derive from them. As plenty of people have pointed out, Plato’s Allegory of the Cave is the template for a million “what is reality” stories, from the works of Philip K. Dick to The Matrix. But you could almost see the cave allegory in itself as a proto-science fiction story, because of the strange worldbuilding that goes into these people who have never seen the “real” world. (Plato also gave us an allegory about the Ring of Gyges, which turns its wearer invisible — sound familiar?).

Later philosophers who ponder the nature of existence also seem to stray into weird science fiction territory — like Descartes, raising the notion that he, Descartes, could have existed since the beginning of the universe (as an alternative to God as a cause for Descartes’ existence). Sitting in his bread oven, Descartes tries to cut himself off from sensory input to see what he can deduce of the universe.

And by the same token, the philosophy of human nature often seems to depend on conjuring imaginary worlds, whether it be Hobbes’ “nasty, brutish and short” world without laws, or Rousseau’s “state of nature.” A great believer in the importance of science, Hobbes sees humans as essentially mechanistic beings who are programmed to behave in a selfish fashion — and the state is a kind of artificial human that can contain us and give us better programming, in a sense.

So not only can you use something like Star Trek’s Holodeck to point out philosophical notions of the fallibility of the senses, and the possible falseness of reality — philosophy’s own explorations of those sorts of topics are frequently kind of other-worldly. Philosophical thought experiments, like the oft-cited “state of nature,” are also close kin to science fiction world building. As Susan Schneider writes in the book Science Fiction and Philosophy, “if you read science fiction writers like Stanislaw Lem, Isaac Asimov, Arthur C. Clarke and Robert Sawyer, you are already aware that some of the best science fiction tales are in fact long versions of philosophical thought experiments.”

But meanwhile, when people come to list the earliest known works that could be considered “real” science fiction, they always wind up listing philosophical works, written by philosophers.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Front cover art for the book Nineteen Eighty-Four (1984) written by George Orwell. Courtesy of Secker and Warburg (London) / Wikipedia.[end-div]

Is It Good That Money Can Buy (Almost) Anything?

Money is a curious invention. It enables efficient and almost frictionless commerce and it allows us to assign tangible value to our time. Yet it poses enormous societal challenges and ethical dilemmas. For instance, should we bribe our children with money in return for better grades? Should we allow a chronically ill kidney patient to purchase a replacement organ from a donor?

Raghuram Rajan, professor of finance at the University of Chicago, reviews a fascinating new book that attempts to answer some of these questions. The book, “What Money Can’t Buy: The Moral Limits of the Market”, is written by noted Harvard philosopher Michael Sandel.

[div class=attrib]From Project Syndicate:[end-div]

In an interesting recent book, What Money Can’t Buy: The Moral Limits of the Market, the Harvard philosopher Michael Sandel points to the range of things that money can buy in modern societies and gently tries to stoke our outrage at the market’s growing dominance. Is he right that we should be alarmed?

While Sandel worries about the corrupting nature of some monetized transactions (do kids really develop a love of reading if they are bribed to read books?), he is also concerned about unequal access to money, which makes trades using money inherently unequal. More generally, he fears that the expansion of anonymous monetary exchange erodes social cohesion, and argues for reducing money’s role in society.

Sandel’s concerns are not entirely new, but his examples are worth reflecting upon. In the United States, some companies pay the unemployed to stand in line for free public tickets to congressional hearings. They then sell the tickets to lobbyists and corporate lawyers who have a business interest in the hearing but are too busy to stand in line.

Clearly, public hearings are an important element of participatory democracy. All citizens should have equal access. So selling access seems to be a perversion of democratic principles.

The fundamental problem, though, is scarcity. We cannot accommodate everyone in the room who might have an interest in a particularly important hearing. So we have to “sell” entry. We can either allow people to use their time (standing in line) to bid for seats, or we can auction seats for money. The former seems fairer, because all citizens seemingly start with equal endowments of time. But is a single mother with a high-pressure job and three young children as equally endowed with spare time as a student on summer vacation? And is society better off if she, the chief legal counsel for a large corporation, spends much of her time standing in line?

Whether it is better to sell entry tickets for time or for money thus depends on what we hope to achieve. If we want to increase society’s productive efficiency, people’s willingness to pay with money is a reasonable indicator of how much they will gain if they have access to the hearing. Auctioning seats for money makes sense – the lawyer contributes more to society by preparing briefs than by standing in line.

On the other hand, if it is important that young, impressionable citizens see how their democracy works, and that we build social solidarity by making corporate executives stand in line with jobless teenagers, it makes sense to force people to bid with their time and to make entry tickets non-transferable. But if we think that both objectives – efficiency and solidarity – should play some role, perhaps we should turn a blind eye to hiring the unemployed to stand in line in lieu of busy lawyers, so long as they do not corner all of the seats.

What about the sale of human organs, another example Sandel worries about? Something seems wrong when a lung or a kidney is sold for money. Yet we celebrate the kindness of a stranger who donates a kidney to a young child. So, clearly, it is not the transfer of the organ that outrages us – we do not think that the donor is misinformed about the value of a kidney or is being fooled into parting with it. Nor, I think, do we have concerns about the scruples of the person selling the organ – after all, they are parting irreversibly with something that is dear to them for a price that few of us would accept.

I think part of our discomfort has to do with the circumstances in which the transaction takes place. What kind of society do we live in if people have to sell their organs to survive?

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google.[end-div]

The Pros and Cons of Online Reviews

There is no doubt that online reviews for products and services, from books to new cars to vacation spots, have revolutionized shopping behavior. Internet and mobile technology has made gathering, reviewing and publishing open and honest crowdsourced opinion simple, efficient and ubiquitous.

However, the same tools that allow frank online discussion empower those wishing to cheat and manipulate the system. Cyberspace is rife with fake reviews, fake reviewers, inflated ratings, edited opinion, and paid insertions.

So, just as in any purchase transaction since the time when buyers and sellers first met, caveat emptor still applies.

[div class=attrib]From Slate:[end-div]

The Internet has fundamentally changed the way that buyers and sellers meet and interact in the marketplace. Online retailers make it cheap and easy to browse, comparison shop, and make purchases with the click of a mouse. The Web can also, in theory, make for better-informed purchases—both online and off—thanks to sites that offer crowdsourced reviews of everything from dog walkers to dentists.

In a Web-enabled world, it should be harder for careless or unscrupulous businesses to exploit consumers. Yet recent studies suggest that online reviewing is hardly a perfect consumer defense system. Researchers at Yale, Dartmouth, and USC have found evidence that hotel owners post fake reviews to boost their ratings on TripAdvisor—and might even be posting negative reviews of nearby competitors.

The preponderance of online reviews speaks to their basic weakness: Because it’s essentially free to post a review, it’s all too easy to dash off thoughtless praise or criticism, or, worse, to construct deliberately misleading reviews without facing any consequences. It’s what economists (and others) refer to as the cheap-talk problem. The obvious solution is to make it more costly to post a review, but that eliminates one of the main virtues of crowdsourcing: There is much more wisdom in a crowd of millions than in select opinions of a few dozen.

Of course, that wisdom depends on reviewers giving honest feedback. A few well-publicized incidents suggest that’s not always the case. For example, when Amazon’s Canadian site accidentally revealed the identities of anonymous book reviewers in 2004, it became apparent that many reviews came from publishers and from the authors themselves.

Technological idealists, perhaps not surprisingly, see a solution to this problem in cutting-edge computer science. One widely reported study last year showed that a text-analysis algorithm proved remarkably adept at detecting made-up reviews. The researchers instructed freelance writers to put themselves in the role of a hotel marketer who has been tasked by his boss with writing a fake customer review that is flattering to the hotel. They also compiled a set of comparison TripAdvisor reviews that the study’s authors felt were likely to be genuine. Human judges could not distinguish between the real ones and the fakes. But the algorithm correctly identified the reviews as real or phony with 90 percent accuracy by picking up on subtle differences, like whether the review described specific aspects of the hotel room layout (the real ones do) or mentioned matters that were unrelated to the hotel itself, like whether the reviewer was there on vacation or business (a marker of fakes). Great, but in the cat-and-mouse game of fraud vs. fraud detection, phony reviewers can now design feedback that won’t set off any alarm bells.

Just how prevalent are fake reviews? A trio of business school professors, Yale’s Judith Chevalier, Yaniv Dover of Dartmouth, and USC’s Dina Mayzlin, have taken a clever approach to inferring an answer by comparing the reviews on two travel sites, TripAdvisor and Expedia. In order to post an Expedia review, a traveler needs to have made her hotel booking through the site. Hence, a hotel looking to inflate its rating or malign a competitor would have to incur the cost of paying itself through the site, accumulating transaction fees and tax liabilities in the process. On TripAdvisor, all you need to post fake reviews are a few phony login names and email addresses.

Differences in the overall ratings on TripAdvisor versus Expedia could simply be the result of a more sympathetic community of reviewers. (In practice, TripAdvisor’s ratings are actually lower on average.) So Mayzlin and her co-authors focus on the places where the gaps between TripAdvisor and Expedia reviews are widest. In their analysis, they looked at hotels that probably appear identical to the average traveler but have different underlying ownership or management. There are, for example, companies that own scores of franchises from hotel chains like Marriott and Hilton. Other hotels operate under these same nameplates but are independently owned. Similarly, many hotels are run on behalf of their owners by large management companies, while others are owner-managed. The average traveler is unlikely to know the difference between a Fairfield Inn owned by, say, the Pillar Hotel Group and one owned and operated by Ray Fisman. The study’s authors argue that the small owners and independents have less to lose by trying to goose their online ratings (or torpedo the ratings of their neighbors), reasoning that larger companies would be more vulnerable to punishment, censure, and loss of business if their shenanigans were uncovered. (The authors give the example of a recent case in which a manager at Ireland’s Clare Inn was caught posting fake reviews. The hotel is part of the Lynch Hotel Group, and in the wake of the fake postings, TripAdvisor removed suspicious reviews from other Lynch hotels, and unflattering media accounts of the episode generated negative PR that was shared across all Lynch properties.)

The researchers find that, even comparing hotels under the same brand, small owners are around 10 percent more likely to get five-star reviews on TripAdvisor than they are on Expedia (relative to hotels owned by large corporations). The study also examines whether these small owners might be targeting the competition with bad reviews. The authors look at negative reviews for hotels that have competitors within half a kilometer. Hotels where the nearby competition comes from small owners have 16 percent more one- and two-star ratings than those with neighboring hotels that are owned by big companies like Pillar.

This isn’t to say that consumers are making a mistake by using TripAdvisor to guide them in their hotel reservations. Despite the fraudulent posts, there is still a high degree of concordance between the ratings assigned by TripAdvisor and Expedia. And across the Web, there are scores of posters who seem passionate about their reviews.

Consumers, in turn, do seem to take online reviews seriously. By comparing restaurants that fall just above and just below the threshold for an extra half-star on Yelp, Harvard Business School’s Michael Luca estimates that an extra star is worth an extra 5 to 9 percent in revenue. Luca’s intent isn’t to examine whether restaurants are gaming Yelp’s system, but his findings certainly indicate that they’d profit from trying. (Ironically, Luca also finds that independent restaurants—the establishments that Mayzlin et al. would predict are most likely to put up fake postings—benefit the most from an extra star. You don’t need to check out Yelp to know what to expect when you walk into McDonald’s or Pizza Hut.)

[div class=attrib]Read the entire article following the jump:[end-div]

[div class=attrib]Image courtesy of Mashable.[end-div]

When to Eat Your Fruit and Veg

It’s time to jettison the $1.99 hyper-burger and super-sized fries and try some real fruits and vegetables. You know — the kind of product that comes directly from the soil. But, when is the best time to suck on a juicy peach or chomp some crispy radicchio?

A great chart, below, summarizes which fruits and vegetables are generally in season for the Northern Hemisphere.

[div class=attrib]Infographic courtesy of Visual News, designed by Column Five.[end-div]

Extreme Weather as the New Norm

Melting glaciers at the poles, wildfires in the western United States, severe flooding across Europe and parts of Asia, hurricanes in northern Australia, warmer temperatures across the globe. According to many climatologists, including a growing number of ex-climate change skeptics, this is the new normal for our foreseeable future. Welcome to the changed climate.

[div class=attrib]From the New York Times:[end-div]

By many measurements, this summer’s drought is one for the record books. But so was last year’s drought in the South Central states. And it has been only a decade since an extreme five-year drought hit the American West. Widespread annual droughts, once a rare calamity, have become more frequent and are set to become the “new normal.”

Until recently, many scientists spoke of climate change mainly as a “threat,” sometime in the future. But it is increasingly clear that we already live in the era of human-induced climate change, with a growing frequency of weather and climate extremes like heat waves, droughts, floods and fires.

Future precipitation trends, based on climate model projections for the coming fifth assessment from the Intergovernmental Panel on Climate Change, indicate that droughts of this length and severity will be commonplace through the end of the century unless human-induced carbon emissions are significantly reduced. Indeed, assuming business as usual, each of the next 80 years in the American West is expected to see less rainfall than the average of the five years of the drought that hit the region from 2000 to 2004.

That extreme drought (which we have analyzed in a new study in the journal Nature-Geoscience) had profound consequences for carbon sequestration, agricultural productivity and water resources: plants, for example, took in only half the carbon dioxide they do normally, thanks to a drought-induced drop in photosynthesis.

In the drought’s worst year, Western crop yields were down by 13 percent, with many local cases of complete crop failure. Major river basins showed 5 percent to 50 percent reductions in flow. These reductions persisted up to three years after the drought ended, because the lakes and reservoirs that feed them needed several years of average rainfall to return to predrought levels.

In terms of severity and geographic extent, the 2000-4 drought in the West exceeded such legendary events as the Dust Bowl of the 1930s. While that drought saw intervening years of normal rainfall, the years of the turn-of-the-century drought were consecutive. More seriously still, long-term climate records from tree-ring chronologies show that this drought was the most severe event of its kind in the western United States in the past 800 years. Though there have been many extreme droughts over the last 1,200 years, only three other events have been of similar magnitude, all during periods of “megadroughts.”

Most frightening is that this extreme event could become the new normal: climate models point to a warmer planet, largely because of greenhouse gas emissions. Planetary warming, in turn, is expected to create drier conditions across western North America, because of the way global-wind and atmospheric-pressure patterns shift in response.

Indeed, scientists see signs of the relationship between warming and drought in western North America by analyzing trends over the last 100 years; evidence suggests that the more frequent drought and low precipitation events observed for the West during the 20th century are associated with increasing temperatures across the Northern Hemisphere.

These climate-model projections suggest that what we consider today to be an episode of severe drought might even be classified as a period of abnormal wetness by the end of the century and that a coming megadrought — a prolonged, multidecade period of significantly below-average precipitation — is possible and likely in the American West.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Sun.[end-div]

How Great Companies Fail

A fascinating case study shows how Microsoft failed its employees through misguided HR (human resources) policies that pitted colleague against colleague.

[div class=attrib]From the Guardian:[end-div]

The idea for today’s off-topic note came to me when I read “Microsoft’s lost decade”, an aptly titled Vanity Fair story. In the piece, Kurt Eichenwald tracks Microsoft’s decline as he revisits a decade of technical missteps and bad business decisions. Predictably, the piece has generated strong retorts from Microsoft’s Ministry of Truth and from Ballmer himself (“It’s not been a lost decade for me!” he barked from the tumbrel).

But I don’t come to bury Caesar – not yet; I’ll wait until actual numbers for Windows 8 and the Surface tablets emerge. Instead, let’s consider the centerpiece of Eichenwald’s article, his depiction of the cultural degeneracy and intramural paranoia that comes of a badly implemented performance review system.

Performance assessments are, of course, an important aspect of a healthy company. In order to maintain fighting weight, an organisation must honestly assay its employees’ contributions and cull the dead wood. This is tournament play, after all, and the coach must “release” players who can’t help get the team to the finals.

But Microsoft’s implementation – “stack ranking”, a bell curve that pits employees and groups against one another like rats in a cage – plunged the company into internecine fights, horse trading, and backstabbing.

…every unit was forced to declare a certain percentage of employees as top performers, then good performers, then average, then below average, then poor… For that reason, executives said, a lot of Microsoft superstars did everything they could to avoid working alongside other top-notch developers, out of fear that they would be hurt in the rankings.

Employees quickly realised that it was more important to focus on organisation politics than actual performance:

Every current and former Microsoft employee I interviewed – every one – cited stack ranking as the most destructive process inside of Microsoft, something that drove out untold numbers of employees.

This brought back bad memories of my corpocrat days working for a noted Valley company. When I landed here in 1985, I was dismayed by the pervasive presence of human resources, an éminence grise that cast a shadow across the entire organisation. Humor being the courtesy of despair, engineers referred to HR as the KGB or, for a more literary reference, the Bene Gesserit, monikers that knowingly imputed an efficiency to a department that offered anything but. Granted, there was no bell curve grading, no obligation to sacrifice the bottom 5%, but the politics were stifling nonetheless, the review process a painful charade.

In memory of those shenanigans, I’ve come up with a possible antidote to manipulative reviews, an attempt to deal honestly and pleasantly with the imperfections of life at work. (Someday I’ll write a Note about an equally important task: How to let go of people with decency – and without lawyers.)

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Telegraph / Microsoft.[end-div]

The Demise of Upward Mobility

Robert J. Samuelson paints a sobering picture of the once credible and seemingly attainable American Dream — the generational progress of upward mobility is no longer a given. He is the author of “The Great Inflation and Its Aftermath: The Past and Future of American Affluence”.

[div class=attrib]From Wilson Quarterly:[end-div]

The future of affluence is not what it used to be. Americans have long believed—it’s part of our national character—that our economic well-being will constantly increase. We see ourselves as a striving, inventive, and pragmatic people destined for higher living standards. History is a continuum of progress, from Robert Fulton’s steamboat to Henry Ford’s assembly line to Bill Gates’ software. Every generation will live better than its predecessors.

Well, maybe not.

For millions of younger Americans—say, those 40 and under—living better than their parents is a pipe dream. They won’t. The threat to their hopes does not arise from an impending collapse of technological gains of the sort epitomized by the creations of Fulton, Ford, and Gates. These advances will almost certainly continue, and per capita income—the average for all Americans and a conventional indicator of living standards—will climb. Statistically, American progress will resume. The Great Recession will be a bump, not a dead end.

The trouble is that many of these gains will bypass the young. The increases that might have fattened their paychecks will be siphoned off to satisfy other groups and other needs. Today’s young workers will have to finance Social Security and Medicare for a rapidly growing cohort of older Americans. Through higher premiums for employer-provided health insurance, they will subsidize care for others. Through higher taxes and fees, they will pay to repair aging infrastructure (roads, bridges, water systems) and to support squeezed public services, from schools to police.

The hit to their disposable incomes would matter less if the young were major beneficiaries of the resultant spending. In some cases—outlays for infrastructure and local services—they may be. But these are exceptions. By 2025 Social Security and Medicare will simply reroute income from the nearly four-fifths of the population that will be under 65 to the older one-fifth. And health care spending at all age levels is notoriously skewed: Ten percent of patients account for 65 percent of medical costs, reports the Kaiser Family Foundation. Although insurance provides peace of mind, the money still goes from young to old: Average health spending for those 45 to 64 is triple that for those 18 to 24.

The living standards of younger Americans will almost certainly suffer in comparison to those of their parents in a second crucial way. Our notion of economic progress is tied to financial security, but the young will have less of it. What good are higher incomes if they’re abruptly revoked? Though it wasn’t a second Great Depression, the Great Recession was a close call, shattering faith that modern economic policies made broad collapses impossible. Except for the savage 1980-82 slump, post-World War II recessions had been modest. Only minorities of Americans had suffered. By contrast, the Great Recession hurt almost everyone, through high unemployment, widespread home foreclosures, huge wealth losses in stocks and real estate—and fears of worse. A 2012 Gallup poll found that 68 percent of Americans knew someone who had lost a job.

The prospect of downward mobility is not just dispiriting. It assails the whole post–World War II faith in prosperity. Beginning in the 1950s, commentators celebrated the onrush of abundance as marking a new era in human progress. In his 1958 bestseller The Affluent Society, Harvard economist John Kenneth Galbraith announced the arrival of a “great and unprecedented affluence” that had eradicated the historical “poverty of the masses.”

Economic growth became a secular religion that was its own reward. Perhaps its chief virtue was that it dampened class conflict. In The Great Leap: The Past Twenty-Five Years in America (1966), John Brooks observed, “The middle class was enlarging itself and ever encroaching on the two extremes”—the very rich and the very poor. Business and labor could afford to reconcile because both could now share the fruits of expanding production. We could afford more spending on public services (education, health, environmental protection, culture) without depressing private incomes. Indeed, that was Galbraith’s main theme: Our prosperity could and should support both.

To be sure, there were crises of faith, moments when economic progress seemed delayed or doomed. The longest lapse occurred in the 1970s, when double-digit inflation spawned pessimism and frequent recessions, culminating in the 1980-82 downturn. Monthly unemployment peaked at 10.8 percent. But after Federal Reserve chairman Paul Volcker and President Ronald Reagan took steps to suppress high inflation, faith returned.
Now, it’s again imperiled. A 2011 Gallup poll found that 55 percent of Americans didn’t think their children would live as well as they did, the highest rate ever. We may face a crimped and contentious future.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ascending and Descending by M.C. Escher. Courtesy of M.C. Escher.[end-div]

Are You Cold or Hot? Depends on Your Politics

The United States is gripped by political deadlock. The Do-Nothing Congress consistently earns lower approval ratings than banks, Paris Hilton, lawyers, and even BP at the height of the Gulf of Mexico catastrophe. This stasis is driven by seemingly intractable ideological beliefs and a no-compromise attitude on both sides of the aisle.

So, it should come as no surprise that even your opinion of the weather and temperature is colored by your political persuasion.

Daniel Engber over at Slate sifts through some fascinating studies that highlight how our ingrained ideologies determine our worldview, down to even our basic view of the weather and our home thermostat setting.

[div class=attrib]From Slate:[end-div]

A few weeks ago, an academic journal called Weather, Climate and Society posted a curious finding about how Americans perceive the heat and cold. A team of researchers at the University of Oklahoma asked 8,000 adults living across the country to state both their political leanings and their impressions of the local weather. Are you a liberal or a conservative? Have average temperatures where you live been rising, falling, or staying about the same as previous years? Then they compared the answers to actual temperature readings from each respondent’s ZIP code. Would their sense of how it feels outside be colored by the way they think?

Yes it would, the study found. So much so, in fact, that the people surveyed all but ignored their actual experience. No matter what the weather records showed for a given neighborhood (despite the global trend, it had gotten colder in some places and warmer in others), conservatives and liberals fell into the same two camps. The former said that temperatures were decreasing or had stayed the same, and the latter claimed they were going up. “Actual temperature deviations proved to be a relatively weak predictor of perceptions,” wrote the authors. (Hat tip to Ars Technica for finding the study.)

People’s opinions, then, seem to have an effect on how they feel the air around them. If you believe in climate change and think the world is getting warmer, you’ll be more inclined to sense that warmth on a walk around the block. And if you tend to think instead in terms of crooked scientists and climate conspiracies, then the local weather will seem a little cooler. Either way, the Oklahoma study suggests that the experience of heat and cold derives from “a complex mix of direct observation, ideology, and cultural cognitions.”

It’s easy to see how these factors might play out when people make grand assessments of the weather that rely on several years’ worth of noisy data. But another complex mix of ideology and culture affects how we experience the weather from moment to moment—and how we choose to cope with it. In yesterday’s column, I discussed the environmental case against air conditioning, and the belief that it’s worse to be hypothermic than overheated. But there are other concerns, too, that make their rounds among the anti-A/C brrr-geoisie. Some view air conditioning itself as a threat to their comfort and their health.

The notion that stale, recycled air might be sickening or dangerous has been circulating for as long as we’ve had home cooling. According to historian Marsha E. Ackermann’s Cool Comfort: America’s Romance With Air-Conditioning, the invention of the air conditioner set off a series of debates among high-profile scholars over whether it was better to fill a building with fresh air or to close it off from the elements altogether. One side argued for ventilation even in the most miserable summer weather; the other claimed that a hot, damp breeze could be a hazard to your health. (The precursor to the modern air conditioner, invented by a Floridian named John Gorrie, was designed according to the latter theory. Gorrie thought his device would stave off malaria and yellow fever.)

The cooling industry worked hard to promote the idea that A/C makes us more healthy and productive, and in the years after World War II it gained acceptance as a standard home appliance. Still, marketers worried about a lingering belief in the importance of fresh air, and especially the notion that the “shock effect” of moving too quickly from warm to cold would make you sick. Some of these fears would be realized in a new and deadly form of pneumonia known as Legionnaires’ disease. In the summer of 1976, around 4,000 members of the Pennsylvania State American Legion met for a conference at the fancy, air-conditioned Bellevue Stratford Hotel in Philadelphia, and over the next month, more than 180 Legionnaires took ill. The bacteria responsible for their condition were found to be propagating in the hotel’s cooling tower. Twenty-nine people died from the disease, and we finally had proof that air conditioning posed a mortal danger to America.

A few years later, a new diagnosis began to spread around the country, based on a nebulous array of symptoms including sore throats and headache that seemed to be associated with indoor air. Epidemiologists called the illness “Sick Building Syndrome,” and looked for its source in large-scale heating and cooling ducts. Even today, the particulars of the condition—and the question of whether or not it really exists—have not been resolved. But there is some good evidence for the idea that climate-control systems can breed allergenic mold or other micro-organisms. For a study published in 2004, researchers in France checked the medical records of 920 middle-aged women, and found that the ones who worked in air-conditioned offices (about 15 percent of the total pool) were almost twice as likely to take sick days or make a visit to an ear-nose-throat doctor.

This will come as no surprise to those who already shun the air conditioner and worship in the cult of fresh air. Like the opponents of A/C from a hundred years ago, they blame the sealed environment for creating a miasma of illness and disease. Well, of course it’s unhealthy to keep the windows closed; you need a natural breeze to blow all those spores and germs away. But their old-fashioned plea invites a response that’s just as antique. Why should the air be any fresher in summer than winter (when so few would let it in)? And what about the dangers that “fresh air” might pose in cities where the breeze swirls with soot and dust? A 2009 study in the journal Epidemiology confirmed that air conditioning can help stave off the effects of particulate matter in the environment. Researchers checked the health records of senior citizens who did or didn’t have air conditioners installed in their homes and found that those who were forced to leave their windows open in the summer—and suck down the dirty air outside—were more likely to end up in the hospital for pollution-related cardiovascular disease. Other studies have found similar correlations between a lack of A/C on sooty days and hospitalization for chronic obstructive pulmonary disease and pneumonia.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Crosley Air Conditioning / Treehugger.[end-div]

The Benefits of Self-Deception

 

Psychologists have long studied the causes and characteristics of deception. In recent times they have had a huge pool of talented liars from which to draw — bankers, mortgage lenders, Enron executives, borrowers, and of course politicians. Now, researchers have begun to look at the art of self-deception, with some interesting results. Self-deception may be a useful tool in influencing others.

[div class=attrib]From the Wall Street Journal:[end-div]

Lying to yourself—or self-deception, as psychologists call it—can actually have benefits. And nearly everybody does it, based on a growing body of research using new experimental techniques.

Self-deception isn’t just lying or faking, but is deeper and more complicated, says Del Paulhus, psychology professor at University of British Columbia and author of a widely used scale to measure self-deceptive tendencies. It involves strong psychological forces that keep us from acknowledging a threatening truth about ourselves, he says.

Believing we are more talented or intelligent than we really are can help us influence and win over others, says Robert Trivers, an anthropology professor at Rutgers University and author of “The Folly of Fools,” a 2011 book on the subject. An executive who talks himself into believing he is a great public speaker may not only feel better as he performs, but increase “how much he fools people, by having a confident style that persuades them that he’s good,” he says.

Researchers haven’t studied large population samples to compare rates of self-deception or compared men and women, but they know based on smaller studies that it is very common. And scientists in many different disciplines are drawn to studying it, says Michael I. Norton, an associate professor at Harvard Business School. “It’s also one of the most puzzling things that humans do.”

Researchers disagree over what exactly happens in the brain during self-deception. Social psychologists say people deceive themselves in an unconscious effort to boost self-esteem or feel better. Evolutionary psychologists, who say different parts of the brain can harbor conflicting beliefs at the same time, say self-deception is a way of fooling others to our own advantage.

In some people, the tendency seems to be an inborn personality trait. Others may develop a habit of self-deception as a way of coping with problems and challenges.

Behavioral scientists in recent years have begun using new techniques in the laboratory to predict when and why people are likely to deceive themselves. For example, they may give subjects opportunities to inflate their own attractiveness, skill or intelligence. Then, they manipulate such variables as subjects’ mood, promises of rewards or opportunities to cheat. They measure how the prevalence of self-deception changes.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Truth or Consequences. Courtesy of CBS 1950-51 / Wikia.[end-div]

Shirking Life-As-Performance of a Social Network

Ex-Facebook employee number 51 gives us a glimpse from within the social network giant. It’s a tale of social isolation, shallow relationships, voyeurism, and narcissistic performance art. It’s also a tale of the re-discovery of life prior to “likes”, “status updates”, “tweets” and “followers”.

[div class=attrib]From the Washington Post:[end-div]

Not long after Katherine Losse left her Silicon Valley career and moved to this West Texas town for its artsy vibe and crisp desert air, she decided to make friends the old-fashioned way, in person. So she went to her Facebook page and, with a series of keystrokes, shut it off.

The move carried extra import because Losse had been the social network’s 51st employee and rose to become founder Mark Zuckerberg’s personal ghostwriter. But Losse gradually soured on the revolution in human relations she witnessed from within.

The explosion of social media, she believed, left hundreds of millions of users with connections that were more plentiful but also narrower and less satisfying, with intimacy losing out to efficiency. It was time, Losse thought, for people to renegotiate their relationships with technology.

“It’s okay to feel weird about this because I feel weird about this, and I was in the center of it,” said Losse, 36, who has long, dark hair and sky-blue eyes. “We all know there is an anxiety, there’s an unease, there’s a worry that our lives are changing.”

Her response was to quit her job — something made easier by the vested stock she cashed in — and to embrace the ancient toil of writing something in her own words, at book length, about her experiences and the philosophical questions they inspired.

That brought her to Marfa, a town of 2,000 people in an area so remote that astronomers long have come here for its famously dark night sky, beyond the light pollution that’s a byproduct of modern life.

Losse’s mission was oddly parallel. She wanted to live, at least for a time, as far as practical from the world’s relentless digital glow.

Losse was a graduate student in English at Johns Hopkins University in 2004 when Facebook began its spread, first at Harvard, then other elite schools and beyond. It provided a digital commons, a way of sharing personal lives that to her felt safer than the rest of the Internet.

The mix has proved powerful. More than 900 million people have joined; if they were citizens of a single country, Facebook Nation would be the world’s third largest.

At first, Losse was among those smitten. In 2005, after moving to Northern California in search of work, she responded to a query on the Facebook home page seeking résumés. Losse soon became one of the company’s first customer-service reps, replying to questions from users and helping to police abuses.

She was firmly on the wrong side of the Silicon Valley divide, which prizes the (mostly male) engineers over those, like Losse, with liberal arts degrees. Yet she had the sense of being on the ground floor of something exciting that might also yield a life-altering financial jackpot.

In her first days, she was given a master password that she said allowed her to see any information users typed into their Facebook pages. She could go into pages to fix technical problems and police content. Losse recounted sparring with a user who created a succession of pages devoted to anti-gay messages and imagery. In one exchange, she noticed the man’s password, “Ilovejason,” and was startled by the painful irony.

Another time, Losse cringed when she learned that a team of Facebook engineers was developing what they called “dark profiles” — pages for people who had not signed up for the service but who had been identified in posts by Facebook users. The dark profiles were not to be visible to ordinary users, Losse said, but if the person eventually signed up, Facebook would activate those latent links to other users.

All the world a stage

Losse’s unease sharpened when a celebrated Facebook engineer was developing the capacity for users to upload video to their pages. He started videotaping friends, including Losse, almost compulsively. On one road trip together, the engineer made a video of her napping in a car and uploaded it remotely to an internal Facebook page. Comments noting her siesta soon began appearing — only moments after it happened.

“The day before, I could just be in a car being in a car. Now my being in a car is a performance that is visible to everyone,” Losse said, exasperation creeping into her voice. “It’s almost like there is no middle of nowhere anymore.”

Losse began comparing Facebook to the iconic 1976 Eagles song “Hotel California,” with its haunting coda, “You can check out any time you like, but you can never leave.” She put a copy of the record jacket on prominent display in a house she and several other employees shared not far from the headquarters (then in Palo Alto, Calif.; it’s now in Menlo Park).

As Facebook grew, Losse’s career blossomed. She helped introduce Facebook to new countries, pushing for quick, clean translations into new languages. Later, she moved to the heart of the company as Zuckerberg’s ghostwriter, mimicking his upbeat yet efficient style of communicating in blog posts he issued.

But her concerns continued to grow. When Zuckerberg, apparently sensing this, said to Losse, “I don’t know if I trust you,” she decided she needed to either be entirely committed to Facebook or leave. She soon sold some of her vested stock. She won’t say how much, but the proceeds provided enough of a financial boon for her to go a couple of years without a salary, though not enough to stop working altogether, as some former colleagues have.

‘Touchy, private territory’

Among Losse’s concerns were the vast amount of personal data Facebook gathers. “They are playing on very touchy, private territory. They really are,” she said. “To not be conscious of that seems really dangerous.”

It wasn’t just Facebook. Losse developed a skepticism for many social technologies and the trade-offs they require.

Facebook and some others have portrayed proliferating digital connections as inherently good, bringing a sprawling world closer together and easing personal isolation.

Moira Burke, a researcher who trained at the Human-Computer Interaction Institute at Carnegie Mellon University and has since joined Facebook’s Data Team, tracked the moods of 1,200 volunteer users. She found that simply scanning the postings of others had little effect on well-being; actively participating in exchanges with friends, however, relieved loneliness.

Summing up her findings, she wrote on Facebook’s official blog, “The more people use Facebook, the better they feel.”

But Losse’s concerns about online socializing track with the findings of Sherry Turkle, a Massachusetts Institute of Technology psychologist who says users of social media have little understanding of the personal information they are giving away. Nor, she said, do many understand the potentially distorting consequences when they put their lives on public display, as what amounts to an ongoing performance on social media.

“In our online lives, we edit, we retouch, we clean up,” said Turkle, author of “Alone Together: Why We Expect More From Technology and Less From Each Other,” published in 2011. “We substitute what I call ‘connection’ for real conversation.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Boy Kings by Katherine Losse.[end-div]

A Climate Change Skeptic Recants

A climate change skeptic recants. Of course, committed disbelievers in human-influenced climate change will point to the fact that physicist Richard Muller announced his change of heart in a New York Times op-ed as evidence of flagrant falsehood and unmitigated bias.

Several years ago Muller set up the Berkeley Earth project to collect and analyze land-surface temperature records from sources independent of NASA and NOAA. Convinced at the time that climate change researchers had the numbers all wrong, Muller and his team set out to find the proof.

[div class=attrib]From the New York Times:[end-div]

CALL me a converted skeptic. Three years ago I identified problems in previous climate studies that, in my mind, threw doubt on the very existence of global warming. Last year, following an intensive research effort involving a dozen scientists, I concluded that global warming was real and that the prior estimates of the rate of warming were correct. I’m now going a step further: Humans are almost entirely the cause.

My total turnaround, in such a short time, is the result of careful and objective analysis by the Berkeley Earth Surface Temperature project, which I founded with my daughter Elizabeth. Our results show that the average temperature of the earth’s land has risen by two and a half degrees Fahrenheit over the past 250 years, including an increase of one and a half degrees over the most recent 50 years. Moreover, it appears likely that essentially all of this increase results from the human emission of greenhouse gases.

These findings are stronger than those of the Intergovernmental Panel on Climate Change, the United Nations group that defines the scientific and diplomatic consensus on global warming. In its 2007 report, the I.P.C.C. concluded only that most of the warming of the prior 50 years could be attributed to humans. It was possible, according to the I.P.C.C. consensus statement, that the warming before 1956 could be because of changes in solar activity, and that even a substantial part of the more recent warming could be natural.

Our Berkeley Earth approach used sophisticated statistical methods developed largely by our lead scientist, Robert Rohde, which allowed us to determine earth land temperature much further back in time. We carefully studied issues raised by skeptics: biases from urban heating (we duplicated our results using rural data alone), from data selection (prior groups selected fewer than 20 percent of the available temperature stations; we used virtually 100 percent), from poor station quality (we separately analyzed good stations and poor ones) and from human intervention and data adjustment (our work is completely automated and hands-off). In our papers we demonstrate that none of these potentially troublesome effects unduly biased our conclusions.

The historic temperature pattern we observed has abrupt dips that match the emissions of known explosive volcanic eruptions; the particulates from such events reflect sunlight, make for beautiful sunsets and cool the earth’s surface for a few years. There are small, rapid variations attributable to El Niño and other ocean currents such as the Gulf Stream; because of such oscillations, the “flattening” of the recent temperature rise that some people claim is not, in our view, statistically significant. What has caused the gradual but systematic rise of two and a half degrees? We tried fitting the shape to simple math functions (exponentials, polynomials), to solar activity and even to rising functions like world population. By far the best match was to the record of atmospheric carbon dioxide, measured from atmospheric samples and air trapped in polar ice.
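
For the technically curious, here is a minimal sketch in Python of what “fitting the shape” to competing explanations involves. It is not the Berkeley Earth code, and the series below are invented stand-ins; the point is only the mechanics of comparing least-squares fits by the fraction of variance each candidate explains.

    # Minimal sketch, illustrative only: invented stand-in data, not Berkeley Earth's
    # records or methods. Compares how well a smooth polynomial in time versus a
    # CO2-like curve explains a synthetic land-temperature anomaly, via least squares.
    import numpy as np

    years = np.arange(1760, 2011)
    rng = np.random.default_rng(0)

    # Hypothetical stand-ins for the real records (assumed shapes, not measurements)
    co2 = 280 + 110 * np.exp((years - 1760) / 90.0) / np.exp(250 / 90.0)  # ppm-like rise
    anomaly = 2.5 * (co2 - co2[0]) / (co2[-1] - co2[0])                   # deg F, tied to CO2
    temps = anomaly + rng.normal(0.0, 0.3, years.size)                    # add weather noise

    def variance_explained(predictors, y):
        """R^2 of an ordinary least-squares fit of y on the given predictor columns."""
        X = np.column_stack([np.ones_like(y)] + list(predictors))
        coeffs, *_ = np.linalg.lstsq(X, y, rcond=None)
        resid = y - X @ coeffs
        return 1.0 - resid.var() / y.var()

    t = (years - years[0]) / 100.0
    print("quadratic in time:", round(variance_explained([t, t**2], temps), 3))
    print("CO2 record:       ", round(variance_explained([co2], temps), 3))

In Muller’s account the carbon dioxide record wins this comparison on the real data; the sketch merely shows how such a horse race is scored.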

Just as important, our record is long enough that we could search for the fingerprint of solar variability, based on the historical record of sunspots. That fingerprint is absent. Although the I.P.C.C. allowed for the possibility that variations in sunlight could have ended the “Little Ice Age,” a period of cooling from the 14th century to about 1850, our data argues strongly that the temperature rise of the past 250 years cannot be attributed to solar changes. This conclusion is, in retrospect, not too surprising; we’ve learned from satellite measurements that solar activity changes the brightness of the sun very little.

How definite is the attribution to humans? The carbon dioxide curve gives a better match than anything else we’ve tried. Its magnitude is consistent with the calculated greenhouse effect — extra warming from trapped heat radiation. These facts don’t prove causality and they shouldn’t end skepticism, but they raise the bar: to be considered seriously, an alternative explanation must match the data at least as well as carbon dioxide does. Adding methane, a second greenhouse gas, to our analysis doesn’t change the results. Moreover, our analysis does not depend on large, complex global climate models, the huge computer programs that are notorious for their hidden assumptions and adjustable parameters. Our result is based simply on the close agreement between the shape of the observed temperature rise and the known greenhouse gas increase.

It’s a scientist’s duty to be properly skeptical. I still find that much, if not most, of what is attributed to climate change is speculative, exaggerated or just plain wrong. I’ve analyzed some of the most alarmist claims, and my skepticism about them hasn’t changed.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Global land-surface temperature with a 10-year moving average. Courtesy of Berkeley Earth.[end-div]

The Exceptionalism of American Violence

The United States is often cited as the most generous nation on Earth. Unfortunately, it is also one of the most violent, having one of the highest murder rates of any industrialized country. Why this tragic paradox?

In an absorbing article excerpted below, backed by sound research, anthropologist Eric Michael Johnson points to the lack of social capital on a local and national scale. Here, social capital is defined as interpersonal trust that promotes cooperation between citizens and groups for mutual benefit.

So, combine a culture that allows convenient access to very effective weapons with broad inequality, social isolation and distrust, and you get a very sobering picture — a country where roughly 35 people are killed each day by others wielding guns (25,423 firearm homicides in 2006-2007, based on Centers for Disease Control statistics).

[div class=attrib]From Scientific American:[end-div]

The United States is the deadliest wealthy country in the world. Can science help us explain, or even solve, our national crisis?

His tortured and sadistic grin beamed like a full moon on that dark night. “Madness, as you know, is like gravity,” he cackled. “All it takes is a little push.” But once the house lights rose, the terror was lifted for most of us. Few imagined that the fictive evil on screen back in 2008 would later inspire a depraved act of mass murder by a young man sitting with us in the audience, a student of neuroscience whose mind was teetering on the edge. What was it that pushed him over?

In the wake of the tragedy that struck Aurora, Colorado last Friday there remain more questions than answers. Just like last time–in January, 2011 when Congresswoman Gabrielle Giffords and 18 others were shot in Tucson, Arizona or before that in April, 2007 when a deranged gunman attacked students and staff at Virginia Tech–this senseless mass shooting has given rise to a national conversation as we struggle to find meaning in the madness.

While everyone agrees the blame should ultimately be placed on the perpetrator of this violence, the fact remains that the United States has one of the highest murder rates in the industrialized world. Of the 34 countries in the Organisation for Economic Co-operation and Development (OECD), the U.S. ranks fifth in homicides just behind Brazil (highest), Mexico, Russia, and Estonia. Our nation also holds the dubious honor of being responsible for half of the worst mass shootings in the last 30 years. How can we explain why the United States has nearly three times more murders per capita than neighboring Canada and ten times more than Japan? What makes the land of the free such a dangerous place to live?

Diagnosing a Murder

There have been hundreds of thoughtful explorations of this problem in the last week, though three in particular have encapsulated the major issues. Could it be, as science writer David Dobbs argues at Wired, that “an American culture that fetishizes violence,” such as the Batman franchise itself, has contributed to our fall? “Culture shapes the expression of mental dysfunction,” Dobbs writes, “just as it does other traits.”

Perhaps the push arrived with the collision of other factors, as veteran journalist Bill Moyers maintains, when the dark side of human nature encountered political allies who nurture our destructive impulses? “Violence is our alter ego, wired into our Stone Age brains,” he says. “The NRA is the best friend a killer’s instinct ever had.”

But then again maybe there is an economic explanation, as my Scientific American colleague John Horgan believes, citing a hypothesis by McMaster University evolutionary psychologists Martin Daly and his late wife Margo Wilson. “Daly and Wilson found a strong correlation between high Gini scores [a measure of inequality] and high homicide rates in Canadian provinces and U.S. counties,” Horgan writes, “blaming homicides not on poverty per se but on the collision of poverty and affluence, the ancient tug-of-war between haves and have-nots.”

In all three cases, as it was with other culprits such as the lack of religion in public schools or the popularity of violent video games (both of which are found in other wealthy countries and can be dismissed), commentators are looking at our society as a whole rather than specific details of the murderer’s background. The hope is that, if we can isolate the factor which pushes some people to murder their fellow citizens, perhaps we can alter our social environment and reduce the likelihood that these terrible acts will be repeated in the future. The only problem is, which one could it be?

The Exceptionalism of American Violence

As it turns out, the “social capital” that Sapolsky found made the Forest Troop baboons so peaceful is an important missing factor that can explain our high homicide rate in the United States. In 1999 Ichiro Kawachi at the Harvard School of Public Health led a study investigating the factors in American homicide for the journal Social Science and Medicine. His diagnosis was dire.

“If the level of crime is an indicator of the health of society,” Kawachi wrote, “then the US provides an illustrative case study as one of the most unhealthy of modern industrialized nations.” The paper outlined what the most significant causal factors were for this exaggerated level of violence by developing what was called “an ecological theory of crime.” Whereas many other analyses of homicide take a criminal justice approach to the problem–such as the number of cops on the beat, harshness of prison sentences, or adoption of the death penalty–Kawachi used a public health perspective that emphasized social relations.

In all 50 states and the District of Columbia data were collected using the General Social Survey that measured social capital (defined as interpersonal trust that promotes cooperation between citizens for mutual benefit), along with measures of poverty and relative income inequality, homicide rates, incidence of other crimes–rape, robbery, aggravated assault, burglary, larceny, and motor vehicle theft–unemployment, percentage of high school graduates, and average alcohol consumption. By using a statistical method known as principal component analysis Kawachi was then able to identify which ecologic variables were most associated with particular types of crime.

The results were unambiguous: when income inequality was higher, so was the rate of homicide. Income inequality alone explained 74% of the variance in murder rates and half of the aggravated assaults. However, social capital had an even stronger association and, by itself, accounted for 82% of homicides and 61% of assaults. Other factors such as unemployment, poverty, or number of high school graduates were only weakly associated and alcohol consumption had no connection to violent crime at all. A World Bank sponsored study subsequently confirmed these results on income inequality concluding that, worldwide, homicide and the unequal distribution of resources are inextricably tied. (see Figure 2). However, the World Bank study didn’t measure social capital. According to Kawachi it is this factor that should be considered primary; when the ties that bind a community together are severed inequality is allowed to run free, and with deadly consequences.
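
A quick aside on what “explained 74% of the variance” means operationally. The toy Python sketch below uses entirely invented state-level numbers (not Kawachi’s data, and skipping his principal component analysis); it shows only how such a figure is computed, namely as the R-squared of a regression across the 50 states and the District of Columbia.

    # Toy sketch with invented numbers, not Kawachi's data or full method.
    # Shows how "X explained N% of the variance in homicide rates" is computed:
    # the R^2 of a least-squares regression across the 50 states plus D.C.
    import numpy as np

    rng = np.random.default_rng(1)
    n = 51
    gini = rng.uniform(0.40, 0.50, n)            # hypothetical inequality scores
    trust = rng.uniform(0.2, 0.8, n)             # hypothetical social-capital index
    # Hypothetical homicide rate: rises with inequality, falls with trust, plus noise
    homicide = 5.0 + 40.0 * (gini - 0.45) - 6.0 * (trust - 0.5) + rng.normal(0.0, 0.8, n)

    def r_squared(x, y):
        """Fraction of variance in y explained by a straight-line fit on x."""
        slope, intercept = np.polyfit(x, y, 1)
        resid = y - (slope * x + intercept)
        return 1.0 - resid.var() / y.var()

    print(f"inequality explains {r_squared(gini, homicide):.0%} of the variance")
    print(f"social capital explains {r_squared(trust, homicide):.0%} of the variance")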

But what about guns? Multiple studies have shown a direct correlation between the number of guns and the number of homicides. The United States is the most heavily armed country in the world with 90 guns for every 100 citizens. Doesn’t this over-saturation of American firepower explain our exaggerated homicide rate? Maybe not. In a follow-up study in 2001 Kawachi looked specifically at firearm prevalence and social capital among U.S. states. The results showed that when social capital and community involvement declined, gun ownership increased (see Figure 3).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Smith & Wesson M&P Victory model revolver. Courtesy of Oleg Volk / Wikipedia.[end-div]

The Emperor Has Transparent Clothes

Hot from the TechnoSensual Exposition in Vienna, Austria, come clothes that can be made transparent or opaque, and clothes that can detect a wearer telling a lie. While the value of the former may seem dubious outside of the home, the latter invention should be a mandatory garment for all politicians and bankers. Or, for less adventurous millinery fashionistas, how about a hat that reacts to ambient radio waves?

All these innovations find their way from the realms of a Philip K. Dick science fiction novel, courtesy of the confluence of new technologies and innovative textile design.

[div class=attrib]From New Scientist:[end-div]

WHAT if the world could see your innermost emotions? For the wearer of the Bubelle dress created by Philips Design, it’s not simply a thought experiment.

Aptly nicknamed “the blushing dress”, the futuristic garment has an inner layer fitted with sensors that measure heart rate, respiration and galvanic skin response. The measurements are fed to 18 miniature projectors that shine corresponding colours, shapes, and intensities onto an outer layer of fabric – turning the dress into something like a giant, high-tech mood ring. As a natural blusher, I feel like I already know what it would be like to wear this dress – like going emotionally, instead of physically, naked.

The Bubelle dress is just one of the technologically enhanced items of clothing on show at the Technosensual exhibition in Vienna, Austria, which celebrates the overlapping worlds of technology, fashion and design.

Other garments are even more revealing. Holy Dress, created by Melissa Coleman and Leonie Smelt, is a wearable lie detector – that also metes out punishment. Using voice-stress analysis, the garment is designed to catch the wearer out in a lie, whereupon it twinkles conspicuously and gives her a small shock. Though the garment is beautiful, a slim white dress under a geometric structure of copper tubes, I’d rather try it on a politician than myself. “You can become a martyr for truth,” says Coleman. To make it, she hacked a 1990s lie detector and added a novelty shocking pen.

Laying the wearer bare in a less metaphorical way, a dress that alternates between opaque and transparent is also on show. Designed by the exhibition’s curator, Anouk Wipprecht with interactive design laboratory Studio Roosegaarde, Intimacy 2.0 was made using conductive liquid crystal foil. When a very low electrical current is applied to the foil, the liquid crystals stand to attention in parallel, making the material transparent. Wipprecht expects the next iteration could be available commercially. It’s time to take the dresses “out of the museum and get them on the streets”, she says.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Taiknam Hat, a hat sensitive to ambient radio waves. Courtesy of Ricardo O’Nascimento, Ebru Kurbak, Fabiana Shizue / New Scientist.[end-div]

Crony Capitalism

We excerpt below a fascinating article from the WSJ on the increasingly incestuous and damaging relationship between the finance industry and our political institutions.

[div class=attrib]From the Wall Street Journal:[end-div]

Mitt Romney’s résumé at Bain should be a slam dunk. He has been a successful capitalist, and capitalism is the best thing that has ever happened to the material condition of the human race. From the dawn of history until the 18th century, every society in the world was impoverished, with only the thinnest film of wealth on top. Then came capitalism and the Industrial Revolution. Everywhere that capitalism subsequently took hold, national wealth began to increase and poverty began to fall. Everywhere that capitalism didn’t take hold, people remained impoverished. Everywhere that capitalism has been rejected since then, poverty has increased.

Capitalism has lifted the world out of poverty because it gives people a chance to get rich by creating value and reaping the rewards. Who better to be president of the greatest of all capitalist nations than a man who got rich by being a brilliant capitalist?

Yet it hasn’t worked out that way for Mr. Romney. “Capitalist” has become an accusation. The creative destruction that is at the heart of a growing economy is now seen as evil. Americans increasingly appear to accept the mind-set that kept the world in poverty for millennia: If you’ve gotten rich, it is because you made someone else poorer.

What happened to turn the mood of the country so far from our historic celebration of economic success?

Two important changes in objective conditions have contributed to this change in mood. One is the rise of collusive capitalism. Part of that phenomenon involves crony capitalism, whereby the people on top take care of each other at shareholder expense (search on “golden parachutes”).

But the problem of crony capitalism is trivial compared with the collusion engendered by government. In today’s world, every business’s operations and bottom line are affected by rules set by legislators and bureaucrats. The result has been corruption on a massive scale. Sometimes the corruption is retail, whereby a single corporation creates a competitive advantage through the cooperation of regulators or politicians (search on “earmarks”). Sometimes the corruption is wholesale, creating an industrywide potential for profit that would not exist in the absence of government subsidies or regulations (like ethanol used to fuel cars and low-interest mortgages for people who are unlikely to pay them back). Collusive capitalism has become visible to the public and increasingly defines capitalism in the public mind.

Another change in objective conditions has been the emergence of great fortunes made quickly in the financial markets. It has always been easy for Americans to applaud people who get rich by creating products and services that people want to buy. That is why Thomas Edison and Henry Ford were American heroes a century ago, and Steve Jobs was one when he died last year.

When great wealth is generated instead by making smart buy and sell decisions in the markets, it smacks of inside knowledge, arcane financial instruments, opportunities that aren’t accessible to ordinary people, and hocus-pocus. The good that these rich people have done in the process of getting rich is obscure. The benefits of more efficient allocation of capital are huge, but they are really, really hard to explain simply and persuasively. It looks to a large proportion of the public as if we’ve got some fabulously wealthy people who haven’t done anything to deserve their wealth.

The objective changes in capitalism as it is practiced plausibly account for much of the hostility toward capitalism. But they don’t account for the unwillingness of capitalists who are getting rich the old-fashioned way—earning it—to defend themselves.

I assign that timidity to two other causes. First, large numbers of today’s successful capitalists are people of the political left who may think their own work is legitimate but feel no allegiance to capitalism as a system or kinship with capitalists on the other side of the political fence. Furthermore, these capitalists of the left are concentrated where it counts most. The most visible entrepreneurs of the high-tech industry are predominantly liberal. So are most of the people who run the entertainment and news industries. Even leaders of the financial industry increasingly share the politics of George Soros. Whether measured by fundraising data or by the members of Congress elected from the ZIP Codes where they live, the elite centers with the most clout in the culture are filled with people who are embarrassed to identify themselves as capitalists, and it shows in the cultural effect of their work.

Another factor is the segregation of capitalism from virtue. Historically, the merits of free enterprise and the obligations of success were intertwined in the national catechism. McGuffey’s Readers, the books on which generations of American children were raised, have plenty of stories treating initiative, hard work and entrepreneurialism as virtues, but just as many stories praising the virtues of self-restraint, personal integrity and concern for those who depend on you. The freedom to act and a stern moral obligation to act in certain ways were seen as two sides of the same American coin. Little of that has survived.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Industrial Revolution brought about the end of true capitalism. Courtesy: Time Life Pictures/Mansell/Time Life Pictures/Getty Images.[end-div]

 

Modern Music Versus The Oldies

When it comes to music, a generational gap has always been with us, separating young from old. Thus, without fail, parents will remark that the music listened to by their kids is loud and monotonous, nothing like the varied and much better music that they consumed in their younger days.

Well, this common, and perhaps universal, observation is now backed by some ground-breaking and objective research. So, adults over the age of 40, take heart — your music really is better than what’s playing today! And, if you are a parent, you may bask in the knowledge that your music really is better than that of your kids. That said, the comparative merits of your 1980s “Hi Fi” system versus your kids’ docking stations with 5.1 surround and subwoofer earbuds remain thoroughly unsettled.

[div class=attrib]From the Telegraph:[end-div]

The scepticism about modern music shared by many middle-aged fans has been vindicated by a study of half a century’s worth of pop music, which found that today’s hits really do all sound the same.

Parents who find their children’s thumping stereos too much to bear will also be comforted to know that it isn’t just the effect of age: modern songs have also grown progressively louder over the past 50 years.

The study, by Spanish researchers, analysed an archive known as the Million Song Dataset to discover how the course of music changed between 1955 and 2010.

While loudness has steadily increased since the 1950s, the team found that the variety of chords, melodies and types of sound being used by musicians has become ever smaller.

Joan Serra of the Spanish National Research Council, who led the study published in the Scientific Reports journal, said: “We found evidence of a progressive homogenisation of the musical discourse.

“The diversity of transitions between note combinations – roughly speaking chords plus melodies – has consistently diminished in the past 50 years.”

The “timbre” of songs – the number of different tones they include, for example from different instruments – has also become narrower, he added.
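
To make “diversity of transitions between note combinations” concrete, one simple way to quantify it (my own simplification; the study’s actual measure is more elaborate) is to count chord-to-chord transitions across a corpus and compute the entropy of that distribution. Lower entropy means a more homogenised harmonic vocabulary, as in this toy Python sketch:

    # Toy sketch of measuring chord-transition diversity in a set of songs.
    # My own simplification, not the metric used in the Scientific Reports study.
    from collections import Counter
    from math import log2

    def transition_entropy(songs):
        """Shannon entropy (bits) of the chord-to-chord transition distribution."""
        counts = Counter()
        for chords in songs:
            counts.update(zip(chords, chords[1:]))   # consecutive chord pairs
        total = sum(counts.values())
        return -sum((c / total) * log2(c / total) for c in counts.values())

    # Hypothetical corpora: a varied older set versus a repetitive four-chord set
    older = [["C", "Am", "F", "G", "Em", "A7", "Dm", "G7"],
             ["D", "Bm", "G", "A", "F#m", "E7"]]
    newer = [["C", "G", "Am", "F", "C", "G", "Am", "F"],
             ["C", "G", "Am", "F", "C", "G", "Am", "F"]]

    print("older corpus:", round(transition_entropy(older), 2), "bits")
    print("newer corpus:", round(transition_entropy(newer), 2), "bits")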

The study was the first to conduct a large-scale measurement of “intrinsic loudness”, or the volume a song is recorded at, which determines how loud it will sound compared with other songs at a particular setting on an amplifier.

It appeared to support long-standing claims that the music industry is engaged in a “loudness war” in which volumes are gradually being increased.

Although older songs may be more varied and rich, the researchers advised that they could be made to sound more “fashionable and groundbreaking” if they were re-recorded and made blander and louder.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of HomeTheatre.[end-div]

Men Seen as Bodies; Women Seen as Body Parts

Yet another research study of gender differences shows some fascinating variation in the way we see and process images of men and women. Men tend to be perceived as a whole; women, on the other hand, are more likely to be perceived as parts.

[div class=attrib]From Scientific American:[end-div]

A glimpse at the magazine rack in any supermarket checkout line will tell you that women are frequently the focus of sexual objectification. Now, new research finds that the brain actually processes images of women differently than those of men, contributing to this trend.

Women are more likely to be picked apart by the brain and seen as parts rather than a whole, according to research published online June 29 in the European Journal of Social Psychology. Men, on the other hand, are processed as a whole rather than the sum of their parts.

“Everyday, ordinary women are being reduced to their sexual body parts,” said study author Sarah Gervais, a psychologist at the University of Nebraska, Lincoln. “This isn’t just something that supermodels or porn stars have to deal with.”

Objectification hurts
Numerous studies have found that feeling objectified is bad for women. Being ogled can make women do worse on math tests, and self-sexualization, or scrutiny of one’s own shape, is linked to body shame, eating disorders and poor mood.

But those findings have all focused on the perception of being sexualized or objectified, Gervais told LiveScience. She and her colleagues wondered about the eye of the beholder: Are people really objectifying women more than men?

To find out, the researchers focused on two types of mental processing, global and local. Global processing is how the brain identifies objects as a whole. It tends to be used when recognizing people, where it’s not just important to know the shape of the nose, for example, but also how the nose sits in relation to the eyes and mouth. Local processing focuses more on the individual parts of an object. You might recognize a house by its door alone, for instance, while you’re less likely to recognize a person’s arm without the benefit of seeing the rest of their body.

If women are sexually objectified, people should process their bodies in a more local way, focusing on individual body parts like breasts. To test the idea, Gervais and her colleagues carried out two nearly identical experiments with a total of 227 undergraduate participants. Each person was shown non-sexualized photographs, each of either a young man or young woman, 48 in total. After seeing each original full-body image, the participants saw two side-by-side photographs. One was the original image, while the other was the original with a slight alteration to the chest or waist (chosen because these are sexualized body parts). Participants had to pick which image they’d seen before.

In some cases, the second set of photos zoomed in on the chest or waist only, asking participants to pick the body part they’d seen previously versus the one that had been altered.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: People focus on the parts of a woman’s body when processing her image, according to research published in June in the European Journal of Social Psychology. Courtesy of LiveScience / Yuri Arcurs, Shutterstock.[end-div]

Best Countries for Women

If you’re female and value lengthy life expectancy, comprehensive reproductive health services, sound education and equality with males, where should you live? In short, Scandinavia, Australia and New Zealand, and Northern Europe. In a list of the 44 most well-developed nations, the United States ranks towards the middle, just below Canada and Estonia, but above Greece, Italy, Russia and most of Central and Eastern Europe.

The fascinating infographic from the National Post does a great job of summarizing the current state of women’s affairs, drawing on data gathered from 165 countries.

[div class=attrib]Read the entire article and find a higher quality infographic after the jump.[end-div]

Living Organism as Software

For the first time scientists have built a computer software model of an entire organism from its molecular building blocks. This allows the model to predict previously unobserved cellular biological processes and behaviors. While the organism in question is a simple bacterium, this represents another huge advance in computational biology.

[div class=attrib]From the New York Times:[end-div]

Scientists at Stanford University and the J. Craig Venter Institute have developed the first software simulation of an entire organism, a humble single-cell bacterium that lives in the human genital and respiratory tracts.

The scientists and other experts said the work was a giant step toward developing computerized laboratories that could carry out complete experiments without the need for traditional instruments.

For medical researchers and drug designers, cellular models will be able to supplant experiments during the early stages of screening for new compounds. And for molecular biologists, models that are of sufficient accuracy will yield new understanding of basic biological principles.

The simulation of the complete life cycle of the pathogen, Mycoplasma genitalium, was presented on Friday in the journal Cell. The scientists called it a “first draft” but added that the effort was the first time an entire organism had been modeled in such detail — in this case, all of its 525 genes.

“Where I think our work is different is that we explicitly include all of the genes and every known gene function,” the team’s leader, Markus W. Covert, an assistant professor of bioengineering at Stanford, wrote in an e-mail. “There’s no one else out there who has been able to include more than a handful of functions or more than, say, one-third of the genes.”

The simulation, which runs on a cluster of 128 computers, models the complete life span of the cell at the molecular level, charting the interactions of 28 categories of molecules — including DNA, RNA, proteins and small molecules known as metabolites that are generated by cell processes.

“The model presented by the authors is the first truly integrated effort to simulate the workings of a free-living microbe, and it should be commended for its audacity alone,” wrote the Columbia scientists Peter L. Freddolino and Saeed Tavazoie in a commentary that accompanied the article. “This is a tremendous task, involving the interpretation and integration of a massive amount of data.”

They called the simulation an important advance in the new field of computational biology, which has recently yielded such achievements as the creation of a synthetic life form — an entire bacterial genome created by a team led by the genome pioneer J. Craig Venter. The scientists used it to take over an existing cell.

For their computer simulation, the researchers had the advantage of extensive scientific literature on the bacterium. They were able to use data taken from more than 900 scientific papers to validate the accuracy of their software model.

Still, they said that the model of the simplest biological system was pushing the limits of their computers.

“Right now, running a simulation for a single cell to divide only one time takes around 10 hours and generates half a gigabyte of data,” Dr. Covert wrote. “I find this fact completely fascinating, because I don’t know that anyone has ever asked how much data a living thing truly holds. We often think of the DNA as the storage medium, but clearly there is more to it than that.”

In designing their model, the scientists chose an approach that parallels the design of modern software systems, known as object-oriented programming. Software designers organize their programs in modules, which communicate with one another by passing data and instructions back and forth.

Similarly, the simulated bacterium is a series of modules that mimic the different functions of the cell.

“The major modeling insight we had a few years ago was to break up the functionality of the cell into subgroups which we could model individually, each with its own mathematics, and then to integrate these sub-models together into a whole,” Dr. Covert said. “It turned out to be a very exciting idea.”
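
For readers who write software, the “sub-models integrated into a whole” idea maps directly onto the modular pattern described above. The Python sketch below is my own heavily simplified illustration, not the Stanford model (which couples 28 molecule categories and all 525 genes on a 128-computer cluster): independent process modules each update a shared cell state once per time step.

    # Heavily simplified illustration of the modular pattern described above.
    # Not the Stanford/JCVI model: that integrates 28 molecule categories with
    # module-specific mathematics and runs on a 128-computer cluster.

    class CellState:
        """Shared state that every process module reads and updates."""
        def __init__(self):
            self.metabolites = 1000.0
            self.rna = 0.0
            self.protein = 0.0

    class Metabolism:
        def step(self, cell, dt):
            cell.metabolites += 50.0 * dt            # nutrient uptake, arbitrary rate

    class Transcription:
        def step(self, cell, dt):
            made = min(cell.metabolites, 5.0 * dt)   # RNA synthesis consumes metabolites
            cell.metabolites -= made
            cell.rna += made

    class Translation:
        def step(self, cell, dt):
            made = min(cell.rna, 2.0 * dt)           # protein synthesis consumes RNA
            cell.rna -= made
            cell.protein += made

    def simulate(hours=9.0, dt=0.01):
        cell = CellState()
        modules = [Metabolism(), Transcription(), Translation()]
        for _ in range(int(hours / dt)):
            for module in modules:                   # each sub-model advances the shared state
                module.step(cell, dt)
        return cell

    cell = simulate()
    print(f"protein after a nine-hour run: {cell.protein:.1f} arbitrary units")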

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A Whole-Cell Computational Model Predicts Phenotype from Genotype. Courtesy of Cell / Elsevier Inc.[end-div]

Die Zombie, Die Zombie

Helen Sword cuts through (pun intended) the corporate-speak that continues to encroach upon our literature, particularly in business and academia, with a plea to kill our “zombie nouns”. Her latest book is “Stylish Academic Writing”.

[div class=attrib]From the New York Times:[end-div]

Take an adjective (implacable) or a verb (calibrate) or even another noun (crony) and add a suffix like ity, tion or ism. You’ve created a new noun: implacability, calibration, cronyism. Sounds impressive, right?

Nouns formed from other parts of speech are called nominalizations. Academics love them; so do lawyers, bureaucrats and business writers. I call them “zombie nouns” because they cannibalize active verbs, suck the lifeblood from adjectives and substitute abstract entities for human beings:

The proliferation of nominalizations in a discursive formation may be an indication of a tendency toward pomposity and abstraction.

The sentence above contains no fewer than seven nominalizations, each formed from a verb or an adjective. Yet it fails to tell us who is doing what. When we eliminate or reanimate most of the zombie nouns (tendency becomes tend, abstraction becomes abstract) and add a human subject and some active verbs, the sentence springs back to life:

Writers who overload their sentences with nominalizations tend to sound pompous and abstract.

Only one zombie noun – the key word nominalizations – has been allowed to remain standing.
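
Sword’s suffix rule is mechanical enough to automate. The short Python sketch below is my own crude heuristic, not anything from her book: it flags likely nominalizations by their endings, with a rough allowance for plurals, so you can count the zombie nouns shambling through a draft.

    # Crude zombie-noun detector based on the suffixes mentioned above, plus a few
    # common relatives. A heuristic sketch, not a linguistic parser: ordinary words
    # such as "station" or "government" will be flagged as false positives.
    import re

    SUFFIXES = ("ity", "tion", "sion", "ism", "ment", "ance", "ence", "ancy", "ency")

    def zombie_nouns(text):
        words = re.findall(r"[A-Za-z]+", text.lower())
        hits = []
        for word in words:
            stem = word[:-1] if word.endswith("s") else word   # crude plural handling
            if len(stem) > 6 and stem.endswith(SUFFIXES):
                hits.append(word)
        return hits

    sample = ("The proliferation of nominalizations in a discursive formation "
              "may be an indication of a tendency toward pomposity and abstraction.")
    print(zombie_nouns(sample))
    # ['proliferation', 'nominalizations', 'formation', 'indication',
    #  'tendency', 'pomposity', 'abstraction']

Run on Sword’s example sentence, it turns up all seven of the nominalizations she counts.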

At their best, nominalizations help us express complex ideas: perception, intelligence, epistemology. At their worst, they impede clear communication. I have seen academic colleagues become so enchanted by zombie nouns like heteronormativity and interpellation that they forget how ordinary people speak. Their students, in turn, absorb the dangerous message that people who use big words are smarter – or at least appear to be – than those who don’t.

In fact, the more abstract your subject matter, the more your readers will appreciate stories, anecdotes, examples and other handholds to help them stay on track. In her book “Darwin’s Plots,” the literary historian Gillian Beer supplements abstract nouns like evidence, relationships and beliefs with vivid verbs (rebuff, overturn, exhilarate) and concrete nouns that appeal to sensory experience (earth, sun, eyes):

Most major scientific theories rebuff common sense. They call on evidence beyond the reach of our senses and overturn the observable world. They disturb assumed relationships and shift what has been substantial into metaphor. The earth now only seems immovable. Such major theories tax, affront, and exhilarate those who first encounter them, although in fifty years or so they will be taken for granted, part of the apparently common-sense set of beliefs which instructs us that the earth revolves around the sun whatever our eyes may suggest.

Her subject matter – scientific theories – could hardly be more cerebral, yet her language remains firmly anchored in the physical world.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of PLOS (The Public Library of Science).[end-div]

Two Degrees

Author and environmentalist Bill McKibben has been writing about climate change and environmental issues for over 20 years. His first book, The End of Nature, was published in 1989, and is considered to be the first book aimed at the general public on the subject of climate change.

In his latest essay in Rolling Stone, which we excerpt below, McKibben offers a sobering assessment based on our current lack of action on a global scale. He argues that in the face of governmental torpor, and with individual action being almost inconsequential (at this late stage), only a radical re-invention of the fossil-fuel industry — into an energy industry in the broadest sense — can bring significant and lasting change.

Learn more about Bill McKibben, here.

[div class=attrib]From Rolling Stone:[end-div]

If the pictures of those towering wildfires in Colorado haven’t convinced you, or the size of your AC bill this summer, here are some hard numbers about climate change: June broke or tied 3,215 high-temperature records across the United States. That followed the warmest May on record for the Northern Hemisphere – the 327th consecutive month in which the temperature of the entire globe exceeded the 20th-century average, the odds of which occurring by simple chance were 3.7 x 10^-99, a number considerably larger than the number of stars in the universe.
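
A note on that eye-popping figure, by way of my own back-of-the-envelope reconstruction rather than anything McKibben spells out: if each month independently had a 50/50 chance of coming in above the 20th-century average, the probability of 327 warm months in a row works out to

    (1/2)^327 = 2^-327 ≈ 3.6 x 10^-99

which matches the quoted odds. The “larger than the number of stars in the universe” comparison presumably refers to the reciprocal of that probability, a figure on the order of 10^99.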

Meteorologists reported that this spring was the warmest ever recorded for our nation – in fact, it crushed the old record by so much that it represented the “largest temperature departure from average of any season on record.” The same week, Saudi authorities reported that it had rained in Mecca despite a temperature of 109 degrees, the hottest downpour in the planet’s history.

Not that our leaders seemed to notice. Last month the world’s nations, meeting in Rio for the 20th-anniversary reprise of a massive 1992 environmental summit, accomplished nothing. Unlike George H.W. Bush, who flew in for the first conclave, Barack Obama didn’t even attend. It was “a ghost of the glad, confident meeting 20 years ago,” the British journalist George Monbiot wrote; no one paid it much attention, footsteps echoing through the halls “once thronged by multitudes.” Since I wrote one of the first books for a general audience about global warming way back in 1989, and since I’ve spent the intervening decades working ineffectively to slow that warming, I can say with some confidence that we’re losing the fight, badly and quickly – losing it because, most of all, we remain in denial about the peril that human civilization is in.

When we think about global warming at all, the arguments tend to be ideological, theological and economic. But to grasp the seriousness of our predicament, you just need to do a little math. For the past year, an easy and powerful bit of arithmetical analysis first published by financial analysts in the U.K. has been making the rounds of environmental conferences and journals, but it hasn’t yet broken through to the larger public. This analysis upends most of the conventional political thinking about climate change. And it allows us to understand our precarious – our almost-but-not-quite-finally hopeless – position with three simple numbers.

The First Number: 2° Celsius

If the movie had ended in Hollywood fashion, the Copenhagen climate conference in 2009 would have marked the culmination of the global fight to slow a changing climate. The world’s nations had gathered in the December gloom of the Danish capital for what a leading climate economist, Sir Nicholas Stern of Britain, called the “most important gathering since the Second World War, given what is at stake.” As Danish energy minister Connie Hedegaard, who presided over the conference, declared at the time: “This is our chance. If we miss it, it could take years before we get a new and better one. If ever.”

In the event, of course, we missed it. Copenhagen failed spectacularly. Neither China nor the United States, which between them are responsible for 40 percent of global carbon emissions, was prepared to offer dramatic concessions, and so the conference drifted aimlessly for two weeks until world leaders jetted in for the final day. Amid considerable chaos, President Obama took the lead in drafting a face-saving “Copenhagen Accord” that fooled very few. Its purely voluntary agreements committed no one to anything, and even if countries signaled their intentions to cut carbon emissions, there was no enforcement mechanism. “Copenhagen is a crime scene tonight,” an angry Greenpeace official declared, “with the guilty men and women fleeing to the airport.” Headline writers were equally brutal: COPENHAGEN: THE MUNICH OF OUR TIMES? asked one.

The accord did contain one important number, however. In Paragraph 1, it formally recognized “the scientific view that the increase in global temperature should be below two degrees Celsius.” And in the very next paragraph, it declared that “we agree that deep cuts in global emissions are required… so as to hold the increase in global temperature below two degrees Celsius.” By insisting on two degrees – about 3.6 degrees Fahrenheit – the accord ratified positions taken earlier in 2009 by the G8, and the so-called Major Economies Forum. It was as conventional as conventional wisdom gets. The number first gained prominence, in fact, at a 1995 climate conference chaired by Angela Merkel, then the German minister of the environment and now the center-right chancellor of the nation.

Some context: So far, we’ve raised the average temperature of the planet just under 0.8 degrees Celsius, and that has caused far more damage than most scientists expected. (A third of summer sea ice in the Arctic is gone, the oceans are 30 percent more acidic, and since warm air holds more water vapor than cold, the atmosphere over the oceans is a shocking five percent wetter, loading the dice for devastating floods.) Given those impacts, in fact, many scientists have come to think that two degrees is far too lenient a target. “Any number much above one degree involves a gamble,” writes Kerry Emanuel of MIT, a leading authority on hurricanes, “and the odds become less and less favorable as the temperature goes up.” Thomas Lovejoy, once the World Bank’s chief biodiversity adviser, puts it like this: “If we’re seeing what we’re seeing today at 0.8 degrees Celsius, two degrees is simply too much.” NASA scientist James Hansen, the planet’s most prominent climatologist, is even blunter: “The target that has been talked about in international negotiations for two degrees of warming is actually a prescription for long-term disaster.” At the Copenhagen summit, a spokesman for small island nations warned that many would not survive a two-degree rise: “Some countries will flat-out disappear.” When delegates from developing nations were warned that two degrees would represent a “suicide pact” for drought-stricken Africa, many of them started chanting, “One degree, one Africa.”

Despite such well-founded misgivings, political realism bested scientific data, and the world settled on the two-degree target – indeed, it’s fair to say that it’s the only thing about climate change the world has settled on. All told, 167 countries responsible for more than 87 percent of the world’s carbon emissions have signed on to the Copenhagen Accord, endorsing the two-degree target. Only a few dozen countries have rejected it, including Kuwait, Nicaragua and Venezuela. Even the United Arab Emirates, which makes most of its money exporting oil and gas, signed on. The official position of planet Earth at the moment is that we can’t raise the temperature more than two degrees Celsius – it’s become the bottomest of bottom lines. Two degrees.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Emissions from industry have helped increase the levels of carbon dioxide in the atmosphere, driving climate change. Courtesy of New Scientist / Eye Ubiquitous / Rex Features.[end-div]

Beware, Big Telecom is Watching You

Facebook trawls your profile, status updates and friends to target ads more effectively. It also allows third parties, for a fee, to mine mountains of aggregated data for juicy analyses. Many online companies do the same. However, some are taking this to a whole new and very personal level.

Here’s an example from Germany. Politician Malte Spitz gathered 6 months of his personal geolocation data from his mobile phone company. Then, he combined this data with his activity online, such as Twitter updates, blog entries and website visits. The interactive results seen here, plotted over time and space, show the detailed extent to which an individual’s life is being tracked and recorded.
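
None of this requires sophisticated tooling. A minimal sketch in Python of the same idea — merging phone-location records with a log of public online activity into a single timeline — is below; the file names and column names are hypothetical placeholders, not the actual Telekom or Zeit data format.

# Fuse phone-location records with public online activity into one timeline.
# File names and column names are hypothetical placeholders, not the real
# Deutsche Telekom / Zeit Online data format.
import csv
from datetime import datetime

def load_events(path, source):
    """Read (timestamp, description) rows and tag each with its source."""
    with open(path, newline="", encoding="utf-8") as f:
        return [
            (datetime.fromisoformat(row["timestamp"]), source, row["description"])
            for row in csv.DictReader(f)
        ]

timeline = sorted(
    load_events("phone_locations.csv", "cell tower")
    + load_events("online_activity.csv", "online"),
    key=lambda event: event[0],
)

for when, source, description in timeline:
    print(f"{when:%Y-%m-%d %H:%M}  [{source}]  {description}")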

[div class=attrib]From Zeit Online:[end-div]

By pushing the play button, you will set off on a trip through Malte Spitz’s life. The speed controller allows you to adjust how fast you travel, the pause button will let you stop at interesting points. In addition, a calendar at the bottom shows when he was in a particular location and can be used to jump to a specific time period. Each column corresponds to one day.

Not surprisingly, Spitz had to sue his phone company, Deutsche Telekom, to gain access to his own phone data.

[div class=attrib]From TED:[end-div]

On August 31, 2009, politician Malte Spitz traveled from Berlin to Erlangen, sending 29 text messages as he traveled. On November 5, 2009, he rocked out to U2 at the Brandenburg Gate. On January 10, 2010, he made 10 outgoing phone calls while on a trip to Dusseldorf, and spent 22 hours, 53 minutes and 57 seconds of the day connected to the internet.

How do we know all this? By looking at a detailed, interactive timeline of Spitz’s life, created using information obtained from his cell phone company, Deutsche Telekom, between September 2009 and February 2010.

In an impassioned talk given at TEDGlobal 2012, Spitz, a member of Germany’s Green Party, recalls his multiple-year quest to receive this data from his phone company. And he explains why he decided to make this shockingly precise log into public information in the newspaper Die Zeit – to sound a warning bell of sorts.

“If you have access to this information, you can see what your society is doing,” says Spitz. “If you have access to this information, you can control your country.”

[div class=attrib]Read the entire article after the jump.[end-div]

Your Life Expectancy Mapped

Your life expectancy mapped, that is, if you live in London, U.K. So, take the iconic London tube (subway) map, then overlay it with figures for average life expectancy. Voila, you get to see how your neighbors on the Piccadilly Line fare in their longevity compared with, say, you, who happen to live near a Central Line station. It turns out that in some cases adjacent areas — as depicted by nearby but different subway stations — show an astounding gap of more than 20 years in projected life span.

So, what is at work? And, more importantly, should you move to Bond Street, where the average life expectancy is 96 years, versus only 79 in Kennington, south London?
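
The mechanics behind the map are simple enough to sketch in a few lines of Python: pair consecutive stations on a line with life-expectancy estimates and report the jump between neighboring stops. The figures below are illustrative placeholders loosely based on the numbers quoted in the article, not the UCL dataset itself.

# Pair consecutive stations on a line with life-expectancy estimates and
# report the gap between neighboring stops. Figures are illustrative
# placeholders, not the UCL "Lives on the Line" data.
central_line_segment = [
    ("Bond Street", 96),
    ("Oxford Circus", 96),
    ("Tottenham Court Road", 91),  # hypothetical value
    ("Holborn", 85),               # hypothetical value
]

for (a, life_a), (b, life_b) in zip(central_line_segment, central_line_segment[1:]):
    print(f"{a} -> {b}: {life_a - life_b:+d} year(s) difference in life expectancy")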

[div class=attrib]From the Atlantic:[end-div]

Last year’s dystopian action flick In Time has Justin Timberlake playing a street rat who suddenly comes into a great deal of money — only the currency isn’t cash, it’s time. Hours and minutes of Timberlake’s life that can be traded just like dollars and cents in our world. Moving from poor districts to rich ones, and vice versa, requires Timberlake to pay a toll, each time shaving off a portion of his life savings.

Literally paying with your life just to get around town seems like — you guessed it — pure science fiction. It’s absolute baloney to think that driving or taking a crosstown bus could result in a shorter life (unless you count this). But a project by University College London researchers called Lives on the Line echoes something similar with a map that plots local differences in life expectancy based on the nearest Tube stop.

The trends are largely unsurprising, and correlate mostly with wealth. Britons living in the ritzier West London tend to have longer expected lifespans compared to those who live in the east or the south. Those residing near the Oxford Circus Tube stop have it the easiest, with an average life expectancy of 96 years. Going into less wealthy neighborhoods in south and east London, life expectancy begins to drop — though it still hovers in the respectable range of 78-79.

Meanwhile, differences in life expectancy between even adjacent stations can be stark. Britons living near Pimlico are predicted to live six years longer than those just across the Thames near Vauxhall. There’s about a two-decade difference between those living in central London compared to those near some stations on the Docklands Light Railway, according to the BBC. Similarly, moving from Tottenham Court Road to Holborn will also shave six years off the Londoner’s average life expectancy.

Michael Marmot, a UCL professor who wasn’t involved in the project, put the numbers in international perspective.

“The difference between Hackney and the West End,” Marmot told the BBC, “is the same as the difference between England and Guatemala in terms of life expectancy.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Atlantic / MappingLondon.co.uk.[end-div]