Category Archives: Idea Soup

Guns, Freedom and the Uncivil Society

Associate professor of philosophy Firmin DeBrabander argues that guns have no place in a civil society. Guns hinder free speech and free assembly for those at either end of the barrel. They fragment our society and undermine the sense and mechanisms of community. He is right.

[div class=attrib]From the New York Times:[end-div]

The night of the shootings at Sandy Hook Elementary School in Newtown, Conn., I was in the car with my wife and children, working out details for our eldest son’s 12th birthday the following Sunday — convening a group of friends at a showing of the film  “The Hobbit.” The memory of the Aurora movie theatre massacre was fresh in his mind, so he was concerned that it not be a late night showing. At that moment, like so many families, my wife and I were weighing whether to turn on the radio and expose our children to coverage of the school shootings in Connecticut. We did. The car was silent in the face of the flood of gory details. When the story was over, there was a long thoughtful pause in the back of the car. Then my eldest son asked if he could be homeschooled.

That incident brought home to me what I have always suspected, but found difficult to articulate: an armed society — especially as we prosecute it at the moment in this country — is the opposite of a civil society.

The Newtown shootings occurred at a peculiar time in gun rights history in this nation. On one hand, since the mid 1970s, fewer households each year on average have had a gun. Gun control advocates should be cheered by that news, but it is eclipsed by a flurry of contrary developments. As has been well publicized, gun sales have steadily risen over the past few years, and spiked with each of Obama’s election victories.

Furthermore, of the weapons that proliferate amongst the armed public, an increasing number are high caliber weapons (the weapon of choice in the goriest shootings in recent years). Then there is the legal landscape, which looks bleak for the gun control crowd.

Every state except for Illinois has a law allowing the carrying of concealed weapons — and just last week, a federal court struck down Illinois’ ban. States are now lining up to allow guns on college campuses. In September, Colorado joined four other states in such a move, and statehouses across the country are preparing similar legislation. And of course, there was Oklahoma’s ominous Open Carry Law approved by voters this election day — the fifteenth of its kind, in fact — which, as the name suggests, allows those with a special permit to carry weapons in the open, with a holster on their hip.

Individual gun ownership — and gun violence — has long been a distinctive feature of American society, setting us apart from the other industrialized democracies of the world. Recent legislative developments, however, are progressively bringing guns out of the private domain, with the ultimate aim of enshrining them in public life. Indeed, the N.R.A. strives for a day when the open carry of powerful weapons might be normal, a fixture even, of any visit to the coffee shop or grocery store — or classroom.

As N.R.A. president Wayne LaPierre expressed in a recent statement on the organization’s Web site, more guns equal more safety, by their account. A favorite gun rights saying is “an armed society is a polite society.” If we allow ever more people to be armed, at any time, in any place, this will provide a powerful deterrent to potential criminals. Or if more citizens were armed — like principals and teachers in the classroom, for example — they could halt senseless shootings ahead of time, or at least early on, and save society a lot of heartache and bloodshed.

As ever more people are armed in public, however — even brandishing weapons on the street — this is no longer recognizable as a civil society. Freedom is vanished at that point.

And yet, gun rights advocates famously maintain that individual gun ownership, even of high caliber weapons, is the defining mark of our freedom as such, and the ultimate guarantee of our enduring liberty. Deeper reflection on their argument exposes basic fallacies.

In her book “The Human Condition,” the philosopher Hannah Arendt states that “violence is mute.” According to Arendt, speech dominates and distinguishes the polis, the highest form of human association, which is devoted to the freedom and equality of its component members. Violence — and the threat of it — is a pre-political manner of communication and control, characteristic of undemocratic organizations and hierarchical relationships. For the ancient Athenians who practiced an incipient, albeit limited form of democracy (one that we surely aim to surpass), violence was characteristic of the master-slave relationship, not that of free citizens.

Arendt offers two points that are salient to our thinking about guns: for one, they insert a hierarchy of some kind, but fundamental nonetheless, and thereby undermine equality. But furthermore, guns pose a monumental challenge to freedom, and in particular, the liberty that is the hallmark of any democracy worthy of the name — that is, freedom of speech. Guns do communicate, after all, but in a way that is contrary to free speech aspirations: for, guns chasten speech.

This becomes clear if only you pry a little more deeply into the N.R.A.’s logic behind an armed society. An armed society is polite, by their thinking, precisely because guns would compel everyone to tamp down eccentric behavior, and refrain from actions that might seem threatening. The suggestion is that guns liberally interspersed throughout society would cause us all to walk gingerly — not make any sudden, unexpected moves — and watch what we say, how we act, whom we might offend.

As our Constitution provides, however, liberty entails precisely the freedom to be reckless, within limits, also the freedom to insult and offend as the case may be. The Supreme Court has repeatedly upheld our right to experiment in offensive language and ideas, and in some cases, offensive action and speech. Such experimentation is inherent to our freedom as such. But guns by their nature do not mix with this experiment — they don’t mix with taking offense. They are combustible ingredients in assembly and speech.

I often think of the armed protestor who showed up to one of the famously raucous town hall hearings on Obamacare in the summer of 2009. The media was very worked up over this man, who bore a sign that invoked a famous quote of Thomas Jefferson, accusing the president of tyranny. But no one engaged him at the protest; no one dared approach him even, for discussion or debate — though this was a town hall meeting, intended for just such purposes. Such is the effect of guns on speech — and assembly. Like it or not, they transform the bearer, and end the conversation in some fundamental way. They announce that the conversation is not completely unbounded, unfettered and free; there is or can be a limit to negotiation and debate — definitively.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Blind Loyalty and the Importance of Critical Thinking

Two landmark studies in the 1960s and ’70s put behavioral psychology squarely in the public consciousness. The obedience experiments by Stanley Milgram and the Stanford Prison Experiment demonstrated how readily ordinary individuals could be made to obey figures in authority and to subject others to humiliation, suffering and pain.

A re-examination of these experiments and several similar recent studies has prompted a number of psychologists to reinterpret the original conclusions. They suggest that humans may not be inherently evil after all. However, we remain dangerously flawed: our willingness to follow those in authority, especially those with whom we identify, makes us susceptible to believing in the virtue of actions that by any standard would be monstrous. It turns out that an open mind able to think critically may be the best antidote.

[div class=attrib]From the Pacific Standard:[end-div]

They are among the most famous of all psychological studies, and together they paint a dark portrait of human nature. Widely disseminated in the media, they spread the belief that people are prone to blindly follow authority figures—and will quickly become cruel and abusive when placed in positions of power.

It’s hard to overstate the impact of Stanley Milgram’s obedience experiments of 1961, or the Stanford Prison Experiment of 1971. Yet in recent years, the conclusions derived from those studies have been, if not debunked, radically reinterpreted.

A new perspective—one that views human nature in a more nuanced light—is offered by psychologists Alex Haslam of the University of Queensland, Australia, and Stephen Reicher of the University of St. Andrews in Scotland.

In an essay published in the open-access journal PLoS Biology, they argue that people will indeed comply with the questionable demands of authority figures—but only if they strongly identify with that person, and buy into the rightness of those beliefs.

In other words, we’re not unthinking automatons. Nor are we monsters waiting for permission for our dark sides to be unleashed. However, we are more susceptible to psychological manipulation than we may realize.

In Milgram’s study, members of the general public were placed in the role of “teacher” and told that a “learner” was in a nearby room. Each time the “learner” failed to correctly recall a word as part of a memory experiment, the “teacher” was told to administer an electrical shock.

As the “learner” kept making mistakes, the “teacher” was ordered to give him stronger and stronger jolts of electricity. If a participant hesitated, the experimenter—an authority figure wearing a white coat—instructed him to continue.

Somewhat amazingly, most people did so: 65 percent of participants continued to give stronger and stronger shocks until the experiment ended with the “learner” apparently unconscious. (The torture was entirely fictional; no actual shocks were administered.)

To a world still reeling from the question of why so many Germans obeyed orders and carried out Nazi atrocities, here was a clear answer: We are predisposed to obey authority figures.

The Stanford Prison Experiment, conducted a decade later, was equally unnerving. Students were randomly assigned to assume the role of either prisoner or guard in a “prison” set up in the university’s psychology department. As Haslam and Reicher note, “such was the abuse meted out to the prisoners by the guards that the study had to be terminated after just six days.”

Lead author Philip Zimbardo, who assumed the role of “prison superintendent” with a level of zeal he later found frightening, concluded that brutality was “a natural consequence of being in the uniform of a guard and asserting the power inherent in that role.”

So is all this proof of the “banality of evil,” to use historian Hannah Arendt’s memorable phrase? Not really, argue Haslam and Reicher. They point to their own work on the BBC Prison Study, which mimicked the seminal Stanford study.

They found that participants “did not conform automatically to their assigned role” as prisoner or guard. Rather, there was a period of resistance, which ultimately gave way to a “draconian” new hierarchy. Before becoming brutal, the participants needed time to assume their new identities, and internalize their role in the system.

Once they did so, “the hallmark of the tyrannical regime was not conformity, but creative leadership and engaged followership within a group of true believers,” they write. “This analysis mirrors recent conclusions about the Nazi tyranny.”

[div class=attrib]Read the entire article after the jump.[end-div]

Apocalypse Now… First, Brew Some Tea

We love stories of dystopian futures, apocalyptic prophecies and nightmarish visions here at theDiagonal. For some of our favorite articles on the end of days, check out end of world predictions, and how the world may end.

The next impending catastrophe is due a mere week from now, on December 21st, 2012, according to Mayan-watchers. So, of course, it’s time to make final preparations for the end of the world, again. Not to be outdone by the Mayans, the British, guardians of the stiff upper lip, have some timely advice for doomsayers and doomsday aficionados. After all, only the British could come up with a propaganda poster during the Second World War emblazoned with “Keep Calm and Carry On”. While there is some very practical advice, such as “leave extra time for journeys”, we find fault with the British authorities for not suggesting “take time to make a good, strong cup of tea”.

[div class=attrib]From the Independent:[end-div]

With the world edging ever closer to what some believe could be an end of days catastrophe that will see the planet and its inhabitants destroyed, British authorities have been issuing tongue in cheek advice on how to prepare.

The advice comes just two weeks ahead of the day that some believe will mark the end of the world.

According to some interpretations of the ancient Mayan calendar the 21st of December will signal the end of a 5,125-year cycle known as the Long Count – and will bring about the apocalypse.

There have been scattered reports of panic buying of candles and essentials in China and Russia. There has also been a reported hike in the sales of survival shelters in America.

An official US government blog was published last week saying it was “just rumours” and insisting that “the world will not end on December 21, 2012, or any day in 2012”.

In France, authorities have even taken steps to prevent access to Bugarach mountain, which is thought by some to be a sacred place that will protect them from the end of the world.

Reports claimed websites in the US were selling tickets to access the mountain on the 21st.

In the UK, however, the impending apocalypse is being treated with dead-pan humour by some organisations.

The AA has advised: “Before heading off, take time to do the basic checks on your car and allow extra time for your journey.

“Local radio is a good source of traffic and weather updates and for any warnings of an impending apocalypse. Should the announcer break such solemn news, try to remain focused on the road ahead and keep your hands on the wheel.”

A London Fire Brigade spokesman issued the following advice: “Fit a smoke alarm on each level of your home, then at least you might stand a chance of knowing that the end of the world is nigh ahead of those who don’t.

“If you survive the apocalypse you’ll be alerted to a fire more quickly should one ever break out.”

An RSPCA [Royal Society for the Prevention of Cruelty to Animals] spokesman offered advice for animal lovers ahead of apocalypse saying: “Luckily for animals, they do not have the same fears of the future – or its imminent destruction – as us humans, so it is unlikely that our pets will be worrying about the end of the world.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Digital scan of original KEEP CALM AND CARRY ON poster owned by wartimeposters.co.uk. Courtesy of Wikipedia.[end-div]

We The People… Want a Twinkie

The old adage “be careful what you wish for, lest it come true” warns that desires may well come to fruition, but often with unintended consequences. In this case, for the White House. Just over a year ago the administration launched an online drive to foster dialogue and participation in civic affairs. Known as “We the People: Your Voice in Our Government”, the program allows individuals to petition the government on any important issue of the day. And while White House officials may have had a discussion of substantive issues in mind, many petitions are somewhat more off the wall. Some of our favorite colorful petitions, many of which have garnered thousands of signatures to date, include:

“Legalize home distillation for home spirits!”

“Secure resources and funding, and begin construction of a Death Star by 2016.”

“Nationalize the Twinkie industry.”

“Peacefully grant the State of Texas to withdraw from the United States of America and create its own NEW government.”

“Peacefully grant the city of Austin Texas to withdraw from the state of Texas & remain part of the United States.”

“Allow the city of El Paso to secede from the state of Texas. El Paso is tired of being a second class city within Texas.”

“Legalize the use of DMT, magic mushrooms, and mescaline for all people.”

“Outlaw offending prophets of major religions.”

“Legally recognize the tea party as a hate group and remove them from office for treason against the United States.”

“Give us back our incandescent lightbulbs! We, the undersigned, want the freedom to choose our own lightbulbs.”

“Create and Approve The MICHAEL JOSEPH JACKSON National Holiday.”

[div class=attrib]From the Washington Post:[end-div]

Forget the “fiscal cliff”: When it comes to the nation’s most pressing concerns, other matters trump financial calamity.

Several thousand Americans, for example, are calling on President Obama to nationalize the troubled Twinkies industry to prevent the loss of the snack cake’s “sweet creamy center.”

Thousands more have signed petitions calling on the White House to replace the courts with a single Hall of Justice, remove Jerry Jones as owner of the Dallas Cowboys, give federal workers a holiday on Christmas Eve, allow members of the military to put their hands in their pockets and begin construction of a “Star Wars”-style Death Star by 2016.

And that’s just within the past month.

The people have spoken, but it might not be what the Obama administration expected to hear. More than a year after it was launched, an ambitious White House online petition program aimed at encouraging civic participation has become cluttered with thousands of demands that are often little more than extended Internet jokes. Interest has escalated in the wake of Obama’s reelection, which spurred more than a dozen efforts from tens of thousands of petitioners seeking permission for their states to secede from the union.

The idea, dubbed “We the People” and modeled loosely on a British government program, was meant to encourage people to exercise their First Amendment rights by collecting enough electronic signatures to meet a threshold that would guarantee an official administration response. (The level was initially set at 5,000 signatures, but that was quickly raised to 25,000 after the public responded a little too enthusiastically.)

Administration officials have spent federal time and tax dollars answering petitioner demands that the government recognize extraterrestrial life, allow online poker, legalize marijuana, remove “under God” from the Pledge of Allegiance and ban Rush Limbaugh from Armed Forces Network radio.

The last issue merited a formal response from the Defense Department: “AFN does not censor content, and we believe it is important that service members have access to a variety of viewpoints,” spokesman Bryan G. Whitman wrote to the more than 29,000 people who signed the anti-Limbaugh petition.

The “We the People” program emerged in the news last week when petitioners demanded that Obama block an appearance at Sunday’s “Christmas in Washington” concert by Psy, the South Korean “Gangnam Style” singer who is under fire for anti-American lyrics. The program’s rules require that petitions relate to “current or potential actions or policies of the federal government,” prompting the White House to pull down the petition because Obama has no authority over booking at the privately run charitable event.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: We The People. U.S. Constitution. Courtesy of Wikipedia.[end-div]

What Thomas Jefferson Never Said

Commentators of all political persuasions often cite Jefferson to add weight and gravitas to a particular point or position. Yet scholarly analysis shows that many quotes are incorrectly attributed to the Founding Father and 3rd President. Some examples of words never spoken or written by Jefferson:

“Dissent is the highest form of patriotism.”

“The democracy will cease to exist when you take away from those who are willing to work and give to those who would not.”

“My reading of history convinces me that most bad government results from too much government.”

“The beauty of the Second Amendment is that it will not be needed until they try to take it.”

[div class=attrib]From the WSJ:[end-div]

Thomas Jefferson once famously wrote, “All tyranny needs to gain a foothold is for people of good conscience to remain silent.”

Or did he? Numerous social movements attribute the quote to him. “The Complete Idiot’s Guide to U.S. Government and Politics” cites it in a discussion of American democracy. Actor Chuck Norris’s 2010 treatise “Black Belt Patriotism: How to Reawaken America” uses it to urge conservatives to become more involved in politics. It is even on T-shirts and decals.

Yet the founding father and third U.S. president never wrote it or said it, insists Anna Berkes, a 33-year-old research librarian at the Jefferson Library at Monticello, his grand estate just outside Charlottesville, Va. Nor does he have any connection to many of the “Jeffersonian” quotes that politicians on both sides of the aisle have slung back and forth in recent years, she says.

“People will see a quote and it appeals to an opinion that they have and if it has Jefferson’s name attached to it that gives it more weight,” she says. “He’s constantly being invoked by people when they are making arguments about politics and actually all sorts of topics.”

A spokeswoman for the Guide’s publisher said it was looking into the quote. Mr. Norris’s publicist didn’t respond to requests for comment.

To counter what she calls rampant misattribution, Ms. Berkes is fighting the Internet with the Internet. She has set up a “Spurious Quotations” page on the Monticello website listing bogus quotes attributed to the founding father, a prolific writer and rhetorician who was the principal author of the Declaration of Independence.

The fake quotes posted and dissected on Monticello.org include “My reading of history convinces me that most bad government has grown out of too much government.” In detailed footnotes, Ms. Berkes says it resembles a line Jefferson wrote in an 1807 letter: “History, in general, only informs us what bad government is.” But she can’t find that exact quotation in any of his writings.

Another that graces many epicurean websites: “On a hot day in Virginia, I know nothing more comforting than a fine spiced pickle, brought up trout-like from the sparkling depths of the aromatic jar below the stairs of Aunt Sally’s cellar.”

Jefferson never said that either, says Ms. Berkes. The earliest reference to the quote comes from a 1922 speech by a man extolling the benefits of pickles, she says.

Jefferson is a “flypaper figure,” like Abraham Lincoln, Mark Twain, Winston Churchill and baseball player and manager Yogi Berra—larger-than-life figures who have fake or misattributed quotes stick to them all the time, says Ralph Keyes, an author of books about quotes wrongly credited to famous or historical figures.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Reproduction of the 1805 Rembrandt Peale painting of Thomas Jefferson, New York Historical Society. Courtesy of Wikipedia.[end-div]

From Man’s Best Friend to a Girl’s Best Friend

Chances are that you have a pet. And whether you’re a dog person or a cat person, a bird fancier or a lover of lizards, you’d probably mourn if you were to lose your furry, feathery or scaly friend. So, when your pet crosses over to the other side, why not pulverize her or him, filter out any non-carbon remains and then compress the results into, well, a diamond?

[div class=attrib]From WSJ:[end-div]

Natalie Pilon’s diamond is her best friend.

Every time she looks into the ring on her finger, Ms. Pilon sees Meowy, her late beloved silver cat. Meowy really is there: The ring’s two diamonds were made from her cremated remains.

“It’s a little eccentric—not something everyone would do,” says Ms. Pilon, a biotech sales representative in Boston, whose cat passed away last year. “It’s a way for me to remember my cat, and have her with me all the time.”

Americans have a long tradition of pampering and memorializing their pets. Now, technology lets precious friends become precious gems.

The idea of turning the carbon in ashes into man-made diamonds emerged a decade ago as a way to memorialize humans. Today, departed pets are fueling the industry’s growth, with a handful of companies selling diamonds, gemstones and other jewelry out of pet remains, including hair and feathers.

Some gems start at about $250, while pet diamonds cost at least $1,400, with prices based on color and size. The diamonds have the same physical properties as mined diamonds, purveyors say.

LifeGem, an Elk Grove Village, Ill., company, says it has made more than 1,000 animal diamonds in the past decade, mostly from dogs and cats but also a few birds, rabbits, horses and one armadillo. Customers truly can see facets of their pets, says Dean VandenBiesen, LifeGem’s co-founder, because “remains have some unique characteristics in terms of the ratios of elements, so no two diamonds are exactly alike.”

Jennifer Durante, 42 years old, of St. Petersburg, Fla., commissioned another company, Pet Gems, to create a light-blue zircon gemstone out of remains from her teacup Chihuahua, Tetley. “It reminds me of his eyes when the sun would shine into them,” she says.

Sonya Zofrea, a 42-year-old police officer in San Fernando, Calif., has two yellow diamonds to memorialize Baby, a black cat with yellow eyes who wandered into her life as a stray. The first contained a blemish, so maker LifeGem created another one free of charge with the cat’s ashes. But Ms. Zofrea felt the first reminded her most of her occasionally naughty kitty. “When I saw the imperfection, I thought, that’s just her,” says Ms. Zofrea. “She’s an imperfect little soul, aren’t we all?”

A spokesman for the Gemological Institute of America declined to comment on specific companies or processes, but said that synthetic diamonds, like naturally occurring ones, are made of carbon. “That carbon could come from the remains of a deceased pet,” he said.

Producing a one-carat diamond requires less than a cup of ashes or unpacked hair. Sometimes, companies add outside carbon if there isn’t enough.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Safety and Paranoia Go Hand in Hand

Brooke Allen reviews a handy new tome for those who live in comfort and safety but who perceive threats large and small from all crevices and all angles. Paradoxically, most people in the West are safer than any previous generation, and yet they imagine existential threats ranging from viral pandemics to hemispheric mega-storms.

[div class=attrib]From WSJ:[end-div]

Never in our history have Americans been so fearful; never, objectively speaking, have we been so safe. Except for the bombing of Pearl Harbor and the destruction of the World Trade Center, war has not touched our shores in a century and a half. Despite relative decline, we are still militarily No. 1. We have antibiotics, polio vaccines, airbags; our children need no longer suffer even measles or chicken pox. So what are we all so frightened of?

In “Encyclopedia Paranoiaca,” Henry Beard and Christopher Cerf—in association, supposedly, with the staff of something called the Cassandra Institute—try to answer that question in some detail. The result is an amusing and cruelly accurate cultural critique, offering a “comprehensive and authoritative inventory of the perils, menaces, threats, blights, banes, and other assorted pieces of Damoclean cutlery” that hover over our collective head.

There’s the big stuff, of course: global warming and nuclear warfare, not to mention super-volcanoes and mega-tsunamis “capable of crossing entire oceans at jet-airplane speed and wreaking almost unimaginable damage.” The authors don’t even bother to list terror attacks or hurricanes, both high on the list of national obsessions after the events of recent years. But they do dwell on financial perils. “Investments, domestic” and “investments, overseas” are both listed as dangers, as are “gold, failure to invest in” and “gold, investing in.” Damned if you do, damned if you don’t—as with so many of life’s decisions.

Our understandable fear of outsize disasters is matched, oddly enough, by an equally paralyzing terror of the microscopic. American germophobia has only intensified in recent years, as we can see from the sudden ubiquity of hand sanitizers. Messrs. Beard and Cerf gleefully fan the flames of our paranoia. Toilets, flushing of: You’d do well to keep the seat down when engaging in this hazardous activity, because toilet water and all its contents are vaporized by the flushing action and settle upon everything in your bathroom—including your toothbrush. A lovely hot bath turns out to be, according to a scientist at NYU Medical Center, a foul stew of pathogens, with up to 100,000 bacteria per square inch. But showers are not much better—they distribute the scary Mycobacterium avium. And your kitchen is even yuckier than your bathroom! Dishwashers carry fungi on the rubber band in the door. Kitchen sinks: According to one scientist consulted by the authors, “if an alien came from space and studied bacteria counts in the typical home, he would probably conclude he should wash his hands in the toilet, and pee in your sink.” Sponges: Their “damp, porous environment serves as a perfect breeding ground in which the microbes can flourish and multiply until there are literally billions of them.” Cutting boards—let’s not even go there.

But don’t pull out the cleaning products too fast. Through a clever system of cross-referencing, the authors demonstrate that the cure is likely to be as harmful as the malady. Room air purifiers: “The ozone spewed out by these machines is more hazardous than any substances they may remove.” Antibacterial products: Their overuse is creating a new breed of “superbugs” resistant to the original agents and to antibiotics as well. Paper towels might be bad for the environment, but hand-drying machines are actually scary: In one study, “people who used a hot-air hand-drying machine to dry their hands had two to three times as many bacteria on their hands as they did before they washed them.”

And what about toxins? Some of the book’s entries might surprise you. You could probably guess that the popular Brazilian blowout hair-straightening treatment might contain stuff you wouldn’t want to breathe in (it does—formaldehyde), but what about the natural-stone kitchen countertops so beloved by design-conscious Bobos? Their granite emits “a continuous stream of radioactive radon gas.” And those compact fluorescent light bulbs touted by environmentalists? The average CFL bulb “contains enough mercury,” the authors tell us, “to contaminate as many as six thousand gallons of water to a point beyond safe drinking levels. The bulbs are harmless enough unless they break, but if they do, you and your family face the immediate danger of mercury poisoning.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Encyclopedia Paranoiaca book cover courtesy of Amazon.com.[end-div]

Antifragile

One of our favorite thinkers (and authors) here at theDiagonal is Nassim Taleb. His new work, entitled Antifragile, expands on ideas that he first described in his bestseller The Black Swan.

Given humanity’s need to find order and patterns in chaos, and our proclivity to seek causality where none exists, we’ll need several more books from him before his profound yet common-sense ideas sink in. In his latest work, Taleb shows how the improbable and unpredictable lie at the foundation of our universe.

[div class=attrib]From the Guardian:[end-div]

How much does Nassim Taleb dislike journalists? Let me count the ways. “An erudite is someone who displays less than he knows; a journalist or consultant the opposite.” “This business of journalism is about pure entertainment, not the search for the truth.” “Most so-called writers keep writing and writing with the hope, some day, to find something to say.” He disliked them before, but after he predicted the financial crash in his 2007 book, The Black Swan, a book that became a global bestseller, his antipathy reached new heights. He has dozens and dozens of quotes on the subject, and if that’s too obtuse for us non-erudites, his online home page puts it even plainer: “I beg journalists and members of the media to leave me alone.”

He’s not wildly keen on appointments either. In his new book, Antifragile, he writes that he never makes them because a date in the calendar “makes me feel like a prisoner”.

So imagine, if you will, how keenly he must be looking forward to the prospect of a pre-arranged appointment to meet me, a journalist. I approach our lunch meeting, at the Polytechnic Institute of New York University where he’s the “distinguished professor of risk engineering”, as one might approach a sleeping bear: gingerly. And with a certain degree of fear. And yet there he is, striding into the faculty lobby in a jacket and Steve Jobs turtleneck (“I want you to write down that I started wearing them before he did. I want that to be known.”), smiling and effusive.

First, though, he has to have his photo taken. He claims it’s the first time he’s allowed it in three years, and has allotted just 10 minutes for it, though in the end it’s more like five. “The last guy I had was a fucking dick. He wanted to be artsy fartsy,” he tells the photographer, Mike McGregor. “You’re OK.”

Being artsy fartsy, I will learn, is even lower down the scale of Nassim Taleb pet hates than journalists. But then, being contradictory about what one hates and despises and loves and admires is actually another key Nassim Taleb trait.

In print, the hating and despising is there for all to see: he’s forever having spats and fights. When he’s not slagging off the Nobel prize for economics (a “fraud”), bankers (“I have a physical allergy to them”) and the academic establishment (he has it in for something he calls the “Soviet-Harvard illusion”), he’s trading blows with Steven Pinker (“clueless”), and a random reviewer on Amazon, who he took to his Twitter stream to berate. And this is just in the last week.

And yet here he is, chatting away, surprisingly friendly and approachable. When I say as much as we walk to the restaurant, he asks, “What do you mean?”

“In your book, you’re quite…” and I struggle to find the right word, “grumpy”.

He shrugs. “When you write, you don’t have the social constraints of having people in front of you, so you talk about abstract matters.”

Social constraints, it turns out, have their uses. And he’s an excellent host. We go to his regular restaurant, a no-nonsense, Italian-run, canteen-like place, a few yards from his faculty in central Brooklyn, and he insists that I order a glass of wine.

“And what’ll you have?” asks the waitress.

“I’ll take a coffee,” he says.

“What?” I say. “No way! You can’t trick me into ordering a glass of wine and then have coffee.” It’s like flunking lesson #101 at interviewing school, though in the end he relents and has not one but two glasses and a plate of “pasta without pasta” (though strictly speaking you could call it “mixed vegetables and chicken”), and attacks the bread basket “because it doesn’t have any calories here in Brooklyn”.

But then, having read his latest book, I actually know an awful lot about his diet. How he doesn’t eat sugar, any fruits which “don’t have a Greek or Hebrew name” or any liquid which is less than 1,000 years old. Just as I know that he doesn’t like air-conditioning, soccer moms, sunscreen and copy editors. That he believes the “non-natural” has to prove its harmlessness. That America tranquillises its children with drugs and pathologises sadness. That he values honour above all things, banging on about it so much that at times he comes across as a medieval knight who’s got lost somewhere in the space-time continuum. And that several times a week he goes and lifts weights in a basement gym with a bunch of doormen.

He says that after the financial crisis he received “all manner of threats” and at one time was advised to “stock up on bodyguards”. Instead, “I found it more appealing to look like one”. Now, he writes, when he’s harassed by limo drivers in the arrival hall at JFK, “I calmly tell them to fuck off.”

Taleb started out as a trader, worked as a quantitative analyst and ran his own investment firm, but the more he studied statistics, the more he became convinced that the entire financial system was a keg of dynamite that was ready to blow. In The Black Swan he argued that modernity is too complex to understand, and “Black Swan” events – hitherto unknown and unpredicted shocks – will always occur.

What’s more, because of the complexity of the system, if one bank went down, they all would. The book sold 3m copies. And months later, of course, this was more or less exactly what happened. Overnight, he went from lone-voice-in-the-wilderness, spouting off-the-wall theories, to the great seer of the modern age.

Antifragile, the follow-up, is his most important work so far, he says. It takes the central idea of The Black Swan and expands it to encompass almost every other aspect of life, from the 19th century rise of the nation state to what to eat for breakfast (fresh air, as a general rule).

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Black Swan, the movie, not the book of the same name by Nassim Taleb. Courtesy of Wikipedia.[end-div]

Startup Culture: New is the New New

Starting up a new business was once a demanding and complex process, often undertaken in anonymity in the long shadows between the hours of a regular job. It still is, of course. However, nowadays “the startup” has become more of an event. The tech sector has raised this to a fine art by spawning an entire self-sustaining and self-promoting industry around startups.

You’ll find startup gurus, serial entrepreneurs and digital prophets — yes, AOL has a digital prophet on its payroll — strutting around on stage, twittering tips in the digital world, leading business plan bootcamps, pontificating on accelerator panels, hosting incubator love-ins in coffee shops, or getting splashed across the covers of Entrepreneur, Inc or FastCompany magazines on an almost daily basis. Beware! The back of your cereal box may be next.

[div class=attrib]From the Telegraph:[end-div]

I’ve seen the best minds of my generation destroyed by marketing, shilling for ad clicks, dragging themselves through the strip-lit corridors of convention centres looking for a venture capitalist. Just as X Factor has convinced hordes of tone deaf kids they can be pop stars, the startup industry has persuaded thousands that they can be the next rockstar entrepreneur. What’s worse is that while X Factor clogs up the television schedules for a couple of months, tech conferences have proliferated to such an extent that not a week goes by without another excuse to slope off. Some founders spend more time on panels pontificating about their business plans than actually executing them.

Earlier this year, I witnessed David Shing, AOL’s Digital Prophet – that really is his job title – delivering the opening remarks at a tech conference. The show summed up the worst elements of the self-obsessed, hyperactive world of modern tech. A 42-year-old man with a shock of Russell Brand hair, expensive spectacles and paint-splattered trousers, Shingy paced the stage spouting buzzwords: “Attention is the new currency, man…the new new is providing utility, brothers and sisters…speaking on the phone is completely cliche.” The audience lapped it all up. At these rallies in praise of the startup, enthusiasm and energy matter much more than making sense.

Startup culture is driven by slinging around superlatives – every job is an “incredible opportunity”, every product is going to “change lives” and “disrupt” an established industry. No one wants to admit that most startups stay stuck right there at the start, pub singers pining for their chance in the spotlight. While the startups and hangers-on milling around in the halls bring in stacks of cash for the event organisers, it’s the already successful entrepreneurs on stage and the investors who actually benefit from these conferences. They meet up at exclusive dinners and in the speakers’ lounge where the real deals are made. It’s Studio 54 for geeks.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Startup, WA. Courtesy of Wikipedia.[end-div]

USANIT

Ever-present in Europe, nationalism continues to grow as austerity measures across the continent catalyze xenophobia. And now it’s spreading westward across the Atlantic to the United States of America. Well, to be more precise, nationalistic fervor is spreading to Texas. Perhaps in our lifetimes we’ll have to contend with USANIT — the United States of America Not Including Texas. Seventy-seven thousand Texans, so far, want the Lone Star to fly again over their nascent nation.

[div class=attrib]From the Guardian:[end-div]

Less than a week after Barack Obama was re-elected president, a slew of petitions have appeared on the White House’s We the People site, asking for states to be granted the right to peacefully withdraw from the union.

On Tuesday, all but one of the 33 states listed were far from reaching the 25,000 signature mark needed to get a response from the White House. Texas, however, had gained more than 77,000 online signatures in three days.

People from other states had signed the Texas petition. Another petition on the website was titled: “Deport everyone that signed a petition to withdraw their state from the United States of America.” It had 3,536 signatures.

The Texas petition reads:

Given that the state of Texas maintains a balanced budget and is the 15th largest economy in the world, it is practically feasible for Texas to withdraw from the union, and to do so would protect it’s citizens’ standard of living and re-secure their rights and liberties in accordance with the original ideas and beliefs of our founding fathers which are no longer being reflected by the federal government.

Activists across the country have advocated for independent statehood since the union was restored after the end of the Civil War in 1865. Texas has been host to some of the most fervent fights for independence.

Daniel Miller is the president of the Texas Nationalist Movement, which supports Texan independence and has its own online petition.

“We want to be able to govern ourselves without having some government a thousand-plus miles away that we have to go ask ‘mother may I’ to,” Miller said. “We want to protect our political, our cultural and our economic identities.”

Miller is not a fan of the word “secession”, because he views it as an over-generalization of what his group hopes to accomplish, but he encourages advocates for Texan independence to show their support when they can, including by signing the White House website petition.

“Given the political, cultural and economic pressures the United States is under, it’s not beyond the pale where one could envision the break up of the United States,” he said. “I don’t look at it as possibility, I look at it as an inevitability.”

Miller has been working for Texas independence for 16 years. He pointed to last week’s federal elections as evidence that a state independence movement is gaining traction, citing the legalization of the sale of marijuana in Colorado and Washington in defiance of federal mandate.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]State Flag of Texas courtesy of Wikipedia.[end-div]

Socialism and Capitalism Share the Same Parent

Expanding on the work of Immanuel Kant in the late 18th century, German philosopher Georg Wilhelm Friedrich Hegel laid the foundations for what would later become two opposing political systems: socialism and free market capitalism. His comprehensive framework of Absolute Idealism influenced numerous philosophers and thinkers of all shades, including Karl Marx and Ralph Waldo Emerson. While many thinkers later rounded on Hegel’s worldview as a thinly veiled attempt to justify totalitarianism in his own nation, there is no disputing the profound influence of his works on later thinkers from both the left and the right of the political spectrum.

[div class=attrib]From FairObserver:[end-div]

It is common knowledge that among developed western countries the two leading socioeconomic systems are socialism and capitalism. The former is often associated more closely with European systems of governance and the latter with the American free market economy. It is also generally known that these two systems are rooted in two fundamentally different assumptions about how a healthy society progresses. What is not as well known is that they both stem from the same philosophical roots, namely the evolutionary philosophy of Georg Wilhelm Friedrich Hegel.

Georg Wilhelm Friedrich Hegel was a leading figure in the movement known as German Idealism that had its beginnings in the late 18th century. That philosophical movement was initiated by another prominent German thinker, Immanuel Kant. Kant published “The Critique of Pure Reason” in 1781, offering a radical new way to understand how we as human beings get along in the world. Hegel expanded on Kant’s theory of knowledge by adding a theory of social and historical progress. Both socialism and capitalism were inspired by different, and to some extent opposing, interpretations of Hegel’s philosophical system.

Immanuel Kant recognized that human beings create their view of reality by incorporating new information into their previous understanding of reality using the laws of reason. As this integrative process unfolds we are compelled to maintain a coherent picture of what is real in order to operate effectively in the world. The coherent picture of reality that we maintain Kant called a necessary transcendental unity. It can be understood as the overarching picture of reality, or worldview, that helps us make sense of the world and against which we interpret and judge all new experiences and information.

Hegel realized that not only must individuals maintain a cohesive picture of reality, but societies and cultures must also maintain a collectively held and unified understanding of what is real. To use a gross example, it is not enough for me to know what a dollar bill is and what it is worth. If I am to be able to buy something with my money, then other people must agree on its value. Reality is not merely an individual event; it is a collective affair of shared agreement. Hegel further saw that the collective understanding of reality that is held in common by many human beings in any given society develops over the course of history. In his book “The Philosophy of History”, Hegel outlines his theory of how this development occurs. Karl Marx started with Hegel’s philosophy and then added his own profound insights – especially in regards to how oppression and class struggle drive the course of history.

Across the Atlantic in America, there was another thinker, Ralph Waldo Emerson, who was strongly influenced by German Idealism and especially the philosophy of Hegel. In the development of the American mind one cannot overstate the role that Emerson played as the pathfinder who marked trails of thought that continue to guide the  current American worldview. His ideas became grooves in consciousness set so deeply in the American psyche that they are often simply experienced as truth.  What excited Emerson about Hegel was his description of how reality emerged from a universal mind. Emerson similarly believed that what we as human beings experience as real has emerged through time from a universal source of intelligence. This distinctly Hegelian tone in Emerson can be heard clearly in this passage from his essay entitled “History”:

“There is one mind common to all individual men. Of the works of this mind history is the record. Man is explicable by nothing less than all his history. All the facts of history pre-exist as laws. Each law in turn is made by circumstances predominant. The creation of a thousand forests is in one acorn, and Egypt, Greece, Rome, Gaul, Britain, America, lie folded already in the first man. Epoch after epoch, camp, kingdom, empire, republic, democracy, are merely the application of this manifold spirit to the manifold world.”

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: The portrait of G.W.F. Hegel (1770-1831); Steel engraving by Lazarus Sichling after a lithograph by Julius L. Sebbers. Courtesy of Wikipedia.[end-div]

Charles Darwin Runs for Office

British voters may recall Screaming Lord Sutch, 3rd Earl of Harrow, of the Official Monster Raving Loony Party, who ran in over 40 parliamentary elections during the 1980s and 90s. He never won, but garnered a respectable number of votes and many fans (he was also a musician).

The United States followed a more dignified path in the 2012 elections, when Charles Darwin ran for a Congressional seat in Georgia. Darwin failed to win, but collected a respectable 4,000 votes. His opponent, Paul Broun, believes that the Earth “is but about 9,000 years old”. Interestingly, Representative Broun serves on the United States House Committee on Science, Space and Technology.

[div class=attrib]From Slate:[end-div]

Anti-evolution Congressman Paul Broun (R-Ga.) ran unopposed in Tuesday’s election, but nearly 4,000 voters wrote in Charles Darwin to protest their representative’s views. (Broun called evolution “lies straight from the pit of hell.”) Darwin fell more than 205,000 votes short of victory, but what would have happened if the father of evolution had out-polled Broun?

Broun still would have won. Georgia, like many other states, doesn’t count votes for write-in candidates who have not filed a notice of intent to stand for election. Even if the final tally had been reversed, with Charles Darwin winning 209,000 votes and Paul Broun 4,000, Broun would have kept his job.

That’s not to say dead candidates can’t win elections. It happens all the time, but only when the candidate dies after being placed on the ballot. In Tuesday’s election, Orange County, Fla., tax collector Earl Wood won more than 56 percent of the vote, even though he died in October at the age of 96 after holding the office for more than 40 years. Florida law allowed the Democratic Party, of which Wood was a member, to choose a candidate to receive Wood’s votes. In Alabama, Charles Beasley won a seat on the Bibb County Commission despite dying on Oct. 12. (Beasley’s opponent lamented the challenge of running a negative campaign against a dead man.) The governor will appoint a replacement.

[div class=attrib]Read the entire article after the jump.[end-div]

The Myth of Social Mobility

There is a commonly held myth in the United States that anyone can make it; that is, even those at the bottom of the income distribution have the opportunity to climb to a wealthier future. Independent research over the last couple of decades debunks this myth and paints a rather different and more disturbing picture. For instance, it shows that Americans are now less socially mobile — in the upward sense — than citizens of Canada and most countries in Europe.

[div class=attrib]From the Economist:[end-div]

THE HAMPTONS, a string of small towns on the south shore of Long Island, have long been a playground for America’s affluent. Nowadays the merely rich are being crimped by the ultra-wealthy. In August it can cost $400,000 to rent a fancy house there. The din of helicopters and private jets is omnipresent. The “Quiet Skies Coalition”, formed by a group of angry residents, protests against the noise, particularly of one billionaire’s military-size Chinook. “You can’t even play tennis,” moans an old-timer who stays near the East Hampton airport. “It’s like the third world war with GIV and GV jets.”

Thirty years ago, Loudoun County, just outside Washington, DC, in Northern Virginia, was a rural backwater with a rich history. During the war of 1812 federal documents were kept safe there from the English. Today it is the wealthiest county in America. Rolling pastures have given way to technology firms, swathes of companies that thrive on government contracts and pristine neighbourhoods with large houses. The average household income, at over $130,000, is twice the national level. The county also marks the western tip of the biggest cluster of affluence in the country. Between Loudoun County and north-west Washington, DC, there are over 800,000 people in exclusive postcodes that are home to the best-educated and wealthiest 5% of the population, dubbed “superzips” by Charles Murray, a libertarian social scientist.

[div class=attrib]Read the entire article following the jump.[end-div]

Dragons of the Mind

[div class=attrib]From the Wall Street Journal:[end-div]

Peter Jackson’s “Hobbit” movie is on its way, and with it will come the resurrection of the vile dragon Smaug. With fiery breath, razor-sharp claws, scales as hard as shields and a vast underground lair, Smaug is portrayed in J.R.R. Tolkien’s text as a merciless killer. But where did the idea for such a bizarre beast—with such an odd mixture of traits—come from in the first place?

Historically, most monsters were spawned not from pure imagination but from aspects of the natural world that our ancestors did not understand. Whales seen breaking the surface of the ancient oceans were sea monsters, the fossilized bones of prehistoric humans were the victims of Medusa, the roars of earthquakes were thought to emanate from underground beasts. The list goes on. But tracing Smaug’s draconic heritage is more complicated.

At first glance, dinosaurs seem the obvious source for the dragon myth. Our ancestors simply ran into Tyrannosaur skulls, became terrified and came up with the idea that such monsters must still be around. It all sounds so logical, but it’s unlikely to be true.

Dragon myths were alive and well in the ancient Mediterranean world, despite the fact that the region is entirely bereft of dinosaur fossils. The Assyrians had Tiamat, a giant snake with horns (leading some to dispute whether it even qualifies as a dragon). The Greeks, for their part, had a fierce reptilian beast that guarded the golden fleece. In depicting it, they oscillated between a tiny viper and a huge snake capable of swallowing people whole. But even in this latter case, there was no fire-breathing or underground hoard, just a big reptile.

For decades, zoologists have argued that the only snakes humans ever had to seriously worry about were of the venomous variety. Last year, however, a study published in the Proceedings of the National Academy of Sciences revealed that members of Indonesian tribes are regularly eaten by enormous constrictors and that this was likely a common problem throughout human evolution. Moreover, reports by Pliny the Elder and others describe snakes of such size existing in the ancient Mediterranean world and sometimes attacking people. It seems likely that the early dragon myths were based on these real reptilian threats.

But Tolkien’s Smaug lives below the Lonely Mountain and breathes fire. Some reptiles live below ground, but none breathes anything that looks remotely like flame. Yet as strange as this trait may seem, it too may have natural origins.

Among the earliest mythical dragons that lived underground are those found in the 12th-century tales of Geoffrey of Monmouth. Monmouth recounts the story of Vortigern, an ancient British king who was forced to flee to the hills of Wales as Saxons invaded. Desperate to make a final stand, Vortigern orders a fortress to be built, but its walls keep falling over. Baffled, Vortigern seeks the advice of his wise men, who tell him that the ground must be consecrated with the blood of a child who is not born from the union between a man and a woman. Vortigern agrees and sends the wise men off to find such a child.

Not far away, in the town of Carmarthen, they come across two boys fighting. One insults the other as a bastard who has no father, and the wise men drag him back to Vortigern.

When the boy learns that he is to be killed, he tells Vortigern that his advisers have got things wrong. He declares that there are dragons below the ground and that their wrestling with one another is what keeps the walls from standing. Vortigern tests the boy’s theory out, and sure enough, as his men dig deeper, they discover the dragons’ “panting” flames.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Zmey Gorynych, the Russian three-headed dragon. Courtesy of Wikipedia.[end-div]

The Military-Industrial Complex

[tube]8y06NSBBRtY[/tube]

In his op-ed, author Aaron B. O’Connell reminds us of Eisenhower’s prescient warning about the growing power of the military-industrial complex in American life.

[div class=attrib]From the New York Times:[end-div]

IN 1961, President Dwight D. Eisenhower left office warning of the growing power of the military-industrial complex in American life. Most people know the term the president popularized, but few remember his argument.

In his farewell address, Eisenhower called for a better equilibrium between military and domestic affairs in our economy, politics and culture. He worried that the defense industry’s search for profits would warp foreign policy and, conversely, that too much state control of the private sector would cause economic stagnation. He warned that unending preparations for war were incongruous with the nation’s history. He cautioned that war and warmaking took up too large a proportion of national life, with grave ramifications for our spiritual health.

The military-industrial complex has not emerged in quite the way Eisenhower envisioned. The United States spends an enormous sum on defense — over $700 billion last year, about half of all military spending in the world — but in terms of our total economy, it has steadily declined to less than 5 percent of gross domestic product from 14 percent in 1953. Defense-related research has not produced an ossified garrison state; in fact, it has yielded a host of beneficial technologies, from the Internet to civilian nuclear power to GPS navigation. The United States has an enormous armaments industry, but it has not hampered employment and economic growth. In fact, Congress’s favorite argument against reducing defense spending is the job loss such cuts would entail.

Nor has the private sector infected foreign policy in the way that Eisenhower warned. Foreign policy has become increasingly reliant on military solutions since World War II, but we are a long way from the Marines’ repeated occupations of Haiti, Nicaragua and the Dominican Republic in the early 20th century, when commercial interests influenced military action. Of all the criticisms of the 2003 Iraq war, the idea that it was done to somehow magically decrease the cost of oil is the least credible. Though it’s true that mercenaries and contractors have exploited the wars of the past decade, hard decisions about the use of military force are made today much as they were in Eisenhower’s day: by the president, advised by the Joint Chiefs of Staff and the National Security Council, and then more or less rubber-stamped by Congress. Corporations do not get a vote, at least not yet.

But Eisenhower’s least heeded warning — concerning the spiritual effects of permanent preparations for war — is more important now than ever. Our culture has militarized considerably since Eisenhower’s era, and civilians, not the armed services, have been the principal cause. From lawmakers’ constant use of “support our troops” to justify defense spending, to TV programs and video games like “NCIS,” “Homeland” and “Call of Duty,” to NBC’s shameful and unreal reality show “Stars Earn Stripes,” Americans are subjected to a daily diet of stories that valorize the military while the storytellers pursue their own opportunistic political and commercial agendas. Of course, veterans should be thanked for serving their country, as should police officers, emergency workers and teachers. But no institution — particularly one financed by the taxpayers — should be immune from thoughtful criticism.

[div class=attrib]Read the entire article after the jump.[end-div]

Prodigies and the Rest of Us

[div class=attrib]From the New York Times:[end-div]

Drew Petersen didn’t speak until he was 3½, but his mother, Sue, never believed he was slow. When he was 18 months old, in 1994, she was reading to him and skipped a word, whereupon Drew reached over and pointed to the missing word on the page. Drew didn’t produce much sound at that stage, but he already cared about it deeply. “Church bells would elicit a big response,” Sue told me. “Birdsong would stop him in his tracks.”

Sue, who learned piano as a child, taught Drew the basics on an old upright, and he became fascinated by sheet music. “He needed to decode it,” Sue said. “So I had to recall what little I remembered, which was the treble clef.” As Drew told me, “It was like learning 13 letters of the alphabet and then trying to read books.” He figured out the bass clef on his own, and when he began formal lessons at 5, his teacher said he could skip the first six months’ worth of material. Within the year, Drew was performing Beethoven sonatas at the recital hall at Carnegie Hall. “I thought it was delightful,” Sue said, “but I also thought we shouldn’t take it too seriously. He was just a little boy.”

On his way to kindergarten one day, Drew asked his mother, “Can I just stay home so I can learn something?” Sue was at a loss. “He was reading textbooks this big, and they’re in class holding up a blowup M,” she said. Drew, who is now 18, said: “At first, it felt lonely. Then you accept that, yes, you’re different from everyone else, but people will be your friends anyway.” Drew’s parents moved him to a private school. They bought him a new piano, because he announced at 7 that their upright lacked dynamic contrast. “It cost more money than we’d ever paid for anything except a down payment on a house,” Sue said. When Drew was 14, he discovered a home-school program created by Harvard; when I met him two years ago, he was 16, studying at the Manhattan School of Music and halfway to a Harvard bachelor’s degree.

Prodigies are able to function at an advanced adult level in some domain before age 12. “Prodigy” derives from the Latin “prodigium,” a monster that violates the natural order. These children have differences so evident as to resemble a birth defect, and it was in that context that I came to investigate them. Having spent 10 years researching a book about children whose experiences differ radically from those of their parents and the world around them, I found that stigmatized differences — having Down syndrome, autism or deafness; being a dwarf or being transgender — are often clouds with silver linings. Families grappling with these apparent problems may find profound meaning, even beauty, in them. Prodigiousness, conversely, looks from a distance like silver, but it comes with banks of clouds; genius can be as bewildering and hazardous as a disability. Despite the past century’s breakthroughs in psychology and neuroscience, prodigiousness and genius are as little understood as autism. “Genius is an abnormality, and can signal other abnormalities,” says Veda Kaplinsky of Juilliard, perhaps the world’s pre-eminent teacher of young pianists. “Many gifted kids have A.D.D. or O.C.D. or Asperger’s. When the parents are confronted with two sides of a kid, they’re so quick to acknowledge the positive, the talented, the exceptional; they are often in denial over everything else.”

We live in ambitious times. You need only to go through the New York preschool application process, as I recently did for my son, to witness the hysteria attached to early achievement, the widespread presumption that a child’s destiny hinges on getting a baby foot on a tall ladder. Parental obsessiveness on this front reflects the hegemony of developmental psychiatry, with its insistence that first experience is formative. We now know that brain plasticity diminishes over time; it is easier to mold a child than to reform an adult. What are we to do with this information? I would hate for my children to feel that their worth is contingent on sustaining competitive advantage, but I’d also hate for them to fall short of their potential. Tiger mothers who browbeat their children into submission overemphasize a narrow category of achievement over psychic health. Attachment parenting, conversely, often sacrifices accomplishment to an ideal of unboundaried acceptance that can be equally pernicious. It’s tempting to propose some universal answer, but spending time with families of remarkably talented children showed me that what works for one child can be disastrous for another.

Children who are pushed toward success and succeed have a very different trajectory from that of children who are pushed toward success and fail. I once told Lang Lang, a prodigy par excellence and now perhaps the most famous pianist in the world, that by American standards, his father’s brutal methods — which included telling him to commit suicide, refusing any praise, browbeating him into abject submission — would count as child abuse. “If my father had pressured me like this and I had not done well, it would have been child abuse, and I would be traumatized, maybe destroyed,” Lang responded. “He could have been less extreme, and we probably would have made it to the same place; you don’t have to sacrifice everything to be a musician. But we had the same goal. So since all the pressure helped me become a world-famous star musician, which I love being, I would say that, for me, it was in the end a wonderful way to grow up.”

While it is true that some parents push their kids too hard and give them breakdowns, others fail to support a child’s passion for his own gift and deprive him of the only life that he would have enjoyed. You can err in either direction. Given that there is no consensus about how to raise ordinary children, it is not surprising that there is none about how to raise remarkable children. Like parents of children who are severely challenged, parents of exceptionally talented children are custodians of young people beyond their comprehension.

Spending time with the Petersens, I was struck not only by their mutual devotion but also by the easy way they avoided the snobberies that tend to cling to classical music. Sue is a school nurse; her husband, Joe, works in the engineering department of Volkswagen. They never expected the life into which Drew has led them, but they have neither been intimidated by it nor brash in pursuing it; it remains both a diligence and an art. “How do you describe a normal family?” Joe said. “The only way I can describe a normal one is a happy one. What my kids do brings a lot of joy into this household.” When I asked Sue how Drew’s talent had affected how they reared his younger brother, Erik, she said: “It’s distracting and different. It would be similar if Erik’s brother had a disability or a wooden leg.”

Prodigiousness manifests most often in athletics, mathematics, chess and music. A child may have a brain that processes chess moves or mathematical equations like some dream computer, which is its own mystery, but how can the mature emotional insight that is necessary to musicianship emerge from someone who is immature? “Young people like romance stories and war stories and good-and-evil stories and old movies because their emotional life mostly is and should be fantasy,” says Ken Noda, a great piano prodigy in his day who gave up public performance and now works at the Metropolitan Opera. “They put that fantasized emotion into their playing, and it is very convincing. I had an amazing capacity for imagining these feelings, and that’s part of what talent is. But it dries up, in everyone. That’s why so many prodigies have midlife crises in their late teens or early 20s. If our imagination is not replenished with experience, the ability to reproduce these feelings in one’s playing gradually diminishes.”

Musicians often talked to me about whether you achieve brilliance on the violin by practicing for hours every day or by reading Shakespeare, learning physics and falling in love. “Maturity, in music and in life, has to be earned by living,” the violinist Yehudi Menuhin once said. Who opens up or blocks access to such living? A musical prodigy’s development hinges on parental collaboration. Without that support, the child would never gain access to an instrument, the technical training that even the most devout genius requires or the emotional nurturance that enables a musician to achieve mature expression. As David Henry Feldman and Lynn T. Goldsmith, scholars in the field, have said, “A prodigy is a group enterprise.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Portrait of Wolfgang Amadeus Mozart aged six years old, by anonymous. Courtesy of Wikipedia.[end-div]

Democracy is Ugly and Petty

While this election cycle in the United States has been especially partisan, it’s worth remembering that politics in an open democracy is sometimes brutal, frequently nasty and often petty. Partisan fights, both metaphorical and physical, have been occurring since the Republic was founded.

[div class=attrib]From the New York Times:[end-div]

As the cable news channels count down the hours before the first polls close on Tuesday, an entire election cycle will have passed since President Obama last sat down with Fox News. The organization’s standing request to interview the president is now almost two years old.

At NBC News, the journalists reporting on the Romney campaign will continue to absorb taunts from their sources about their sister cable channel, MSNBC. “You mean, Al Sharpton’s network,” as Stuart Stevens, a senior Romney adviser, is especially fond of reminding them.

Spend just a little time watching either Fox News or MSNBC, and it is easy to see why such tensions run high. In fact, by some measures, the partisan bitterness on cable news has never been as stark — and in some ways, as silly or small.

Martin Bashir, the host of MSNBC’s 4 p.m. hour, recently tried to assess why Mitt Romney seemed irritable on the campaign trail and offered a provocative theory: that he might have mental problems.

“Mrs. Romney has expressed concerns about her husband’s mental well-being,” Mr. Bashir told one of his guests. “But do you get the feeling that perhaps there’s more to this than she’s saying?”

Over on Fox News, similar psychological evaluations were under way on “Fox & Friends.” Keith Ablow, a psychiatrist and a member of the channel’s “Medical A-Team,” suggested that Joseph R. Biden Jr.’s “bizarre laughter” during the vice-presidential debate might have something to do with a larger mental health issue. “You have to put dementia on the differential diagnosis,” he noted matter-of-factly.

Neither outlet has built its reputation on moderation and restraint, but during this presidential election, research shows that both are pushing their stridency to new levels.

A Pew Research Center study found that of Fox News stories about Mr. Obama from the end of August through the end of October, just 6 percent were positive and 46 percent were negative.

Pew also found that Mr. Obama was covered far more than Mr. Romney. The president was a significant figure in 74 percent of Fox’s campaign stories, compared with 49 percent for Romney. In 2008, Pew found that the channel reported on Mr. Obama and John McCain in roughly equal amounts.

The greater disparity was on MSNBC, which gave Mr. Romney positive coverage just 3 percent of the time, Pew found. It examined 259 segments about Mr. Romney and found that 71 percent were negative.

MSNBC, whose programs are hosted by a new crop of extravagant partisans like Mr. Bashir, Mr. Sharpton and Lawrence O’Donnell, has tested the limits of good taste this year. Mr. O’Donnell was forced to apologize in April after describing the Mormon Church as nothing more than a scheme cooked up by a man who “got caught having sex with the maid and explained to his wife that God told him to do it.”

The channel’s hosts recycle talking points handed out by the Obama campaign, even using them as titles for program segments, like Mr. Bashir did recently with a segment he called “Romnesia,” referring to Mr. Obama’s term to explain his opponent’s shifting positions.

The hosts insult and mock, like Alex Wagner did in recently describing Mr. Romney’s trip overseas as “National Lampoon’s European Vacation” — a line she borrowed from an Obama spokeswoman. Mr. Romney was not only hapless, Ms. Wagner said, he also looked “disheveled” and “a little bit sweaty” in a recent appearance.

Not that they save their scorn just for their programs. Some MSNBC hosts even use the channel’s own ads promoting its slogan “Lean Forward,” to criticize Mr. Romney and the Republicans. Mr. O’Donnell accuses the Republican nominee of basing his campaign on the false notion that Mr. Obama is inciting class warfare. “You have to come up with a lie,” he says, when your campaign is based on empty rhetoric.

In her ad, Rachel Maddow breathlessly decodes the logic behind the push to overhaul state voting laws. “The idea is to shrink the electorate,” she says, “so a smaller number of people get to decide what happens to all of us.”

Such stridency has put NBC News journalists who cover Republicans in awkward and compromised positions, several people who work for the network said. To distance themselves from their sister channel, they have started taking steps to reassure Republican sources, like pointing out that they are reporting for NBC programs like “Today” and “Nightly News” — not for MSNBC.

At Fox News, there is a palpable sense that the White House punishes the outlet for its coverage, not only by withholding the president, who has done interviews with every other major network, but also by denying them access to Michelle Obama.

This fall, Mrs. Obama has done a spate of television appearances, from CNN to “Jimmy Kimmel Live” on ABC. But when officials from Fox News recently asked for an interview with the first lady, they were told no. She has not appeared on the channel since 2010, when she sat down with Mike Huckabee.

Lately the White House and Fox News have been at odds over the channel’s aggressive coverage of the attack on the American diplomatic mission in Benghazi, Libya. Fox initially raised questions over the White House’s explanation of the events that led to the attack — questions that other news organizations have since started reporting on more fully.

But the commentary on the channel quickly and often turns to accusations that the White House played politics with American lives. “Everything they told us was a lie,” Sean Hannity said recently as he and John H. Sununu, a former governor of New Hampshire and a Romney campaign supporter, took turns raising questions about how the Obama administration misled the public. “A hoax,” Mr. Hannity called the administration’s explanation. “A cover-up.”

Mr. Hannity has also taken to selectively fact-checking Mr. Obama’s claims, co-opting a journalistic tool that has proliferated in this election as news outlets sought to bring more accountability to their coverage.

Mr. Hannity’s guest fact-checkers have included hardly objective sources, like Dick Morris, the former Clinton aide turned conservative commentator; Liz Cheney, the daughter of former Vice President Dick Cheney; and Michelle Malkin, the right-wing provocateur.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of University of Maine at Farmington.[end-div]

The Beauty of Ugliness

The endless pursuit of beauty in human affairs probably pre-dates our historical record. We certainly know that ancient Egyptians used cosmetics believing them to offer magical and religious powers, in addition to aesthetic value.

Yet, paradoxically, beauty is rather subjective and often fleeting. The French singer, songwriter, composer and bon viveur Serge Gainsbourg once said that “ugliness is superior to beauty because it lasts longer”. Author Stephen Bayley argues, in his new book “Ugly: The Aesthetics of Everything”, that beauty is downright boring.

[div class=attrib]From the Telegraph:[end-div]

Beauty is boring. And the evidence is piling up. An article in the journal Psychological Science now confirms what partygoers have known forever: that beauty and charm are no more directly linked than a high IQ and a talent for whistling.

A group of scientists set out to discover whether physically attractive people also have appealing character traits and values, and found, according to Lihi Segal-Caspi, who carried out part of the research, that “beautiful people tend to focus more on conformity and self-promotion than independence and tolerance”.

Certainly, while a room full of beautiful people might be impressively stiff with the whiff of Chanel No 5, the intellectual atmosphere will be carrying a very low charge. If positive at all.

The grizzled and gargoyle-like Parisian chanteur, and legendary lover, Serge Gainsbourg always used to pick up the ugliest girls at parties. This was not simply because predatory male folklore insists that ill-favoured women will be more “grateful”, but because Gainsbourg, a stylish contrarian, knew that the conversation would be better, the uglier the girl.

Beauty is a conformist conspiracy. And the conspirators include the fashion, cosmetics and movie businesses: a terrible Greek chorus of brainless idolatry towards abstract form. The conspirators insist that women – and, nowadays, men, too – should be un-creased, smooth, fat-free, tanned and, with the exception of the skull, hairless. Flawlessly dull. Even Hollywood once acknowledged the weakness of this proposition: Marilyn Monroe was made more attractive still by the addition of a “beauty spot”, a blemish turned into an asset.

The red carpet version of beauty is a feeble, temporary construction. Bodies corrode and erode, sag and bulge, just as cars rust and buildings develop a fine patina over time. This is not to be feared, rather to be understood and enjoyed. Anyone wishing to arrest these processes with the aid of surgery, aerosols, paint, glue, drugs, tape and Lycra must be both very stupid and very vain. Hence the problems encountered in conversation with beautiful people: stupidity and vanity rarely contribute much to wit and creativity.

Fine features may be all very well, but the great tragedy of beauty is that it is so ephemeral. Albert Camus said it “drives us to despair, offering for a minute the glimpse of an eternity that we should like to stretch out over the whole of time”. And Gainsbourg agreed when he said: “Ugliness is superior to beauty because it lasts longer.” A hegemony of beautiful perfection would be intolerable: we need a good measure of ugliness to keep our senses keen. If everything were beautiful, nothing would be.

And yet, despite the evidence against, there has been a conviction that beauty and goodness are somehow inextricably and permanently linked. Political propaganda exploited our primitive fear of ugliness, so we had Second World War American posters of Japanese looking like vampire bats. The Greeks believed that beauty had a moral character: beautiful people – discus-throwers and so on – were necessarily good people. Darwin explained our need for “beauty” in saying that breeding attractive children is a survival characteristic: I may feel the need to fuse my premium genetic material with yours, so that humanity continues in the same fine style.

This became a lazy consensus, described as the “beauty premium” by US economists Markus M Mobius and Tanya S Rosenblat. The “beauty premium” insists that as attractive children grow into attractive adults, they may find it easier to develop agreeable interpersonal communications skills because their audience reacts more favourably to them. In this beauty-related employment theory, short people are less likely to get a good job. As Randy Newman sang: “Short people got no reason to live.” So Darwin’s argument that evolutionary forces favour a certain physical type may be proven in the job market as well as the wider world.

But as soon as you try to grasp the concept of beauty, it disappears.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Does Evil Exist?

Humans have a peculiar habit of anthropomorphizing anything that moves, and for that matter, most objects that remain static as well. So, it is not surprising that evil is often personified and even stereotyped; it is said that true evil even has a home somewhere below where you currently stand.

[div class=attrib]From the Guardian:[end-div]

The friction between the presence of evil in our world and belief in a loving creator God sparks some tough questions. For many religious people these are primarily existential questions, as their faith contends with doubt and bewilderment. The biblical figure of Job, the righteous man who loses everything that is dear to him, remains a powerful example of this struggle. But the “problem of evil” is also an intellectual puzzle that has taxed the minds of philosophers and theologians for centuries.

One of the most influential responses to the problem of evil comes from St Augustine. As a young man, Augustine followed the teachings of a Christian sect known as the Manichees. At the heart of Manichean theology was the idea of a cosmic battle between the forces of good and evil. This, of course, proposes one possible solution to the problem of evil: all goodness, purity and light comes from God, and the darkness of evil has a different source.

However, Augustine came to regard this cosmic dualism as heretical, since it undermined God’s sovereignty. Of course, he wanted to hold on to the absolute goodness of God. But if God is the source of all things, where did evil come from? Augustine’s radical answer to this question is that evil does not actually come from anywhere. Rejecting the idea that evil is a positive force, he argues that it is merely a “name for nothing other than the absence of good”.

At first glance this looks like a philosophical sleight of hand. Augustine might try to define evil out of existence, but this cannot diminish the reality of the pain, suffering and cruelty that prompt the question of evil in the first place. As the 20th-century Catholic writer Charles Journet put it, the non-being of evil “can have a terrible reality, like letters carved out of stone”. Any defence of Augustine’s position has to begin by pointing out that his account of evil is metaphysical rather than empirical. In other words, he is not saying that our experience of evil is unreal. On the contrary, since a divinely created world is naturally oriented toward the good, any lack of goodness will be felt as painful, wrong and urgently in need of repair. To say that hunger is “merely” the absence of food is not to deny the intense suffering it involves.

One consequence of Augustine’s mature view of evil as “non-being”, a privation of the good, is that evil eludes our understanding. His sophisticated metaphysics of evil confirms our intuitive response of incomprehension in the face of gratuitous brutality, or of senseless “natural” evil like a child’s cancer. Augustine emphasises that evil is ultimately inexplicable, since it has no substantial existence: “No one therefore must try to get to know from me what I know that I do not know, unless, it may be, in order to learn not to know what must be known to be incapable of being known!” Interestingly, by the way, this mysticism about evil mirrors the “negative theology” which insists that God exceeds the limits of our understanding.

So, by his own admission, Augustine’s “solution” to the problem of evil defends belief in God without properly explaining the kinds of acts which exert real pressure on religious faith. He may be right to point out that the effects of evil tend to be destruction and disorder – a twisting or scarring of nature, and of souls. Nevertheless, believers and non-believers alike will feel that this fails to do justice to the power of evil. We may demand a better account of the apparent positivity of evil – of the fact, for example, that holocausts and massacres often involve meticulous planning, technical innovation and creative processes of justification.

Surprisingly, though, the basic insight of Augustinian theodicy finds support in recent science. In his 2011 book Zero Degrees of Empathy, Cambridge psychopathology professor Simon Baron-Cohen proposes “a new theory of human cruelty”. His goal, he writes, is to replace the “unscientific” term “evil” with the idea of “empathy erosion”: “People said to be cruel or evil are simply at one extreme of the empathy spectrum,” he writes. (He points out, though, that some people at this extreme display no more cruelty than those higher up the empathy scale – they are simply socially isolated.)

Loss of empathy resembles the Augustinian concept of evil in that it is a deficiency of goodness – or, to put it less moralistically, a disruption of normal functioning – rather than a positive force. In this way at least, Baron-Cohen’s theory echoes Augustine’s argument, against the Manicheans, that evil is not an independent reality but, in essence, a lack or a loss.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: Marvel Comics Vault of Evil. Courtesy of Wikia / Marvel Comics.[end-div]

Lillian Moller Gilbreth: Inventor of the Modern Kitchen

Lillian Moller Gilbreth, industrial engineer and psychologist, mother of 12 children, and no cook herself, invented the modern kitchen. Her design ideas were codified into what became known as the Kitchen Practical, unveiled in 1929 at a Women’s Exposition.

[div class=attrib]From Slate:[end-div]

The idea that housework is work now seems like a commonplace. We contract it out to housekeepers, laundromats, cleaning services, takeout places. We divvy it up: You cooked dinner, I’ll do the dishes. We count it as a second shift, as well as primary employment. But it wasn’t until the early part of the 20th century that first a literature, and then a science, developed about the best way to cook and clean. The results of this research shape the way we treat housework today, and created a template for the kitchen that remains conceptually unchanged from the 1920s. And the woman who made the kitchen better? She couldn’t cook.

If that sounds like the set-up for a comedy, that’s because it was. Lillian Moller Gilbreth, industrial psychologist and engineer, was the mother of 12 children. She and husband and partner Frank B. Gilbreth, inventors of what is known as motion study, pioneered the use of short films to watch how industrial processes and office tasks were done, breaking them down into component parts (which they called “therbligs,” Gilbreth backward) to determine how to make a job faster and less taxing. They tested many of their ideas on their children, establishing “the one best way” to take a bath, training preteens to touch type, and charting age-appropriate chores for each child. The ensuing hijinks provided enough material for memoirs written by two Gilbreth children, Cheaper by the Dozen and Belles on Their Toes.

While Frank Gilbreth was alive, he and Lillian worked for industry. She wrote or co-wrote many of his books, but often took no credit, as it was Frank with whom the male executives wanted to deal. After his sudden death in 1924, she had to re-establish herself as a solo female practitioner. According to biographer Jane Lancaster, in Making Time, Gilbreth soon saw that combining her professional expertise on motion study with her assumed expertise on women’s work gave her a marketable niche.

Frank B. Gilbreth Jr. and Ernestine Gilbreth Carey write, in Belles on Their Toes:
If the only way to enter a man’s field was through the kitchen door, that’s the way she’d enter… Mother planned, on paper, an efficiency-type kitchenette of the kind used today in a good many apartments. Under her arrangement, a person could mix a cake, put it in the oven, and do the dishes, without taking more than a couple of dozen steps.

It had to be cake, because that was one of the few dishes Gilbreth made well. Gilbreth had grown up in an upper-class household in California with a Chinese chef. She had worked side-by-side with Frank Gilbreth from the day they married. As she told a group of businesswomen in 1930, “We considered our time too valuable to be devoted to actual labor in the home. We were executives.” And family councils, at the Gilbreth home in Montclair, were run like board meetings.

Even though she did not do it herself, Gilbreth still considered housework unpaid labor, and as such, capable of efficiencies. The worker in the kitchen in the 1920s was often not a servant but the lady of the house, who spent an estimated 50 percent of her day there. The refrigerator had begun to arrive in middle-class homes, but was the subject of a pitched battle between gas and electric companies as to who made the superior chiller. Smaller electric appliances were also in development. “Home economists” raised the bar for domestic health and hygiene. Women became the targets of intense marketing campaigns for products large and small. Gilbreth worked for these manufacturers, and thus is complicit in the rise of consumerism for the home, but she never made explicit endorsements.

She did, however, partner with the Brooklyn Borough Gas Company to develop Gilbreth’s Kitchen Practical, unveiled in 1929 at a Women’s Exposition. The kitchen was intended to showcase the new gas-fueled appliances as well as Gilbreth’s research on motion savings. It was to replace the loose-fit kitchen of many traditional homes (including the Gilbreths’): a large room with discrete pieces of furniture around the edges. These might include a table, a freestanding cupboard or Hoosier cabinet, an icebox, a sink with a drying board and a stove. Ingredients, utensils and cookware might be across the room, or even in a separate pantry.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Kitchen Practical 1929. Courtesy of Gilbreth Network.[end-div]

It’s About Equality, Stupid

[div class=attrib]From Project Syndicate:[end-div]

The king of Bhutan wants to make us all happier. Governments, he says, should aim to maximize their people’s Gross National Happiness rather than their Gross National Product. Does this new emphasis on happiness represent a shift or just a passing fad?

It is easy to see why governments should de-emphasize economic growth when it is proving so elusive. The eurozone is not expected to grow at all this year. The British economy is contracting. Greece’s economy has been shrinking for years. Even China is expected to slow down. Why not give up growth and enjoy what we have?

No doubt this mood will pass when growth revives, as it is bound to. Nevertheless, a deeper shift in attitude toward growth has occurred, which is likely to make it a less important lodestar in the future – especially in rich countries.

The first factor to undermine the pursuit of growth was concern about its sustainability. Can we continue growing at the old rate without endangering our future?

When people started talking about the “natural” limits to growth in the 1970’s, they meant the impending exhaustion of food and non-renewable natural resources. Recently the debate has shifted to carbon emissions. As the Stern Review of 2006 emphasized, we must sacrifice some growth today to ensure that we do not all fry tomorrow.

Curiously, the one taboo area in this discussion is population. The fewer people there are, the less risk we face of heating up the planet. But, instead of accepting the natural decline in their populations, rich-country governments absorb more and more people to hold down wages and thereby grow faster.

A more recent concern focuses on the disappointing results of growth. It is increasingly understood that growth does not necessarily increase our sense of well-being. So why continue to grow?

The groundwork for this question was laid some time ago. In 1974, the economist Richard Easterlin published a famous paper, “Does Economic Growth Improve the Human Lot? Some Empirical Evidence.” After correlating per capita income and self-reported happiness levels across a number of countries, he reached a startling conclusion: probably not.

Above a rather low level of income (enough to satisfy basic needs), Easterlin found no correlation between happiness and GNP per head. In other words, GNP is a poor measure of life satisfaction.

That finding reinforced efforts to devise alternative indexes. In 1972, two economists, William Nordhaus and James Tobin, introduced a measure that they called “Net Economic Welfare,” obtained by deducting from GNP “bad” outputs, like pollution, and adding non-market activities, like leisure. They showed that a society with more leisure and less work could have as much welfare as one with more work – and therefore more GNP – and less leisure.
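
As a back-of-the-envelope illustration of the Nordhaus–Tobin adjustment described above, here is a minimal Python sketch; the function name and every dollar figure are hypothetical, chosen only to show the arithmetic of subtracting “bad” outputs and adding non-market value:

    # Net Economic Welfare, after Nordhaus and Tobin (1972):
    # start from GNP, deduct "bad" outputs, add the value of non-market activities.
    # All figures are hypothetical and purely illustrative.

    def net_economic_welfare(gnp, bad_outputs, non_market):
        return gnp - sum(bad_outputs.values()) + sum(non_market.values())

    gnp = 1_000                                          # hypothetical GNP, in billions
    bads = {"pollution": 40, "congestion": 10}           # hypothetical "bad" outputs
    non_market = {"leisure": 120, "household work": 80}  # hypothetical non-market value

    print(net_economic_welfare(gnp, bads, non_market))   # -> 1150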

More recent metrics have tried to incorporate a wider range of “quality of life” indicators. The trouble is that you can measure quantity of stuff, but not quality of life. How one combines quantity and quality in some index of “life satisfaction” is a matter of morals rather than economics, so it is not surprising that most economists stick to their quantitative measures of “welfare.”

But another finding has also started to influence the current debate on growth: poor people within a country are less happy than rich people. In other words, above a low level of sufficiency, peoples’ happiness levels are determined much less by their absolute income than by their income relative to some reference group. We constantly compare our lot with that of others, feeling either superior or inferior, whatever our income level; well-being depends more on how the fruits of growth are distributed than on their absolute amount.

Put another way, what matters for life satisfaction is the growth not of mean income but of median income – the income of the typical person. Consider a population of ten people (say, a factory) in which the managing director earns $150,000 a year and the other nine, all workers, earn $10,000 each. The mean of their incomes is $24,000, but 90% earn $10,000. With this kind of income distribution, it would be surprising if growth increased the typical person’s sense of well-being.
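
The toy factory makes the mean/median distinction easy to verify. A minimal Python sketch, using only the incomes given above (nine workers at $10,000 and a managing director at $150,000):

    from statistics import mean, median

    # The hypothetical factory from the article: one managing director, nine workers.
    incomes = [150_000] + [10_000] * 9

    print(mean(incomes))    # -> 24000    (pulled up by the single high earner)
    print(median(incomes))  # -> 10000.0  (the income of the typical person)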

[div class=attrib]Read the entire article after the jump.[end-div]

Connectedness: A Force For Good

The internet has the potential to make our current political process obsolete. A review of British politician Douglas Carswell’s book “The End of Politics” shows how connectedness provides a significant opportunity to reshape the political process – and, in some cases, to undermine government entirely – for the good.

[div class=attrib]Charles Moore for the Telegraph:[end-div]

I think I can help you tackle this thought-provoking book. First of all, the title misleads. Enchanting though the idea will sound to many people, this is not about the end of politics. It is, after all, written by a Member of Parliament, Douglas Carswell (Con., Clacton) and he is fascinated by the subject. There’ll always be politics, he is saying, but not as we know it.

Second, you don’t really need to read the first half. It is essentially a passionately expressed set of arguments about why our current political arrangements do not work. It is good stuff, but there is plenty of it in the more independent-minded newspapers most days. The important bit is Part Two, beginning on page 145 and running for a modest 119 pages. It is called “The Birth of iDemocracy”.

Mr Carswell resembles those old barometers in which, in bad weather (Part One), a man with a mackintosh, an umbrella and a scowl comes out of the house. In good weather (Part Two), he pops out wearing a white suit, a straw hat and a broad smile. What makes him happy is the feeling that the digital revolution can restore to the people the power which, in the early days of the universal franchise, they possessed – and much, much more. He believes that the digital revolution has at last harnessed technology to express the “collective brain” of humanity. We develop our collective intelligence by exchanging the properties of our individual ones.

Throughout history, we have been impeded in doing this by physical barriers, such as distance, and by artificial ones, such as priesthoods of bureaucrats and experts. Today, i-this and e-that are cutting out these middlemen. He quotes the internet sage, Clay Shirky: “Here comes everybody”. Mr Carswell directs magnificent scorn at the aides to David Cameron who briefed the media that the Prime Minister now has an iPad app which will allow him, at a stroke of his finger, “to judge the success or failure of ministers with reference to performance-related data”.

The effect of the digital revolution is exactly the opposite of what the aides imagine. Far from now being able to survey everything, always, like God, the Prime Minister – any prime minister – is now in an unprecedentedly weak position in relation to the average citizen: “Digital technology is starting to allow us to choose for ourselves things that until recently Digital Dave and Co decided for us.”

A non-physical business, for instance, can often decide pretty freely where, for the purposes of taxation, it wants to live. Naturally, it will choose benign jurisdictions. Governments can try to ban it from doing so, but they will either fail, or find that they are cutting off their nose to spite their face. The very idea of a “tax base”, on which treasuries depend, wobbles when so much value lies in intellectual property and intellectual property is mobile. So taxes need to be flatter to keep their revenues up. If they are flatter, they will be paid by more people.

Therefore it becomes much harder for government to grow, since most people do not want to pay more.

[div class=attrib]Read the entire article after the jump.[end-div]

The United Swing States of America

Frank Jacobs over at Strange Maps offers a timely reminder of the inordinate influence that a few voters in several crucial swing states have over the rest of us.

[div class=attrib]From Strange Maps:[end-div]

At the stroke of midnight on November 6th, the 21 registered voters of Dixville Notch, gathering in the wood-panelled Ballot Room of the Balsams Grand Resort Hotel, will have just one minute to cast their vote. Speed is of the essence, if the tiny New Hampshire town is to uphold its reputation (est. 1960) as the first place to declare its results in the US presidential elections.

Later that day, well over 200 million other American voters will face the same choice as the good folks of the Notch: returning Barack Obama to the White House for a second and final four-year term, or electing Mitt Romney as the 45th President of the United States.

The winner of that contest will not be determined by whoever wins a simple majority (i.e. 50% of all votes cast, plus at least one). Like many electoral processes across the world, the system to elect the next president of the United States is riddled with idiosyncrasies and peculiarities – the quadrennial quorum in Dixville Notch being just one example.

Most US Presidents have indeed gained office by winning the popular vote, but this is not always the case. What is needed is winning the electoral vote. For the US presidential election is an indirect one: depending on the outcome in each of the 50 states, an Electoral College convenes in Washington DC to elect the President.

The total of 538 electors is distributed across the states in proportion to their population size, and is regularly adjusted to reflect increases or decreases. In 2008 Louisiana had 9 electors and South Carolina had 8; reflecting a relative population decrease and increase, respectively, those numbers are now reversed.

Maine and Nebraska are the only states to assign their electors proportionally; the other 48 states (and DC) operate on the ABBA principle – the winner takes it all: however slight a candidate’s margin of victory in any of those states, he wins all of its electoral votes. This rather convoluted system underlines the fact that the US presidential election is the sum of 50 state-level contests. It also brings into focus that some states are more important than others.
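
The winner-take-all rule is simple enough to model directly. A minimal Python sketch, with invented vote counts (the elector numbers shown match the 2012 allocations for these two states, but treat all the data as illustrative; Maine’s and Nebraska’s proportional allocation is ignored):

    # Winner-take-all: the plurality winner in a state receives all of its electors.
    # Vote counts below are invented; elector counts are illustrative.

    def allocate_electors(results, electors):
        totals = {}
        for state, votes in results.items():
            winner = max(votes, key=votes.get)       # candidate with the most votes
            totals[winner] = totals.get(winner, 0) + electors[state]
        return totals

    electors = {"Ohio": 18, "Florida": 29}
    results = {
        "Ohio":    {"Obama": 2_800_000, "Romney": 2_700_000},
        "Florida": {"Obama": 4_200_000, "Romney": 4_150_000},
    }

    print(allocate_electors(results, electors))      # -> {'Obama': 47}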

Obviously, in this system the more populous states carry much more weight than the emptier ones. Consider the map of the United States, and focus on the 17 states west of the straight-ish line of state borders from North Dakota-Minnesota in the north to Texas-Louisiana in the south. Just two states – Texas and California – outweigh the electoral votes of the 15 others.

So presidential candidates concentrate their efforts on the states where they can hope to gain the greatest advantage. This excludes the fairly large number of states that are solidly ‘blue’ (i.e. Democratic) or ‘red’ (Republican). Texas, for example, is reliably Republican, while California can be expected to fall in the Democratic column.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Map courtesy of Strange Maps / Big Think.[end-div]

Human Civilization and Weapons Go Hand in Hand

There is great irony in knowing that we humans would not be as civilized were it not for our passion for lethal, projectile weapons.

[div class=attrib]From the New Scientist:[end-div]

IT’S about 2 metres long, made of tough spruce wood and carved into a sharp point at one end. The widest part, and hence its centre of gravity, is in the front third, suggesting it was thrown like a javelin. At 400,000 years old, this is the world’s oldest spear. And, according to a provocative theory, on its carved length rests nothing less than the foundation of human civilisation as we know it, including democracy, class divisions and the modern nation state.

At the heart of this theory is a simple idea: the invention of weapons that could kill at a distance meant that power became uncoupled from physical strength. Even the puniest subordinate could now kill an alpha male, with the right weapon and a reasonable aim. Those who wanted power were forced to obtain it by other means – persuasion, cunning, charm – and so began the drive for the cognitive attributes that make us human. “In short, 400,000 years of evolution in the presence of lethal weapons gave rise to Homo sapiens,” says Herbert Gintis, an economist at the Santa Fe Institute in New Mexico who studies the evolution of social complexity and cooperation.

The puzzle of how humans became civilised has received new impetus from studies of the evolution of social organisation in other primates. These challenge the long-held view that political structure is a purely cultural phenomenon, suggesting that genes play a role too. If they do, the fact that we alone of all the apes have built highly complex societies becomes even more intriguing. Earlier this year, an independent institute called the Ernst Strüngmann Forum assembled a group of scientists in Frankfurt, Germany, to discuss how this complexity came about. Hot debate centred on the possibility that, at pivotal points in history, advances in lethal weapons technology drove human societies to evolve in new directions.

The idea that weapons have catalysed social change came to the fore three decades ago, when British anthropologist James Woodburn spent time with the Hadza hunter-gatherers of Tanzania. Their lifestyle, which has not changed in millennia, is thought to closely resemble that of our Stone Age ancestors, and Woodburn observed that they are fiercely egalitarian. Although the Hadza people include individuals who take a lead in different arenas, no one person has overriding authority. They also have mechanisms for keeping their leaders from growing too powerful – not least, the threat that a bully could be ambushed or killed in his sleep. The hunting weapon, Woodburn suggested, acts as an equaliser.

Some years later, anthropologist Christopher Boehm at the University of Southern California pointed out that the social organisation of our closest primate relative, the chimpanzee, is very different. They live in hierarchical, mixed-sex groups in which the alpha male controls access to food and females. In his 2000 book, Hierarchy in the Forest, Boehm proposed that egalitarianism arose in early hominin societies as a result of the reversal of this strength-based dominance hierarchy – made possible, in part, by projectile weapons. However, in reviving Woodburn’s idea, Boehm also emphasised the genetic heritage that we share with chimps. “We are prone to the formation of hierarchies, but also prone to form alliances in order to keep from being ruled too harshly or arbitrarily,” he says. At the Strüngmann forum, Gintis argued that this inherent tension accounts for much of human history, right up to the present day.

[div class=attrib]Read the entire article following the jump.[end-div]

[div class=attrib]Image: M777 howitzer. Courtesy of Wikipedia.[end-div]

Want Your Kids to Be Conservative or Liberal?

Researchers have confirmed what we already know: Parents who endorse a more authoritarian parenting style with their infants are more likely to have children who are ideologically conservative when they reach age 18; parents who support more egalitarian parenting are more likely to have children who grow up to be liberal.

[div class=attrib]From the Pacific Standard:[end-div]

Parents: Do you find yourselves arguing with your adult children over who deserves to win the upcoming election? Does it confuse and frustrate you to realize your political viewpoints are so different?

Newly published research suggests you may only have yourself to blame.

Providing the best evidence yet to back up a decades-old theory, researchers writing in the journal Psychological Science report a link between a mother’s attitude toward parenting and the political ideology her child eventually adopts. In short, authoritarian parents are more prone to produce conservatives, while those who gave their kids more latitude are more likely to produce liberals.

This dynamic was theorized as early as 1950. But until now, almost all the research supporting it has been based on retrospective reports, with parents assessing their child-rearing attitudes in hindsight.

This new study, by a team led by psychologist R. Chris Fraley of the University of Illinois at Urbana-Champaign, begins with new mothers describing their intentions and approach in 1991, and ends with a survey of their children 18 years later. In between, it features an assessment of the child’s temperament at age 4.

The study looked at roughly 700 American children and their parents, who were recruited for the National Institute of Child Health and Human Development’s Study of Early Child Care and Youth Development. When each child was one month old, his or her mother completed a 30-item questionnaire designed to reveal her approach to parenting.

Those who strongly agreed with such statements as “the most important thing to teach children is absolute obedience to whoever is in authority” were categorized as holding authoritarian parenting attitudes. Those who robustly endorsed such sentiments as “children should be allowed to disagree with their parents” were categorized as holding egalitarian parenting attitudes.

When their kids were 54 months old, the mothers assessed their child’s temperament by answering 80 questions about their behavior. The children were evaluated for such traits as shyness, restlessness, attentional focusing (determined by their ability to follow directions and complete tasks) and fear.

Finally, at age 18, the youngsters completed a 28-item survey measuring their political attitudes on a liberal-to-conservative scale.

“Parents who endorsed more authoritarian parenting attitudes when their children were one month old were more likely to have children who were conservative in their ideologies at age 18,” the researchers report. “Parents who endorsed more egalitarian parenting attitudes were more likely to have children who were liberal.”

Temperament at age 4—which, of course, was very likely impacted by those parenting styles—was also associated with later ideological leanings.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of the Daily Show with Jon Stewart and the Colbert Report via Wired.[end-div]

The Great Blue Monday Fallacy

A yearlong survey of moodiness shows that the so-called Monday Blues may be more a figment of the imagination than fact.

[div class=attrib]From the New York Times:[end-div]

DESPITE the beating that Mondays have taken in pop songs — Fats Domino crooned “Blue Monday, how I hate blue Monday” — the day does not deserve its gloomy reputation.

Two colleagues and I recently published an analysis of a remarkable yearlong survey by the Gallup Organization, which conducted 1,000 live interviews a day, asking people across the United States to recall their mood in the prior day. We scoured the data for evidence that Monday was bluer than Tuesday or Wednesday. We couldn’t find any.

Mood was evaluated with several adjectives measuring positive or negative feelings. Spanish-only speakers were queried in Spanish. Interviewers spoke to people in every state on cellphones and land lines. The data unequivocally showed that Mondays are as pleasant to Americans as the three days that follow, and only a trifle less joyful than Fridays. Perhaps no surprise, people generally felt good on the weekend — though for retirees, the distinction between weekend and weekdays was only modest.

Likewise, day-of-the-week mood was gender-blind. Over all, women assessed their daily moods more negatively than men did, but relative changes from day to day were similar for both sexes.

And yet still, the belief in blue Mondays persists.

Several years ago, in another study, I examined expectations about mood and day of the week: two-thirds of the sample nominated Monday as the “worst” day of the week. Other research has confirmed that this sentiment is widespread, despite the fact that, well, we don’t really feel any gloomier on that day.

The question is, why? Why do we believe something that our own immediate experience indicates simply isn’t true?

As it turns out, the blue Monday mystery highlights a phenomenon familiar to behavioral scientists: that beliefs or judgments about experience can be at odds with actual experience. Indeed, the disconnection between beliefs and experience is common.

Vacations, for example, are viewed more pleasantly after they are over compared with how they were experienced at the time. And motorists who drive fancy cars report having more fun driving than those who own more modest vehicles, though in-car monitoring shows this isn’t the case. The same is often true in reverse as well: we remember pain or symptoms of illness at higher levels than real-time experience suggests, in part because we ignore symptom-free periods in between our aches and pains.

HOW do we make sense of these findings? The human brain has vast, but limited, capacities to store, retrieve and process information. Yet we are often confronted with questions that challenge these capacities. And this is often when the disconnect between belief and experience occurs. When information isn’t available for answering a question — say, when it did not make it into our memories in the first place — we use whatever information is available, even if it isn’t particularly relevant to the question at hand.

When asked about pain for the last week, most people cannot completely remember all of its ups and downs over seven days. However, we are likely to remember it at its worst and may use that as a way of summarizing pain for the entire week. When asked about our current satisfaction with life, we may focus on the first things that come to mind — a recent spat with a spouse or maybe a compliment from the boss at work.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: “I Don’t Like Mondays” single cover. Courtesy of The Boomtown Rats / Ensign Records.[end-div]

La Serrata: Why the Rise Is Always Followed by the Fall

Humans do learn from their mistakes. Yet history repeats itself: nations continue to rise and, inevitably, fall. Why? Chrystia Freeland, author of “Plutocrats: The Rise of the New Global Super-Rich and the Fall of Everyone Else,” offers an insightful analysis based in part on 14th-century Venice.

[div class=attrib]From the New York Times:[end-div]

IN the early 14th century, Venice was one of the richest cities in Europe. At the heart of its economy was the colleganza, a basic form of joint-stock company created to finance a single trade expedition. The brilliance of the colleganza was that it opened the economy to new entrants, allowing risk-taking entrepreneurs to share in the financial upside with the established businessmen who financed their merchant voyages.

Venice’s elites were the chief beneficiaries. Like all open economies, theirs was turbulent. Today, we think of social mobility as a good thing. But if you are on top, mobility also means competition. In 1315, when the Venetian city-state was at the height of its economic powers, the upper class acted to lock in its privileges, putting a formal stop to social mobility with the publication of the Libro d’Oro, or Book of Gold, an official register of the nobility. If you weren’t on it, you couldn’t join the ruling oligarchy.

The political shift, which had begun nearly two decades earlier, was so striking a change that the Venetians gave it a name: La Serrata, or the closure. It wasn’t long before the political Serrata became an economic one, too. Under the control of the oligarchs, Venice gradually cut off commercial opportunities for new entrants. Eventually, the colleganza was banned. The reigning elites were acting in their immediate self-interest, but in the longer term, La Serrata was the beginning of the end for them, and for Venetian prosperity more generally. By 1500, Venice’s population was smaller than it had been in 1330. In the 17th and 18th centuries, as the rest of Europe grew, the city continued to shrink.

The story of Venice’s rise and fall is told by the scholars Daron Acemoglu and James A. Robinson, in their book “Why Nations Fail: The Origins of Power, Prosperity, and Poverty,” as an illustration of their thesis that what separates successful states from failed ones is whether their governing institutions are inclusive or extractive. Extractive states are controlled by ruling elites whose objective is to extract as much wealth as they can from the rest of society. Inclusive states give everyone access to economic opportunity; often, greater inclusiveness creates more prosperity, which creates an incentive for ever greater inclusiveness.

The history of the United States can be read as one such virtuous circle. But as the story of Venice shows, virtuous circles can be broken. Elites that have prospered from inclusive systems can be tempted to pull up the ladder they climbed to the top. Eventually, their societies become extractive and their economies languish.

That was the future predicted by Karl Marx, who wrote that capitalism contained the seeds of its own destruction. And it is the danger America faces today, as the 1 percent pulls away from everyone else and pursues an economic, political and social agenda that will increase that gap even further — ultimately destroying the open system that made America rich and allowed its 1 percent to thrive in the first place.

You can see America’s creeping Serrata in the growing social and, especially, educational chasm between those at the top and everyone else. At the bottom and in the middle, American society is fraying, and the children of these struggling families are lagging the rest of the world at school.

Economists point out that the woes of the middle class are in large part a consequence of globalization and technological change. Culture may also play a role. In his recent book on the white working class, the libertarian writer Charles Murray blames the hollowed-out middle for straying from the traditional family values and old-fashioned work ethic that he says prevail among the rich (whom he castigates, but only for allowing cultural relativism to prevail).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The Grand Canal and the Church of the Salute (1730) by Canaletto. Courtesy of Museum of Fine Arts, Houston / WikiCommons.[end-div]

Mourning the Lost Art of Handwriting

In this age of digital everything, handwriting still matters. Some of you may even still have a treasured fountain pen. Novelist Philip Hensher suggests why handwriting retains import and value in his new book, The Missing Ink.

[div class=attrib]From the Guardian:[end-div]

About six months ago, I realised that I had no idea what the handwriting of a good friend of mine looked like. I had known him for over a decade, but somehow we had never communicated using handwritten notes. He had left voice messages for me, emailed me, sent text messages galore. But I don’t think I had ever had a letter from him written by hand, a postcard from his holidays, a reminder of something pushed through my letter box. I had no idea whether his handwriting was bold or crabbed, sloping or upright, italic or rounded, elegant or slapdash.

It hit me that we are at a moment when handwriting seems to be about to vanish from our lives altogether. At some point in recent years, it has stopped being a necessary and inevitable intermediary between people – a means by which individuals communicate with each other, putting a little bit of their personality into the form of their message as they press the ink-bearing point on to the paper. It has started to become just one of many options, and often an unattractive, elaborate one.

For each of us, the act of putting marks on paper with ink goes back as far as we can probably remember. At some point, somebody comes along and tells us that if you make a rounded shape and then join it to a straight vertical line, that means the letter “a”, just like the ones you see in the book. (But the ones in the book have a little umbrella over the top, don’t they? Never mind that, for the moment: this is how we make them for ourselves.) If you make a different rounded shape, in the opposite direction, and a taller vertical line, then that means the letter “b”. Do you see? And then a rounded shape, in the same direction as the first letter, but not joined to anything – that makes a “c”. And off you go.

Actually, I don’t think I have any memory of this initial introduction to the art of writing letters on paper. Our handwriting, like ourselves, seems always to have been there.

But if I don’t have any memory of first learning to write, I have a clear memory of what followed: instructions in refinements, suggestions of how to purify the forms of your handwriting.

You longed to do “joined-up writing”, as we used to call the cursive hand when we were young. Instructed in print letters, I looked forward to the ability to join one letter to another as a mark of huge sophistication. Adult handwriting was unreadable, true, but perhaps that was its point. I saw the loops and impatient dashes of the adult hand as a secret and untrustworthy way of communicating that one day I would master.

There was, also, wanting to make your handwriting more like other people’s. Often, this started with a single letter or figure. In the second year at school, our form teacher had a way of writing a 7 in the European way, with a cross-bar. A world of glamour and sophistication hung on that cross-bar; it might as well have had a beret on, be smoking Gitanes in the maths cupboard.

Your hand is formed by aspiration to the hand of others – by the beautiful italic strokes of a friend which seem altogether wasted on a mere postcard, or a note on your door reading “Dropped by – will come back later”. It’s formed, too, by anti-aspiration, the desire not to be like Denise in the desk behind who reads with her mouth open and whose writing, all bulging “m”s and looping “p”s, contains the atrocity of a little circle on top of every i. Or still more horrible, on occasion, usually when she signs her name, a heart. (There may be men in the world who use a heart-shaped jot, as the dot over the i is called, but I have yet to meet one. Or run a mile from one.)

Those other writing apparatuses, mobile phones, occupy a little bit more of the same psychological space as the pen. Ten years ago, people kept their mobile phone in their pockets. Now, they hold them permanently in their hand like a small angry animal, gazing crossly into our faces, in apparent need of constant placation. Clearly, people do regard their mobile phones as, in some degree, an extension of themselves. And yet we have not evolved any of those small, pleasurable pieces of behaviour towards them that seem so ordinary in the case of our pens. If you saw someone sucking one while they thought of the next phrase to text, you would think them dangerously insane.

We have surrendered our handwriting for something more mechanical, less distinctively human, less telling about ourselves and less present in our moments of the highest happiness and the deepest emotion. Ink runs in our veins, and shows the world what we are like. The shaping of thought and written language by a pen, moved by a hand to register marks of ink on paper, has for centuries, millennia, been regarded as key to our existence as human beings. In the past, handwriting has been regarded as almost the most powerful sign of our individuality. In 1847, in an American case, a witness testified without hesitation that a signature was genuine, though he had not seen an example of the handwriting for 63 years: the court accepted his testimony.

Handwriting is what registers our individuality, and the mark which our culture has made on us. It has been seen as the unknowing key to our souls and our innermost nature. It has been regarded as a sign of our health as a society, of our intelligence, and as an object of simplicity, grace, fantasy and beauty in its own right. Yet at some point, the ordinary pleasures and dignity of handwriting are going to be replaced permanently.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Stipula fountain pen. Courtesy of Wikipedia.[end-div]