All posts by Mike

High Fructose Corn Syrup = Corn Sugar?

Hats off to the global agro-industrial complex that feeds most of the Earth’s inhabitants. With high fructose corn syrup (HFCS) getting an increasingly bad rap for helping to expand our waistlines and catalyze our diabetes, the industry is becoming more creative.

However, it’s only the type of “creativity” that a cynic would come to expect from a faceless, trillion-dollar industry; it’s not a fresh, natural innovation. The industry wants to rename HFCS as “corn sugar”, making it sound healthier and more natural in the process.

[div class=attrib]From the New York Times:[end-div]

The United States Food and Drug Administration has rejected a request from the Corn Refiners Association to change the name of high-fructose corn syrup.

The association, which represents the companies that make the syrup, had petitioned the F.D.A. in September 2010 to begin calling the much-maligned sweetener “corn sugar.” The request came on the heels of a national advertising campaign promoting the syrup as a natural ingredient made from corn.

But in a letter, Michael M. Landa, director of the Center for Food Safety and Applied Nutrition at the F.D.A., denied the petition, saying that the term “sugar” is used only for food “that is solid, dried and crystallized.”

“HFCS is an aqueous solution sweetener derived from corn after enzymatic hydrolysis of cornstarch, followed by enzymatic conversion of glucose (dextrose) to fructose,” the letter stated. “Thus, the use of the term ‘sugar’ to describe HFCS, a product that is a syrup, would not accurately identify or describe the basic nature of the food or its characterizing properties.”

In addition, the F.D.A. concluded that the term “corn sugar” has been used to describe the sweetener dextrose and therefore should not be used to describe high-fructose corn syrup. The agency also said the term “corn sugar” could pose a risk to consumers who have been advised to avoid fructose because of a hereditary fructose intolerance or fructose malabsorption.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Fructose vs. D-Glucose Structural Formulae. Courtesy of Wikipedia.[end-div]

Ray Bradbury – His Books Will Not Burn

“Monday burn Millay, Wednesday Whitman, Friday Faulkner, burn ’em to ashes, then burn the ashes. That’s our official slogan.” [From Fahrenheit 451].

Ray Bradbury left our planet on June 5. He was 91 years old.

Yet, a part of him lives on Mars. A digital copy of Bradbury’s “The Martian Chronicles”, along with works by other science fiction authors, reached the Martian northern plains in 2008, courtesy of NASA’s Phoenix Mars Lander spacecraft.

Ray Bradbury is likely to be best remembered for his seminal science fiction work, Fahrenheit 451. The literary community will remember him as one of the world’s preeminent authors of short stories and novellas. He also wrote plays, screenplays, children’s books and works of literary criticism. Many of his over 400 works, dating from the 1950s to the present day, have greatly influenced contemporary writers and artists. He had a supreme gift for melding poetry with prose, dark vision with humor and social commentary with imagined worlds. Bradbury received the U.S. National Medal of Arts in 2004.

He will be missed; his books will not burn.

[div class=attrib]From the New York Times:[end-div]

By many estimations Mr. Bradbury was the writer most responsible for bringing modern science fiction into the literary mainstream. His name would appear near the top of any list of major science-fiction writers of the 20th century, beside those of Isaac Asimov, Arthur C. Clarke, Robert A. Heinlein and the Polish author Stanislaw Lem. His books have been taught in schools and colleges, where many a reader has been introduced to them decades after they first appeared. Many have said his stories fired their own imaginations.

More than eight million copies of his books have been sold in 36 languages. They include the short-story collections “The Martian Chronicles,” “The Illustrated Man” and “The Golden Apples of the Sun,” and the novels “Fahrenheit 451” and “Something Wicked This Way Comes.”

Though none won a Pulitzer Prize, Mr. Bradbury received a Pulitzer citation in 2007 “for his distinguished, prolific and deeply influential career as an unmatched author of science fiction and fantasy.”

His writing career stretched across 70 years, to the last weeks of his life. The New Yorker published an autobiographical essay by him in its June 4th double issue devoted to science fiction. There he recalled his “hungry imagination” as a boy in Illinois.

“It was one frenzy after one elation after one enthusiasm after one hysteria after another,” he wrote, noting, “You rarely have such fevers later in life that fill your entire day with emotion.”

Mr. Bradbury sold his first story to a magazine called Super Science Stories in his early 20s. By 30 he had made his reputation with “The Martian Chronicles,” a collection of thematically linked stories published in 1950.

The book celebrated the romance of space travel while condemning the social abuses that modern technology had made possible, and its impact was immediate and lasting. Critics who had dismissed science fiction as adolescent prattle praised “Chronicles” as stylishly written morality tales set in a future that seemed just around the corner.

Mr. Bradbury was hardly the first writer to represent science and technology as a mixed bag of blessings and abominations. The advent of the atomic bomb in 1945 left many Americans deeply ambivalent toward science. The same “super science” that had ended World War II now appeared to threaten the very existence of civilization. Science-fiction writers, who were accustomed to thinking about the role of science in society, had trenchant things to say about the nuclear threat.

But the audience for science fiction, published mostly in pulp magazines, was small and insignificant. Mr. Bradbury looked to a larger audience: the readers of mass-circulation magazines like Mademoiselle and The Saturday Evening Post. These readers had no patience for the technical jargon of the science fiction pulps. So he eliminated the jargon; he packaged his troubling speculations about the future in an appealing blend of cozy colloquialisms and poetic metaphors.

Though his books, particularly “The Martian Chronicles,” became a staple of high school and college English courses, Mr. Bradbury himself disdained formal education. He went so far as to attribute his success as a writer to his never having gone to college.

Instead, he read everything he could get his hands on: Edgar Allan Poe, Jules Verne, H. G. Wells, Edgar Rice Burroughs, Thomas Wolfe, Ernest Hemingway. He paid homage to them in 1971 in the essay “How Instead of Being Educated in College, I Was Graduated From Libraries.” (Late in life he took an active role in fund-raising efforts for public libraries in Southern California.)

Mr. Bradbury referred to himself as an “idea writer,” by which he meant something quite different from erudite or scholarly. “I have fun with ideas; I play with them,” he said. “I’m not a serious person, and I don’t like serious people. I don’t see myself as a philosopher. That’s awfully boring.”

He added, “My goal is to entertain myself and others.”

He described his method of composition as “word association,” often triggered by a favorite line of poetry.

Mr. Bradbury’s passion for books found expression in his dystopian novel “Fahrenheit 451,” published in 1953. But he drew his primary inspiration from his childhood. He boasted that he had total recall of his earliest years, including the moment of his birth. Readers had no reason to doubt him. As for the protagonists of his stories, no matter how far they journeyed from home, they learned that they could never escape the past.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Ray Bradbury, 1975. Courtesy of Wikipedia.[end-div]

The Most Beautiful Railway Stations

[div class=attrib]From Flavorwire:[end-div]

In 1972, Pulitzer Prize-winning author, and The New York Times’ very first architecture critic, Ada Louise Huxtable observed that “nothing was more up-to-date when it was built, or is more obsolete today, than the railroad station.” A comment on the emerging age of the jetliner and a swanky commercial air travel industry that made the behemoth train stations of the time appear as cumbersome relics of an outdated industrial era, we don’t think the judgment holds up today — at all. Like so many things that we wrote off in favor of what was seemingly more modern and efficient (ahem, vinyl records and Polaroid film), the train station is back and better than ever. So, we’re taking the time to look back at some of the greatest stations still standing.

[div class=attrib]See other beautiful stations and read the entire article after the jump.[end-div]

[div class=attrib]Image: Grand Central Terminal — New York City, New York. Courtesy of Flavorwire.[end-div]

FOMO: An Important New Acronym

FOMO is an increasing “problem” for college students and other young adults. Interestingly, and somewhat ironically, FOMO seems to be a more chronic issue in a culture mediated by online social networks. So, what is FOMO? And do you have it?

[div class=attrib]From the Washington Post:[end-div]

Over the past academic year, there has been an explosion of new or renewed campus activities, pop culture phenomena, tech trends, generational shifts, and social movements started by or significantly impacting students. Most can be summed up in a single word.

As someone who monitors student life and student media daily, I’ve noticed a small number of words appearing more frequently, prominently or controversially during the past two semesters on campuses nationwide. Some were brand-new. Others were redefined or reached a tipping point of interest or popularity. And still others showed a remarkable staying power, carrying over from semesters and years past.

I’ve selected 15 as finalists for what I am calling the “2011-2012 College Word of the Year Contest.” Okay, a few are actually acronyms or short phrases. But altogether the terms — whether short-lived or seemingly permanent — offer a unique glimpse at what students participated in, talked about, fretted over, and fought for this past fall and spring.

As Time Magazine’s Touré confirms, “The words we coalesce around as a society say so much about who we are. The language is a mirror that reflects our collective soul.”

Let’s take a quick look in the collegiate rearview mirror. In alphabetical order, here are my College Word of the Year finalists.

1) Boomerangers: Right after commencement, a growing number of college graduates are heading home, diploma in hand and futures on hold. They are the boomerangers, young 20-somethings who are spending their immediate college afterlife in hometown purgatory. A majority move back into their childhood bedroom due to poor employment or graduate school prospects or to save money so they can soon travel internationally, engage in volunteer work or launch their own business.

A brief homestay has long been an option favored by some fresh graduates, but it’s recently reemerged in the media as a defining activity of the current student generation.

“Graduation means something completely different than it used to 30 years ago,” student columnist Madeline Hennings wrote in January for the Collegiate Times at Virginia Tech. “At my age, my parents were already engaged, planning their wedding, had jobs, and thinking about starting a family. Today, the economy is still recovering, and more students are moving back in with mom and dad.”

2) Drunkorexia: This five-syllable word has become the most publicized new disorder impacting college students. Many students, researchers and health professionals consider it a dangerous phenomenon. Critics, meanwhile, dismiss it as a media-driven faux-trend. And others contend it is nothing more than a fresh label stamped onto an activity that students have been carrying out for years.

The affliction, which leaves students hungry and at times hung over, involves “starving all day to drink at night.” As a March report in Daily Pennsylvanian at the University of Pennsylvania further explained, it centers on students “bingeing or skipping meals in order to either compensate for alcohol calories consumed later at night, or to get drunk faster… At its most severe, it is a combination of an eating disorder and alcohol dependency.”

4) FOMO: Students are increasingly obsessed with being connected — to their high-tech devices, social media chatter and their friends during a night, weekend or roadtrip in which something worthy of a Facebook status update or viral YouTube video might occur. (For an example of the latter, check out this young woman “tree dancing” during a recent music festival.)

This ever-present emotional-digital anxiety now has a defining acronym: FOMO or Fear of Missing Out.  Recent Georgetown University graduate Kinne Chapin confirmed FOMO “is a widespread problem on college campuses. Each weekend, I have a conversation with a friend of mine in which one of us expresses the following: ‘I’m not really in the mood to go out, but I feel like I should.’ Even when we’d rather catch up on sleep or melt our brain with some reality television, we feel compelled to seek bigger and better things from our weekend. We fear that if we don’t partake in every Saturday night’s fever, something truly amazing will happen, leaving us hopelessly behind.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Urban Dictionary.[end-div]

Java by the Numbers

If you think the United States is a nation of coffee drinkers, think again. The U.S. ranks only eighth in annual java consumption per person. Way out in front is Finland. Makes one wonder if there is a correlation between coffee drinking and heavy metal music.

[div class=attrib]Infographic courtesy of Hamilton Beach.[end-div]

Why Daydreaming is Good

Most of us, the editor of theDiagonal included, have known this for a while: letting the mind wander aimlessly is crucial to creativity and problem-solving.

[div class=attrib]From Wired:[end-div]

It’s easy to underestimate boredom. The mental condition, after all, is defined by its lack of stimulation; it’s the mind at its most apathetic. This is why the poet Joseph Brodsky described boredom as a “psychological Sahara,” a cognitive desert “that starts right in your bedroom and spurns the horizon.” The hands of the clock seem to stop; the stream of consciousness slows to a drip. We want to be anywhere but here.

However, as Brodsky also noted, boredom and its synonyms can also become a crucial tool of creativity. “Boredom is your window,” the poet declared. “Once this window opens, don’t try to shut it; on the contrary, throw it wide open.”

Brodsky was right. The secret isn’t boredom per se: It’s how boredom makes us think. When people are immersed in monotony, they automatically lapse into a very special form of brain activity: mind-wandering. In a culture obsessed with efficiency, mind-wandering is often derided as a lazy habit, the kind of thinking we rely on when we don’t really want to think. (Freud regarded mind-wandering as an example of “infantile” thinking.) It’s a sign of procrastination, not productivity.

In recent years, however, neuroscience has dramatically revised our views of mind-wandering. For one thing, it turns out that the mind wanders a ridiculous amount. Last year, the Harvard psychologists Daniel Gilbert and Matthew A. Killingsworth published a fascinating paper in Science documenting our penchant for disappearing down the rabbit hole of our own mind. The scientists developed an iPhone app that contacted 2,250 volunteers at random intervals, asking them about their current activity and levels of happiness. It turns out that people were engaged in mind-wandering 46.9 percent of the time. In fact, the only activity in which their minds were not constantly wandering was love making. They were able to focus for that.

What’s happening inside the brain when the mind wanders? A lot. In 2009, a team led by Kalina Christoff of UBC and Jonathan Schooler of UCSB used “experience sampling” inside an fMRI machine to capture the brain in the midst of a daydream. (This condition is easy to induce: After subjects were given an extremely tedious task, they started to mind-wander within seconds.) Although it’s been known for nearly a decade that mind wandering is a metabolically intense process — your cortex consumes lots of energy when thinking to itself — this study further helped to clarify the sequence of mental events:

Activation in medial prefrontal default network regions was observed both in association with subjective self-reports of mind wandering and an independent behavioral measure (performance errors on the concurrent task). In addition to default network activation, mind wandering was associated with executive network recruitment, a finding predicted by behavioral theories of off-task thought and its relation to executive resources. Finally, neural recruitment in both default and executive network regions was strongest when subjects were unaware of their own mind wandering, suggesting that mind wandering is most pronounced when it lacks meta-awareness. The observed parallel recruitment of executive and default network regions—two brain systems that so far have been assumed to work in opposition—suggests that mind wandering may evoke a unique mental state that may allow otherwise opposing networks to work in cooperation.

Two things worth noting here. The first is the reference to the default network. The name is literal: We daydream so easily and effortlessly that it appears to be our default mode of thought. The second is the simultaneous activation in executive and default regions, suggesting that mind wandering isn’t quite as mindless as we’d long imagined. (That’s why it seems to require so much executive activity.) Instead, a daydream seems to exist in the liminal space between sleep dreaming and focused attentiveness, in which we are still awake but not really present.

Last week, a team of Austrian scientists expanded on this result in PLoS ONE. By examining 17 patients with unresponsive wakefulness syndrome (UWS), 8 patients in a minimally conscious state (MCS), and 25 healthy controls, the researchers were able to detect the brain differences along this gradient of consciousness. The key difference was an inability among the most unresponsive patients to “deactivate” their default network. This suggests that these poor subjects were trapped within a daydreaming loop, unable to exercise their executive regions to pay attention to the world outside. (Problems with the deactivation of the default network have also been observed in patients with Alzheimer’s and schizophrenia.) The end result is that their mind’s eye is always focused inwards.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A daydreaming gentleman; from an original 1912 postcard published in Germany. Courtesy of Wikipedia.[end-div]

Killer Ideas

It’s possible that most households on the planet have one. It’s equally possible that most humans have used one — excepting members of PETA (People for the Ethical Treatment of Animals) and other tolerant souls.

United States Patent 640,790 covers a simple and effective technology, invented by Robert Montgomery. The patent for a “Fly Killer”, or fly swatter as it is now more commonly known, was issued in 1900.

Sometimes the simplest design is the most pervasive and effective.

[div class=attrib]From the New York Times:[end-div]

The first modern fly-destruction device was invented in 1900 by Robert R. Montgomery, an entrepreneur based in Decatur, Ill. Montgomery was issued Patent No. 640,790 for the Fly-Killer, a “cheap device of unusual elasticity and durability” made of wire netting, “preferably oblong,” attached to a handle. The material of the handle remained unspecified, but the netting was crucial: it reduced wind drag, giving the swatter a “whiplike swing.” By 1901, Montgomery’s invention was advertised in Ladies’ Home Journal as a tool that “kills without crushing” and “soils nothing,” unlike, say, a rolled-up newspaper might.

Montgomery sold the patent rights in 1903 to an industrialist named John L. Bennett, who later invented the beer can. Bennett improved the design — stitching around the edge of the netting to keep it from fraying — but left the name.

The various fly-killing implements on the market at the time got the name “swatter” from Samuel Crumbine, secretary of the Kansas Board of Health. In 1905, he titled one of his fly bulletins, which warned of flyborne diseases, “Swat the Fly,” after a chant he heard at a ballgame. Crumbine took an invention known as the Fly Bat — a screen attached to a yardstick — and renamed it the Fly Swatter, which became the generic term we use today.

Fly-killing technology has advanced to include fly zappers (electrified tennis rackets that roast flies on contact) and fly guns (spinning discs that mulch insects). But there will always be less techy solutions: flypaper (sticky tape that traps the bugs), Fly Bottles (glass containers lined with an attractive liquid substance) and the Venus’ flytrap (a plant that eats insects).

During a 2009 CNBC interview, President Obama killed a fly with his bare hands, triumphantly exclaiming, “I got the sucker!” PETA was less gleeful, calling it a public “execution” and sending the White House a device that traps flies so that they may be set free.

But for the rest of us, as the product blogger Sean Byrne notes, “it’s hard to beat the good old-fashioned fly swatter.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Goodgrips.[end-div]

Philip K. Dick – Future Gnostic

Simon Critchley, professor of philosophy, continues his serialized analysis of Philip K. Dick. Part I first appeared here. Part II examines the events around 2-3-74 that led to Dick’s 8,000-page Gnostic treatise “Exegesis”.

[div class=attrib]From the New York Times:[end-div]

In the previous post, we looked at the consequences and possible philosophic import of the events of February and March of 1974 (also known as 2-3-74) in the life and work of Philip K. Dick, a period in which a dose of sodium pentothal, a light-emitting fish pendant and decades of fiction writing and quasi-philosophic activity came together in a revelation that led to Dick’s 8,000-page “Exegesis.”

So, what is the nature of the true reality that Dick claims to have intuited during psychedelic visions of 2-3-74? Does it unwind into mere structureless ranting and raving or does it suggest some tradition of thought or belief? I would argue the latter. This is where things admittedly get a little weirder in an already weird universe, so hold on tight.

In the very first lines of “Exegesis” Dick writes, “We see the Logos addressing the many living entities.” Logos is an important concept that litters the pages of “Exegesis.” It is a word with a wide variety of meaning in ancient Greek, one of which is indeed “word.” It can also mean speech, reason (in Latin, ratio) or giving an account of something. For Heraclitus, to whom Dick frequently refers, logos is the universal law that governs the cosmos of which most human beings are somnolently ignorant. Dick certainly has this latter meaning in mind, but — most important — logos refers to the opening of John’s Gospel, “In the beginning was the word” (logos), where the word becomes flesh in the person of Christ.

But the core of Dick’s vision is not quite Christian in the traditional sense; it is Gnostical: it is the mystical intellection, at its highest moment a fusion with a transmundane or alien God who is identified with logos and who can communicate with human beings in the form of a ray of light or, in Dick’s case, hallucinatory visions.

There is a tension throughout “Exegesis” between a monistic view of the cosmos (where there is just one substance in the universe, which can be seen in Dick’s references to Spinoza’s idea of God as nature, Whitehead’s idea of reality as process and Hegel’s dialectic where “the true is the whole”) and a dualistic or Gnostical view of the cosmos, with two cosmic forces in conflict, one malevolent and the other benevolent. The way I read Dick, the latter view wins out. This means that the visible, phenomenal world is fallen and indeed a kind of prison cell, cage or cave.

Christianity, lest it be forgotten, is a metaphysical monism where it is the obligation of every Christian to love every aspect of creation – even the foulest and smelliest – because it is the work of God. Evil is nothing substantial because if it were it would have to be caused by God, who is by definition good. Against this, Gnosticism declares a radical dualism between the false God who created this world – who is usually called the “demiurge” – and the true God who is unknown and alien to this world. But for the Gnostic, evil is substantial and its evidence is the world. There is a story of a radical Gnostic who used to wash himself in his own saliva in order to have as little contact as possible with creation. Gnosticism is the worship of an alien God by those alienated from the world.

The novelty of Dick’s Gnosticism is that the divine is alleged to communicate with us through information. This is a persistent theme in Dick, and he refers to the universe as information and even Christ as information. Such information has a kind of electrostatic life connected to the theory of what he calls orthogonal time. The latter is a rich and strange idea of time that is completely at odds with the standard, linear conception, which goes back to Aristotle, as a sequence of now-points extending from the future through the present and into the past. Dick explains orthogonal time as a circle that contains everything rather than a line both of whose ends disappear in infinity. In an arresting image, Dick claims that orthogonal time contains, “Everything which was, just as grooves on an LP contain that part of the music which has already been played; they don’t disappear after the stylus tracks them.”

It is like that seemingly endless final chord in the Beatles’ “A Day in the Life” that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.

[div class=attrib]Read the entire article after the jump.[end-div]

Heinz and the Clear Glass Bottle

[div class=attrib]From Anthropology in Practice:[end-div]

Do me a favor: Go open your refrigerator and look at the labels on your condiments. Alternatively, if you’re at work, open your drawer and flip through your stash of condiment packets. (Don’t look at me like that. I know you have a stash. Or you know where to find one. It’s practically Office Survival 101.) Go on. I’ll wait.

So tell me, what brands are hanging out in your fridge? (Or drawer?) Hellmann’s? French’s? Heinz? Even if you aren’t a slave to brand names and you typically buy whatever is on sale or the local supermarket brand, if you’ve ever eaten out or purchased a meal to-go that required condiments, you’ve likely been exposed to one of these brands for mayonnaise, mustard, or ketchup. And given the broad reach of Heinz, I’d be surprised if the company didn’t get a mention. So what are the origins of Heinz—the man and the brand? Why do we adorn our hamburgers and hotdogs with his products over others? It boils down to trust—carefully crafted trust, which obscures the image of Heinz as a food corporation and highlights a sense of quality, home-made goods.

Henry Heinz was born in 1844 to German immigrant parents near Pittsburgh, Pennsylvania. His father John owned a brickyard in Sharpsburg, and his mother Anna was a homemaker with a talent for gardening. Henry assisted both of them—in the brickyard before and after school, and in the garden when time permitted. He also sold surplus produce to local grocers. Henry proved to have quite a green thumb himself and at the age of twelve, he had his own plot, a horse, a cart, and a list of customers.

Henry’s gardening proficiency was in keeping with the times—most households were growing or otherwise making their own foods at home in the early nineteenth century, space permitting. The market for processed food was hampered by distrust in the quality offered:

Food quality and safety were growing concerns in the mid nineteenth-century cities. These issues were not new. Various local laws had mandated inspection of meat and flour exports since the colonial period. Other ordinances had regulated bread prices and ingredients, banning adulterants, such as chalk and ground beans. But as urban areas and the sources of food supplying these areas expanded, older controls weakened. Public anxiety about contaminated food, including milk, meat, eggs, and butter mounted. So, too, did worries about adulterated chocolate, sugar, vinegar, molasses, and other foods.

Contaminants included lead (in peppers and mustard) and ground stone (in flour and sugar). So it’s not surprising that people were hesitant about purchasing pre-packaged products. However, American society was on the brink of a social change that would make people more receptive to processed foods: industrialization was accelerating. As a result, an increase in urbanization reduced the amount of space available for gardens and livestock, incomes rose so that more people could afford prepared foods, and women’s roles shifted to allow for wage labor. In fact, between 1859 and 1899, the output of the food processing industry expanded 1500%, and by 1900, manufactured food comprised about a third of commodities produced in the US.

So what led the way for this adoption of packaged foods? Believe it or not, horseradish.

Horseradish was particularly popular among English and German immigrant communities. It was used to flavor potatoes, cabbage, bread, meats, and fish—and some people even attributed medicinal properties to the condiment. It was also extremely time consuming to make: the root had to be grated, packed in vinegar and spices, and sealed in jars or pots. The potential market for prepared horseradish existed, but customers were suspicious of the contents of the green and brown glass bottles that served as packaging. Turnip and wood-fibers were popular fillers, and the opaque coloring of the bottles made it hard to judge the caliber of the contents.

Heinz understood this—and saw the potential for selling consumers, especially women—something that they desperately wanted: time. In his teens, he began to bottle horseradish using his mother’s recipe—without fillers—in clear glass, and sold his products to local grocers and hotel owners. He emphasized the purity of his product and noted he had nothing to hide because he used clear glass so you could view the contents of his product. His strategy worked: By 1861, he was growing three and a half acres of horseradish to meet demand, and had made $2400.00 by year’s end (roughly $93,000.00 in 2012).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Henry J. Heinz (1844-1919). Courtesy of Wikipedia.[end-div]

Whitewashing Prejudice One Word at a Time

[div class=attrib]From Salon:[end-div]

The news of recent research documenting how readers identify with the main characters in stories has mostly been taken as confirmation of the value of literary role models. Lisa Libby, an assistant professor at Ohio State University and co-author of a study published in the Journal of Personality and Social Psychology, explained that subjects who read a short story in which the protagonist overcomes obstacles in order to vote were more likely to vote themselves several days later.

The suggestibility of readers isn’t news. Johann Wolfgang von Goethe’s novel of a sensitive young man destroyed by unrequited love, “The Sorrows of Young Werther,” inspired a rash of suicides by would-be Werthers in the late 1700s. Jack Kerouac has launched a thousand road trips. Still, this is part of science’s job: Running empirical tests on common knowledge — if for no other reason than because common knowledge (and common sense) is often wrong.

A far more unsettling finding is buried in this otherwise up-with-reading news item. The Ohio State researchers gave 70 heterosexual male readers stories about a college student much like themselves. In one version, the character was straight. In another, the character is described as gay early in the story. In a third version the character is gay, but this isn’t revealed until near the end. In each case, the readers’ “experience-taking” — the name these researchers have given to the act of immersing oneself in the perspective, thoughts and emotions of a story’s protagonist — was measured.

The straight readers were far more likely to take on the experience of the main character if they weren’t told until late in the story that he was different from themselves. This, too, is not so surprising. Human beings are notorious for extending more of their sympathy to people they perceive as being of their own kind. But the researchers also found that readers of the “gay-late” story showed “significantly more favorable attitudes toward homosexuals” than the other two groups of readers, and that they were less likely to attribute stereotypically gay traits, such as effeminacy, to the main character. The “gay-late” story actually reduced their biases (conscious or not) against gays, and made them more empathetic. Similar results were found when white readers were given stories about black characters to read.

What can we do with this information? If we subscribe to the idea that literature ought to improve people’s characters — and that’s the sentiment that seems to be lurking behind the study itself — then perhaps authors and publishers should be encouraged to conceal a main character’s race or sexual orientation from readers until they become invested in him or her. Who knows how much J.K. Rowling’s revelation that Albus Dumbledore is gay, announced after the publication of the final Harry Potter book, has helped to combat homophobia? (Although I confess that I find it hard to believe there were that many homophobic Potter fans in the first place.)

[div class=attrib]Read the entire article after the jump.[end-div]

Men are From LinkedIn, Women are From Pinterest

No surprise. Women and men use online social networks differently. A new study of online behavior by researchers in Vienna, Austria, shows that the sexes organize their networks very differently and for different reasons.

[div class=attrib]From Technology Review:[end-div]

One of the interesting insights that social networks offer is the difference between male and female behaviour.

In the past, behavioural differences have been hard to measure. Experiments could only be done on limited numbers of individuals and even then, the process of measurement often distorted people’s behaviour.

That’s all changed with the advent of massive online participation in gaming, professional and friendship  networks. For the first time, it has become possible to quantify exactly how the genders differ in their approach to things like risk and communication.

Gender-specific studies are surprisingly rare, however. Nevertheless, a growing body of evidence is emerging that social networks reflect many of the social and evolutionary differences that we’ve long suspected.

Earlier this year, for example, we looked at a remarkable study of a mobile phone network that demonstrated the different reproductive strategies that men and women employ throughout their lives, as revealed by how often they call friends, family and potential mates.

Today, Michael Szell and Stefan Thurner at the Medical University of Vienna in Austria say they’ve found significant differences in the way men and women manage their social networks in an online game called Pardus with over 300,000 players.

In this game, players  explore various solar systems in a virtual universe. On the way, they can mark other players as friends or enemies, exchange messages, gain wealth by trading  or doing battle but can also be killed.

The interesting thing about online games is that almost every action of every player is recorded, mostly without the players being consciously aware of this. That means measurement bias is minimal.

The networks of friends and enemies that are set up also differ in an important way from those on social networking sites such as Facebook. That’s because players can neither see nor influence other players’ networks. This prevents the kind of clustering and herding behaviour that sometimes dominates  other social networks.

Szell and Thurner say the data reveals clear and significant differences between men and women in Pardus.

For example, men and women  interact with the opposite sex differently.  “Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females,” say Szell and Thurner.

Women are also significantly more risk averse than men as measured by the amount of fighting they engage in and their likelihood of dying.

They are also more likely to be friends with each other than men.

These results are more or less as expected. More surprising is the finding that women tend to be more wealthy than men, probably because they engage more in economic than destructive behaviour.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of InformationWeek.[end-div]

What Happened to TED?

No, not Ted Nugent or Ted Koppel or Ted Turner; we are talking about TED.

Alex Pareene over at Salon offers a well-rounded critique of TED. TED is a global forum for “ideas worth spreading”, centered on annual conferences loosely woven around the themes of technology, entertainment and design (hence TED).

Richard Wurman started TED in 1984 as a self-congratulatory networking event for Silicon Valley insiders. Since changing hands in 2002, TED has grown into a worldwide brand, but it remains self-congratulatory, only more exclusive. Currently, it costs $6,000 annually to be admitted to the elite idea-sharing club.

By way of background, TED’s mission statement follows:

We believe passionately in the power of ideas to change attitudes, lives and ultimately, the world. So we’re building here a clearinghouse that offers free knowledge and inspiration from the world’s most inspired thinkers, and also a community of curious souls to engage with ideas and each other.

[div class=attrib]From Salon:[end-div]

There was a bit of a scandal last week when it was reported that a TED Talk on income inequality had been censored. That turned out to be not quite the entire story. Nick Hanauer, a venture capitalist with a book out on income inequality, was invited to speak at a TED function. He spoke for a few minutes, making the argument that rich people like himself are not in fact job creators and that they should be taxed at a higher rate.

The talk seemed reasonably well-received by the audience, but TED “curator” Chris Anderson told Hanauer that it would not be featured on TED’s site, in part because the audience response was mixed but also because it was too political and this was an “election year.”

Hanauer had his PR people go to the press immediately and accused TED of censorship, which is obnoxious — TED didn’t have to host his talk, obviously, and his talk was not hugely revelatory for anyone familiar with recent writings on income inequity from a variety of experts — but Anderson’s responses were still a good distillation of TED’s ideology.

In case you’re unfamiliar with TED, it is a series of short lectures on a variety of subjects that stream on the Internet, for free. That’s it, really, or at least that is all that TED is to most of the people who have even heard of it. For an elite few, though, TED is something more: a lifestyle, an ethos, a bunch of overpriced networking events featuring live entertainment from smart and occasionally famous people.

Before streaming video, TED was a conference — it is not named for a person, but stands for “technology, entertainment and design” — organized by celebrated “information architect” (fancy graphic designer) Richard Saul Wurman. Wurman sold the conference, in 2002, to a nonprofit foundation started and run by former publisher and longtime do-gooder Chris Anderson (not the Chris Anderson of Wired). Anderson grew TED from a woolly conference for rich Silicon Valley millionaire nerds to a giant global brand. It has since become a much more exclusive, expensive elite networking experience with a much more prominent public face — the little streaming videos of lectures.

It’s even franchising — “TEDx” events are licensed third-party TED-style conferences largely unaffiliated with TED proper — and while TED is run by a nonprofit, it brings in a tremendous amount of money from its members and corporate sponsorships. At this point TED is a massive, money-soaked orgy of self-congratulatory futurism, with multiple events worldwide, awards and grants to TED-certified high achievers, and a list of speakers that would cost a fortune if they didn’t agree to do it for free out of public-spiritedness.

According to a 2010 piece in Fast Company, the trade journal of the breathless bullshit industry, the people behind TED are “creating a new Harvard — the first new top-prestige education brand in more than 100 years.” Well! That’s certainly saying… something. (What it’s mostly saying is “This is a Fast Company story about some overhyped Internet thing.”)

To even attend a TED conference requires not just a donation of between $7,500 and $125,000, but also a complicated admissions process in which the TED people determine whether you’re TED material; so, as Maura Johnston says, maybe it’s got more in common with Harvard than is initially apparent.

Strip away the hype and you’re left with a reasonably good video podcast with delusions of grandeur. For most of the millions of people who watch TED videos at the office, it’s a middlebrow diversion and a source of factoids to use on your friends. Except TED thinks it’s changing the world, like if “This American Life” suddenly mistook itself for Doctors Without Borders.

The model for your standard TED talk is a late-period Malcolm Gladwell book chapter. Common tropes include:

  • Drastically oversimplified explanations of complex problems.
  • Technologically utopian solutions to said complex problems.
  • Unconventional (and unconvincing) explanations of the origins of said complex problems.

  • Staggeringly obvious observations presented as mind-blowing new insights.

What’s most important is a sort of genial feel-good sense that everything will be OK, thanks in large part to the brilliance and beneficence of TED conference attendees. (Well, that and a bit of Vegas magician-with-PowerPoint stagecraft.)

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Multi-millionaire Nick Hanauer delivers a speech at TED Talks. Courtesy of Time.[end-div]

Human Evolution: Stalled

It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?

[div class=attrib]From the Wall Street Journal:[end-div]

If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.

I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).

More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.

Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.

Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.

Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.

So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The “Robot Evolution”. Courtesy of STRK3.[end-div]

Facebook: What Next?

Yawn…

The Facebook IPO (insider profit opportunity rather than Initial Public Offering) finally came and went. Much like its 900 million members, Facebook’s executives managed to garner enough fleeting “likes” from their Wall Street road show to ensure short-term hype and big returns for key insiders. But beneath the hyperbole lies a basic question that goes to the heart of its stratospheric valuation: Does Facebook have a long-term strategy beyond the rapidly deflating ad revenue model?

[div class=attrib]From Technology Review:[end-div]

Facebook is not only on course to go bust, but will take the rest of the ad-supported Web with it.

Given its vast cash reserves and the glacial pace of business reckonings, that will sound hyperbolic. But that doesn’t mean it isn’t true.

At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.

The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

At the same time, network technology allows advertisers to more precisely locate and assemble audiences outside of branded channels. Instead of having to go to CNN for your audience, a generic CNN-like audience can be assembled outside CNN’s walls and without the CNN-brand markup. This has resulted in the now famous and cruelly accurate formulation that $10 of offline advertising becomes $1 online.

I don’t know anyone in the ad-Web business who isn’t engaged in a relentless, demoralizing, no-exit operation to realign costs with falling per-user revenues, or who isn’t manically inflating traffic to compensate for ever-lower per-user value.

Facebook, however, has convinced large numbers of otherwise intelligent people that the magic of the medium will reinvent advertising in a heretofore unimaginably profitable way, or that the company will create something new that isn’t advertising, which will produce even more wonderful profits. But at a forward profit-to-earnings ratio of 56 (as of the close of trading on May 21), these innovations will have to be something like alchemy to make the company worth its sticker price. For comparison, Google trades at a forward P/E ratio of 12. (To gauge how much faith investors have that Google, Facebook, and other Web companies will extract value from their users, see our recent chart.)

Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.

Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.

It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.

On the other hand, Facebook is, everyone has come to agree, profoundly different from the Web. First of all, it exerts a new level of hegemonic control over users’ experiences. And it has its vast scale: 900 million, soon a billion, eventually two billion (one of the problems with the logic of constant growth at this scale and speed, of course, is that eventually it runs out of humans with computers or smart phones). And then it is social. Facebook has, in some yet-to-be-defined way, redefined something. Relationships? Media? Communications? Communities? Something big, anyway.

The subtext—an overt subtext—of the popular account of Facebook is that the network has a proprietary claim and special insight into social behavior. For enterprises and advertising agencies, it is therefore the bridge to new modes of human connection.

Expressed so baldly, this account is hardly different from what was claimed for the most aggressively boosted companies during the dot-com boom. But there is, in fact, one company that created and harnessed a transformation in behavior and business: Google. Facebook could be, or in many people’s eyes should be, something similar. Lost in such analysis is the failure to describe the application that will drive revenues.

[div class=attrib]Read the entire article after the jump.[end-div]

Something Out of Nothing

The debate on how the universe came to be rages on. Perhaps, however, we are a little closer to understanding why there is “something”, including us, rather than “nothing”.

[div class=attrib]From Scientific American:[end-div]

Why is there something rather than nothing? This is one of those profound questions that is easy to ask but difficult to answer. For millennia humans simply said, “God did it”: a creator existed before the universe and brought it into existence out of nothing. But this just begs the question of what created God—and if God does not need a creator, logic dictates that neither does the universe. Science deals with natural (not supernatural) causes and, as such, has several ways of exploring where the “something” came from.

Multiple universes. There are many multiverse hypotheses predicted from mathematics and physics that show how our universe may have been born from another universe. For example, our universe may be just one of many bubble universes with varying laws of nature. Those universes with laws similar to ours will produce stars, some of which collapse into black holes and singularities that give birth to new universes—in a manner similar to the singularity that physicists believe gave rise to the big bang.

M-theory. In his and Leonard Mlodinow’s 2010 book, The Grand Design, Stephen Hawking embraces “M-theory” (an extension of string theory that includes 11 dimensions) as “the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself.”

Quantum foam creation. The “nothing” of the vacuum of space actually consists of subatomic spacetime turbulence at extremely small distances measurable at the Planck scale—the length at which the structure of spacetime is dominated by quantum gravity. At this scale, the Heisenberg uncertainty principle allows energy to briefly decay into particles and antiparticles, thereby producing “something” from “nothing.”

Nothing is unstable. In his new book, A Universe from Nothing, cosmologist Lawrence M. Krauss attempts to link quantum physics to Einstein’s general theory of relativity to explain the origin of a universe from nothing: “In quantum gravity, universes can, and indeed always will, spontaneously appear from nothing. Such universes need not be empty, but can have matter and radiation in them, as long as the total energy, including the negative energy associated with gravity [balancing the positive energy of matter], is zero.” Furthermore, “for the closed universes that might be created through such mechanisms to last for longer than infinitesimal times, something like inflation is necessary.” Observations show that the universe is in fact flat (there is just enough matter to slow its expansion but not to halt it), has zero total energy and underwent rapid inflation, or expansion, soon after the big bang, as described by inflationary cosmology. Krauss concludes: “Quantum gravity not only appears to allow universes to be created from nothing—meaning … absence of space and time—it may require them. ‘Nothing’—in this case no space, no time, no anything!—is unstable.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: There’s Nothing Out There. Courtesy of Rolfe Kanefsky / Image Entertainment.[end-div]

Philip K. Dick – Mystic, Epileptic, Madman, Fictionalizing Philosopher

Professor of philosophy Simon Critchley has written an insightful serialized examination of Philip K. Dick’s writings. Philip K. Dick had a tragically short but richly creative writing career. Since his death thirty years ago, many of his novels have profoundly influenced contemporary culture.

[div class=attrib]From the New York Times:[end-div]

Philip K. Dick is arguably the most influential writer of science fiction in the past half century. In his short and meteoric career, he wrote 121 short stories and 45 novels. His work was successful during his lifetime but has grown exponentially in influence since his death in 1982. Dick’s work will probably be best known through the dizzyingly successful Hollywood adaptations of his work, in movies like “Blade Runner” (based on “Do Androids Dream of Electric Sheep?”), “Total Recall,” “Minority Report,” “A Scanner Darkly” and, most recently, “The Adjustment Bureau.” Yet few people might consider Dick a thinker. This would be a mistake.

Dick’s life has long passed into legend, peppered with florid tales of madness and intoxication. There are some who consider such legend something of a diversion from the character of Dick’s literary brilliance. Jonathan Lethem writes — rightly in my view — “Dick wasn’t a legend and he wasn’t mad. He lived among us and was a genius.” Yet Dick’s life continues to obtrude massively into any assessment of his work.

Everything turns here on an event that “Dickheads” refer to with the shorthand “the golden fish.” On Feb. 20, 1974, Dick was hit with the force of an extraordinary revelation after a visit to the dentist for an impacted wisdom tooth for which he had received a dose of sodium pentothal. A young woman delivered a bottle of Darvon tablets to his apartment in Fullerton, Calif. She was wearing a necklace with the pendant of a golden fish, an ancient Christian symbol that had been adopted by the Jesus counterculture movement of the late 1960s.

The fish pendant, on Dick’s account, began to emit a golden ray of light, and Dick suddenly experienced what he called, with a nod to Plato, anamnesis: the recollection or total recall of the entire sum of knowledge. Dick claimed to have access to what philosophers call the faculty of “intellectual intuition”: the direct perception by the mind of a metaphysical reality behind screens of appearance. Many philosophers since Kant have insisted that such intellectual intuition is available only to human beings in the guise of fraudulent obscurantism, usually as religious or mystical experience, like Emmanuel Swedenborg’s visions of the angelic multitude. This is what Kant called, in a lovely German word, “die Schwärmerei,” a kind of swarming enthusiasm, where the self is literally en-thused with the God, o theos. Brusquely sweeping aside the careful limitations and strictures that Kant placed on the different domains of pure and practical reason, the phenomenal and the noumenal, Dick claimed direct intuition of the ultimate nature of what he called “true reality.”

Yet the golden fish episode was just the beginning. In the following days and weeks, Dick experienced and indeed enjoyed a couple of nightlong psychedelic visions with phantasmagoric visual light shows. These hypnagogic episodes continued off and on, together with hearing voices and prophetic dreams, until his death eight years later at age 53. Many very weird things happened — too many to list here — including a clay pot that Dick called “Ho On” or “Oh Ho,” which spoke to him about various deep spiritual issues in a brash and irritable voice.

Now, was this just bad acid or good sodium pentothal? Was Dick seriously bonkers? Was he psychotic? Was he schizophrenic? (He writes, “The schizophrenic is a leap ahead that failed.”) Were the visions simply the effect of a series of brain seizures that some call T.L.E. — temporal lobe epilepsy? Could we now explain and explain away Dick’s revelatory experience by some better neuroscientific story about the brain? Perhaps. But the problem is that each of these causal explanations misses the richness of the phenomena that Dick was trying to describe and also overlooks his unique means for describing them.

The fact is that after Dick experienced the events of what he came to call “2-3-74” (the events of February and March of that year), he devoted the rest of his life to trying to understand what had happened to him. For Dick, understanding meant writing. Suffering from what we might call “chronic hypergraphia,” between 2-3-74 and his death, Dick wrote more than 8,000 pages about his experience. He often wrote all night, producing 20 single-spaced, narrow-margined pages at a go, largely handwritten and littered with extraordinary diagrams and cryptic sketches.

The unfinished mountain of paper, assembled posthumously into some 91 folders, was called “Exegesis.” The fragments were assembled by Dick’s friend Paul Williams and then sat in his garage in Glen Ellen, Calif., for the next several years. A beautifully edited selection of these texts, with a golden fish on the cover, was finally published at the end of 2011, weighing in at a mighty 950 pages. But this is still just a fraction of the whole.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Philip K. Dick by R.Crumb. Courtesy of Wired.[end-div]

Death May Not Be as Bad For You as You Think

Professor of philosophy Shelly Kagan has an interesting take on death. After all, how bad can something be for you if you’re not alive to experience it?

[div class=attrib]From the Chronicle:[end-div]

We all believe that death is bad. But why is death bad?

In thinking about this question, I am simply going to assume that the death of my body is the end of my existence as a person. (If you don’t believe me, read the first nine chapters of my book.) But if death is my end, how can it be bad for me to die? After all, once I’m dead, I don’t exist. If I don’t exist, how can being dead be bad for me?

People sometimes respond that death isn’t bad for the person who is dead. Death is bad for the survivors. But I don’t think that can be central to what’s bad about death. Compare two stories.

Story 1. Your friend is about to go on the spaceship that is leaving for 100 Earth years to explore a distant solar system. By the time the spaceship comes back, you will be long dead. Worse still, 20 minutes after the ship takes off, all radio contact between the Earth and the ship will be lost until its return. You’re losing all contact with your closest friend.

Story 2. The spaceship takes off, and then 25 minutes into the flight, it explodes and everybody on board is killed instantly.

Story 2 is worse. But why? It can’t be the separation, because we had that in Story 1. What’s worse is that your friend has died. Admittedly, that is worse for you, too, since you care about your friend. But that upsets you because it is bad for her to have died. But how can it be true that death is bad for the person who dies?

In thinking about this question, it is important to be clear about what we’re asking. In particular, we are not asking whether or how the process of dying can be bad. For I take it to be quite uncontroversial—and not at all puzzling—that the process of dying can be a painful one. But it needn’t be. I might, after all, die peacefully in my sleep. Similarly, of course, the prospect of dying can be unpleasant. But that makes sense only if we consider death itself to be bad. Yet how can sheer nonexistence be bad?

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

Despite the overall plausibility of the deprivation account, though, it’s not all smooth sailing. For one thing, if something is true, it seems as though there’s got to be a time when it’s true. Yet if death is bad for me, when is it bad for me? Not now. I’m not dead now. What about when I’m dead? But then, I won’t exist. As the ancient Greek philosopher Epicurus wrote: “So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more.”

If death has no time at which it’s bad for me, then maybe it’s not bad for me. Or perhaps we should challenge the assumption that all facts are datable. Could there be some facts that aren’t?

Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun, but he bleeds slowly, and doesn’t die until Wednesday. Meanwhile, on Tuesday, I have a heart attack and die. I killed John, but when? No answer seems satisfactory! So maybe there are undatable facts, and death’s being bad for me is one of them.

Alternatively, if all facts can be dated, we need to say when death is bad for me. So perhaps we should just insist that death is bad for me when I’m dead. But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don’t exist? Isn’t it true that something can be bad for you only if you exist? Call this idea the existence requirement.

Should we just reject the existence requirement? Admittedly, in typical cases—involving pain, blindness, losing your job, and so on—things are bad for you while you exist. But maybe sometimes you don’t even need to exist for something to be bad for you. Arguably, the comparative bads of deprivation are like that.

Unfortunately, rejecting the existence requirement has some implications that are hard to swallow. For if nonexistence can be bad for somebody even though that person doesn’t exist, then nonexistence could be bad for somebody who never exists. It can be bad for somebody who is a merely possible person, someone who could have existed but never actually gets born.

It’s hard to think about somebody like that. But let’s try, and let’s call him Larry. Now, how many of us feel sorry for Larry? Probably nobody. But if we give up on the existence requirement, we no longer have any grounds for withholding our sympathy from Larry. I’ve got it bad. I’m going to die. But Larry’s got it worse: He never gets any life at all.

Moreover, there are a lot of merely possible people. How many? Well, very roughly, given the current generation of seven billion people, there are approximately three million billion billion billion different possible offspring—almost all of whom will never exist! If you go to three generations, you end up with more possible people than there are particles in the known universe, and almost none of those people get to be born.

If we are not prepared to say that that’s a moral tragedy of unspeakable proportions, we could avoid this conclusion by going back to the existence requirement. But of course, if we do, then we’re back with Epicurus’ argument. We’ve really gotten ourselves into a philosophical pickle now, haven’t we? If I accept the existence requirement, death isn’t bad for me, which is really rather hard to believe. Alternatively, I can keep the claim that death is bad for me by giving up the existence requirement. But then I’ve got to say that it is a tragedy that Larry and the other untold billion billion billions are never born. And that seems just as unacceptable.
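A brief aside on the arithmetic: Kagan’s “three million billion billion billion” is easy to sanity-check. The short Python sketch below is not from the article; the pairing and gamete figures are rough assumptions of mine, but they land in the same ballpark, on the order of 10^33 merely possible people.

import math

population = 7_000_000_000               # current generation, per the excerpt
pairings = math.comb(population, 2)      # every possible couple, roughly 2.4e19

# Chromosome assortment alone gives each parent about 2**23 (roughly 8.4 million)
# genetically distinct gametes; this is an illustrative assumption, not a figure
# from the article.
gametes_per_parent = 2 ** 23
children_per_pairing = gametes_per_parent ** 2    # roughly 7e13 distinct offspring

possible_people = pairings * children_per_pairing
print(f"{possible_people:.1e}")          # ~1.7e+33, the same order as Kagan's figure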

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Still photograph from Ingmar Bergman’s “The Seventh Seal”. Courtesy of the Guardian.[end-div]

Reconnecting with Our Urban Selves

Christopher Mims over at the Technology Review revisits a recent study of our social networks, both real-world and online. It’s startling to see the growth in our social isolation despite the corresponding growth in technologies that increase our ability to communicate and interact with one another. Is the suburbanization of our species to blame, and can Facebook save us?

[div class=attrib]From Technology Review:[end-div]

In 2009, the Pew Internet Trust published a survey worth resurfacing for what it says about the significance of Facebook. The study was inspired by earlier research that “argued that since 1985 Americans have become more socially isolated, the size of their discussion networks has declined, and the diversity of those people with whom they discuss important matters has decreased.”

In particular, the study found that Americans have fewer close ties to those from their neighborhoods and from voluntary associations. Sociologists Miller McPherson, Lynn Smith-Lovin and Matthew Brashears suggest that new technologies, such as the internet and mobile phone, may play a role in advancing this trend.

If you read through all the results from Pew’s survey, you’ll discover two surprising things:

1. “Use of newer information and communication technologies (ICTs), such as the internet and mobile phones, is not the social change responsible for the restructuring of Americans’ core networks. We found that ownership of a mobile phone and participation in a variety of internet activities were associated with larger and more diverse core discussion networks.”

2. However, Americans on the whole are more isolated than they were in 1985. “The average size of Americans’ core discussion networks has declined since 1985; the mean network size has dropped by about one-third or a loss of approximately one confidant.” In addition, “The diversity of core discussion networks has markedly declined; discussion networks are less likely to contain non-kin – that is, people who are not relatives by blood or marriage.”

In other words, the technologies that have isolated Americans are anything but informational. It’s not hard to imagine what they are, as there’s been plenty of research on the subject. These technologies are the automobile, sprawl and suburbia. We know that neighborhoods that aren’t walkable decrease the number of our social connections and increase obesity. We know that commutes make us miserable, and that time spent in an automobile affects everything from our home life to our level of anxiety and depression.

Indirect evidence for this can be found in the demonstrated preferences of Millennials, who are opting for cell phones over automobiles and who would rather live in the urban cores their parents abandoned, ride mass transit and in all other respects physically re-integrate themselves with the sort of village life that is possible only in the most walkable portions of cities.

Meanwhile, it’s worth contemplating one of the primary factors that drove Facebook’s adoption by (soon) 1 billion people: Loneliness. Americans have less support than ever — one in eight in the Pew survey reported having no “discussion confidants.”

It’s clear that for all our fears about the ability of our mobile devices to isolate us in public, the primary way they’re actually used is for connection.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Typical suburban landscape. Courtesy of Treehugger.[end-div]

The Illusion of Free Will

A plethora of recent articles and books from the neuroscience community adds weight to the position that human free will does not exist. Our exquisitely complex brains construct a rather compelling illusion of agency; in reality, we are merely observers, held captive to impulses driven entirely by our biology. Moreover, much of this biological determinism is unavailable to our conscious minds.

James Atlas provides a recent summary of current thinking.

[div class=attrib]From the New York Times:[end-div]

WHY are we thinking so much about thinking these days? Near the top of best-seller lists around the country, you’ll find Jonah Lehrer’s “Imagine: How Creativity Works,” followed by Charles Duhigg’s book “The Power of Habit: Why We Do What We Do in Life and Business,” and somewhere in the middle, where it’s held its ground for several months, Daniel Kahneman’s “Thinking, Fast and Slow.” Recently arrived is “Subliminal: How Your Unconscious Mind Rules Your Behavior,” by Leonard Mlodinow.

It’s the invasion of the Can’t-Help-Yourself books.

Unlike most pop self-help books, these are about life as we know it — the one you can change, but only a little, and with a ton of work. Professor Kahneman, who won the Nobel Prize in economic science a decade ago, has synthesized a lifetime’s research in neurobiology, economics and psychology. “Thinking, Fast and Slow” goes to the heart of the matter: How aware are we of the invisible forces of brain chemistry, social cues and temperament that determine how we think and act? Has the concept of free will gone out the window?

These books possess a unifying theme: The choices we make in day-to-day life are prompted by impulses lodged deep within the nervous system. Not only are we not masters of our fate; we are captives of biological determinism. Once we enter the portals of the strange neuronal world known as the brain, we discover that — to put the matter plainly — we have no idea what we’re doing.

Professor Kahneman breaks down the way we process information into two modes of thinking: System 1 is intuitive, System 2 is logical. System 1 “operates automatically and quickly, with little or no effort and no sense of voluntary control.” We react to faces that we perceive as angry faster than to “happy” faces because they contain a greater possibility of danger. System 2 “allocates attention to the effortful mental activities that demand it, including complex computations.” It makes decisions — or thinks it does. We don’t notice when a person dressed in a gorilla suit appears in a film of two teams passing basketballs if we’ve been assigned the job of counting how many times one team passes the ball. We “normalize” irrational data either by organizing it to fit a made-up narrative or by ignoring it altogether.

The effect of these “cognitive biases” can be unsettling: A study of judges in Israel revealed that 65 percent of requests for parole were granted after meals, dropping steadily to zero until the judges’ “next feeding.” “Thinking, Fast and Slow” isn’t prescriptive. Professor Kahneman shows us how our minds work, not how to fiddle with what Gilbert Ryle called the ghost in the machine.

“The Power of Habit” is more proactive. Mr. Duhigg’s thesis is that we can’t change our habits, we can only acquire new ones. Alcoholics can’t stop drinking through willpower alone: they need to alter behavior — going to A.A. meetings instead of bars, for instance — that triggers the impulse to drink. “You have to keep the same cues and rewards as before, and feed the craving by inserting a new routine.”

“The Power of Habit” and “Imagine” belong to a genre that has become increasingly conspicuous over the last few years: the hortatory book, armed with highly sophisticated science, that demonstrates how we can achieve our ambitions despite our sensory cluelessness.

[div class=attrib]Read the entire article following the jump.[end-div]

British Literary Greats, Mapped

Frank Jacobs over at Strange Maps has found another really cool map. This one shows 181 British writers placed according to the part of the British Isles with which they are best associated.

[div class=attrib]From Strange Maps:[end-div]

Maps usually display only one layer of information. In most cases, they’re limited to the topography, place names and traffic infrastructure of a certain region. True, this is very useful, and in all fairness quite often it’s all we ask for. But to reduce cartography to a schematic of accessibility is to exclude the poetry of place.

Or in this case, the poetry and prose of place. This literary map of Britain is composed of the names of 181 British writers, each positioned in parts of the country with which they are associated.

This is not the best navigational tool imaginable. If you want to go from William Wordsworth to Alfred Tennyson, you could pass through Coleridge and Thomas Wyatt, slice through the Brontë sisters, step over Andrew Marvell and finally traverse Philip Larkin. All of which sounds kind of messy.

It’s also rather limited. To reduce the whole literary history of Britain to nine score and one writers can only be done by the exclusion of many other, at least equally worthy contributors to the country’s literary landscape. But completeness is not the point of this map: it is not an instrument for literary-historical navigation either. Its main purpose is sheer cartographic joy.

An added bonus is that we’re able to geo-locate some of English literature’s best-known names. Seamus Heaney is about as Irish as a pint of Guinness for breakfast on March 17th, but it’s a bit of a surprise to see C.S. Lewis placed in Northern Ireland as well. The writer of the Narnia saga is closely associated with Oxford, but was indeed born and raised in Belfast.

Thomas Hardy’s name fills out an area close to Wessex, the fictional west country where much of his stories are set. London is occupied by Ben Jonson and John Donne, among others. Hanging around the capital are Geoffrey Chaucer, who was born there, and Christopher Marlowe, a native of Canterbury. The Isle of Wight is formed by the names of David Gascoyne, the surrealist poet, and John Keats, the romantic poet. Neither was born on the island, but both spent some time there.

[div class=attrib]Read the entire article after the jump.[end-div]

Humanity Becoming “Nicer”

Peter Singer, Professor of Bioethics at Princeton, lends support to Steven Pinker’s recent arguments that our current era is less violent and more peaceful than any previous period of human existence.

[div class=attrib]From Project Syndicate:[end-div]

With daily headlines focusing on war, terrorism, and the abuses of repressive governments, and religious leaders frequently bemoaning declining standards of public and private behavior, it is easy to get the impression that we are witnessing a moral collapse. But I think that we have grounds to be optimistic about the future.

Thirty years ago, I wrote a book called The Expanding Circle, in which I asserted that, historically, the circle of beings to whom we extend moral consideration has widened, first from the tribe to the nation, then to the race or ethnic group, then to all human beings, and, finally, to non-human animals. That, surely, is moral progress.

We might think that evolution leads to the selection of individuals who think only of their own interests, and those of their kin, because genes for such traits would be more likely to spread. But, as I argued then, the development of reason could take us in a different direction.

On the one hand, having a capacity to reason confers an obvious evolutionary advantage, because it makes it possible to solve problems and to plan to avoid dangers, thereby increasing the prospects of survival. Yet, on the other hand, reason is more than a neutral problem-solving tool. It is more like an escalator: once we get on it, we are liable to be taken to places that we never expected to reach. In particular, reason enables us to see that others, previously outside the bounds of our moral view, are like us in relevant respects. Excluding them from the sphere of beings to whom we owe moral consideration can then seem arbitrary, or just plain wrong.

Steven Pinker’s recent book The Better Angels of Our Nature lends weighty support to this view.  Pinker, a professor of psychology at Harvard University, draws on recent research in history, psychology, cognitive science, economics, and sociology to argue that our era is less violent, less cruel, and more peaceful than any previous period of human existence.

The decline in violence holds for families, neighborhoods, tribes, and states. In essence, humans living today are less likely to meet a violent death, or to suffer from violence or cruelty at the hands of others, than their predecessors in any previous century.

Many people will doubt this claim. Some hold a rosy view of the simpler, supposedly more placid lives of tribal hunter-gatherers relative to our own. But examination of skeletons found at archaeological sites suggests that as many as 15% of prehistoric humans met a violent death at the hands of another person. (For comparison, in the first half of the twentieth century, the two world wars caused a death rate in Europe of not much more than 3%.)

Even those tribal peoples extolled by anthropologists as especially “gentle” – for example, the Semai of Malaysia, the Kung of the Kalahari, and the Central Arctic Inuit – turn out to have murder rates that are, relative to population, comparable to Detroit, which has one of the highest murder rates in the United States. In Europe, your chance of being murdered is now less than one-tenth, and in some countries only one-fiftieth, of what it would have been had you lived 500 years ago.

Pinker accepts that reason is an important factor underlying the trends that he describes. In support of this claim, he refers to the “Flynn Effect” – the remarkable finding by the philosopher James Flynn that since IQ tests were first administered, scores have risen considerably. The average IQ is, by definition, 100; but, to achieve that result, raw test results have to be standardized. If the average teenager today took an IQ test in 1910, he or she would score 130, which would be better than 98% of those taking the test then.

It is not easy to attribute this rise to improved education, because the aspects of the tests on which scores have risen the most do not require a good vocabulary, or even mathematical ability, but instead assess powers of abstract reasoning.
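A quick note on where “better than 98%” comes from: IQ scores are standardized to a mean of 100 with, by the usual convention, a standard deviation of 15 (the standard deviation is my assumption here, not stated in the piece), so a score of 130 sits two standard deviations above the mean. A minimal Python check of the percentile:

from math import erf, sqrt

def normal_percentile(score, mean=100.0, sd=15.0):
    """Share of a normal distribution falling below `score`."""
    z = (score - mean) / sd
    return 0.5 * (1.0 + erf(z / sqrt(2.0)))

print(f"{normal_percentile(130):.1%}")   # ~97.7%, i.e. better than roughly 98% of test-takers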

[div class=attrib]Read the entire article after the jump.[end-div]

Burning Man as Counterculture? Think Again

Fascinating insight into the Burning Man festival, courtesy of co-founder Larry Harvey. It may be more like Wall Street than Haight-Ashbury.

[div class=attrib]From Washington Post:[end-div]

Go to Burning Man, and you’ll find everything from a thunderdome battle between a couple in tiger-striped bodypaint to a man dressed as a gigantic blueberry muffin on wheels. But underneath it all, says the festival’s co-founder, Larry Harvey, is “old-fashioned capitalism.”

There’s not a corporate logo in sight at the countercultural arts festival, and nothing is for sale but ice and coffee. But at its core, Harvey believes that Burning Man hews closely to the true spirit of a free-enterprise democracy: Ingenuity is celebrated, autonomy is affirmed, and self-reliance is expected. “If you’re talking about old-fashioned, Main Street Republicanism, we could be the poster child,” says Harvey, who hastens to add that the festival is non-ideological — and doesn’t anticipate being in GOP campaign ads anytime soon.

For more than two decades, the festival has funded itself entirely through donations and ticket sales — which now go up to $300 a pop — and it’s almost never gone in the red. And on the dry, barren plains of the Nevada desert where Burning Man materializes for a week each summer, you’re judged by what you do — your art, costumes and participation in a community that expects everyone to contribute in some form and frowns upon those who’ve come simply to gawk or mooch off others.

That’s part of the message that Harvey and his colleagues have brought to Washington this week, in their meetings with congressional staffers and the Interior Department to discuss the future of Burning Man. In fact, the festival is already a known quantity on the Hill: Harvey and his colleagues have been coming to Washington for years to explain the festival to policymakers, at least in part because Burning Man takes place on public land that’s managed by the Interior Department.

In fact, Burning Man’s current challenge comes from being so immensely popular, having grown beyond 50,000 participants since it started some 20 years ago. “We’re no longer so taxed in explaining that it’s not a hippie debauch,” Harvey tells me over sodas in downtown Washington. “The word has leaked out so well that everyone now wants to come.” In fact, the Interior Department’s Bureau of Land Management, which oversees the Black Rock Desert, recently put the festival on probation for exceeding the land’s permitted crowd limits — a decision that organizers are now appealing.

Harvey now hopes to direct the enormous passion that Burning Man has stoked in its devotees over the years outside of Nevada’s Black Rock Desert, in the U.S. and overseas — the primary focus of this week’s visit to Washington. Last year, Burning Man transitioned from a limited liability corporation into a 501(c)3 nonprofit, which organizers believed was a better way to support their activities — not just for the festival, but for outside projects and collaborations in what festival-goers often refer to as “the default world.”

These days, Harvey — now in his mid-60s, dressed in a gray cowboy hat, silver western shirt, and aviator sunglasses — is just as likely to reference Richard Florida as the beatniks he once met on Haight Street. Most recently, he’s been talking with Tony Hsieh, the CEO of Zappos, who shares his vision of revitalizing Las Vegas, one of the cities hardest hit by the recent housing bust. “Urban renewal? We’re qualified. We’ve built up and torn down cities for 20 years,” says Harvey. “Cities everywhere are calling for artists, and it’s a blank slate there, blocks and blocks. … We want to extend the civil experiment — to see if business and art can coincide and not maim one another.”

Harvey points out that there have been long-standing ties between Burning Man artists and some of the private sector’s most successful executives. Its arts foundation, which distributes grants for festival projects, has received backing from everyone from real-estate magnate Christopher Bently to Mark Pincus, head of online gaming giant Zynga, as the Wall Street Journal points out. “There are a fair number of billionaires” who come to the festival every year, says Harvey, adding that some of the art is privately funded as well. In this way, Burning Man is a microcosm of San Francisco itself, stripping the bohemian artists and the Silicon Valley entrepreneurs of their usual tribal markers on the blank slate of the Nevada desert. At Burning Man, “when someone asks, ‘what do you do?’ — they meant, what did you just do” that day, he explains.

It’s one of the many apparent contradictions at the core of the festival: Paired with the philosophy of “radical self-reliance” — one that demands that participants cart out all their own food, water and shelter into a dust-filled desert for a week — is the festival’s communitarian ethos. Burning Man celebrates a gift economy that inspires random acts of generosity, and volunteer “rangers” traverse the festival to aid those in trouble. The climactic burning of the festival’s iconic “man” — along with a wooden temple filled with notes and memorials — is a ritual of togetherness and belonging for many participants. At the same time, one of the festival’s mottos is, “You have a right to hurt yourself.” “It’s the opposite of a nanny state,” Harvey says, recounting the time a participant unsuccessfully tried to sue the festival: He had walked out onto the coals after the “man” was set on fire and, predictably, burned himself.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Jailbreak.[end-div]

I Scream, You Scream, We Should All Scream for The Scream

On May 2, 2012, The Scream sold at auction in New York for just under $120,000,000.

The Scream, actually one of four slightly different originals painted by Edvard Munch, has become as iconic as the Apple or McDonald’s corporate logo. And that sums up the crass financial madness that continues to envelop the art world, and indeed most of society.

[div class=attrib]More from Jonathan Jones on Art:[end-div]

I used to like The Scream. Its sky of blood and zombie despair seemed to say so much, so honestly. Munch is a poet in colours. His pictures portray moods, most of which are dark. But sometimes on a spring day on the banks of Oslofjord he can muster a bit of uneasy delight in the world. Right now, I would rather look at his painting Ashes, a portrayal of the aftermath of sex in a Norwegian wood, or Girls on a Pier, whose lyrical longing is fraught with loneliness, than at Munch’s most famous epitome of the modern condition.

The modern art market is becoming violent and destructive. It spoils what it sells and leaves nothing but ashes. The greatest works of art are churned through a sausage mill of celebrity and chatter and become, at the end of it all, just a price tag. The Scream has been too famous for too long: too famous for its own good. Its apotheosis by this auction of the only version in private hands turns the introspection of a man in the grip of terrible visions into a number: 120,000,000. Dollars, that is. It is no longer a great painting: it is an event in the madness of our time. As all the world screams at inequality and the tyranny of a finance-led model of capitalism that is failing to provide the general wellbeing that might justify its excesses, the 1% rub salt in the wound by turning profound insights into saleable playthings.

Disgust rises at the thought of that grotesque number, so gross and absurd that it destroys actual value. Art has become the meaningless totem of a world that no longer feels the emotions it was created to express. We can no longer create art like The Scream (the closest we can get is a diamond skull). But we are good at turning the profundities of the past into price tags.

Think about it. Munch’s Scream is an unadulterated vision of modern life as a shudder of despair. Pain vibrates across the entire surface of the painting like a pool of tears rippled by a cry. Munch’s world of poverty and illness, as Sue Prideaux makes clear in her devastating biography, more than justified such a scream. His other paintings, such as The Sick Child and Evening on Karl-Johan reveal his comprehensive unhappiness and alienation that reaches its purest lucidity in The Scream.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: One of several versions of the painting “The Scream”, painted in 1893 by Edvard Munch. Courtesy of The National Gallery, Oslo, Norway.[end-div]

Before I Die…

“Before I Die” is an interactive public art project conceived by artist Candy Chang. The first installation appeared in New Orleans in February 2011, and the project has since spread to around 30 other cities across the United States and to seven other countries.

The premise is simple: install a blank, billboard-sized chalkboard in a publicly accessible space, supply a bucket of chalk, write the prompt “Before I Die…” on the board, then sit back and watch as people share their hopes and dreams.

So far the artist and her collaborators have recorded over 25,000 responses. Of those responding, 15 percent want to travel to distant lands, 10 percent wish to reconnect with family, and 1 percent want to write a book.

[div class=attrib]From the Washington  Post:[end-div]

Before they die, the citizens of Washington, D.C., would like to achieve things both monumental and minuscule. They want to eat delicious food, travel the globe and — naturally — effect political change. They want to see the Earth from the Moon. They want to meet God.

They may have carried these aspirations in their hearts and heads their whole lives, but until a chalkboard sprang up at 14th and Q streets NW, they may have never verbalized them. On the construction barrier enveloping a crumbling old laundromat in the midst of its transformation into an upscale French bistro, the billboard-size chalkboard offers baskets of chalk and a prompt: “Before I die …”

The project was conceived by artist Candy Chang, a 2011 TED fellow who created the first “Before I Die” public art installation last February in a city that has contemplated its own mortality: New Orleans. On the side of an abandoned building, Chang erected the chalkboard to help residents “remember what is important to them,” she wrote on her Web site. She let the responses — funny, poignant, morbid — roll in. “Before I Die” migrated to other cities, and with the help of other artists who borrowed her template, it has recorded the bucket-list dreams of people in more than 30 locations. The District’s arrived in Logan Circle early Sunday morning.

Chang analyzes the responses on each wall; most involve travel, she says. But in a well-traveled city like Washington, many of the hopes on the board here address politics and power. Before they die, Washingtonians would like to “Liberate Palestine,” “Be a general (Hooah!),” “Be chief of staff,” “See a transgender president,” “[Have] access to reproductive health care without stigma.” Chang also notes that the D.C. wall is more international than others she’s seen, with responses in at least seven languages.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Crystal Hamling, 27, adds her thoughts to the “Before I Die…” art wall at 14th and Q streets NW. She wrote “Make people feel loved.” Courtesy of Katherine Frey / Washington Post.[end-div]

The Connectome: Slicing and Reconstructing the Brain

[tube]1nm1i4CJGwY[/tube]

[div class=attrib]From the Guardian:[end-div]

There is a macabre brilliance to the machine in Jeff Lichtman’s laboratory at Harvard University that is worthy of a Wallace and Gromit film. In one end goes brain. Out the other comes sliced brain, courtesy of an automated arm that wields a diamond knife. The slivers of tissue drop one after another on to a conveyor belt that zips along with the merry whirr of a cine projector.

Lichtman’s machine is an automated tape-collecting lathe ultramicrotome (Atlum), which, according to the neuroscientist, is the tool of choice for this line of work. It produces long strips of sticky tape with brain slices attached, all ready to be photographed through a powerful electron microscope.

When these pictures are combined into 3D images, they reveal the inner wiring of the organ, a tangled mass of nervous spaghetti. The research by Lichtman and his co-workers has a goal in mind that is so ambitious it is almost unthinkable.

If we are ever to understand the brain in full, they say, we must know how every neuron inside is wired up.

Though fanciful, the payoff could be profound. Map out our “connectome” – following other major “ome” projects such as the genome and transcriptome – and we will lay bare the biological code of our personalities, memories, skills and susceptibilities. Somewhere in our brains is who we are.

To use an understatement heard often from scientists, the job at hand is not trivial. Lichtman’s machine slices brain tissue into exquisitely thin wafers. To turn a 1mm thick slice of brain into neural salami takes six days in a process that yields about 30,000 slices.

But chopping up the brain is the easy part. When Lichtman began this work several years ago, he calculated how long it might take to image every slice of a 1cm mouse brain. The answer was 7,000 years. “When you hear numbers like that, it does make your pulse quicken,” Lichtman said.

The human brain is another story. There are 85bn neurons in the 1.4kg (3lbs) of flesh between our ears. Each has a cell body (grey matter) and long, thin extensions called dendrites and axons (white matter) that reach out and link to others. Most neurons have lots of dendrites that receive information from other nerve cells, and one axon that branches on to other cells and sends information out.

On average, each neuron forms 10,000 connections, through synapses with other nerve cells. Altogether, Lichtman estimates there are between 100tn and 1,000tn connections between neurons.

Unlike the lung, or the kidney, where the whole organ can be understood, more or less, by grasping the role of a handful of repeating physiological structures, the brain is made of thousands of specific types of brain cell that look and behave differently. Their names – Golgi, Betz, Renshaw, Purkinje – read like a roll call of the pioneers of neuroscience.

Lichtman, who is fond of calculations that expose the magnitude of the task he has taken on, once worked out how much computer memory would be needed to store a detailed human connectome.

“To map the human brain at the cellular level, we’re talking about 1m petabytes of information. Most people think that is more than the digital content of the world right now,” he said. “I’d settle for a mouse brain, but we’re not even ready to do that. We’re still working on how to do one cubic millimetre.”
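The figures quoted above hang together if you redo the arithmetic. This small Python sketch uses only the numbers in the excerpt (30,000 slices per millimetre of tissue, 85bn neurons, about 10,000 synapses per neuron, 1m petabytes of storage); the bytes-per-connection line at the end is my own illustrative division, not a claim made by Lichtman.

slices_per_mm = 30_000
slice_thickness_nm = 1e6 / slices_per_mm        # 1 mm = 1e6 nm, so about 33 nm per slice

neurons = 85e9                                  # roughly 85bn neurons in a human brain
synapses_per_neuron = 10_000                    # average quoted in the article
connections = neurons * synapses_per_neuron     # about 8.5e14, i.e. hundreds of trillions

storage_bytes = 1e6 * 1e15                      # "1m petabytes" is 1e21 bytes, a zettabyte
bytes_per_connection = storage_bytes / connections

print(f"slice thickness ~{slice_thickness_nm:.0f} nm")
print(f"connections ~{connections:.1e}")
print(f"~{bytes_per_connection:.1e} bytes of image data per connection")   # about 1.2e+06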

He says he is about to submit a paper on mapping a minuscule volume of the mouse connectome and is working with a German company on building a multibeam microscope to speed up imaging.

For some scientists, mapping the human connectome down to the level of individual cells is verging on overkill. “If you want to study the rainforest, you don’t need to look at every leaf and every twig and measure its position and orientation. It’s too much detail,” said Olaf Sporns, a neuroscientist at Indiana University, who coined the term “connectome” in 2005.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the Connectome Project / Guardian.[end-div]

Painting the Light: The Life and Death of Thomas Kinkade

You’ve probably seen a Kinkade painting somewhere — think cute cottage, meandering stream, misty clouds, soft focus and warm light.

According to Thomas Kinkade’s company, one of his cozy, kitschy paintings (actually a photographic reproduction) could be found in one of every 20 homes in the United States. Kinkade died on April 6, 2012. With his passing, scholars of the art market are now analyzing what he left behind.

[div class=attrib]From the Guardian:[end-div]

In death, the man who at his peak claimed to be the world’s most successful living artist perhaps achieved the sort of art-world excess he craved.

On Tuesday, the coroner’s office in Santa Clara, California, announced that the death of Thomas Kinkade, the Painter of Light™, purveyor of kitsch prints to the masses, was caused by an accidental overdose of alcohol and Valium. For good measure, a legal scrap has emerged between Kinkade’s ex-wife (and trustee of his estate) and his girlfriend.

Who could have imagined that behind so many contented visions of peace, harmony and nauseating goodness lay just another story of deception, disappointment and depravity, fuelled by those ever-ready stooges, Valium and alcohol?

Kinkade was a self-made phenomenon, with his prints (according to his company) hanging in one in 20 American homes. At his height, in 2001, Kinkade generated $130m (£81m) in sales. Kinkade’s twee paintings of cod-traditional cottages, lighthouses, gardens, gazebos and gates sold by the million through a network of Thomas Kinkade galleries, owned by his company, and through a parallel franchise operation. At their peak (between 1995 and 2005) there were 350 Kinkade franchises across the US, with the bulk in his home state of California. You would see them in roadside malls in small towns, twinkly lights adorning the windows, and in bright shopping centres, sandwiched between skatewear outlets and nail bars.

But these weren’t just galleries. They were the Thomas Kinkade experience – minus the alcohol and Valium, of course. Clients would be ushered into a climate-controlled viewing room to maximise the Kinkadeness of the whole place, and their experience. Some galleries offered “master highlighters”, trained by someone not far from the master himself, to add a hand-crafted splash of paint to the desired print and so make a truly unique piece of art, as opposed to the framed photographic print that was the standard fare.

The artistic credo was expressed best in the 2008 movie Thomas Kinkade’s Christmas Cottage. Peter O’Toole, earning a crust playing Kinkade’s artistic mentor, urges the young painter to “Paint the light, Thomas! Paint the light!”.

Kinkade’s art also went beyond galleries through the “Thomas Kinkade lifestyle brand”. This wasn’t just the usual art gallery giftshop schlock: Kinkade sealed a tie-in with La-Z-Boy furniture (home of the big butt recliner) for a Kinkade-inspired range of furniture. But arguably his only great artwork was “The Village, a Thomas Kinkade Community”, unveiled in 2001. A 101-home development in Vallejo, outside San Francisco, operating under the slogan: “Calm, not chaos. Peace, not pressure,” the village offers four house designs, each named after one of Kinkade’s daughters. Plans for further housing developments, alas, fell foul of the housing crisis.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]