Philip K. Dick – Future Gnostic

Simon Critchley, professor of philosophy, continues his serialized analysis of Philip K. Dick. Part I first appeared here. Part II examines the events around 2-3-74 that led to Dick’s 8,000-page Gnostic treatise “Exegesis”.

[div class=attrib]From the New York Times:[end-div]

In the previous post, we looked at the consequences and possible philosophic import of the events of February and March of 1974 (also known as 2-3-74) in the life and work of Philip K. Dick, a period in which a dose of sodium pentothal, a light-emitting fish pendant and decades of fiction writing and quasi-philosophic activity came together in a revelation that led to Dick’s 8,000-page “Exegesis.”

So, what is the nature of the true reality that Dick claims to have intuited during psychedelic visions of 2-3-74? Does it unwind into mere structureless ranting and raving or does it suggest some tradition of thought or belief? I would argue the latter. This is where things admittedly get a little weirder in an already weird universe, so hold on tight.

In the very first lines of “Exegesis” Dick writes, “We see the Logos addressing the many living entities.” Logos is an important concept that litters the pages of “Exegesis.” It is a word with a wide variety of meaning in ancient Greek, one of which is indeed “word.” It can also mean speech, reason (in Latin, ratio) or giving an account of something. For Heraclitus, to whom Dick frequently refers, logos is the universal law that governs the cosmos of which most human beings are somnolently ignorant. Dick certainly has this latter meaning in mind, but — most important — logos refers to the opening of John’s Gospel, “In the beginning was the word” (logos), where the word becomes flesh in the person of Christ.

But the core of Dick’s vision is not quite Christian in the traditional sense; it is Gnostical: it is the mystical intellection, at its highest moment a fusion with a transmundane or alien God who is identified with logos and who can communicate with human beings in the form of a ray of light or, in Dick’s case, hallucinatory visions.

There is a tension throughout “Exegesis” between a monistic view of the cosmos (where there is just one substance in the universe, which can be seen in Dick’s references to Spinoza’s idea of God as nature, Whitehead’s idea of reality as process and Hegel’s dialectic where “the true is the whole”) and a dualistic or Gnostical view of the cosmos, with two cosmic forces in conflict, one malevolent and the other benevolent. The way I read Dick, the latter view wins out. This means that the visible, phenomenal world is fallen and indeed a kind of prison cell, cage or cave.

Christianity, lest it be forgotten, is a metaphysical monism where it is the obligation of every Christian to love every aspect of creation – even the foulest and smelliest – because it is the work of God. Evil is nothing substantial because if it were it would have to be caused by God, who is by definition good. Against this, Gnosticism declares a radical dualism between the false God who created this world – who is usually called the “demiurge” – and the true God who is unknown and alien to this world. But for the Gnostic, evil is substantial and its evidence is the world. There is a story of a radical Gnostic who used to wash himself in his own saliva in order to have as little contact as possible with creation. Gnosticism is the worship of an alien God by those alienated from the world.

The novelty of Dick’s Gnosticism is that the divine is alleged to communicate with us through information. This is a persistent theme in Dick, and he refers to the universe as information and even Christ as information. Such information has a kind of electrostatic life connected to the theory of what he calls orthogonal time. The latter is a rich and strange idea of time that is completely at odds with the standard, linear conception, which goes back to Aristotle, as a sequence of now-points extending from the future through the present and into the past. Dick explains orthogonal time as a circle that contains everything rather than a line both of whose ends disappear in infinity. In an arresting image, Dick claims that orthogonal time contains, “Everything which was, just as grooves on an LP contain that part of the music which has already been played; they don’t disappear after the stylus tracks them.”

It is like that seemingly endless final chord in the Beatles’ “A Day in the Life” that gathers more and more momentum and musical complexity as it decays. In other words, orthogonal time permits total recall.

[div class=attrib]Read the entire article after the jump.[end-div]

Heinz and the Clear Glass Bottle

[div class=attrib]From Anthropology in Practice:[end-div]

Do me a favor: Go open your refrigerator and look at the labels on your condiments. Alternatively, if you’re at work, open your drawer and flip through your stash of condiment packets. (Don’t look at me like that. I know you have a stash. Or you know where to find one. It’s practically Office Survival 101.) Go on. I’ll wait.

So tell me, what brands are hanging out in your fridge? (Or drawer?) Hellmann’s? French’s? Heinz? Even if you aren’t a slave to brand names and you typically buy whatever is on sale or the local supermarket brand, if you’ve ever eaten out or purchased a meal to-go that required condiments, you’ve likely been exposed to one of these brands for mayonnaise, mustard, or ketchup. And given the broad reach of Heinz, I’d be surprised if the company didn’t get a mention. So what are the origins of Heinz—the man and the brand? Why do we adorn our hamburgers and hotdogs with his products over others? It boils down to trust—carefully crafted trust, which obscures the image of Heinz as a food corporation and highlights a sense of quality, home-made goods.

Henry Heinz was born in 1844 to German immigrant parents near Pittsburgh, Pennsylvania. His father John owned a brickyard in Sharpsburg, and his mother Anna was a homemaker with a talent for gardening. Henry assisted both of them—in the brickyard before and after school, and in the garden when time permitted. He also sold surplus produce to local grocers. Henry proved to have quite a green thumb himself and at the age of twelve, he had his own plot, a horse, a cart, and a list of customers.

Henry’s gardening proficiency was in keeping with the times—most households were growing or otherwise making their own foods at home in the early nineteenth century, space permitting. The market for processed food was hampered by distrust in the quality offered:

Food quality and safety were growing concerns in the mid nineteenth-century cities. These issues were not new. Various local laws had mandated inspection of meat and flour exports since the colonial period. Other ordinances had regulated bread prices and ingredients, banning adulterants, such as chalk and ground beans. But as urban areas and the sources of food supplying these areas expanded, older controls weakened. Public anxiety about contaminated food, including milk, meat, eggs, and butter mounted. So, too, did worries about adulterated chocolate, sugar, vinegar, molasses, and other foods.

Contaminants included lead (in peppers and mustard) and ground stone (in flour and sugar). So it’s not surprising that people were hesitant about purchasing pre-packaged products. However, American society was on the brink of a social change that would make people more receptive to processed foods: industrialization was accelerating. As a result, an increase in urbanization reduced the amount of space available for gardens and livestock, incomes rose so that more people could afford prepared foods, and women’s roles shifted to allow for wage labor. In fact, between 1859 and 1899, the output of the food processing industry expanded 1500%, and by 1900, manufactured food comprised about a third of commodities produced in the US.

So what led the way for this adoption of packaged foods? Believe it or not, horseradish.

Horseradish was particularly popular among English and German immigrant communities. It was used to flavor potatoes, cabbage, bread, meats, and fish—and some people even attributed medicinal properties to the condiment. It was also extremely time consuming to make: the root had to be grated, packed in vinegar and spices, and sealed in jars or pots. The potential market for prepared horseradish existed, but customers were suspicious of the contents of the green and brown glass bottles that served as packaging. Turnip and wood-fibers were popular fillers, and the opaque coloring of the bottles made it hard to judge the caliber of the contents.

Heinz understood this—and saw the potential for selling consumers, especially women, something that they desperately wanted: time. In his teens, he began to bottle horseradish using his mother’s recipe—without fillers—in clear glass, and sold his products to local grocers and hotel owners. He emphasized the purity of his product, noting that he had nothing to hide: the clear glass let buyers see exactly what they were getting. His strategy worked: By 1861, he was growing three and a half acres of horseradish to meet demand, and had made $2,400 by year’s end (roughly $93,000 in 2012).

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Henry J. Heinz (1844-1919). Courtesy of Wikipedia.[end-div]

Whitewashing Prejudice One Word at a Time

[div class=attrib]From Salon:[end-div]

The news of recent research documenting how readers identify with the main characters in stories has mostly been taken as confirmation of the value of literary role models. Lisa Libby, an assistant professor at Ohio State University and co-author of a study published in the Journal of Personality and Social Psychology, explained that subjects who read a short story in which the protagonist overcomes obstacles in order to vote were more likely to vote themselves several days later.

The suggestibility of readers isn’t news. Johann Wolfgang von Goethe’s novel of a sensitive young man destroyed by unrequited love, “The Sorrows of Young Werther,” inspired a rash of suicides by would-be Werthers in the late 1700s. Jack Kerouac has launched a thousand road trips. Still, this is part of science’s job: Running empirical tests on common knowledge — if for no other reason than because common knowledge (and common sense) is often wrong.

A far more unsettling finding is buried in this otherwise up-with-reading news item. The Ohio State researchers gave 70 heterosexual male readers stories about a college student much like themselves. In one version, the character is straight. In another, the character is described as gay early in the story. In a third version, the character is gay, but this isn’t revealed until near the end. In each case, the readers’ “experience-taking” — the name these researchers have given to the act of immersing oneself in the perspective, thoughts and emotions of a story’s protagonist — was measured.

The straight readers were far more likely to take on the experience of the main character if they weren’t told until late in the story that he was different from themselves. This, too, is not so surprising. Human beings are notorious for extending more of their sympathy to people they perceive as being of their own kind. But the researchers also found that readers of the “gay-late” story showed “significantly more favorable attitudes toward homosexuals” than the other two groups of readers, and that they were less likely to attribute stereotypically gay traits, such as effeminacy, to the main character. The “gay-late” story actually reduced their biases (conscious or not) against gays, and made them more empathetic. Similar results were found when white readers were given stories about black characters to read.

What can we do with this information? If we subscribe to the idea that literature ought to improve people’s characters — and that’s the sentiment that seems to be lurking behind the study itself — then perhaps authors and publishers should be encouraged to conceal a main character’s race or sexual orientation from readers until they become invested in him or her. Who knows how much J.K. Rowling’s revelation that Albus Dumbledore is gay, announced after the publication of the final Harry Potter book, has helped to combat homophobia? (Although I confess that I find it hard to believe there were that many homophobic Potter fans in the first place.)

[div class=attrib]Read the entire article after the jump.[end-div]

Men are From LinkedIn, Women are From Pinterest

No surprise. Women and men use online social networks differently. A new study of online behavior by researchers in Vienna, Austria, shows that the sexes organize their networks very differently and for different reasons.

[div class=attrib]From Technology Review:[end-div]

One of the interesting insights that social networks offer is the difference between male and female behaviour.

In the past, behavioural differences have been hard to measure. Experiments could only be done on limited numbers of individuals and even then, the process of measurement often distorted people’s behaviour.

That’s all changed with the advent of massive online participation in gaming, professional and friendship  networks. For the first time, it has become possible to quantify exactly how the genders differ in their approach to things like risk and communication.

Gender-specific studies are surprisingly rare, however. Nevertheless, a growing body of evidence is emerging that social networks reflect many of the social and evolutionary differences that we’ve long suspected.

Earlier this year, for example, we looked at a remarkable study of a mobile phone network that demonstrated the different reproductive strategies that men and women employ throughout their lives, as revealed by how often they call friends, family and potential mates.

Today, Michael Szell and Stefan Thurner at the Medical University of Vienna in Austria say they’ve found significant differences in the way men and women manage their social networks in an online game called Pardus with over 300,000 players.

In this game, players explore various solar systems in a virtual universe. On the way, they can mark other players as friends or enemies, exchange messages, and gain wealth by trading or doing battle, but they can also be killed.

The interesting thing about online games is that almost every action of every player is recorded, mostly without the players being consciously aware of this. That means measurement bias is minimal.

The networks of friends and enemies that are set up also differ in an important way from those on social networking sites such as Facebook. That’s because players can neither see nor influence other players’ networks. This prevents the kind of clustering and herding behaviour that sometimes dominates  other social networks.

Szell and Thurner say the data reveals clear and significant differences between men and women in Pardus.

For example, men and women  interact with the opposite sex differently.  “Males reciprocate friendship requests from females faster than vice versa and hesitate to reciprocate hostile actions of females,” say Szell and Thurner.
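Because Pardus logs every action with a timestamp, a metric such as how quickly each sex reciprocates friendship requests reduces to a single pass over the event stream. Below is a minimal sketch of that kind of measurement; the log format, field names and toy events are hypothetical illustrations, not Szell and Thurner's actual data or code.

```python
# Sketch: measuring friendship-request reciprocation latency by sex from a
# timestamped event log. Log format, field names and toy events are
# hypothetical illustrations, not Szell and Thurner's actual data or code.
from collections import defaultdict
from statistics import median

# Each event: (timestamp, actor, actor_sex, target, action)
events = [
    (10, "anna", "F", "bob",  "friend_request"),
    (12, "bob",  "M", "anna", "friend_request"),  # male reciprocates after 2 ticks
    (20, "carl", "M", "dana", "friend_request"),
    (35, "dana", "F", "carl", "friend_request"),  # female reciprocates after 15 ticks
]

pending = {}                        # (requester, target) -> time the request was sent
latency_by_sex = defaultdict(list)  # sex of the reciprocating player -> latencies

for t, actor, sex, target, action in events:
    if action != "friend_request":
        continue
    if (target, actor) in pending:  # this request answers an earlier one in reverse
        latency_by_sex[sex].append(t - pending.pop((target, actor)))
    else:
        pending[(actor, target)] = t

for sex, latencies in sorted(latency_by_sex.items()):
    print(sex, median(latencies))   # on the toy data: F 15, M 2
```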

Women are also significantly more risk averse than men as measured by the amount of fighting they engage in and their likelihood of dying.

They are also more likely to be friends with each other than men.

These results are more or less as expected. More surprising is the finding that women tend to be more wealthy than men, probably because they engage more in economic than destructive behaviour.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of InformationWeek.[end-div]

What Happened to TED?

No, not Ted Nugent or Ted Koppel or Ted Turner; we are talking about the TED.

Alex Pareene over at Salon offers a well-rounded critique of TED. TED is a global forum for “ideas worth spreading,” built around annual conferences loosely organized under the themes of technology, entertainment and design (hence the name).

Richard Wurman started TED in 1984 as a self-congratulatory networking event for Silicon Valley insiders. Since changing hands in 2002, TED has grown into a worldwide brand, but it remains self-congratulatory, only more exclusive. Currently, it costs $6,000 annually to be admitted to the elite idea-sharing club.

By way of background, TED’s mission statement follows:

We believe passionately in the power of ideas to change attitudes, lives and ultimately, the world. So we’re building here a clearinghouse that offers free knowledge and inspiration from the world’s most inspired thinkers, and also a community of curious souls to engage with ideas and each other.

[div class=attrib]From Salon:[end-div]

There was a bit of a scandal last week when it was reported that a TED Talk on income inequality had been censored. That turned out to be not quite the entire story. Nick Hanauer, a venture capitalist with a book out on income inequality, was invited to speak at a TED function. He spoke for a few minutes, making the argument that rich people like himself are not in fact job creators and that they should be taxed at a higher rate.

The talk seemed reasonably well-received by the audience, but TED “curator” Chris Anderson told Hanauer that it would not be featured on TED’s site, in part because the audience response was mixed but also because it was too political and this was an “election year.”

Hanauer had his PR people go to the press immediately and accused TED of censorship, which is obnoxious — TED didn’t have to host his talk, obviously, and his talk was not hugely revelatory for anyone familiar with recent writings on income inequity from a variety of experts — but Anderson’s responses were still a good distillation of TED’s ideology.

In case you’re unfamiliar with TED, it is a series of short lectures on a variety of subjects that stream on the Internet, for free. That’s it, really, or at least that is all that TED is to most of the people who have even heard of it. For an elite few, though, TED is something more: a lifestyle, an ethos, a bunch of overpriced networking events featuring live entertainment from smart and occasionally famous people.

Before streaming video, TED was a conference — it is not named for a person, but stands for “technology, entertainment and design” — organized by celebrated “information architect” (fancy graphic designer) Richard Saul Wurman. Wurman sold the conference, in 2002, to a nonprofit foundation started and run by former publisher and longtime do-gooder Chris Anderson (not the Chris Anderson of Wired). Anderson grew TED from a woolly conference for rich Silicon Valley millionaire nerds to a giant global brand. It has since become a much more exclusive, expensive elite networking experience with a much more prominent public face — the little streaming videos of lectures.

It’s even franchising — “TEDx” events are licensed third-party TED-style conferences largely unaffiliated with TED proper — and while TED is run by a nonprofit, it brings in a tremendous amount of money from its members and corporate sponsorships. At this point TED is a massive, money-soaked orgy of self-congratulatory futurism, with multiple events worldwide, awards and grants to TED-certified high achievers, and a list of speakers that would cost a fortune if they didn’t agree to do it for free out of public-spiritedness.

According to a 2010 piece in Fast Company, the trade journal of the breathless bullshit industry, the people behind TED are “creating a new Harvard — the first new top-prestige education brand in more than 100 years.” Well! That’s certainly saying… something. (What it’s mostly saying is “This is a Fast Company story about some overhyped Internet thing.”)

To even attend a TED conference requires not just a donation of between $7,500 and $125,000, but also a complicated admissions process in which the TED people determine whether you’re TED material; so, as Maura Johnston says, maybe it’s got more in common with Harvard than is initially apparent.

Strip away the hype and you’re left with a reasonably good video podcast with delusions of grandeur. For most of the millions of people who watch TED videos at the office, it’s a middlebrow diversion and a source of factoids to use on your friends. Except TED thinks it’s changing the world, like if “This American Life” suddenly mistook itself for Doctors Without Borders.

The model for your standard TED talk is a late-period Malcolm Gladwell book chapter. Common tropes include:

  • Drastically oversimplified explanations of complex problems.
  • Technologically utopian solutions to said complex problems.
  • Unconventional (and unconvincing) explanations of the origins of said complex problems.

  • Staggeringly obvious observations presented as mind-blowing new insights.

What’s most important is a sort of genial feel-good sense that everything will be OK, thanks in large part to the brilliance and beneficence of TED conference attendees. (Well, that and a bit of Vegas magician-with-PowerPoint stagecraft.)

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Multi-millionaire Nick Hanauer delivers a speech at TED Talks. Courtesy of Time.[end-div]

Human Evolution: Stalled

It takes no expert neuroscientist, anthropologist or evolutionary biologist to recognize that human evolution has probably stalled. After all, one only needs to observe our obsession with reality TV. Yes, evolution screeched to a halt around 1999, when reality TV hit critical mass in the mainstream public consciousness. So, what of evolution?

[div class=attrib]From the Wall Street Journal:[end-div]

If you write about genetics and evolution, one of the commonest questions you are likely to be asked at public events is whether human evolution has stopped. It is a surprisingly hard question to answer.

I’m tempted to give a flippant response, borrowed from the biologist Richard Dawkins: Since any human trait that increases the number of babies is likely to gain ground through natural selection, we can say with some confidence that incompetence in the use of contraceptives is probably on the rise (though only if those unintended babies themselves thrive enough to breed in turn).

More seriously, infertility treatment is almost certainly leading to an increase in some kinds of infertility. For example, a procedure called “intra-cytoplasmic sperm injection” allows men with immobile sperm to father children. This is an example of the “relaxation” of selection pressures caused by modern medicine. You can now inherit traits that previously prevented human beings from surviving to adulthood, procreating when they got there or caring for children thereafter. So the genetic diversity of the human genome is undoubtedly increasing.

Or it was until recently. Now, thanks to pre-implantation genetic diagnosis, parents can deliberately choose to implant embryos that lack certain deleterious mutations carried in their families, with the result that genes for Tay-Sachs, Huntington’s and other diseases are retreating in frequency. The old and overblown worry of the early eugenicists—that “bad” mutations were progressively accumulating in the species—is beginning to be addressed not by stopping people from breeding, but by allowing them to breed, safe in the knowledge that they won’t pass on painful conditions.

Still, recent analyses of the human genome reveal a huge number of rare—and thus probably fairly new—mutations. One study, by John Novembre of the University of California, Los Angeles, and his colleagues, looked at 202 genes in 14,002 people and found one genetic variant in somebody every 17 letters of DNA code, much more than expected. “Our results suggest there are many, many places in the genome where one individual, or a few individuals, have something different,” said Dr. Novembre.

Another team, led by Joshua Akey of the University of Washington, studied 1,351 people of European and 1,088 of African ancestry, sequencing 15,585 genes and locating more than a half million single-letter DNA variations. People of African descent had twice as many new mutations as people of European descent, or 762 versus 382. Dr. Akey blames the population explosion of the past 5,000 years for this increase. Not only does a larger population allow more variants; it also implies less severe selection against mildly disadvantageous genes.

So we’re evolving as a species toward greater individual (rather than racial) genetic diversity. But this isn’t what most people mean when they ask if evolution has stopped. Mainly they seem to mean: “Has brain size stopped increasing?” For a process that takes millions of years, any answer about a particular instant in time is close to meaningless. Nonetheless, the short answer is probably “yes.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: The “Robot Evolution”. Courtesy of STRK3.[end-div]

Facebook: What Next?

Yawn…

The Facebook IPO (insider profit opportunity rather than Initial Public Offering) finally came and went. Much like its 900 million members, Facebook executives managed to garner enough fleeting “likes” from its Wall Street road show to ensure temporary short-term hype and big returns for key insiders. But, beneath the hyperbole lies a basic question that goes to the heart of its stratospheric valuation: Does Facebook have a long-term strategy beyond the rapidly deflating ad revenue model?

[div class=attrib]From Technology Review:[end-div]

Facebook is not only on course to go bust, but will take the rest of the ad-supported Web with it.

Given its vast cash reserves and the glacial pace of business reckonings, that will sound hyperbolic. But that doesn’t mean it isn’t true.

At the heart of the Internet business is one of the great business fallacies of our time: that the Web, with all its targeting abilities, can be a more efficient, and hence more profitable, advertising medium than traditional media. Facebook, with its 900 million users, valuation of around $100 billion, and the bulk of its business in traditional display advertising, is now at the heart of the heart of the fallacy.

The daily and stubborn reality for everybody building businesses on the strength of Web advertising is that the value of digital ads decreases every quarter, a consequence of their simultaneous ineffectiveness and efficiency. The nature of people’s behavior on the Web and of how they interact with advertising, as well as the character of those ads themselves and their inability to command real attention, has meant a marked decline in advertising’s impact.

At the same time, network technology allows advertisers to more precisely locate and assemble audiences outside of branded channels. Instead of having to go to CNN for your audience, a generic CNN-like audience can be assembled outside CNN’s walls and without the CNN-brand markup. This has resulted in the now famous and cruelly accurate formulation that $10 of offline advertising becomes $1 online.

I don’t know anyone in the ad-Web business who isn’t engaged in a relentless, demoralizing, no-exit operation to realign costs with falling per-user revenues, or who isn’t manically inflating traffic to compensate for ever-lower per-user value.

Facebook, however, has convinced large numbers of otherwise intelligent people that the magic of the medium will reinvent advertising in a heretofore unimaginably profitable way, or that the company will create something new that isn’t advertising, which will produce even more wonderful profits. But at a forward price-to-earnings ratio of 56 (as of the close of trading on May 21), these innovations will have to be something like alchemy to make the company worth its sticker price. For comparison, Google trades at a forward P/E ratio of 12. (To gauge how much faith investors have that Google, Facebook, and other Web companies will extract value from their users, see our recent chart.)

Facebook currently derives 82 percent of its revenue from advertising. Most of that is the desultory ticky-tacky kind that litters the right side of people’s Facebook profiles. Some is the kind of sponsorship that promises users further social relationships with companies: a kind of marketing that General Motors just announced it would no longer buy.

Facebook’s answer to its critics is: pay no attention to the carping. Sure, grunt-like advertising produces the overwhelming portion of our $4 billion in revenues; and, yes, on a per-user basis, these revenues are in pretty constant decline, but this stuff is really not what we have in mind. Just wait.

It’s quite a juxtaposition of realities. On the one hand, Facebook is mired in the same relentless downward pressure of falling per-user revenues as the rest of Web-based media. The company makes a pitiful and shrinking $5 per customer per year, which puts it somewhat ahead of the Huffington Post and somewhat behind the New York Times’ digital business. (Here’s the heartbreaking truth about the difference between new media and old: even in the New York Times’ declining traditional business, a subscriber is still worth more than $1,000 a year.) Facebook’s business only grows on the unsustainable basis that it can add new customers at a faster rate than the value of individual customers declines. It is peddling as fast as it can. And the present scenario gets much worse as its users increasingly interact with the social service on mobile devices, because it is vastly harder, on a small screen, to sell ads and profitably monetize users.
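As a back-of-envelope check on the figures quoted above (a valuation of roughly $100 billion, about 900 million users and around $4 billion in revenue), the sketch below simply divides them out; it is illustrative arithmetic only, and none of the derived numbers come from the article itself.

```python
# Back-of-envelope arithmetic with the round figures quoted in the article
# (~$100 billion valuation, ~900 million users, ~$4 billion annual revenue).
# Illustrative only; the derived numbers below are not the article's.
valuation      = 100e9
users          = 900e6
annual_revenue = 4e9

value_per_user   = valuation / users        # ~$111 the market pays per member
revenue_per_user = annual_revenue / users   # ~$4.4, close to the quoted "$5 per
                                            # customer per year"
years_of_revenue_per_user = value_per_user / revenue_per_user   # ~25 years

print(round(value_per_user), round(revenue_per_user, 2), round(years_of_revenue_per_user))
```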

On the other hand, Facebook is, everyone has come to agree, profoundly different from the Web. First of all, it exerts a new level of hegemonic control over users’ experiences. And it has its vast scale: 900 million, soon a billion, eventually two billion (one of the problems with the logic of constant growth at this scale and speed, of course, is that eventually it runs out of humans with computers or smart phones). And then it is social. Facebook has, in some yet-to-be-defined way, redefined something. Relationships? Media? Communications? Communities? Something big, anyway.

The subtext—an overt subtext—of the popular account of Facebook is that the network has a proprietary claim and special insight into social behavior. For enterprises and advertising agencies, it is therefore the bridge to new modes of human connection.

Expressed so baldly, this account is hardly different from what was claimed for the most aggressively boosted companies during the dot-com boom. But there is, in fact, one company that created and harnessed a transformation in behavior and business: Google. Facebook could be, or in many people’s eyes should be, something similar. Lost in such analysis is the failure to describe the application that will drive revenues.

[div class=attrib]Read the entire article after the jump.[end-div]

Something Out of Nothing

The debate on how the universe came to be rages on. Perhaps, however, we are a little closer to understanding why there is “something”, including us, rather than “nothing”.

[div class=attrib]From Scientific American:[end-div]

Why is there something rather than nothing? This is one of those profound questions that is easy to ask but difficult to answer. For millennia humans simply said, “God did it”: a creator existed before the universe and brought it into existence out of nothing. But this just begs the question of what created God—and if God does not need a creator, logic dictates that neither does the universe. Science deals with natural (not supernatural) causes and, as such, has several ways of exploring where the “something” came from.

Multiple universes. There are many multiverse hypotheses predicted from mathematics and physics that show how our universe may have been born from another universe. For example, our universe may be just one of many bubble universes with varying laws of nature. Those universes with laws similar to ours will produce stars, some of which collapse into black holes and singularities that give birth to new universes—in a manner similar to the singularity that physicists believe gave rise to the big bang.

M-theory. In his and Leonard Mlodinow’s 2010 book, The Grand Design, Stephen Hawking embraces “M-theory” (an extension of string theory that includes 11 dimensions) as “the only candidate for a complete theory of the universe. If it is finite—and this has yet to be proved—it will be a model of a universe that creates itself.”

Quantum foam creation. The “nothing” of the vacuum of space actually consists of subatomic spacetime turbulence at extremely small distances measurable at the Planck scale—the length at which the structure of spacetime is dominated by quantum gravity. At this scale, the Heisenberg uncertainty principle allows energy to briefly decay into particles and antiparticles, thereby producing “something” from “nothing.”

Nothing is unstable. In his new book, A Universe from Nothing, cosmologist Lawrence M. Krauss attempts to link quantum physics to Einstein’s general theory of relativity to explain the origin of a universe from nothing: “In quantum gravity, universes can, and indeed always will, spontaneously appear from nothing. Such universes need not be empty, but can have matter and radiation in them, as long as the total energy, including the negative energy associated with gravity [balancing the positive energy of matter], is zero.” Furthermore, “for the closed universes that might be created through such mechanisms to last for longer than infinitesimal times, something like inflation is necessary.” Observations show that the universe is in fact flat (there is just enough matter to slow its expansion but not to halt it), has zero total energy and underwent rapid inflation, or expansion, soon after the big bang, as described by inflationary cosmology. Krauss concludes: “Quantum gravity not only appears to allow universes to be created from nothing—meaning … absence of space and time—it may require them. ‘Nothing’—in this case no space, no time, no anything!—is unstable.”
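For reference, the standard physics behind the terms used above (not part of the article itself) can be written down compactly: the energy-time uncertainty relation that permits the fleeting "quantum foam" fluctuations, and the Planck length and time that set the scale at which quantum gravity dominates.

\[
\Delta E \,\Delta t \;\gtrsim\; \frac{\hbar}{2},
\qquad
\ell_{P} = \sqrt{\frac{\hbar G}{c^{3}}} \approx 1.6\times 10^{-35}\ \text{m},
\qquad
t_{P} = \sqrt{\frac{\hbar G}{c^{5}}} \approx 5.4\times 10^{-44}\ \text{s}
\]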

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: There’s Nothing Out There. Courtesy of Rolfe Kanefsky / Image Entertainment.[end-div]

Philip K. Dick – Mystic, Epileptic, Madman, Fictionalizing Philosopher

Professor of philosophy Simon Critchley has an insightful examination (serialized) of Philip K. Dick’s writings. Philip K. Dick had a tragically short but richly creative writing career. Since his death thirty years ago, many of his novels have profoundly influenced contemporary culture.

[div class=attrib]From the New York Times:[end-div]

Philip K. Dick is arguably the most influential writer of science fiction in the past half century. In his short and meteoric career, he wrote 121 short stories and 45 novels. His work was successful during his lifetime but has grown exponentially in influence since his death in 1982. Dick’s work will probably be best known through the dizzyingly successful Hollywood adaptations of his work, in movies like “Blade Runner” (based on “Do Androids Dream of Electric Sheep?”), “Total Recall,” “Minority Report,” “A Scanner Darkly” and, most recently, “The Adjustment Bureau.” Yet few people might consider Dick a thinker. This would be a mistake.

Dick’s life has long passed into legend, peppered with florid tales of madness and intoxication. There are some who consider such legend something of a diversion from the character of Dick’s literary brilliance. Jonathan Lethem writes — rightly in my view — “Dick wasn’t a legend and he wasn’t mad. He lived among us and was a genius.” Yet Dick’s life continues to obtrude massively into any assessment of his work.

Everything turns here on an event that “Dickheads” refer to with the shorthand “the golden fish.” On Feb. 20, 1974, Dick was hit with the force of an extraordinary revelation after a visit to the dentist for an impacted wisdom tooth for which he had received a dose of sodium pentothal. A young woman delivered a bottle of Darvon tablets to his apartment in Fullerton, Calif. She was wearing a necklace with the pendant of a golden fish, an ancient Christian symbol that had been adopted by the Jesus counterculture movement of the late 1960s.

The fish pendant, on Dick’s account, began to emit a golden ray of light, and Dick suddenly experienced what he called, with a nod to Plato, anamnesis: the recollection or total recall of the entire sum of knowledge. Dick claimed to have access to what philosophers call the faculty of “intellectual intuition”: the direct perception by the mind of a metaphysical reality behind screens of appearance. Many philosophers since Kant have insisted that such intellectual intuition is available only to human beings in the guise of fraudulent obscurantism, usually as religious or mystical experience, like Emmanuel Swedenborg’s visions of the angelic multitude. This is what Kant called, in a lovely German word, “die Schwärmerei,” a kind of swarming enthusiasm, where the self is literally en-thused with the God, o theos. Brusquely sweeping aside the careful limitations and strictures that Kant placed on the different domains of pure and practical reason, the phenomenal and the noumenal, Dick claimed direct intuition of the ultimate nature of what he called “true reality.”

Yet the golden fish episode was just the beginning. In the following days and weeks, Dick experienced and indeed enjoyed a couple of nightlong psychedelic visions with phantasmagoric visual light shows. These hypnagogic episodes continued off and on, together with hearing voices and prophetic dreams, until his death eight years later at age 53. Many very weird things happened — too many to list here — including a clay pot that Dick called “Ho On” or “Oh Ho,” which spoke to him about various deep spiritual issues in a brash and irritable voice.

Now, was this just bad acid or good sodium pentothal? Was Dick seriously bonkers? Was he psychotic? Was he schizophrenic? (He writes, “The schizophrenic is a leap ahead that failed.”) Were the visions simply the effect of a series of brain seizures that some call T.L.E. — temporal lobe epilepsy? Could we now explain and explain away Dick’s revelatory experience by some better neuroscientific story about the brain? Perhaps. But the problem is that each of these causal explanations misses the richness of the phenomena that Dick was trying to describe and also overlooks his unique means for describing them.

The fact is that after Dick experienced the events of what he came to call “2-3-74” (the events of February and March of that year), he devoted the rest of his life to trying to understand what had happened to him. For Dick, understanding meant writing. Suffering from what we might call “chronic hypergraphia,” between 2-3-74 and his death, Dick wrote more than 8,000 pages about his experience. He often wrote all night, producing 20 single-spaced, narrow-margined pages at a go, largely handwritten and littered with extraordinary diagrams and cryptic sketches.

The unfinished mountain of paper, assembled posthumously into some 91 folders, was called “Exegesis.” The fragments were assembled by Dick’s friend Paul Williams and then sat in his garage in Glen Ellen, Calif., for the next several years. A beautifully edited selection of these texts, with a golden fish on the cover, was finally published at the end of 2011, weighing in at a mighty 950 pages. But this is still just a fraction of the whole.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Philip K. Dick by R.Crumb. Courtesy of Wired.[end-div]

Death May Not Be as Bad For You as You Think

Professor of philosophy Shelly Kagan has an interesting take on death. After all, how bad can something be for you if you’re not alive to experience it?

[div class=attrib]From the Chronicle:[end-div]

We all believe that death is bad. But why is death bad?

In thinking about this question, I am simply going to assume that the death of my body is the end of my existence as a person. (If you don’t believe me, read the first nine chapters of my book.) But if death is my end, how can it be bad for me to die? After all, once I’m dead, I don’t exist. If I don’t exist, how can being dead be bad for me?

People sometimes respond that death isn’t bad for the person who is dead. Death is bad for the survivors. But I don’t think that can be central to what’s bad about death. Compare two stories.

Story 1. Your friend is about to go on the spaceship that is leaving for 100 Earth years to explore a distant solar system. By the time the spaceship comes back, you will be long dead. Worse still, 20 minutes after the ship takes off, all radio contact between the Earth and the ship will be lost until its return. You’re losing all contact with your closest friend.

Story 2. The spaceship takes off, and then 25 minutes into the flight, it explodes and everybody on board is killed instantly.

Story 2 is worse. But why? It can’t be the separation, because we had that in Story 1. What’s worse is that your friend has died. Admittedly, that is worse for you, too, since you care about your friend. But that upsets you because it is bad for her to have died. But how can it be true that death is bad for the person who dies?

In thinking about this question, it is important to be clear about what we’re asking. In particular, we are not asking whether or how the process of dying can be bad. For I take it to be quite uncontroversial—and not at all puzzling—that the process of dying can be a painful one. But it needn’t be. I might, after all, die peacefully in my sleep. Similarly, of course, the prospect of dying can be unpleasant. But that makes sense only if we consider death itself to be bad. Yet how can sheer nonexistence be bad?

Maybe nonexistence is bad for me, not in an intrinsic way, like pain, and not in an instrumental way, like unemployment leading to poverty, which in turn leads to pain and suffering, but in a comparative way—what economists call opportunity costs. Death is bad for me in the comparative sense, because when I’m dead I lack life—more particularly, the good things in life. That explanation of death’s badness is known as the deprivation account.

Despite the overall plausibility of the deprivation account, though, it’s not all smooth sailing. For one thing, if something is true, it seems as though there’s got to be a time when it’s true. Yet if death is bad for me, when is it bad for me? Not now. I’m not dead now. What about when I’m dead? But then, I won’t exist. As the ancient Greek philosopher Epicurus wrote: “So death, the most terrifying of ills, is nothing to us, since so long as we exist, death is not with us; but when death comes, then we do not exist. It does not then concern either the living or the dead, since for the former it is not, and the latter are no more.”

If death has no time at which it’s bad for me, then maybe it’s not bad for me. Or perhaps we should challenge the assumption that all facts are datable. Could there be some facts that aren’t?

Suppose that on Monday I shoot John. I wound him with the bullet that comes out of my gun, but he bleeds slowly, and doesn’t die until Wednesday. Meanwhile, on Tuesday, I have a heart attack and die. I killed John, but when? No answer seems satisfactory! So maybe there are undatable facts, and death’s being bad for me is one of them.

Alternatively, if all facts can be dated, we need to say when death is bad for me. So perhaps we should just insist that death is bad for me when I’m dead. But that, of course, returns us to the earlier puzzle. How could death be bad for me when I don’t exist? Isn’t it true that something can be bad for you only if you exist? Call this idea the existence requirement.

Should we just reject the existence requirement? Admittedly, in typical cases—involving pain, blindness, losing your job, and so on—things are bad for you while you exist. But maybe sometimes you don’t even need to exist for something to be bad for you. Arguably, the comparative bads of deprivation are like that.

Unfortunately, rejecting the existence requirement has some implications that are hard to swallow. For if nonexistence can be bad for somebody even though that person doesn’t exist, then nonexistence could be bad for somebody who never exists. It can be bad for somebody who is a merely possible person, someone who could have existed but never actually gets born.

It’s hard to think about somebody like that. But let’s try, and let’s call him Larry. Now, how many of us feel sorry for Larry? Probably nobody. But if we give up on the existence requirement, we no longer have any grounds for withholding our sympathy from Larry. I’ve got it bad. I’m going to die. But Larry’s got it worse: He never gets any life at all.

Moreover, there are a lot of merely possible people. How many? Well, very roughly, given the current generation of seven billion people, there are approximately three million billion billion billion different possible offspring—almost all of whom will never exist! If you go to three generations, you end up with more possible people than there are particles in the known universe, and almost none of those people get to be born.
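Where might a figure like "three million billion billion billion" (about 3 x 10^33) come from? The short sketch below is one plausible reconstruction, offered as an illustration rather than as Kagan's own calculation: pair every living man with every living woman and count only the genetically distinct children each pairing could produce through chromosome assortment.

```python
# One plausible way to reach a number of the order "three million billion
# billion billion" (~3e33). Illustrative reconstruction only, not necessarily
# the calculation behind Kagan's estimate.
men = women = 3.5e9                  # roughly half of ~7 billion people each
possible_couples = men * women       # ~1.2e19 distinct man-woman pairings

# Independent assortment of 23 chromosome pairs gives each parent 2**23
# genetically distinct gametes (ignoring recombination, which only adds more).
children_per_couple = 2**23 * 2**23  # ~7.0e13 distinct possible children per couple

possible_offspring = possible_couples * children_per_couple
print(f"{possible_offspring:.1e}")   # ~8.6e32, the same order of magnitude
```

Recombination would multiply the per-couple count enormously, so if anything the quoted figure is conservative.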

If we are not prepared to say that that’s a moral tragedy of unspeakable proportions, we could avoid this conclusion by going back to the existence requirement. But of course, if we do, then we’re back with Epicurus’ argument. We’ve really gotten ourselves into a philosophical pickle now, haven’t we? If I accept the existence requirement, death isn’t bad for me, which is really rather hard to believe. Alternatively, I can keep the claim that death is bad for me by giving up the existence requirement. But then I’ve got to say that it is a tragedy that Larry and the other untold billion billion billions are never born. And that seems just as unacceptable.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Still photograph from Ingmar Bergman’s “The Seventh Seal”. Courtesy of the Guardian.[end-div]

Reconnecting with Our Urban Selves

Christopher Mims over at the Technology Review revisits a recent study of our social networks, both real-world and online. It’s startling to see the growth in our social isolation despite the corresponding growth in technologies that increase our ability to communicate and interact with one another. Is the suburbanization of our species to blame, and can Facebook save us?

[div class=attrib]From Technology Review:[end-div]

In 2009, the Pew Internet Trust published a survey worth resurfacing for what it says about the significance of Facebook. The study was inspired by earlier research that “argued that since 1985 Americans have become more socially isolated, the size of their discussion networks has declined, and the diversity of those people with whom they discuss important matters has decreased.”

In particular, the study found that Americans have fewer close ties to those from their neighborhoods and from voluntary associations. Sociologists Miller McPherson, Lynn Smith-Lovin and Matthew Brashears suggest that new technologies, such as the internet and mobile phone, may play a role in advancing this trend.

If you read through all the results from Pew’s survey, you’ll discover two surprising things:

1. “Use of newer information and communication technologies (ICTs), such as the internet and mobile phones, is not the social change responsible for the restructuring of Americans’ core networks. We found that ownership of a mobile phone and participation in a variety of internet activities were associated with larger and more diverse core discussion networks.”

2. However, Americans on the whole are more isolated than they were in 1985. “The average size of Americans’ core discussion networks has declined since 1985; the mean network size has dropped by about one-third or a loss of approximately one confidant.” In addition, “The diversity of core discussion networks has markedly declined; discussion networks are less likely to contain non-kin – that is, people who are not relatives by blood or marriage.”

In other words, the technologies that have isolated Americans are anything but informational. It’s not hard to imagine what they are, as there’s been plenty of research on the subject. These technologies are the automobile, sprawl and suburbia. We know that neighborhoods that aren’t walkable decrease the number of our social connections and increase obesity. We know that commutes make us miserable, and that time spent in an automobile affects everything from our home life to our level of anxiety and depression.

Indirect evidence for this can be found in the demonstrated preferences of Millennials, who are opting for cell phones over automobiles and who would rather live in the urban cores their parents abandoned, ride mass transit and in all other respects physically re-integrate themselves with the sort of village life that is possible only in the most walkable portions of cities.

Meanwhile, it’s worth contemplating one of the primary factors that drove Facebook’s adoption by (soon) 1 billion people: Loneliness. Americans have less support than ever — one in eight in the Pew survey reported having no “discussion confidants.”

It’s clear that for all our fears about the ability of our mobile devices to isolate us in public, the primary way they’re actually used is for connection.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Typical suburban landscape. Courtesy of Treehugger.[end-div]

The Illusion of Free Will

A plethora of recent articles and books from the neuroscience community adds weight to the position that human free will does not exist. Our exquisitely complex brains construct a rather compelling illusion; however, we are just observers, held captive to impulses driven entirely by our biology. And, for that matter, much of this biological determinism is unavailable to our conscious minds.

James Atlas provides a recent summary of current thinking.

[div class=attrib]From the New York Times:[end-div]

WHY are we thinking so much about thinking these days? Near the top of best-seller lists around the country, you’ll find Jonah Lehrer’s “Imagine: How Creativity Works,” followed by Charles Duhigg’s book “The Power of Habit: Why We Do What We Do in Life and Business,” and somewhere in the middle, where it’s held its ground for several months, Daniel Kahneman’s “Thinking, Fast and Slow.” Recently arrived is “Subliminal: How Your Unconscious Mind Rules Your Behavior,” by Leonard Mlodinow.

It’s the invasion of the Can’t-Help-Yourself books.

Unlike most pop self-help books, these are about life as we know it — the one you can change, but only a little, and with a ton of work. Professor Kahneman, who won the Nobel Prize in economic science a decade ago, has synthesized a lifetime’s research in neurobiology, economics and psychology. “Thinking, Fast and Slow” goes to the heart of the matter: How aware are we of the invisible forces of brain chemistry, social cues and temperament that determine how we think and act? Has the concept of free will gone out the window?

These books possess a unifying theme: The choices we make in day-to-day life are prompted by impulses lodged deep within the nervous system. Not only are we not masters of our fate; we are captives of biological determinism. Once we enter the portals of the strange neuronal world known as the brain, we discover that — to put the matter plainly — we have no idea what we’re doing.

Professor Kahneman breaks down the way we process information into two modes of thinking: System 1 is intuitive, System 2 is logical. System 1 “operates automatically and quickly, with little or no effort and no sense of voluntary control.” We react to faces that we perceive as angry faster than to “happy” faces because they contain a greater possibility of danger. System 2 “allocates attention to the effortful mental activities that demand it, including complex computations.” It makes decisions — or thinks it does. We don’t notice when a person dressed in a gorilla suit appears in a film of two teams passing basketballs if we’ve been assigned the job of counting how many times one team passes the ball. We “normalize” irrational data either by organizing it to fit a made-up narrative or by ignoring it altogether.

The effect of these “cognitive biases” can be unsettling: A study of judges in Israel revealed that 65 percent of requests for parole were granted after meals, dropping steadily to zero until the judges’ “next feeding.” “Thinking, Fast and Slow” isn’t prescriptive. Professor Kahneman shows us how our minds work, not how to fiddle with what Gilbert Ryle called the ghost in the machine.

“The Power of Habit” is more proactive. Mr. Duhigg’s thesis is that we can’t change our habits, we can only acquire new ones. Alcoholics can’t stop drinking through willpower alone: they need to alter behavior — going to A.A. meetings instead of bars, for instance — that triggers the impulse to drink. “You have to keep the same cues and rewards as before, and feed the craving by inserting a new routine.”

“The Power of Habit” and “Imagine” belong to a genre that has become increasingly conspicuous over the last few years: the hortatory book, armed with highly sophisticated science, that demonstrates how we can achieve our ambitions despite our sensory cluelessness.

[div class=attrib]Read the entire article following the jump.[end-div]

British Literary Greats, Mapped

Frank Jacobs over at Strange Maps has found another really cool map. This one shows 181 British writers placed according to the part of the British Isles with which they are best associated.

[div class=attrib]From Strange Maps:[end-div]

Maps usually display only one layer of information. In most cases, they’re limited to the topography, place names and traffic infrastructure of a certain region. True, this is very useful, and in all fairness quite often it’s all we ask for. But to reduce cartography to a schematic of accessibility is to exclude the poetry of place.

Or in this case, the poetry and prose of place. This literary map of Britain is composed of the names of 181 British writers, each positioned in parts of the country with which they are associated.

This is not the best navigational tool imaginable. If you want to go from William Wordsworth to Alfred Tennyson, you could pass through Coleridge and Thomas Wyatt, slice through the Brontë sisters, step over Andrew Marvell and finally traverse Philip Larkin. All of which sounds kind of messy.

It’s also rather limited. To reduce the whole literary history of Britain to nine score and one writers can only be done by the exclusion of many other, at least equally worthy contributors to the country’s literary landscape. But completeness is not the point of this map: it is not an instrument for literary-historical navigation either. Its main purpose is sheer cartographic joy.

An added bonus is that we’re able to geo-locate some of English literature’s best-known names. Seamus Heaney is about as Irish as a pint of Guinness for breakfast on March 17th, but it’s a bit of a surprise to see C.S. Lewis placed in Northern Ireland as well. The writer of the Narnia saga is closely associated with Oxford, but was indeed born and raised in Belfast.

Thomas Hardy’s name fills out an area close to Wessex, the fictional west country where much of his stories are set. London is occupied by Ben Jonson and John Donne, among others. Hanging around the capital are Geoffrey Chaucer, who was born there, and Christopher Marlowe, a native of Canterbury. The Isle of Wight is formed by the names of David Gascoyne, the surrealist poet, and John Keats, the romantic poet. Neither was born on the island, but both spent some time there.

[div class=attrib]Read the entire article after the jump.[end-div]

Humanity Becoming “Nicer”

Peter Singer, Professor of Bioethics at Princeton, lends support to Steven Pinker’s recent arguments that our current era is less violent and more peaceful than any previous period of human existence.

[div class=attrib]From Project Syndicate:[end-div]

With daily headlines focusing on war, terrorism, and the abuses of repressive governments, and religious leaders frequently bemoaning declining standards of public and private behavior, it is easy to get the impression that we are witnessing a moral collapse. But I think that we have grounds to be optimistic about the future.

Thirty years ago, I wrote a book called The Expanding Circle, in which I asserted that, historically, the circle of beings to whom we extend moral consideration has widened, first from the tribe to the nation, then to the race or ethnic group, then to all human beings, and, finally, to non-human animals. That, surely, is moral progress.

We might think that evolution leads to the selection of individuals who think only of their own interests, and those of their kin, because genes for such traits would be more likely to spread. But, as I argued then, the development of reason could take us in a different direction.

On the one hand, having a capacity to reason confers an obvious evolutionary advantage, because it makes it possible to solve problems and to plan to avoid dangers, thereby increasing the prospects of survival. Yet, on the other hand, reason is more than a neutral problem-solving tool. It is more like an escalator: once we get on it, we are liable to be taken to places that we never expected to reach. In particular, reason enables us to see that others, previously outside the bounds of our moral view, are like us in relevant respects. Excluding them from the sphere of beings to whom we owe moral consideration can then seem arbitrary, or just plain wrong.

Steven Pinker’s recent book The Better Angels of Our Nature lends weighty support to this view.  Pinker, a professor of psychology at Harvard University, draws on recent research in history, psychology, cognitive science, economics, and sociology to argue that our era is less violent, less cruel, and more peaceful than any previous period of human existence.

The decline in violence holds for families, neighborhoods, tribes, and states. In essence, humans living today are less likely to meet a violent death, or to suffer from violence or cruelty at the hands of others, than their predecessors in any previous century.

Many people will doubt this claim. Some hold a rosy view of the simpler, supposedly more placid lives of tribal hunter-gatherers relative to our own. But examination of skeletons found at archaeological sites suggests that as many as 15% of prehistoric humans met a violent death at the hands of another person. (For comparison, in the first half of the twentieth century, the two world wars caused a death rate in Europe of not much more than 3%.)

Even those tribal peoples extolled by anthropologists as especially “gentle” – for example, the Semai of Malaysia, the Kung of the Kalahari, and the Central Arctic Inuit – turn out to have murder rates that are, relative to population, comparable to Detroit, which has one of the highest murder rates in the United States. In Europe, your chance of being murdered is now less than one-tenth, and in some countries only one-fiftieth, of what it would have been had you lived 500 years ago.

Pinker accepts that reason is an important factor underlying the trends that he describes. In support of this claim, he refers to the “Flynn Effect” – the remarkable finding by the philosopher James Flynn that since IQ tests were first administered, scores have risen considerably. The average IQ is, by definition, 100; but, to achieve that result, raw test results have to be standardized. If the average teenager today took an IQ test in 1910, he or she would score 130, which would be better than 98% of those taking the test then.
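For readers curious where that 98% figure comes from: IQ scores are standardized to a mean of 100, and on most modern tests a standard deviation of 15 is used. The sketch below assumes that 15-point deviation and a normal distribution, which are conventions I am adding for illustration, not figures from Singer’s piece.

```python
from statistics import NormalDist

# IQ is standardized to mean 100; a standard deviation of 15 is assumed here
# (the usual convention on modern tests, not a figure quoted in the article).
iq = NormalDist(mu=100, sigma=15)

share_below_130 = iq.cdf(130)   # fraction of test-takers scoring below 130
print(f"An IQ of 130 beats roughly {share_below_130:.1%} of test-takers")  # ~97.7%
```

Two standard deviations above the mean lands at roughly the 98th percentile, which is consistent with the claim above.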

It is not easy to attribute this rise to improved education, because the aspects of the tests on which scores have risen the most do not require a good vocabulary, or even mathematical ability, but instead assess powers of abstract reasoning.

[div class=attrib]Read the entire article after the jump.[end-div]

Burning Man as Counterculture? Think Again

A fascinating insight into the Burning Man festival, courtesy of co-founder Larry Harvey. It may be more like Wall Street than Haight-Ashbury.

[div class=attrib]From Washington Post:[end-div]

Go to Burning Man, and you’ll find everything from a thunderdome battle between a couple in tiger-striped bodypaint to a man dressed as a gigantic blueberry muffin on wheels. But underneath it all, says the festival’s co-founder, Larry Harvey, is “old-fashioned capitalism.”

There’s not a corporate logo in sight at the countercultural arts festival, and nothing is for sale but ice and coffee. But at its core, Harvey believes that Burning Man hews closely to the true spirit of a free-enterprise democracy: Ingenuity is celebrated, autonomy is affirmed, and self-reliance is expected. “If you’re talking about old-fashioned, Main Street Republicanism, we could be the poster child,” says Harvey, who hastens to add that the festival is non-ideological — and doesn’t anticipate being in GOP campaign ads anytime soon.

For more than two decades, the festival has funded itself entirely through donations and ticket sales — which now go up to $300 a pop — and it’s almost never gone into the red. And on the dry, barren plains of the Nevada desert where Burning Man materializes for a week each summer, you’re judged by what you do — your art, costumes and participation in a community that expects everyone to contribute in some form and frowns upon those who’ve come simply to gawk or mooch off others.

That’s part of the message that Harvey and his colleagues have brought to Washington this week, in meetings with congressional staffers and the Interior Department to discuss the future of Burning Man. In fact, the festival is already a known quantity on the Hill: Harvey and his colleagues have been coming to Washington for years to explain the festival to policymakers, not least because Burning Man takes place on public land that’s managed by the Interior Department.

In fact, Burning Man’s current challenge stems from its immense popularity, the festival having grown beyond 50,000 participants since it started some 20 years ago. “We’re no longer so taxed in explaining that it’s not a hippie debauch,” Harvey tells me over sodas in downtown Washington. “The word has leaked out so well that everyone now wants to come.” In fact, the Interior Department’s Bureau of Land Management, which oversees the Black Rock Desert, recently put the festival on probation for exceeding the land’s permitted crowd limits — a decision that organizers are now appealing.

Harvey now hopes to direct the enormous passion that Burning Man has stoked in its devotees over the years outside of Nevada’s Black Rock Desert, in the U.S. and overseas — the primary focus of this week’s visit to Washington. Last year, Burning Man transitioned from a limited liability corporation into a 501(c)3 nonprofit, which organizers believed was a better way to support their activities — not just for the festival, but for outside projects and collaborations in what festival-goers often refer to as “the default world.”

These days, Harvey — now in his mid-60s, dressed in a gray cowboy hat, silver western shirt, and aviator sunglasses — is just as likely to reference Richard Florida as the beatniks he once met on Haight Street. Most recently, he’s been talking with Tony Hsieh, the CEO of Zappos, who shares his vision of revitalizing Las Vegas, one of the cities hardest hit by the recent housing bust. “Urban renewal? We’re qualified. We’ve built up and torn down cities for 20 years,” says Harvey. “Cities everywhere are calling for artists, and it’s a blank slate there, blocks and blocks. … We want to extend the civil experiment — to see if business and art can coincide and not maim one another.”

Harvey points out that there have been long-standing ties between Burning Man artists and some of the private sector’s most successful executives. Its arts foundation, which distributes grants for festival projects, has received backing from everyone from real-estate magnate Christopher Bently to Mark Pincus, head of online gaming giant Zynga, as the Wall Street Journal points out. “There are a fair number of billionaires” who come to the festival every year, says Harvey, adding that some of the art is privately funded as well. In this way, Burning Man is a microcosm of San Francisco itself, stripping the bohemian artists and the Silicon Valley entrepreneurs of their usual tribal markers on the blank slate of the Nevada desert. At Burning Man, “when someone asks, ‘what do you do?’ — they mean, what did you just do” that day, he explains.

It’s one of the many apparent contradictions at the core of the festival: Paired with the philosophy of “radical self-reliance” — one that demands that participants cart out all their own food, water and shelter into a dust-filled desert for a week — is the festival’s communitarian ethos. Burning Man celebrates a gift economy that inspires random acts of generosity, and volunteer “rangers” traverse the festival to aid those in trouble. The climactic burning of the festival’s iconic “man” — along with a wooden temple filled with notes and memorials — is a ritual of togetherness and belonging for many participants. At the same time, one of the festival’s mottos is “You have a right to hurt yourself.” “It’s the opposite of a nanny state,” Harvey says, recounting the time a participant unsuccessfully tried to sue the festival: He had walked out onto the coals after the “man” was set on fire and, predictably, burned himself.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Jailbreak.[end-div]

I Scream, You Scream, We Should All Scream for The Scream

On May 2, 2012 The Scream sold at auction in New York for just under $120,000,000.

The Scream, actually one of four slightly different originals painted by Edvard Munch, has become as iconic as the Apple or McDonald’s corporate logo. And that sums up the crass financial madness that continues to envelop the art world, and indeed most of society.

[div class=attrib]More from Jonathan Jones on Art:[end-div]

I used to like The Scream. Its sky of blood and zombie despair seemed to say so much, so honestly. Munch is a poet in colours. His pictures portray moods, most of which are dark. But sometimes on a spring day on the banks of Oslofjord he can muster a bit of uneasy delight in the world. Right now, I would rather look at his painting Ashes, a portrayal of the aftermath of sex in a Norwegian wood, or Girls on a Pier, whose lyrical longing is fraught with loneliness, than at Munch’s most famous epitome of the modern condition.

The modern art market is becoming violent and destructive. It spoils what it sells and leaves nothing but ashes. The greatest works of art are churned through a sausage mill of celebrity and chatter and become, at the end of it all, just a price tag. The Scream has been too famous for too long: too famous for its own good. Its apotheosis by this auction of the only version in private hands turns the introspection of a man in the grip of terrible visions into a number: 120,000,000. Dollars, that is. It is no longer a great painting: it is an event in the madness of our time. As all the world screams at inequality and the tyranny of a finance-led model of capitalism that is failing to provide the general wellbeing that might justify its excesses, the 1% rub salt in the wound by turning profound insights into saleable playthings.

Disgust rises at the thought of that grotesque number, so gross and absurd that it destroys actual value. Art has become the meaningless totem of a world that no longer feels the emotions it was created to express. We can no longer create art like The Scream (the closest we can get is a diamond skull). But we are good at turning the profundities of the past into price tags.

Think about it. Munch’s Scream is an unadulterated vision of modern life as a shudder of despair. Pain vibrates across the entire surface of the painting like a pool of tears rippled by a cry. Munch’s world of poverty and illness, as Sue Prideaux makes clear in her devastating biography, more than justified such a scream. His other paintings, such as The Sick Child and Evening on Karl-Johan, reveal the comprehensive unhappiness and alienation that reach their purest lucidity in The Scream.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: One of several versions of the painting “The Scream”. Painted in 1893, Edvard Munch. Courtesy of The National Gallery, Oslo, Norway.[end-div]

Before I Die…

“Before I Die” is an interactive public art project conceived by artist Candy Chang. The first installation appeared in New Orleans in February 2011, and the project has since spread to around 30 other cities across the United States and to 7 countries.

The premise is simple: install a blank billboard-sized chalkboard in a publicly accessible space, supply a bucket of chalk, write the prompt “Before I Die…” on the chalkboard, then sit back, wait, and watch people share their hopes and dreams.

So far the artist and her collaborators have recorded over 25,000 responses. Of the respondents, 15 percent want to travel to distant lands, 10 percent wish to reconnect with family and 1 percent want to write a book.

[div class=attrib]From the Washington  Post:[end-div]

Before they die, the citizens of Washington, D.C., would like to achieve things both monumental and minuscule. They want to eat delicious food, travel the globe and — naturally — effect political change. They want to see the Earth from the Moon. They want to meet God.

They may have carried these aspirations in their hearts and heads their whole lives, but until a chalkboard sprang up at 14th and Q streets NW, they may have never verbalized them. On the construction barrier enveloping a crumbling old laundromat in the midst of its transformation into an upscale French bistro, the billboard-size chalkboard offers baskets of chalk and a prompt: “Before I die …”

The project was conceived by artist Candy Chang, a 2011 TED fellow who created the first “Before I Die” public art installation last February in a city that has contemplated its own mortality: New Orleans. On the side of an abandoned building, Chang erected the chalkboard to help residents “remember what is important to them,” she wrote on her Web site. She let the responses — funny, poignant, morbid — roll in. “Before I Die” migrated to other cities, and with the help of other artists who borrowed her template, it has recorded the bucket-list dreams of people in more than 30 locations. The District’s arrived in Logan Circle early Sunday morning.

Chang analyzes the responses on each wall; most involve travel, she says. But in a well-traveled city like Washington, many of the hopes on the board here address politics and power. Before they die, Washingtonians would like to “Liberate Palestine,” “Be a general (Hooah!),” “Be chief of staff,” “See a transgender president,” “[Have] access to reproductive health care without stigma.” Chang also notes that the D.C. wall is more international than others she’s seen, with responses in at least seven languages.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: Crystal Hamling, 27, adds her thoughts to the “Before I Die…” art wall at 14th and Q streets NW. She wrote “Make people feel loved.” Courtesy of Katherine Frey / Washington Post.[end-div]

The Connectome: Slicing and Reconstructing the Brain

[tube]1nm1i4CJGwY[/tube]

[div class=attrib]From the Guardian:[end-div]

There is a macabre brilliance to the machine in Jeff Lichtman’s laboratory at Harvard University that is worthy of a Wallace and Gromit film. In one end goes brain. Out the other comes sliced brain, courtesy of an automated arm that wields a diamond knife. The slivers of tissue drop one after another on to a conveyor belt that zips along with the merry whirr of a cine projector.

Lichtman’s machine is an automated tape-collecting lathe ultramicrotome (Atlum), which, according to the neuroscientist, is the tool of choice for this line of work. It produces long strips of sticky tape with brain slices attached, all ready to be photographed through a powerful electron microscope.

When these pictures are combined into 3D images, they reveal the inner wiring of the organ, a tangled mass of nervous spaghetti. The research by Lichtman and his co-workers has a goal in mind that is so ambitious it is almost unthinkable.

If we are ever to understand the brain in full, they say, we must know how every neuron inside is wired up.

Though the goal sounds fanciful, the payoff could be profound. Map out our “connectome” – following other major “ome” projects such as the genome and transcriptome – and we will lay bare the biological code of our personalities, memories, skills and susceptibilities. Somewhere in our brains is who we are.

To use an understatement heard often from scientists, the job at hand is not trivial. Lichtman’s machine slices brain tissue into exquisitely thin wafers. To turn a 1mm thick slice of brain into neural salami takes six days in a process that yields about 30,000 slices.

But chopping up the brain is the easy part. When Lichtman began this work several years ago, he calculated how long it might take to image every slice of a 1cm mouse brain. The answer was 7,000 years. “When you hear numbers like that, it does make your pulse quicken,” Lichtman said.

The human brain is another story. There are 85bn neurons in the 1.4kg (3lbs) of flesh between our ears. Each has a cell body (grey matter) and long, thin extensions called dendrites and axons (white matter) that reach out and link to others. Most neurons have lots of dendrites that receive information from other nerve cells, and one axon that branches on to other cells and sends information out.

On average, each neuron forms 10,000 connections, through synapses with other nerve cells. Altogether, Lichtman estimates there are between 100tn and 1,000tn connections between neurons.

Unlike the lung, or the kidney, where the whole organ can be understood, more or less, by grasping the role of a handful of repeating physiological structures, the brain is made of thousands of specific types of brain cell that look and behave differently. Their names – Golgi, Betz, Renshaw, Purkinje – read like a roll call of the pioneers of neuroscience.

Lichtman, who is fond of calculations that expose the magnitude of the task he has taken on, once worked out how much computer memory would be needed to store a detailed human connectome.

“To map the human brain at the cellular level, we’re talking about 1m petabytes of information. Most people think that is more than the digital content of the world right now,” he said. “I’d settle for a mouse brain, but we’re not even ready to do that. We’re still working on how to do one cubic millimetre.”
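The figures quoted above invite a quick back-of-envelope check. The sketch below simply recombines the numbers in the article (30,000 slices per millimetre of tissue, 85bn neurons, 10,000 connections each, 1m petabytes of storage); it illustrates the arithmetic rather than adding any new data.

```python
# Back-of-envelope arithmetic using only the figures quoted in the article.

slices_per_mm = 30_000
slice_thickness_nm = 1e6 / slices_per_mm        # 1 mm = 1,000,000 nm
print(f"slice thickness   ≈ {slice_thickness_nm:.0f} nm")           # ≈ 33 nm

neurons = 85e9                                   # neurons in a human brain
synapses_per_neuron = 10_000                     # average connections per neuron
print(f"total connections ≈ {neurons * synapses_per_neuron:.1e}")   # ≈ 8.5e14, within 100tn–1,000tn

petabytes = 1e6                                  # Lichtman's storage estimate
print(f"storage estimate  ≈ {petabytes * 1e15 / 1e21:.0f} zettabyte")
```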

He says he is about to submit a paper on mapping a minuscule volume of the mouse connectome and is working with a German company on building a multibeam microscope to speed up imaging.

For some scientists, mapping the human connectome down to the level of individual cells is verging on overkill. “If you want to study the rainforest, you don’t need to look at every leaf and every twig and measure its position and orientation. It’s too much detail,” said Olaf Sporns, a neuroscientist at Indiana University, who coined the term “connectome” in 2005.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Video courtesy of the Connectome Project / Guardian.[end-div]

Painting the Light: The Life and Death of Thomas Kinkade

You’ve probably seen a Kinkade painting somewhere — think cute cottage, meandering stream, misty clouds, soft focus and warm light.

According to Thomas Kinkade’s company, one of his cozy, kitschy paintings (actually a photographic reproduction) could be found in one of every 20 homes in the United States. Kinkade died on April 6, 2012. With his passing, scholars of the art market are now analyzing what he left behind.

[div class=attrib]From the Guardian:[end-div]

In death, the man who at his peak claimed to be the world’s most successful living artist perhaps achieved the sort of art-world excess he craved.

On Tuesday, the coroner’s office in Santa Clara, California, announced that the death of Thomas Kinkade, the Painter of Light™, purveyor of kitsch prints to the masses, was caused by an accidental overdose of alcohol and Valium. For good measure, a legal scrap has emerged between Kinkade’s ex-wife (and trustee of his estate) and his girlfriend.

Who could have imagined that behind so many contented visions of peace, harmony and nauseating goodness lay just another story of deception, disappointment and depravity, fuelled by those ever-ready stooges, Valium and alcohol?

Kinkade was a self-made phenomenon, with his prints (according to his company) hanging in one in 20 American homes. At his height, in 2001, Kinkade generated $130m (£81m) in sales. Kinkade’s twee paintings of cod-traditional cottages, lighthouses, gardens, gazebos and gates sold by the million through a network of Thomas Kinkade galleries, owned by his company, and through a parallel franchise operation. At their peak (between 1995 and 2005) there were 350 Kinkade franchises across the US, with the bulk in his home state of California. You would see them in roadside malls in small towns, twinkly lights adorning the windows, and in bright shopping centres, sandwiched between skatewear outlets and nail bars.

But these weren’t just galleries. They were the Thomas Kinkade experience – minus the alcohol and Valium, of course. Clients would be ushered into a climate-controlled viewing room to maximise the Kinkadeness of the whole place, and their experience. Some galleries offered “master highlighters”, trained by someone not far from the master himself, to add a hand-crafted splash of paint to the desired print and so make a truly unique piece of art, as opposed to the framed photographic print that was the standard fare.

The artistic credo was expressed best in the 2008 movie Thomas Kinkade’s Christmas Cottage. Peter O’Toole, earning a crust playing Kinkade’s artistic mentor, urges the young painter to “Paint the light, Thomas! Paint the light!”.

Kinkade’s art also went beyond galleries through the “Thomas Kinkade lifestyle brand”. This wasn’t just the usual art gallery giftshop schlock: Kinkade sealed a tie-in with La-Z-Boy furniture (home of the big butt recliner) for a Kinkade-inspired range of furniture. But arguably his only great artwork was “The Village, a Thomas Kinkade Community”, unveiled in 2001. A 101-home development in Vallejo, outside San Francisco, operating under the slogan: “Calm, not chaos. Peace, not pressure,” the village offers four house designs, each named after one of Kinkade’s daughters. Plans for further housing developments, alas, fell foul of the housing crisis.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Google search.[end-div]

Quantum Computer Leap

The practical science behind quantum computers continues to make exciting progress. Quantum computers promise, in theory, immense gains in power and speed through the use of atomic-scale parallel processing.

[div class=attrib]From the Observer:[end-div]

The reality of the universe in which we live is an outrage to common sense. Over the past 100 years, scientists have been forced to abandon a theory in which the stuff of the universe constitutes a single, concrete reality in exchange for one in which a single particle can be in two (or more) places at the same time. This is the universe as revealed by the laws of quantum physics and it is a model we are forced to accept – we have been battered into it by the weight of the scientific evidence. Without it, we would not have discovered and exploited the tiny switches present in their billions on every microchip, in every mobile phone and computer around the world. The modern world is built using quantum physics: through its technological applications in medicine, global communications and scientific computing it has shaped the world in which we live.

Although modern computing relies on the fidelity of quantum physics, the action of those tiny switches remains firmly in the domain of everyday logic. Each switch can be either “on” or “off”, and computer programs are implemented by controlling the flow of electricity through a network of wires and switches: the electricity flows through open switches and is blocked by closed switches. The result is a plethora of extremely useful devices that process information in a fantastic variety of ways.

Modern “classical” computers seem to have almost limitless potential – there is so much we can do with them. But there is an awful lot we cannot do with them too. There are problems in science that are of tremendous importance but which we have no hope of solving, not ever, using classical computers. The trouble is that some problems require so much information processing that there simply aren’t enough atoms in the universe to build a switch-based computer to solve them. This isn’t an esoteric matter of mere academic interest – classical computers can’t ever hope to model the behaviour of some systems that contain even just a few tens of atoms. This is a serious obstacle to those who are trying to understand the way molecules behave or how certain materials work – without the possibility of building computer models, they are hampered in their efforts. One example is the field of high-temperature superconductivity. Certain materials are able to conduct electricity “for free” at surprisingly high temperatures (still pretty cold, though, at well below -100 degrees Celsius). The trouble is, nobody really knows how they work and that seriously hinders any attempt to make a commercially viable technology. The difficulty in simulating physical systems of this type arises whenever quantum effects are playing an important role and that is the clue we need to identify a possible way to make progress.
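To put a rough number on “not enough atoms in the universe”: describing the exact state of n two-level quantum particles requires 2^n complex amplitudes. The sketch below assumes 16 bytes per amplitude, a storage detail I am adding for illustration rather than something taken from the article.

```python
# Memory needed to store the exact quantum state of n two-level particles.
# Each of the 2**n complex amplitudes is assumed to take 16 bytes.

BYTES_PER_AMPLITUDE = 16

for n in (10, 40, 80, 300):
    amplitudes = 2 ** n
    bytes_needed = amplitudes * BYTES_PER_AMPLITUDE
    print(f"{n:>3} particles: 2^{n} amplitudes, about {bytes_needed:.2e} bytes")

# Already at 300 particles (the size of the beryllium simulator described below),
# the byte count dwarfs the roughly 1e80 atoms in the observable universe.
```

This blow-up is exactly what Feynman’s proposal, described next, sidesteps by letting one quantum system stand in for another.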

It was American physicist Richard Feynman who, in 1981, first recognised that nature evidently does not need to employ vast computing resources to manufacture complicated quantum systems. That means if we can mimic nature then we might be able to simulate these systems without the prohibitive computational cost. Simulating nature is already done every day in science labs around the world – simulations allow scientists to play around in ways that cannot be realised in an experiment, either because the experiment would be too difficult or expensive or even impossible. Feynman’s insight was that simulations that inherently include quantum physics from the outset have the potential to tackle those otherwise impossible problems.

Quantum simulations have, in the past year, really taken off. The ability to delicately manipulate and measure systems containing just a few atoms is a requirement of any attempt at quantum simulation and it is thanks to recent technical advances that this is now becoming possible. Most recently, in an article published in the journal Nature last week, physicists from the US, Australia and South Africa have teamed up to build a device capable of simulating a particular type of magnetism that is of interest to those who are studying high-temperature superconductivity. Their simulator is esoteric. It is a small pancake-like layer less than 1 millimetre across made from 300 beryllium atoms that is delicately disturbed using laser beams… and it paves the way for future studies into quantum magnetism that will be impossible using a classical computer.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image: A crystal of beryllium ions confined by a large magnetic field at the US National Institute of Standards and Technology’s quantum simulator. The outermost electron of each ion is a quantum bit (qubit), and here they are fluorescing blue, which indicates they are all in the same state. Photograph courtesy of Britton/NIST, Observer.[end-div]

Nanotech: Bane and Boon

An insightful opinion on the benefits and perils of nanotechnology from essayist and naturalist, Diane Ackerman.

[div class=attrib]From the New York Times:[end-div]

“I SING the body electric,” Walt Whitman wrote in 1855, inspired by the novelty of useful electricity, which he would live to see power streetlights and telephones, locomotives and dynamos. In “Leaves of Grass,” his ecstatic epic poem of American life, he depicted himself as a live wire, a relay station for all the voices of the earth, natural or invented, human or mineral. “I have instant conductors all over me,” he wrote. “They seize every object and lead it harmlessly through me… My flesh and blood playing out lightning to strike what is hardly different from myself.”

Electricity equipped Whitman and other poets with a scintillation of metaphors. Like inspiration, it was a lightning flash. Like prophetic insight, it illuminated the darkness. Like sex, it tingled the flesh. Like life, it energized raw matter. Whitman didn’t know that our cells really do generate electricity, that the heart’s pacemaker relies on such signals and that billions of axons in the brain create their own electrical charge (equivalent to about a 60-watt bulb). A force of nature himself, he admired the range and raw power of electricity.

Deeply as he believed the vow “I sing the body electric” — a line sure to become a winning trademark — I suspect one of nanotechnology’s recent breakthroughs would have stunned him. A team at the University of Exeter in England has invented the lightest, supplest, most diaphanous material ever made for conducting electricity, a dream textile named GraphExeter, which could revolutionize electronics by making it fashionable to wear your computer, cellphone and MP3 player. Only one atom thick, it’s an ideal fabric for street clothes and couture lines alike. You could start your laptop by plugging it into your jeans, recharge your cellphone by plugging it into your T-shirt. Then, not only would your cells sizzle with electricity, but even your clothing would chime in.

I don’t know if a fully electric suit would upset flight electronics, pacemakers, airport security monitors or the brain’s cellular dispatches. If you wore an electric coat in a lightning storm, would the hairs on the back of your neck stand up? Would you be more likely to fall prey to a lightning strike? How long will it be before a jokester plays the sound of one-hand-clapping from a mitten? How long before late-night hosts riff about electric undies? Will people tethered to recharging poles haunt the airport waiting rooms? Will it become hip to wear flashing neon ads, quotes and designs — maybe a name in a luminous tattoo?

Another recent marvel of nanotechnology promises to alter daily life, too, but this one, despite its silver lining, strikes me as wickedly dangerous, though probably inevitable. As a result, it’s bound to inspire labyrinthine laws and a welter of patents and to ignite bioethical debates.

Nano-engineers have developed a way to coat both hard surfaces (like hospital bed rails, doorknobs and furniture) and also soft surfaces (sheets, gowns and curtains) with microscopic nanoparticles of silver, an element known to kill microbes. You’d think the new nano-coating would offer a silver bullet, be a godsend to patients stricken with hospital-acquired sepsis and pneumonia, and to doctors fighting what has become a nightmare of antibiotic-resistant micro-organisms that can kill tens of thousands of people a year.

It does, and it is. That’s the problem. It’s too effective. Most micro-organisms are harmless, many are beneficial, but some are absolutely essential for the environment and human life. Bacteria were the first life forms on the planet, and we owe them everything. Our biochemistry is interwoven with theirs. Swarms of bacteria blanket us on the outside, other swarms colonize our insides. Kill all the gut bacteria, essential for breaking down large molecules, and digestion slows.

Friendly bacteria aid the immune system. They release biotin, folic acid and vitamin K; help eliminate heavy metals from the body; calm inflammation; and prevent cancers. During childbirth, a baby picks up beneficial bacteria in the birth canal. Nitrogen-fixing bacteria ensure healthy plants and ecosystems. We use bacteria to decontaminate sewage and also to create protein-rich foods like kefir and yogurt.

How tempting for nanotechnology companies, capitalizing on our fears and fetishes, to engineer superbly effective nanosilver microbe-killers, deodorants and sanitizers of all sorts for home and industry.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Technorati.[end-div]

The Wantologist

This may sound like another job from the future, but “wantologists” wander among us in 2012.

[div class=attrib]From the New York Times:[end-div]

IN the sprawling outskirts of San Jose, Calif., I find myself at the apartment door of Katherine Ziegler, a psychologist and wantologist. Could it be, I wonder, that there is such a thing as a wantologist, someone we can hire to figure out what we want? Have I arrived at some final telling moment in my research on outsourcing intimate parts of our lives, or at the absurdist edge of the market frontier?

A willowy woman of 55, Ms. Ziegler beckons me in. A framed Ph.D. degree in psychology from the University of Illinois hangs on the wall, along with an intricate handmade quilt and a collage of images clipped from magazines — the back of a child’s head, a gnarled tree, a wandering cat — an odd assemblage that invites one to search for a connecting thread.

After a 20-year career as a psychologist, Ms. Ziegler expanded her practice to include executive coaching, life coaching and wantology. Originally intended to help business managers make purchasing decisions, wantology is the brainchild of Kevin Kreitman, an industrial engineer who set up a two-day class to train life coaches to apply this method to individuals in private life. Ms. Ziegler took the course and was promptly certified in the new field.

Ms. Ziegler explains that the first step in thinking about a “want,” is to ask your client, “ ‘Are you floating or navigating toward your goal?’ A lot of people float. Then you ask, ‘What do you want to feel like once you have what you want?’ ”

She described her experience with a recent client, a woman who lived in a medium-size house with a small garden but yearned for a bigger house with a bigger garden. She dreaded telling her husband, who had long toiled at renovations on their present home, and she feared telling her son, who she felt would criticize her for being too materialistic.

Ms. Ziegler took me through the conversation she had with this woman: “What do you want?”

“A bigger house.”

“How would you feel if you lived in a bigger house?”

“Peaceful.”

“What other things make you feel peaceful?”

“Walks by the ocean.” (The ocean was an hour’s drive away.)

“Do you ever take walks nearer where you live that remind you of the ocean?”

“Certain ones, yes.”

“What do you like about those walks?”

“I hear the sound of water and feel surrounded by green.”

This gentle line of questions nudged the client toward a more nuanced understanding of her own desire. In the end, the woman dedicated a small room in her home to feeling peaceful. She filled it with lush ferns. The greenery encircled a bubbling slate-and-rock tabletop fountain. Sitting in her redesigned room in her medium-size house, the woman found the peace for which she’d yearned.

I was touched by the story. Maybe Ms. Ziegler’s client just needed a good friend who could listen sympathetically and help her work out her feelings. Ms. Ziegler provided a service — albeit one with a wacky name — for a fee. Still, the mere existence of a paid wantologist indicates just how far the market has penetrated our intimate lives. Can it be that we are no longer confident to identify even our most ordinary desires without a professional to guide us?

Is the wantologist the tail end of a larger story? Over the last century, the world of services has changed greatly.

A hundred — or even 40 — years ago, human eggs and sperm were not for sale, nor were wombs for rent. Online dating companies, nameologists, life coaches, party animators and paid graveside visitors did not exist.

Nor had a language developed that so seamlessly melded village and market — as in “Rent-a-Mom,” “Rent-a-Dad,” “Rent-a-Grandma,” “Rent-a-Friend” — insinuating itself, half joking, half serious, into our culture. The explosion in the number of available personal services says a great deal about changing ideas of what we can reasonably expect from whom. In the late 1940s, there were 2,500 clinical psychologists licensed in the United States. By 2010, there were 77,000 — and an additional 50,000 marriage and family therapists.

[div class=attrib]Read the entire article after the jump.[end-div]

How Religions Are Born: Church of Jedi

May the Fourth was Star Wars Day. Why? Say, “May the Fourth” slowly while pretending to lisp slightly, and you’ll understand. Appropriately, Matt Cresswen over at the Guardian took this day to review the growing Jedi religion in the UK.

Would that make George Lucas God?

[div class=attrib]From the Guardian:[end-div]

Today [May 4] is Star Wars Day, being May the Fourth. (Say the date slowly, several times.) Around the world, film buffs, storm troopers and Jedi are gathering to celebrate one of the greatest science fiction romps of all time. It would be easy to let the fan boys enjoy their day and be done with it. However, Jediism is a growing religion in the UK. Although the results of the 2001 census, in which 390,000 recipients stated their religion as Jedi, have been widely interpreted as a pop at the government, the UK does actually have serious Jedi.

For those of you who, like BBC producer Bill Dare, have never seen Star Wars, the Jedi are “good” characters from the films. They draw from a mystical entity binding the universe, called “the Force”. Sporting hoodies, the Jedi are generally altruistic, swift-footed and handy with a light sabre. Their enemies, Emperor Palpatine, Darth Vader and other cohorts, use the dark side of the Force. By tapping into its powers, they command armies of demented droids, kill Jedi and are capable of wiping out entire planets.

This week, Chi-Pa Amshe from the Church of Jediism in Anglesey, Wales, emailed me with some responses to questions. He said Jediism was growing and that they were gaining hundreds of members each month. The church made the news three years ago, after its founder, Daniel Jones, had a widely reported run-in with Tesco.

Chi-Pa Amshe, speaking as a spokesperson for the Jedi council (Falkna Kar, Anzai Kooji Cutpa and Daqian Xiong), believes that Jediism can merge with other belief systems, rather like a bolt-on accessory.

“Many of our members are in fact both Christian and Jedi,” he says. “We can no more understand the Force and our place within it than a gear in a clock could comprehend its function in moving the hands across the face. I’d like to point out that each of our members interprets their beliefs through the prism of their own lives and although we offer guidance and support, ultimately like with the Qur’an, it is up to them to find what they need and choose their own path.”

Meeting up as a church is hard, the council explained, and members rely heavily on Skype and Facebook. They have an annual physical meeting, “where the church council is available for face-to-face questions and guidance”. They also support charity events and attend computer gaming conventions.

Meanwhile, in New Zealand, a web-based group called the Jedi Church believes that Jediism has always been around.

It states: “The Jedi religion is just like the sun, it existed before a popular movie gave it a name, and now that it has a name, people all over the world can share their experiences of the Jedi religion, here in the Jedi church.”

There are many other Jedi groups on the web, although Chi-Pa Amshe said some were “very unpleasant”. The dark side, perhaps.

[div class=attrib]Read the entire article after the jump.[end-div]

Google: Please Don’t Be Evil

Google has been variously praised and derided for its corporate mantra, “Don’t Be Evil”. For those who like to believe that Google has good intentions, recent events strain these assumptions. The company was found to have been snooping on and collecting data from personal Wi-Fi routers. Is this the case of a lone wolf or a corporate strategy?

[div class=attrib]From Slate:[end-div]

Was Google’s snooping on home Wi-Fi users the work of a rogue software engineer? Was it a deliberate corporate strategy? Was it simply an honest-to-goodness mistake? And which of these scenarios should we wish for—which would assuage your fears about the company that manages so much of our personal data?

These are the central questions raised by a damning FCC report on Google’s Street View program that was released last weekend. The Street View scandal began with a revolutionary idea—Larry Page wanted to snap photos of every public building in the world. Beginning in 2007, the search company’s vehicles began driving on streets in the United States (and later Europe, Canada, Mexico, and everywhere else), collecting a stream of images to feed into Google Maps.

While developing its Street View cars, Google’s engineers realized that the vehicles could also be used for “wardriving.” That’s a sinister-sounding name for the mainly noble effort to map the physical location of the world’s Wi-Fi routers. Creating a location database of Wi-Fi hotspots would make Google Maps more useful on mobile devices—phones without GPS chips could use the database to approximate their physical location, while GPS-enabled devices could use the system to speed up their location-monitoring systems. As a privacy matter, there was nothing unusual about wardriving. By the time Google began building its system, several startups had already created their own Wi-Fi mapping databases.
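As a rough illustration of why such a hotspot database is useful, here is a sketch of how a device without GPS might estimate its position from routers whose locations are already known. The coordinates and the simple signal-strength weighting are invented for illustration and are not Google’s actual method.

```python
# A phone sees several nearby routers whose positions are in the database.
# One simple estimate of its own position: a signal-strength-weighted average.

visible_routers = [
    # (latitude, longitude, signal strength in arbitrary units) -- illustrative data
    (37.7749, -122.4194, 0.9),
    (37.7751, -122.4189, 0.6),
    (37.7745, -122.4201, 0.3),
]

total_weight = sum(w for _, _, w in visible_routers)
lat = sum(la * w for la, _, w in visible_routers) / total_weight
lon = sum(lo * w for _, lo, w in visible_routers) / total_weight
print(f"estimated position ≈ ({lat:.5f}, {lon:.5f})")
```

Note that nothing in this legitimate use requires looking at the traffic passing through the routers, only at where they are.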

But Google, unlike other companies, wasn’t just recording the location of people’s Wi-Fi routers. When a Street View car encountered an open Wi-Fi network—that is, a router that was not protected by a password—it recorded all the digital traffic traveling across that router. As long as the car was within the vicinity, it sucked up a flood of personal data: login names, passwords, the full text of emails, Web histories, details of people’s medical conditions, online dating searches, and streaming music and movies.

Imagine a postal worker who opens and copies one letter from every mailbox along his route. Google’s sniffing was pretty much the same thing, except instead of one guy on one route it was a whole company operating around the world. The FCC report says that when French investigators looked at the data Google collected, they found “an exchange of emails between a married woman and man, both seeking an extra-marital relationship” and “Web addresses that revealed the sexual preferences of consumers at specific residences.” In the United States, Google’s cars collected 200 gigabytes of such data between 2008 and 2010, and they stopped only when regulators discovered the practice.

Why did Google collect all this data? What did it want to do with people’s private information? Was collecting it a mistake? Was it the inevitable result of Google’s maximalist philosophy about public data—its aim to collect and organize all of the world’s information?

Google says the answer to that final question is no. In its response to the FCC and its public blog posts, the company says it is sorry for what happened, and insists that it has established a much stricter set of internal policies to prevent something like this from happening again. The company characterizes the collection of Wi-Fi payload data as the idea of one guy, an engineer who contributed code to the Street View program. In the FCC report, he’s called Engineer Doe. On Monday, the New York Times identified him as Marius Milner, a network programmer who created Network Stumbler, a popular Wi-Fi network detection tool. The company argues that Milner—for reasons that aren’t really clear—slipped the snooping code into the Street View program without anyone else figuring out what he was up to. Nobody else on the Street View team wanted to collect Wi-Fi data, Google says—they didn’t think it would be useful in any way, and, in fact, the data was never used for any Google product.

Should we believe Google’s lone-coder theory? I have a hard time doing so. The FCC report points out that Milner’s “design document” mentions his intention to collect and analyze payload data, and it also highlights privacy as a potential concern. Though Google’s privacy team never reviewed the program, many of Milner’s colleagues closely reviewed his source code. In 2008, Milner told one colleague in an email that analyzing the Wi-Fi payload data was “one of my to-do items.” Later, he ran a script to count the Web addresses contained in the collected data and sent his results to an unnamed “senior manager.” The manager responded as if he knew what was going on: “Are you saying that these are URLs that you sniffed out of Wi-Fi packets that we recorded while driving?” Milner responded by explaining exactly where the data came from. “The data was collected during the daytime when most traffic is at work,” he said.

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Fastcompany.[end-div]

Creativity and Immorality

[div class=attrib]From Scientific American:[end-div]

In the mid-1990s, Apple Computer was a dying company. Microsoft’s Windows operating system was overwhelmingly favored by consumers, and Apple’s attempts to win back market share by improving the Macintosh operating system were unsuccessful. After several years of debilitating financial losses, the company chose to purchase a fledgling software company called NeXT. Along with purchasing the rights to NeXT’s software, this move allowed Apple to regain the services of one of the company’s founders, the late Steve Jobs. Under the guidance of Jobs, Apple returned to profitability and is now the largest technology company in the world, with the creativity of Steve Jobs receiving much of the credit.

However, despite the widespread positive image of Jobs as a creative genius, he also has a dark reputation for encouraging censorship, “losing sight of honesty and integrity”, belittling employees, and engaging in other morally questionable actions. These harshly contrasting images of Jobs raise the question of why a CEO held in such near-universal positive regard could also be the same one accused of engaging in such contemptible behavior. The answer, it turns out, may have something to do with the aspect of Jobs which is so admired by so many.

In a recent paper published in the Journal of Personality and Social Psychology, researchers at Harvard and Duke Universities demonstrate that creativity can lead people to behave unethically.  In five studies, the authors show that creative individuals are more likely to be dishonest, and that individuals induced to think creatively were more likely to be dishonest. Importantly, they showed that this effect is not explained by any tendency for creative people to be more intelligent, but rather that creativity leads people to more easily come up with justifications for their unscrupulous actions.

In one study, the authors administered a survey to employees at an advertising agency.  The survey asked the employees how likely they were to engage in various kinds of unethical behaviors, such as taking office supplies home or inflating business expense reports.  The employees were also asked to report how much creativity was required for their job.  Further, the authors asked the executives of the company to provide creativity ratings for each department within the company.

Those who said that their jobs required more creativity also tended to self-report a greater likelihood of unethical behavior.  And if the executives said that a particular department required more creativity, the individuals in that department tended to report greater likelihoods of unethical behavior.

The authors hypothesized that it is creativity which causes unethical behavior by allowing people the means to justify their misdeeds, but it is hard to say for certain whether this is correct given the correlational nature of the study.  It could just as easily be true, after all, that unethical behavior leads people to be more creative, or that there is something else which causes both creativity and dishonesty, such as intelligence.  To explore this, the authors set up an experiment in which participants were induced into a creative mindset and then given the opportunity to cheat.
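The confounding worry can be made concrete with a toy simulation: if some third factor (say, intelligence) drove both creativity and dishonesty, the two would correlate even with no causal link between them, which is why the authors turned to random assignment. All numbers below are invented for illustration.

```python
import random

# Toy model: intelligence influences both creativity and dishonesty, which
# therefore correlate despite neither causing the other.
random.seed(0)

def simulate_person():
    intelligence = random.gauss(0, 1)
    creativity = intelligence + random.gauss(0, 1)   # partly driven by intelligence
    dishonesty = intelligence + random.gauss(0, 1)   # also driven by intelligence
    return creativity, dishonesty

people = [simulate_person() for _ in range(10_000)]
mean_c = sum(c for c, _ in people) / len(people)
mean_d = sum(d for _, d in people) / len(people)
cov = sum((c - mean_c) * (d - mean_d) for c, d in people) / len(people)
print(f"covariance of creativity and dishonesty ≈ {cov:.2f}")   # clearly positive

# Randomly assigning people to a "creative mindset" condition breaks this kind
# of confounding: the assignment is independent of intelligence, so any
# difference in cheating can be attributed to the manipulation itself.
```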

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Scientific American / iStock.[end-div]

Your Tween Online

Many parents with children in the pre-teenage years probably have a containment policy that restricts them from participating in adult-oriented social media such as Facebook. Well, these tech-savvy tweens may be doing more online than just playing Club Penguin.

[div class=attrib]From the WSJ:[end-div]

Celina McPhail’s mom wouldn’t let her have a Facebook account. The 12-year-old is on Instagram instead.

Her mother, Maria McPhail, agreed to let her download the app onto her iPod Touch, because she thought she was fostering an interest in photography. But Ms. McPhail, of Austin, Texas, has learned that Celina and her friends mostly use the service to post and “like” Photoshopped photo-jokes and text messages they create on another free app called Versagram. When kids can’t get on Facebook, “they’re good at finding ways around that,” she says.

It’s harder than ever to keep an eye on the children. Many parents limit their preteens’ access to well-known sites like Facebook and monitor what their children do online. But with kids constantly seeking new places to connect—preferably, unsupervised by their families—most parents are learning how difficult it is to prevent their kids from interacting with social media.

Children are using technology at ever-younger ages. About 15% of kids under the age of 11 have their own mobile phone, according to eMarketer. The Pew Research Center’s Internet & American Life Project reported last summer that 16% of kids 12 to 17 who are online used Twitter, double the number from two years earlier.

Parents worry about the risks of online predators and bullying, and there are other concerns. Kids are creating permanent public records, and they may encounter excessive or inappropriate advertising. Yet many parents also believe it is in their kids’ interest to be nimble with technology.

As families grapple with how to use social media safely, many marketers are working to create social networks and other interactive applications for kids that parents will approve. Some go even further, seeing themselves as providing a crucial education in online literacy—”training wheels for social media,” as Rebecca Levey of social-media site KidzVuz puts it.

Along with established social sites for kids, such as Walt Disney Co.’s Club Penguin, kids are flocking to newer sites such as FashionPlaytes.com, a meeting place aimed at girls ages 5 to 12 who are interested in designing clothes, and Everloop, a social network for kids under the age of 13. Viddy, a video-sharing site which functions similarly to Instagram, is becoming more popular with kids and teenagers as well.

Some kids do join YouTube, Google, Facebook, Tumblr and Twitter, despite policies meant to bar kids under 13. These sites require that users enter their date of birth upon signing up, and they must be at least 13 years old. Apple—which requires an account to download apps like Instagram to an iPhone—has the same requirement. But there is little to bar kids from entering a false date of birth or getting an adult to set up an account. Instagram declined to comment.

“If we learn that someone is not old enough to have a Google account, or we receive a report, we will investigate and take the appropriate action,” says Google spokesman Jay Nancarrow. He adds that “users first have a chance to demonstrate that they meet our age requirements. If they don’t, we will close the account.” Facebook and most other sites have similar policies.

Still, some children establish public identities on social-media networks like YouTube and Facebook with their parents’ permission. Autumn Miller, a 10-year-old from Southern California, has nearly 6,000 people following her Facebook fan-page postings, which include links to videos of her in makeup and costumes, dancing Laker-Girl style.

[div class=attrib]Read the entire article after the jump.[end-div]

Job of the Future: Personal Data Broker

Pause for a second, and think of all the personal data that companies have amassed about you. Then think about the billions that these companies make in trading this data to advertisers, information researchers and data miners. There are credit bureaus with details of your financial history since birth; social networks with details of everything you and your friends say and (dis)like; GPS-enabled services that track your every move; search engines that trawl your searches; medical companies with your intimate health data; security devices that monitor your movements; and online retailers with all your purchase transactions and wish-lists.

Now think of a business model that puts you in charge of your own personal data. This may not be as far-fetched as it seems, especially as the backlash grows against the increasing consolidation of personal data in the hands of an ever smaller cadre of increasingly powerful players.

[div class=attrib]From Technology Review:[end-div]

Here’s a job title made for the information age: personal data broker.

Today, people have no choice but to give away their personal information—sometimes in exchange for free networking on Twitter or searching on Google, but other times to third-party data-aggregation firms without realizing it at all.

“There’s an immense amount of value in data about people,” says Bernardo Huberman, senior fellow at HP Labs. “That data is being collected all the time. Anytime you turn on your computer, anytime you buy something.”

Huberman, who directs HP Labs’ Social Computing Research Group, has come up with an alternative—a marketplace for personal information—that would give individuals control of and compensation for the private tidbits they share, rather than putting it all in the hands of companies.

In a paper posted online last week, Huberman and coauthor Christina Aperjis propose something akin to a New York Stock Exchange for personal data. A trusted market operator could take a small cut of each transaction and help arrive at a realistic price for a sale.

“There are two kinds of people. Some people who say, ‘I’m not going to give you my data at all, unless you give me a million bucks.’ And there are a lot of people who say, ‘I don’t care, I’ll give it to you for little,’ ” says Huberman. He’s tested this the academic way, through experiments that involved asking men and women to share how much they weigh for a payment.

On his proposed market, a person who highly values her privacy might choose an option to sell her shopping patterns for $10, but at a big risk of not finding a buyer. Alternately, she might sell the same data for a guaranteed payment of 50 cents. Or she might opt out and keep her privacy entirely.
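A toy calculation makes the trade-off between the two options concrete. The probability of finding a buyer at the premium price is an invented figure for illustration, not something from Huberman and Aperjis’s paper.

```python
# Comparing the two pricing options described above in expected-value terms.

risky_price = 10.00
chance_of_buyer = 0.04         # assumed: only 4% chance a buyer pays the premium price
guaranteed_price = 0.50

expected_risky = risky_price * chance_of_buyer
print(f"risky option:      expected payout ${expected_risky:.2f}")
print(f"guaranteed option: expected payout ${guaranteed_price:.2f}")

# Which option a seller picks reveals how much she values her privacy and how
# much risk she is willing to carry, which is what the market design exploits.
```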

You won’t find any kind of opportunity like this today. But with Internet companies making billions of dollars selling our information, fresh ideas and business models that promise users control over their privacy are gaining momentum. Startups like Personal and Singly are working on these challenges already. The World Economic Forum recently called an individual’s data an emerging “asset class.”

Huberman is not the first to investigate a personal data marketplace, and there would seem to be significant barriers—like how to get companies that already collect data for free to participate. But, he says, since the pricing options he outlines gauge how a person values privacy and risk, they address at least two big obstacles to making such a market function.

[div class=attrib]Read the entire article after the jump.[end-div]

Spacetime as an Emergent Phenomenon

A small, but growing, idea in theoretical physics and cosmology is that spacetime may be emergent. That is, spacetime emerges from something much more fundamental, in much the same way that our perception of temperature emerges from the motion and characteristics of underlying particles.

[div class=attrib]More on this new front in our quest to answer the most basic of questions from FQXi:[end-div]

Imagine if nothing around you was real. And, no, not in a science-fiction Matrix sense, but in an actual science-fact way.

Technically, our perceived reality is a gigantic series of approximations: The tables, chairs, people, and cell phones that we interact with every day are actually made up of tiny particles—as all good schoolchildren learn. From the motion and characteristics of those particles emerge the properties that we see and feel, including color and temperature. Though we don’t see those particles, because they are so much smaller than the phenomena our bodies are built to sense, they govern our day-to-day existence.

Now, what if spacetime is emergent too? That’s the question that Joanna Karczmarek, a string theorist at the University of British Columbia, Vancouver, is attempting to answer. As a string theorist, Karczmarek is familiar with imagining invisible constituents of reality. String theorists posit that at a fundamental level, matter is made up of unthinkably tiny vibrating threads of energy that underlie subatomic particles, such as quarks and electrons. Most string theorists, however, assume that such strings dance across a pre-existing and fundamental stage set by spacetime. Karczmarek is pushing things a step further, by suggesting that spacetime itself is not fundamental, but made of more basic constituents.

Having carried out early research in atomic, molecular and optical physics, Karczmarek shifted into string theory because she “was more excited by areas where less was known”—and looking for the building blocks from which spacetime arises certainly fits that criterion. The project, funded by a $40,000 FQXi grant, is “high risk but high payoff,” Karczmarek says.

Although one of only a few string theorists to address the issue, Karczmarek is part of a growing movement in the wider physics community to create a theory that shows spacetime is emergent. (See, for instance, “Breaking the Universe’s Speed Limit.”) The problem really comes into focus for those attempting to combine quantum mechanics with Einstein’s theory of general relativity and thus is traditionally tackled directly by quantum gravity researchers, rather than by string theorists, Karczmarek notes.

That may change, though. Nathan Seiberg, a string theorist at the Institute for Advanced Study (IAS) in Princeton, New Jersey, has found good reasons for his stringy colleagues to believe that at least space—if not spacetime—is emergent. “With space we can sort of imagine how it might work,” Seiberg says. To explain how, Seiberg uses an everyday example—the emergence of an apparently smooth surface of water in a bowl. “If you examine the water at the level of particles, there is no smooth surface. It looks like there is, but this is an approximation,” Seiberg says. Similarly, he has found examples in string theory where some spatial dimensions emerge when you take a step back from the picture (arXiv:hep-th/0601234v1). “At shorter distances it doesn’t look like these dimensions are there because they are quantum fluctuations that are very rapid,” Seiberg explains. “In fact, the notion of space ceases to make sense, and eventually if you go to shorter and shorter distances you don’t even need it for the formulation of the theory.”

[div class=attrib]Read the entire article after the jump.[end-div]

[div class=attrib]Image courtesy of Nature.[end-div]