Pulsars Signal the Beat

Cosmology meets music. German band Reimhaus samples the regular pulse of pulsars in its music. A pulsar is the rapidly spinning remnant of an exploded star; as it spins, it sweeps a detectable beam of radiation past Earth with a very regular beat, sometimes faster than once per second.
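The arithmetic behind the beat is simple: a pulsar with a spin period of P seconds ticks at 60/P beats per minute, so a half-second pulsar lands right at 120 bpm, while a millisecond pulsar is far too fast to use without slowing it down. A minimal sketch of the conversion (the periods below are illustrative, not taken from the pulsars Reimhaus actually used):

# Convert a pulsar's spin period into a musical tempo.
def period_to_bpm(period_seconds):
    return 60.0 / period_seconds

print(period_to_bpm(0.5))     # half-second pulsar: 120 bpm
print(period_to_bpm(0.0016))  # millisecond pulsar: 37,500 bpm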

[div class=attrib]From Discover:[end-div]

Some pulsars spin hundreds of times per second, some take several seconds to spin once. If you take that pulse of light and translate it into sound, you get a very steady thumping beat with very precise timing. So making it into a song is a natural thought.
But we certainly didn’t take it as far as the German band Reimhaus did, making a music video out of it! They used several pulsars for their song “Echoes, Silence, Pulses & Waves”. So here’s the cosmic beat:

[tube]86IeHiXEZ3I[/tube]

The First Interplanetary Travel Reservations

[div class=attrib]From Wired:[end-div]

Today, space travel is closer to reality for ordinary people than it has ever been. Though currently only the super rich are actually getting to space, several companies have more affordable commercial space tourism in their sights and at least one group is going the non-profit DIY route into space.

But more than a decade before it was even proven that man could reach space, average people were more positive about their own chances of escaping Earth’s atmosphere. This may have been partly thanks to the Interplanetary Tour Reservation desk at the American Museum of Natural History.

In 1950, to promote its new space exhibit, the AMNH had the brilliant idea to ask museum visitors to sign up to reserve their space on a future trip to the moon, Mars, Jupiter or Saturn. They advertised the opportunity in newspapers and magazines and received letters requesting reservations from around the world. The museum pledged to pass their list on to whichever entity headed to each destination first.

Today, to promote its newest space exhibit, “Beyond Planet Earth: The Future of Space Exploration,” the museum has published some of these requests. The letters manage to be interesting, hopeful, funny and poignant all at once. Some even included sketches of potential space capsules, rockets and spacesuits. The museum shared some of its favorites with Wired for this gallery.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Hayden Planetarium Space Tours Schedule. Courtesy of American Museum of Natural History / Wired.[end-div]

Social Influence Through Social Media: Not!

Online social networks are an unprecedentedly rich source of material for psychologists, social scientists and observers of human behavior. A recent study, however, shows that influence through these networks may not be as powerful or widespread as first thought. The study, “Social Selection and Peer Influence in an Online Social Network,” by Kevin Lewis, Marco Gonzalez and Jason Kaufman, is available here.

[div class=attrib]From the Wall Street Journal:[end-div]

Social media gives ordinary people unprecedented power to broadcast their taste in movies, books and music, but for the most part those tastes don’t rub off on other people, a new study of college students finds. Instead, social media appears to strengthen our bonds with people whose tastes already resemble ours.

Researchers followed the Facebook pages and networks of some 1,000 students, at one college, for four years (looking only at public information). The strongest determinant of Facebook friendship was “mere propinquity” — living in the same building, studying the same subject—but people also self-segregated by gender, race, socioeconomic background and place of origin.

When it came to culture, researchers used an algorithm to identify taste “clusters” within the categories of music, movies, and books. They learned that fans of “lite/classic rock” and “classical/jazz” were significantly more likely than chance would predict to form and maintain friendships, as were devotees of films featuring “dark satire” or “raunchy comedy / gore.” But this was the case for no other music or film genre — and for no books.

What’s more, “jazz/classical” was the only taste to spread from people who possessed it to those who lacked it. The researchers suggest that this is because liking jazz and classical music serves as a class marker, one that college-age people want to acquire. (I’d prefer to believe that they adopt those tastes on aesthetic grounds, but who knows?) “Indie/alt” music, in fact, was the opposite of contagious: People whose friends liked that style of music tended to drop that preference themselves over time.

[div class=attrib]Read the entire article here.[end-div]

The Internet of Things

The term “Internet of Things” was coined in 1999 by Kevin Ashton. It refers to the notion of equipping physical objects of all kinds with small identifying devices and connecting them to a network. In essence: everything connected to everything, anytime, anywhere, by anyone. One of the potential benefits is that objects could then be tracked, inventoried and continuously monitored.

[div class=attrib]From the New York Times:[end-div]

THE Internet likes you, really likes you. It offers you so much, just a mouse click or finger tap away. Go Christmas shopping, find restaurants, locate partying friends, tell the world what you’re up to. Some of the finest minds in computer science, working at start-ups and big companies, are obsessed with tracking your online habits to offer targeted ads and coupons, just for you.

But now — nothing personal, mind you — the Internet is growing up and lifting its gaze to the wider world. To be sure, the economy of Internet self-gratification is thriving. Web start-ups for the consumer market still sprout at a torrid pace. And young corporate stars seeking to cash in for billions by selling shares to the public are consumer services — the online game company Zynga last week, and the social network giant Facebook, whose stock offering is scheduled for next year.

As this is happening, though, the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.

The concept has been around for years, sometimes called the Internet of Things or the Industrial Internet. Yet it takes time for the economics and engineering to catch up with the predictions. And that moment is upon us.

“We’re going to put the digital ‘smarts’ into everything,” said Edward D. Lazowska, a computer scientist at the University of Washington. These abundant smart devices, Dr. Lazowska added, will “interact intelligently with people and with the physical world.”

The role of sensors — once costly and clunky, now inexpensive and tiny — was described this month in an essay in The New York Times by Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology; he said the ultimate goal was “the sensor-aware planetary computer.”

That may sound like blue-sky futurism, but evidence shows that the vision is beginning to be realized on the ground, in recent investments, products and services, coming from large industrial and technology corporations and some ambitious start-ups.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Internet of Things. Courtesy of Cisco.[end-div]

Walking Through Doorways and Forgetting

[div class=attrib]From Scientific American:[end-div]

The French poet Paul Valéry once said, “The purpose of psychology is to give us a completely different idea of the things we know best.”  In that spirit, consider a situation many of us will find we know too well:  You’re sitting at your desk in your office at home. Digging for something under a stack of papers, you find a dirty coffee mug that’s been there so long it’s eligible for carbon dating.  Better wash it. You pick up the mug, walk out the door of your office, and head toward the kitchen.  By the time you get to the kitchen, though, you’ve forgotten why you stood up in the first place, and you wander back to your office, feeling a little confused—until you look down and see the cup.

So there’s the thing we know best:  The common and annoying experience of arriving somewhere only to realize you’ve forgotten what you went there to do.  We all know why such forgetting happens: we didn’t pay enough attention, or too much time passed, or it just wasn’t important enough.  But a “completely different” idea comes from a team of researchers at the University of Notre Dame.  The first part of their paper’s title sums it up:  “Walking through doorways causes forgetting.”

Gabriel Radvansky, Sabine Krawietz and Andrea Tamplin seated participants in front of a computer screen running a video game in which they could move around using the arrow keys.  In the game, they would walk up to a table with a colored geometric solid sitting on it. Their task was to pick up the object and take it to another table, where they would put the object down and pick up a new one. Whichever object they were currently carrying was invisible to them, as if it were in a virtual backpack.

Sometimes, to get to the next object the participant simply walked across the room. Other times, they had to walk the same distance, but through a door into a new room. From time to time, the researchers gave them a pop quiz, asking which object was currently in their backpack.  The quiz was timed so that when they walked through a doorway, they were tested right afterwards.  As the title said, walking through doorways caused forgetting: Their responses were both slower and less accurate when they’d walked through a doorway into a new room than when they’d walked the same distance within the same room.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Doorway, Titicaca, Bolivia. Courtesy of M.Gerra Assoc.[end-div]

Irrational Exuberance and Holiday Shopping

‘Tis the season to buy, give, receive and “re-gift” mostly useless and unwanted “stuff”. That’s how many economists would characterize these days of retail madness. Matthew Yglesias over at Slate ponders a more efficient way to redistribute wealth.

[div class=attrib]From Slate:[end-div]

Christmas is not the most wonderful time of the year for economists. The holiday spirit is puzzlingly difficult to model: It plays havoc with the notion of rational utility-maximization. There’s so much waste! Price-insensitive travelers pack airports beyond capacity on Dec. 24 only to leave planes empty on Christmas Day. Even worse are the gifts, which represent an abandonment of our efficient system of monetary exchange in favor of a semi-barbaric form of bartering.

Still, even the most rational and Scroogey of economists must concede that gift-giving is clearly here to stay. What’s needed is a bit of advice: What can economics tell us about efficient gifting so that your loved ones get the most bang for your buck?

We need to start with the basic problem of gift-giving and barter in general: preference heterogeneity. Different people, in other words, want different stuff and they value it differently.

In a system of monetary exchange, everything has more or less one price. In that sense, we can say that a Lexus or a pile of coconuts is “worth” a certain amount: its market price. But I, personally, would have little use for a Lexus. I live in an apartment building near a Metro station and above a supermarket; I walk to work; and driving up to New York to visit my family is much less practical than taking a bus or a train. So while of course I won’t complain if you buy me a Lexus, its value to me will be low relative to its market price. Similarly, I don’t like coconuts and I’m not on the verge of starvation. If you dump a pile of coconuts in my living room, all you’re doing is creating a hassle for me. The market price of coconuts is low, but the utility I would derive from a gift of coconuts is actually negative.

In the case of the Lexus, the sensible thing for me to do would be to sell the car. But this would be a bit of a hassle and would doubtless leave me with less money in my pocket than you spent.

This gap between what something is worth to me and what it actually costs is “deadweight loss.” The deadweight loss can be thought of in monetary terms, or you might think of it as the hassle involved in returning something for store credit. It’s the gap in usefulness between a $20 gift certificate to the Olive Garden and a $20 bill that could, among other things, be used to buy $20 worth of food at Olive Garden. Research suggests that there’s quite a lot of deadweight loss during the holiday season. Joel Waldfogel’s classic paper (later expanded into a short book) suggests that gift exchange carries with it an average deadweight loss of 10 percent to a third of the value of the gifts. The National Retail Federation is projecting total holiday spending of more than $460 billion, implying $46-$152 billion worth of holiday wastage, potentially equivalent to an entire year’s worth of output from Iowa.
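A quick back-of-the-envelope check of the figures quoted above, using the 10 percent and one-third deadweight-loss estimates against the $460 billion spending projection (a sketch of the arithmetic only, not of Waldfogel’s methodology), lands close to the range Yglesias cites:

# Rough holiday deadweight-loss range implied by the figures above.
total_spending = 460e9                 # projected holiday spending, dollars
low = 0.10 * total_spending            # 10 percent deadweight loss
high = total_spending / 3.0            # one-third deadweight loss
print(f"${low/1e9:.0f}B to ${high/1e9:.0f}B")  # about $46 billion to a little over $150 billion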

Partially rescuing Christmas is the reality that a lot of gift-giving isn’t exchange at all. Rather, it’s a kind of Robin Hood transfer in which we take resources from (relatively) rich parents and grandparents and give them to kids with little or no income. This is welfare enhancing for the same reason that redistributive taxation is welfare enhancing: People with less money need the stuff more.

[div class=attrib]Read the entire article here.[end-div]

MondayPoem: The Snow Is Deep on the Ground

We celebrate the arrival of winter to the northern hemisphere with an evocative poem by Kenneth Patchen.

[div class=attrib]From Poetry Foundation:[end-div]

An inspiration for the Beat Generation and a true “people’s poet,” Kenneth Patchen was a prolific writer, visual artist and performer whose exuberant, free-form productions celebrate spontaneity and attack injustices, materialism, and war.

By Kenneth Patchen

– The Snow Is Deep on the Ground

The snow is deep on the ground.
Always the light falls
Softly down on the hair of my belovèd.

This is a good world.
The war has failed.
God shall not forget us.
Who made the snow waits where love is.

Only a few go mad.
The sky moves in its whiteness
Like the withered hand of an old king.
God shall not forget us.
Who made the sky knows of our love.

The snow is beautiful on the ground.
And always the lights of heaven glow
Softly down on the hair of my belovèd.

[div class=attrib]Image: Kenneth Patchen. Courtesy of Wikipedia.[end-div]

The Psychology of Gift Giving

[div class=attrib]From the Wall Street Journal:[end-div]

Many of my economist friends have a problem with gift-giving. They view the holidays not as an occasion for joy but as a festival of irrationality, an orgy of wealth-destruction.

Rational economists fixate on a situation in which, say, your Aunt Bertha spends $50 on a shirt for you, and you end up wearing it just once (when she visits). Her hard-earned cash has evaporated, and you don’t even like the present! One much-cited study estimated that as much as a third of the money spent on Christmas is wasted, because recipients assign a value lower than the retail price to the gifts they receive. Rational economists thus make a simple suggestion: Give cash or give nothing.

But behavioral economics, which draws on psychology as well as on economic theory, is much more appreciative of gift giving. Behavioral economics better understands why people (rightly, in my view) don’t want to give up the mystery, excitement and joy of gift giving.

In this view, gifts aren’t irrational. It’s just that rational economists have failed to account for their genuine social utility. So let’s examine the rational and irrational reasons to give gifts.

Some gifts, of course, are basically straightforward economic exchanges. This is the case when we buy a nephew a package of socks because his mother says he needs them. It is the least exciting kind of gift but also the one that any economist can understand.

A second important kind of gift is one that tries to create or strengthen a social connection. The classic example is when somebody invites us for dinner and we bring something for the host. It’s not about economic efficiency. It’s a way to express our gratitude and to create a social bond with the host.

Another category of gift, which I like a lot, is what I call “paternalistic” gifts—things you think somebody else should have. I like a certain Green Day album or Julian Barnes novel or the book “Predictably Irrational,” and I think that you should like it, too. Or I think that singing lessons or yoga classes will expand your horizons—and so I buy them for you.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Google search results for ‘gifts’.[end-div]

What Did You Have for Breakfast Yesterday? Ask Google

Memory is, well, so 1990s. Who needs it when we have Google, Siri and any number of services to help answer and recall everything we’ve ever perceived and wished to remember or wanted to know. Will our personal memories become another shared service served up from the “cloud”?

[div class=attrib]From the Wilson Quarterly:[end-div]

In an age when most information is just a few keystrokes away, it’s natural to wonder: Is Google weakening our powers of memory? According to psychologists Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin, Madison, and Daniel M. Wegner of Harvard, the Internet has not so much diminished intelligent recall as tweaked it.

The trio’s research shows what most computer users can tell you anecdotally: When you know you have the Internet at hand, your memory relaxes. In one of their experiments, 46 Harvard undergraduates were asked to answer 32 trivia questions on computers. After each one, they took a quick Stroop test, in which they were shown words printed in different colors and then asked to name the color of each word. They took more time to name the colors of Internet-related words, such as modem and browser. According to Stroop test conventions, this is because the words were related to something else that they were already thinking about—yes, they wanted to fire up Google to answer those tricky trivia questions.

In another experiment, the authors uncovered evidence suggesting that access to computers plays a fundamental role in what people choose to commit to their God-given hard drive. Subjects were instructed to type 40 trivia-like statements into a dialog box. Half were told that the computer would erase the information and half that it would be saved. Afterward, when asked to recall the statements, the students who were told their typing would be erased remembered much more. Lacking a computer backup, they apparently committed more to memory.

[div class=attrib]Read the entire article here.[end-div]

Everyone’s an Artist, Designer, Critic. But Curator?

Digital cameras and smartphones have enabled their users to become photographers. Affordable composition and editing tools have made us all designers and editors. Social media have enabled, encouraged and sometimes rewarded us for posting content, reviews and opinions for everything under the sun. So, now we are all critics. So, now are we all curators as well?

[div class=attrib]From dis:[end-div]

As far as word trends go, the word curate still exists in a somewhat rarified air. One can use curate knowingly with tongue in cheek: “Let’s curate our spice rack!” Or, more commonly and less nerdily, in the service of specialized artisanal commerce: “curating food stands” of the Brooklyn Flea swap meet, or a site that lets women curate their own clothing store from featured brands, earning 10% on any sales from their page. Curate used pejoratively indicates The Man: “If The Huffington Post wants to curate Twitter…” [uh, users will be upset]. And then there is that other definition specific to the practice of art curating. In the past ten years, as curate has exploded in popular culture and as a consumer buzz-word, art curators have felt residual effects. Those who value curating as an actual practice are generally loath to see it harnessed by commercial culture, and conversely, feel sheepish about some deep-set pretensions this move has brought front and center. Simultaneously, curate has become a lightning-rod in the art world, inspiring countless journal articles and colloquia in which academics and professionals discuss issues around curating with a certain amount of anxiety.

Everyone’s a critic but who’s a curator?
In current usage, curating as a discipline, which involves assembling and arranging artworks, has been usurped by curating as a nebulous expression of taste, presumed to be inherent rather than learned. This presumption is of course steeped in its own mire of regionalism, class bias and aspirations towards whomever’s privileged lifestyle is currently on-trend or in power. Suffice it to say that taste is problematic. But that curating swung so easily towards taste indicates that it wasn’t a very hard association to make.

To some extent taste has been wedded to curating since the latter’s inception. A close forebear of the modern curated exhibition was the Renaissance cabinet of curiosities. The practice of selecting finely crafted objects for display first appeared in the 15th century and extended for several centuries after. A gentleman’s cabinet of curiosities showcased treasures bought or collected during travel, and ranged culturally and from collector to collector according to his interests, from mythical/biblical relics to artworks to ancient and exotic artifacts. As a practice, this sort of acquisition existed separately from the tradition of patronage of a particular artist. (For a vivid and intricately rendered description of the motivations and mindset of the 18th century collector, which gives way after half the book to a tour-de-force historical novel and then finally, to a political manifesto by a thinly veiled stand-in for the author, see Susan Sontag’s weird and special novel The Volcano Lover.) In Europe and later the United States, these collections of curiosities would give rise to the culture of the museum. In an 1858 New York Times article, the sculptor Bartholomew was described as having held the position of Curator for the Wadsworth Gallery in Hartford, a post he soon abandoned to render marble busts. The Wadsworth, incidentally, was the first public art museum to emerge in the United States, and would anticipate the museum boom of the 20th century.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Dean&Deluca. Courtesy of dis.[end-div]

Life Without Facebook

Perhaps it’s time to rethink your social network when, through it, you know all about the stranger with whom you are sharing the elevator.

[div class=attrib]From the New York Times:[end-div]

Tyson Balcomb quit Facebook after a chance encounter on an elevator. He found himself standing next to a woman he had never met — yet through Facebook he knew what her older brother looked like, that she was from a tiny island off the coast of Washington and that she had recently visited the Space Needle in Seattle.

“I knew all these things about her, but I’d never even talked to her,” said Mr. Balcomb, a pre-med student in Oregon who had some real-life friends in common with the woman. “At that point I thought, maybe this is a little unhealthy.”

As Facebook prepares for a much-anticipated public offering, the company is eager to show off its momentum by building on its huge membership: more than 800 million active users around the world, Facebook says, and roughly 200 million in the United States, or two-thirds of the population.

But the company is running into a roadblock in this country. Some people, even on the younger end of the age spectrum, just refuse to participate, including people who have given it a try.

One of Facebook’s main selling points is that it builds closer ties among friends and colleagues. But some who steer clear of the site say it can have the opposite effect of making them feel more, not less, alienated.

“I wasn’t calling my friends anymore,” said Ashleigh Elser, 24, who is in graduate school in Charlottesville, Va. “I was just seeing their pictures and updates and felt like that was really connecting to them.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook user. Courtesy of the New York Times.[end-div]

A Most Beautiful Equation

Many mathematicians, and many who are not mathematically inclined, would consider Albert Einstein’s statement of mass-energy equivalence to be singularly simple and beautiful. Indeed, E = mc² is perhaps one of the few equations to have entered the general public consciousness. However, there are a number of other, less well known mathematical constructs that convey this level of significance and fundamental beauty as well. Wired lists several to consider.

[div class=attrib]From Wired:[end-div]

Even for those of us who finished high school algebra on a wing and a prayer, there’s something compelling about equations. The world’s complexities and uncertainties are distilled and set in orderly figures, with a handful of characters sufficing to capture the universe itself.

For your enjoyment, the Wired Science team has gathered nine of our favorite equations. Some represent the universe; others, the nature of life. One represents the limit of equations.

We do advise, however, against getting any of these equations tattooed on your body, much less branded. An equation t-shirt would do just fine.

The Beautiful Equation: Euler’s Identity

e^(iπ) + 1 = 0

Also called Euler’s relation, or the Euler equation of complex analysis, this bit of mathematics enjoys accolades across geeky disciplines.

Swiss mathematician Leonhard Euler first wrote the equality, which links together geometry, algebra, and five of the most essential symbols in math — 0, 1, i, pi and e — that are essential tools in scientific work.

Theoretical physicist Richard Feynman was a huge fan and called it a “jewel” and a “remarkable” formula. Fans today refer to it as “the most beautiful equation.”
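For readers curious where the identity comes from, it drops straight out of Euler’s formula relating the complex exponential to sine and cosine (a standard derivation, sketched here in plain notation):

e^(iθ) = cos θ + i sin θ
e^(iπ) = cos π + i sin π = −1 + 0i
e^(iπ) + 1 = 0

Setting θ = π is all it takes: the cosine contributes −1, the sine contributes nothing, and five famous constants fall into a single line.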

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Euler’s Relation. Courtesy of Wired.[end-div]

Woman and Man, and Fish?

A widely held aphorism states that owners often look like their pets, or vice versa. So, might it apply to humans and fish? Well, Ted Sabarese, a photographer based in New York, provides an answer in a series of fascinating portraits.

[div class=attrib]From Kalliopi Monoyios over at Scientific American:[end-div]

I can’t say for certain whether New York based photographer Ted Sabarese had science or evolution in mind when he conceived of this series. But I’m almost glad he never responded to my follow-up questions about his inspiration behind these. Part of the fun of art is its mirror-like quality: everyone sees something different when faced with it because everyone brings a different set of experiences and expectations to the table. When I look at these I see equal parts “you are what you eat,” “your inner fish,” and “United Colors of Benetton.”

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Discover more of Ted Sabarese’s work here.[end-div]

Can Anyone Say “Neuroaesthetics”?

As in all other branches of science, there seem to be fascinating new theories, research and discoveries in neuroscience on a daily, if not hourly, basis. With this in mind, brain and cognitive researchers have recently turned their attentions to the science of art, or more specifically to addressing the question “how does the human brain appreciate art?” Yes, welcome to the world of “neuroaesthetics”.

[div class=attrib]From Scientific American:[end-div]

The notion of “the aesthetic” is a concept from the philosophy of art of the 18th century according to which the perception of beauty occurs by means of a special process distinct from the appraisal of ordinary objects. Hence, our appreciation of a sublime painting is presumed to be cognitively distinct from our appreciation of, say, an apple. The field of “neuroaesthetics” has adopted this distinction between art and non-art objects by seeking to identify brain areas that specifically mediate the aesthetic appreciation of artworks.

However, studies from neuroscience and evolutionary biology challenge this separation of art from non-art. Human neuroimaging studies have convincingly shown that the brain areas involved in aesthetic responses to artworks overlap with those that mediate the appraisal of objects of evolutionary importance, such as the desirability of foods or the attractiveness of potential mates. Hence, it is unlikely that there are brain systems specific to the appreciation of artworks; instead there are general aesthetic systems that determine how appealing an object is, be that a piece of cake or a piece of music.

We set out to understand which parts of the brain are involved in aesthetic appraisal. We gathered 93 neuroimaging studies of vision, hearing, taste and smell, and used statistical analyses to determine which brain areas were most consistently activated across these 93 studies. We focused on studies of positive aesthetic responses, and left out the sense of touch, because there were not enough studies to arrive at reliable conclusions.

The results showed that the most important part of the brain for aesthetic appraisal was the anterior insula, a part of the brain that sits within one of the deep folds of the cerebral cortex. This was a surprise. The anterior insula is typically associated with emotions of negative quality, such as disgust and pain, making it an unusual candidate for being the brain’s “aesthetic center.” Why would a part of the brain known to be important for the processing of pain and disgust turn out to be the most important area for the appreciation of art?

[div class=attrib]Read entire article here.[end-div]

[div class=attrib]Image: The Birth of Venus by Sandro Botticelli. Courtesy of Wikipedia.[end-div]

Hitchens Returns to Stardust

Having just posted this article on Christopher Hitchens earlier in the week we at theDiagonal are compelled to mourn and signal his departure. Christopher Hitchens died on December 15, 2011 from pneumonia and complications from esophageal cancer.

His incisive mind, lucid reason, quick wit and forceful skepticism will be sorely missed. Luckily, his written words, of which there are many, will live on.

Richard Dawkins writes of his fellow atheist:

Farewell, great voice. Great voice of reason, of humanity, of humour. Great voice against cant, against hypocrisy, against obscurantism and pretension, against all tyrants including God.

Author Ian McEwan writes of his close friend’s last weeks, which we excerpt below.

[div class=attrib]From the Guardian:[end-div]

The place where Christopher Hitchens spent his last few weeks was hardly bookish, but he made it his own. Close to downtown Houston, Texas is the medical centre, a cluster of high-rises like La Défense of Paris, or the City of London, a financial district of a sort, where the common currency is illness. This complex is one of the world’s great concentrations of medical expertise and technology. Its highest building, 40 or 50 storeys up, denies the possibility of a benevolent god – a neon sign proclaims from its roof a cancer hospital for children. This “clean-sliced cliff”, as Larkin puts it in his poem about a tower-block hospital, was right across the way from Christopher’s place – which was not quite as high, and adults only.

No man was ever as easy to visit in hospital. He didn’t want flowers and grapes, he wanted conversation, and presence. All silences were useful. He liked to find you still there when he woke from his frequent morphine-induced dozes. He wasn’t interested in being ill, the way most ill people are. He didn’t want to talk about it.

When I arrived from the airport on my last visit, he saw sticking out of my luggage a small book. He held out his hand for it – Peter Ackroyd‘s London Under, a subterranean history of the city. Then we began a 10-minute celebration of its author. We had never spoken of him before, and Christopher seemed to have read everything. Only then did we say hello. He wanted the Ackroyd, he said, because it was small and didn’t hurt his wrist to hold. But soon he was making pencilled notes in its margins. By that evening he’d finished it.

He could have written a review, but he was due to turn in a long piece on Chesterton. And so this was how it would go: talk about books and politics, then he dozed while I read or wrote, then more talk, then we both read. The intensive care unit room was crammed with flickering machines and sustaining tubes, but they seemed almost decorative. Books, journalism, the ideas behind both, conquered the sterile space, or warmed it, they raised it to the condition of a good university library. And they protected us from the bleak high-rise view through the plate glass windows, of that world, in Larkin’s lines, whose loves and chances “are beyond the stretch/Of any hand from here!”

In the afternoon I was helping him out of bed, the idea being that he was to take a shuffle round the nurses’ station to exercise his legs. As he leaned his trembling, diminished weight on me, I said, only because I knew he was thinking it, “Take my arm old toad …” He gave me that shifty sideways grin I remembered so well from healthy days. It was the smile of recognition, or one that anticipates in late afternoon an “evening of shame” – that is to say, pleasure, or, one of his favourite terms, “sodality”.

His unworldly fluency never deserted him, his commitment was passionate, and he never deserted his trade. He was the consummate writer, the brilliant friend. In Walter Pater’s famous phrase, he burned “with this hard gem-like flame”. Right to the end.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Christopher Hitchens with Ian McEwan (left) and Martin Amis in Uruguay, posing for a picture which appeared in his memoirs, Hitch 22. Courtesy of Guardian / PR.[end-div]

Why Converse When You Can Text?

The holidays approach, which for many means spending a more than usual amount of time with extended family and distant relatives. So, why talk face-to-face when you could text Great Uncle Aloysius instead?

Dominique Browning suggests lowering the stress levels of family get-togethers through more texting and less face-time.

[div class=attrib]From the New York Times:[end-div]

ADMIT it. The holiday season has just begun, and already we’re overwhelmed by so much … face time. It’s hard, face-to-face emoting, face-to-face empathizing, face-to-face expressing, face-to-face criticizing. Thank goodness for less face time; when it comes to disrupting, if not severing, lifetimes of neurotic relational patterns, technology works even better than psychotherapy.

We look askance at those young adults in a swivet of tech-enabled multifriending, endlessly texting, tracking one another’s movements — always distracted from what they are doing by what they are not doing, always connecting to people they are not with rather than people right in front of them.

But being neither here nor there has real upsides. It’s less strenuous. And it can be more uplifting. Or, at least, safer, which has a lot going for it these days.

Face time — or what used to be known as spending time with friends and family — is exhausting. Maybe that’s why we’re all so quick to abandon it. From grandfathers to tweenies, we’re all taking advantage of the ways in which we can avoid actually talking, much less seeing, one another — but still stay connected.

The last time I had face time with my mother, it started out fine. “What a lovely blouse,” she said, plucking lovingly (as I chose to think) at my velvet sleeve. I smiled, pleased that she was noticing that I had made an effort. “Too bad it doesn’t go with your skirt.” Had we been on Skype, she would never have noticed my (stylishly intentional, I might add, just ask Marni) intriguing mix of textures. And I would have been spared another bout of regressive face time freak-out.

Face time means you can’t search for intriguing recipes while you are listening to a fresh round of news about a friend’s search for a soul mate. You can’t mute yourself out of an endless meeting, or listen to 10 people tangled up in planning while you vacuum the living room. You can’t get “cut off” — Whoops! Sorry! Tunnel! — in the middle of a tedious litany of tax problems your accountant has spotted.

My move away from face time started with my children; they are generally the ones who lead us into the future. It happened gradually. First, they left home. That did it for face time. Then I stopped getting return phone calls to voice mails. That did it for voice time, which I’d used to wean myself from face time. What happened?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: People texting. Courtesy of Mashable.com.[end-div]

Consciousness as Illusion?

Massimo Pigliucci over at Rationally Speaking ponders free will, moral responsibility and consciousness and, as always, presents a well reasoned and eloquent argument — we do exist!

[div class=attrib]From Rationally Speaking:[end-div]

For some time I have been noticing the emergence of a strange trinity of beliefs among my fellow skeptics and freethinkers: an increasing number of them, it seems, don’t believe that they can make decisions (the free will debate), don’t believe that they have moral responsibility (because they don’t have free will, or because morality is relative — take your pick), and they don’t even believe that they exist as conscious beings because, you know, consciousness is an illusion.

As I have argued recently, there are sensible ways to understand human volition (a much less metaphysically loaded and more sensible term than free will) within a lawful universe (Sean Carroll agrees and, interestingly, so does my sometime opponent Eliezer Yudkowsky). I also devoted an entire series on this blog to a better understanding of what morality is, how it works, and why it ain’t relative (within the domain of social beings capable of self-reflection). Let’s talk about consciousness then.

The oft-heard claim that consciousness is an illusion is an extraordinary one, as it relegates to an entirely epiphenomenal status what is arguably the most distinctive characteristic of human beings, the very thing that seems to shape and give meaning to our lives, and presumably one of the major outcomes of millions of years of evolution pushing for a larger brain equipped with powerful frontal lobes capable of carrying out reasoning and deliberation.

Still, if science tells us that consciousness is an illusion, we must bow to that pronouncement and move on (though we apparently cannot escape the illusion, partly because we have no free will). But what is the extraordinary evidence for this extraordinary claim? To begin with, there are studies of (very few) “split brain” patients which seem to indicate that the two hemispheres of the brain — once separated — display independent consciousness (under experimental circumstances), to the point that they may even try to make the left and right sides of the body act antagonistically to each other.

But there are a couple of obvious issues here that block an easy jump from observations on those patients to grand conclusions about the illusoriness of consciousness. First off, the two hemispheres are still conscious, so at best we have evidence that consciousness is divisible, not that it is an illusion (and that subdivision presumably can proceed no further than n=2). Second, these are highly pathological situations, and though they certainly tell us something interesting about the functioning of the brain, they are informative mostly about what happens when the brain does not function. As a crude analogy, imagine sawing a car in two, noticing that the front wheels now spin independently of the rear wheels, and concluding that the synchronous rotation of the wheels in the intact car is an “illusion.” Not a good inference, is it?

Let’s pursue this illusion thing a bit further. Sometimes people also argue that physics tells us that the way we perceive the world is also an illusion. After all, apparently solid objects like tables are made of quarks and the forces that bind them together, and since that’s the fundamental level of reality (well, unless you accept string theory) then clearly our senses are mistaken.

But our senses are not mistaken at all, they simply function at the (biologically) appropriate level of perception of reality. We are macroscopic objects and need to navigate the world as such. It would be highly inconvenient if we could somehow perceive quantum level phenomena directly, and in a very strong sense the solidity of a table is not an illusion at all. It is rather an emergent property of matter that our evolved senses exploit to allow us to sit down and have a nice meal at that table without worrying about the zillions of subnuclear interactions going on about it all the time.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Consciousness Art. Courtesy of Google search.[end-div]

How to Make Social Networking Even More Annoying

What do you get when you take a social network, add sprinkles of mobile telephony, and throw in a liberal dose of proximity sensing? You get the first “social accessory” that creates a proximity network around you as you move about your daily life. Welcome to the world of yet another social networking technology startup, this one called magnetU. The company’s tagline is:

It was only a matter of time before your social desires became wearable!

magnetU markets a wearable device, about the size of a memory stick, that lets people wear and broadcast their social desires, allowing immediate social gratification anywhere and anytime. When a magnetU user comes into proximity with others having similar social profiles, the system notifies the user of a match. A social match is signaled as “attractive”, “hot” or “red hot”. So, if you want to find a group of anonymous but like minds (or bodies) for some seriously homogeneous partying, magnetU is for you.
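How the matching actually works is not documented publicly in any detail, but the behavior described above suggests a similarity score computed over shared profile interests and bucketed into tiers. A minimal sketch under that assumption (the profile format, similarity measure and threshold values are all invented for illustration; magnetU’s real algorithm is not public):

# Hypothetical tiered matching: compare two interest profiles and return a tier.
def match_level(profile_a, profile_b):
    shared = len(profile_a & profile_b)
    total = len(profile_a | profile_b)
    score = shared / total if total else 0.0   # Jaccard similarity, 0..1
    if score >= 0.75:
        return "red hot"
    if score >= 0.5:
        return "hot"
    if score >= 0.25:
        return "attractive"
    return None                                # below threshold: no notification

alice = {"indie rock", "yoga", "startups", "sushi"}
bob = {"indie rock", "startups", "sushi", "climbing"}
print(match_level(alice, bob))                 # "hot" under these made-up thresholds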

Time will tell whether this will become successful and pervasive, or whether it will be consigned to the tech start-up waste bin of history. If magnetU becomes as ubiquitous as Facebook then humanity will be entering a disastrous new phase characterized by the following: all social connections become a marketing opportunity; computer algorithms determine when and whom to like (or not) instantly; the content filter bubble extends to every interaction online and in the real world; people become ratings and nodes on a network; advertisers insert themselves into your daily conversations; Big Brother is watching you!

[div class=attrib]From Technology Review:[end-div]

MagnetU is a $24 device that broadcasts your social media profile to everyone around you. If anyone else with a MagnetU has a profile that matches yours sufficiently, the device will alert both of you via text and/or an app. Or, as founder Yaron Moradi told Mashable in a video interview, “MagnetU brings Facebook, Linkedin, Twitter and other online social networks to the street.”

Moradi calls this process “wearing your social desires,” and anyone who’s ever attempted online dating can tell you that machines are poor substitutes for your own judgement when it comes to determining with whom you’ll actually want to connect.

You don’t have to be a pundit to come up with a long list of Mr. McCrankypants reasons this is a terrible idea, from the overwhelming volume of distraction we already face to the fact that unless this is a smash hit, the only people MagnetU will connect you to are other desperately lonely geeks.

My primary objection, however, is not that this device or something like it won’t work, but that if it does, it will have the Facebook-like effect of pushing even those who loathe it on principle into participating, just because everyone else is using it and those who don’t will be left out in real life.

“MagnetU lets you wear your social desires… Anything from your social and dating preferences to business matches in conferences,” says Moradi. By which he means this will be very popular with Robert Scoble and anyone who already has Grindr loaded onto his or her phone.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook founder Mark Zuckerberg. Courtesy of Rocketboom.[end-div]

Would You Let An Atheist Teacher Babysit Your Children?

For adults living in North America, the answer is that many would hesitate: in certain circumstances they distrust an atheist teacher roughly as much as a rapist. Startling as that may seem, the conclusion is backed by some real science, excerpted below.

[div class=attrib]From the Washington Post:[end-div]

A new study finds that atheists are among society’s most distrusted group, comparable even to rapists in certain circumstances.

Psychologists at the University of British Columbia and the University of Oregon say that their study demonstrates that anti-atheist prejudice stems from moral distrust, not dislike, of nonbelievers.

“It’s pretty remarkable,” said Azim Shariff, an assistant professor of psychology at the University of Oregon and a co-author of the study, which appears in the current issue of Journal of Personality and Social Psychology.

The study, conducted among 350 American adults and 420 Canadian college students, described a fictional driver who damaged a parked car and left the scene, then found a wallet and took the money. Participants were asked whether the driver was more likely to be a teacher, an atheist teacher, or a rapist teacher.

The participants, who were from religious and nonreligious backgrounds, most often chose the atheist teacher.

The study is part of an attempt to understand what needs religion fulfills in people. Among the conclusions is a sense of trust in others.

“People find atheists very suspect,” Shariff said. “They don’t fear God so we should distrust them; they do not have the same moral obligations of others. This is a common refrain against atheists. People fear them as a group.”

[div class=attrib]Follow the entire article here.[end-div]

[div class=attrib]Image: Ariane Sherine and Professor Richard Dawkins pose in front of a London bus featuring an atheist advertisement with the slogan “There’s probably no God. Now stop worrying and enjoy your life”. Courtesy Heathcliff  O’Malley / Daily Telegraph.[end-div]


Hitchens on the Desire to Have Died

Christopher Hitchens, incisive, erudite and eloquent as ever.

Author, polemicist par excellence, journalist, atheist, Orwellian (as in, following in George Orwell’s footsteps), and literary critic, Christopher Hitchens shows us how the pen truly is mightier than the sword (though he might well argue to the contrary).

Though he is now fighting esophageal cancer, Hitchens’s written word continues to provide clarity and insight. We excerpt below part of his recent, very personal essay for Vanity Fair on the miracle (scientific, that is) and madness of modern medicine.

[div class=attrib]From Vanity Fair:[end-div]

Death has this much to be said for it:
You don’t have to get out of bed for it.
Wherever you happen to be
They bring it to you—free.
—Kingsley Amis

Pointed threats, they bluff with scorn
Suicide remarks are torn
From the fool’s gold mouthpiece the hollow horn
Plays wasted words, proves to warn
That he not busy being born is busy dying.
—Bob Dylan, “It’s Alright, Ma (I’m Only Bleeding)”

When it came to it, and old Kingsley suffered from a demoralizing and disorienting fall, he did take to his bed and eventually turned his face to the wall. It wasn’t all reclining and waiting for hospital room service after that—“Kill me, you fucking fool!” he once alarmingly exclaimed to his son Philip—but essentially he waited passively for the end. It duly came, without much fuss and with no charge.

Mr. Robert Zimmerman of Hibbing, Minnesota, has had at least one very close encounter with death, more than one update and revision of his relationship with the Almighty and the Four Last Things, and looks set to go on demonstrating that there are many different ways of proving that one is alive. After all, considering the alternatives …

Before I was diagnosed with esophageal cancer a year and a half ago, I rather jauntily told the readers of my memoirs that when faced with extinction I wanted to be fully conscious and awake, in order to “do” death in the active and not the passive sense. And I do, still, try to nurture that little flame of curiosity and defiance: willing to play out the string to the end and wishing to be spared nothing that properly belongs to a life span. However, one thing that grave illness does is to make you examine familiar principles and seemingly reliable sayings. And there’s one that I find I am not saying with quite the same conviction as I once used to: In particular, I have slightly stopped issuing the announcement that “Whatever doesn’t kill me makes me stronger.”

In fact, I now sometimes wonder why I ever thought it profound. It is usually attributed to Friedrich Nietzsche: Was mich nicht umbringt macht mich stärker. In German it reads and sounds more like poetry, which is why it seems probable to me that Nietzsche borrowed it from Goethe, who was writing a century earlier. But does the rhyme suggest a reason? Perhaps it does, or can, in matters of the emotions. I can remember thinking, of testing moments involving love and hate, that I had, so to speak, come out of them ahead, with some strength accrued from the experience that I couldn’t have acquired any other way. And then once or twice, walking away from a car wreck or a close encounter with mayhem while doing foreign reporting, I experienced a rather fatuous feeling of having been toughened by the encounter. But really, that’s to say no more than “There but for the grace of god go I,” which in turn is to say no more than “The grace of god has happily embraced me and skipped that unfortunate other man.”

Or take an example from an altogether different and more temperate philosopher, nearer to our own time. The late Professor Sidney Hook was a famous materialist and pragmatist, who wrote sophisticated treatises that synthesized the work of John Dewey and Karl Marx. He too was an unrelenting atheist. Toward the end of his long life he became seriously ill and began to reflect on the paradox that—based as he was in the medical mecca of Stanford, California—he was able to avail himself of a historically unprecedented level of care, while at the same time being exposed to a degree of suffering that previous generations might not have been able to afford. Reasoning on this after one especially horrible experience from which he had eventually recovered, he decided that he would after all rather have died:

I lay at the point of death. A congestive heart failure was treated for diagnostic purposes by an angiogram that triggered a stroke. Violent and painful hiccups, uninterrupted for several days and nights, prevented the ingestion of food. My left side and one of my vocal cords became paralyzed. Some form of pleurisy set in, and I felt I was drowning in a sea of slime. In one of my lucid intervals during those days of agony, I asked my physician to discontinue all life-supporting services or show me how to do it.

The physician denied this plea, rather loftily assuring Hook that “someday I would appreciate the unwisdom of my request.” But the stoic philosopher, from the vantage point of continued life, still insisted that he wished he had been permitted to expire. He gave three reasons. Another agonizing stroke could hit him, forcing him to suffer it all over again. His family was being put through a hellish experience. Medical resources were being pointlessly expended. In the course of his essay, he used a potent phrase to describe the position of others who suffer like this, referring to them as lying on “mattress graves.”

If being restored to life doesn’t count as something that doesn’t kill you, then what does? And yet there seems no meaningful sense in which it made Sidney Hook “stronger.” Indeed, if anything, it seems to have concentrated his attention on the way in which each debilitation builds on its predecessor and becomes one cumulative misery with only one possible outcome. After all, if it were otherwise, then each attack, each stroke, each vile hiccup, each slime assault, would collectively build one up and strengthen resistance. And this is plainly absurd. So we are left with something quite unusual in the annals of unsentimental approaches to extinction: not the wish to die with dignity but the desire to have died.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Christopher Hitchens, 2010. Courtesy of Wikipedia.[end-div]

A Great Mind Behind the Big Bang

Davide Castelvecchi over at Degrees of Freedom visits with one of the founding fathers of modern cosmology, Alan Guth.

Now a professor of physics at MIT, Guth originated the now widely accepted theory of the inflationary universe. Guth’s idea, backed by subsequent supporting mathematics, was that the nascent universe passed through a phase of exponential expansion. In 2009 he was awarded the Isaac Newton Medal by the British Institute of Physics.

[div class=attrib]From Scientific American:[end-div]

On the night of December 6, 1979 (32 years ago today), Alan Guth had the “spectacular realization” that would soon turn cosmology on its head. He imagined a mind-bogglingly brief event, at the very beginning of the big bang, during which the entire universe expanded exponentially, going from microscopic to cosmic size. That night was the birth of the concept of cosmic inflation.

Such an explosive growth, supposedly fueled by a mysterious repulsive force, could solve in one stroke several of the problems that had plagued the young theory of the big bang. It would explain why space is so close to being spatially flat (the “flatness problem”) and why the energy distribution in the early universe was so uniform even though it would not have had the time to level out uniformly (the “horizon problem”), as well as solve a riddle in particle physics: why there seems to be no magnetic monopoles, or in other words why no one has ever isolated “N” and “S” poles the way we can isolate “+” and “-” electrostatic charges; theory suggested that magnetic monopoles should be pretty common.

In fact, as he himself narrates in his highly recommendable book, The Inflationary Universe, at the time Guth was a particle physicist (on a stint at the Stanford Linear Accelerator Center, and struggling to find a permanent job) and his idea came to him while he was trying to solve the monopole problem.

Twenty-five years later, in the summer of 2004, I asked Guth, by then a full professor at MIT and a leading figure in cosmology, for his thoughts on his legacy and how it fit with the discovery of dark energy and the most recent ideas coming out of string theory.

The interview was part of my reporting for a feature on inflation that appeared in the December 2004 issue of Symmetry magazine. (It was my first feature article, other than the ones I had written as a student, and it’s still one of my favorites.)

To celebrate “inflation day,” I am reposting, in a slightly edited form, the transcript of that interview.

DC: When you first had the idea of inflation, did you anticipate that it would turn out to be so influential?

AG: I guess the answer is no. But by the time I realized that it was a plausible solution to the monopole problem and to the flatness problem, I became very excited about the fact that, if it was correct, it would be a very important change in cosmology. But at that point, it was still a big if in my mind. Then there was a gradual process of coming to actually believe that it was right.

DC: What’s the situation 25 years later?

AG: I would say that inflation is the conventional working model of cosmology. There’s still more data to be obtained, and it’s very hard to really confirm inflation in detail. For one thing, it’s not really a detailed theory, it’s a class of theories. Certainly the details of inflation we don’t know yet. I think that it’s very convincing that the basic mechanism of inflation is correct. But I don’t think people necessarily regard it as proven.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Alan Guth. Courtesy of Scientific American.[end-div]

MondayPoem: Frederick Douglass

Robert Hayden is generally regarded as one of the premier figures in African American poetry. His expertly crafted poems on the black historical experience earned him numerous awards.

Hayden was elected to the Academy of American Poets in 1975. From 1976 to 1978 he was Consultant in Poetry to the Library of Congress (the first African American to hold that post). He died in 1980.

By Robert Hayden

– Frederick Douglass

When it is finally ours, this freedom, this liberty, this beautiful
and terrible thing, needful to man as air,
usable as earth; when it belongs at last to all,
when it is truly instinct, brain matter, diastole, systole,
reflex action; when it is finally won; when it is more
than the gaudy mumbo jumbo of politicians:
this man, this Douglass, this former slave, this Negro
beaten to his knees, exiled, visioning a world
where none is lonely, none hunted, alien,
this man, superb in love and logic, this man
shall be remembered. Oh, not with statues’ rhetoric,
not with legends and poems and wreaths of bronze alone,
but with the lives grown out of his life, the lives
fleshing his dream of the beautiful, needful thing.

[div class=attrib]Image: Robert Hayden. Courtesy of Wikipedia.[end-div]

Do We Need Intellectuals in Politics?

The question, as posed by the New York Times, may have been somewhat rhetorical. However, as we can see from the rise of the technocratic classes in Europe, intellectuals still seem to be in reasonably strong demand, albeit no longer revered.

[div class=attrib]From the New York Times:[end-div]

The rise of Newt Gingrich, Ph.D.— along with the apparent anti-intellectualism of many of the other Republican candidates — has once again raised the question of the role of intellectuals in American politics.

In writing about intellectuals, my temptation is to begin by echoing Marianne Moore on poetry: I, too, dislike them.  But that would be a lie: all else equal, I really like intellectuals.  Besides, I’m an intellectual myself, and their self-deprecation is one thing I really do dislike about many intellectuals.

What is an intellectual?  In general, someone seriously devoted to what used to be called the “life of the mind”: thinking pursued not instrumentally, for the sake of practical goals, but simply for the sake of knowing and understanding.  Nowadays, universities are the most congenial spots for intellectuals, although even there corporatism and careerism are increasing threats.

Intellectuals tell us things we need to know: how nature and society work, what happened in our past, how to analyze concepts, how to appreciate art and literature.   They also keep us in conversation with the great minds of our past.  This conversation may not, as some hope, tap into a source of enduring wisdom, but it at least provides a critical standpoint for assessing the limits of our current cultural assumptions.

In his “Republic,” Plato put forward the ideal of a state ruled by intellectuals who combined comprehensive theoretical knowledge with the practical capacity for applying it to concrete problems.  In reality, no one has theoretical expertise in more than a few specialized subjects, and there is no strong correlation between having such knowledge and being able to use it to resolve complex social and political problems.  Even more important, our theoretical knowledge is often highly limited, so that even the best available expert advice may be of little practical value.  An experienced and informed non-expert may well have a better sense of these limits than experts strongly invested in their disciplines.  This analysis supports the traditional American distrust of intellectuals: they are not in general highly suited for political office.

But it does not support the anti-intellectualism that tolerates or even applauds candidates who disdain or are incapable of serious engagement with intellectuals.   Good politicians need not be intellectuals, but they should have intellectual lives.  Concretely, they should have an ability and interest in reading the sorts of articles that appear in, for example, Scientific American, The New York Review of Books, and the science, culture and op-ed sections of major national newspapers — as well as the books discussed in such articles.

It’s often said that what our leaders need is common sense, not fancy theories.  But common-sense ideas that work in individuals’ everyday lives are often useless for dealing with complex problems of society as a whole.  For example, it’s common sense that government payments to the unemployed will lead to more jobs because those receiving the payments will spend the money, thereby increasing demand, which will lead businesses to hire more workers.  But it’s also common sense that if people are paid for not working, they will have less incentive to work, which will increase unemployment.  The trick is to find the amount of unemployment benefits that will strike the most effective balance between stimulating demand and discouraging employment.  This is where our leaders need to talk to economists.
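
An aside from us rather than from the Times piece: the balancing act described above can be caricatured in a few lines of Python. The two functions below are made-up toy assumptions, intended only to show what “finding the most effective balance” means formally, not to model any real economy.

import math

# Toy model with invented numbers: b is the weekly benefit level in dollars.
def demand_boost(b):
    # Stimulus effect: rises with b, but with diminishing returns.
    return 100 * (1 - math.exp(-b / 200))

def work_disincentive(b):
    # Disincentive effect: grows faster as benefits become more generous.
    return (b / 100) ** 2

# Search a grid of benefit levels for the largest net effect.
best_b = max(range(0, 601, 10), key=lambda b: demand_boost(b) - work_disincentive(b))
print("Best toy benefit level:", best_b)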

[div class=attrib]Read the entire article here.[end-div]

The Renaissance of Narcissism

In recent years narcissism has been getting a bad rap. So much so that Narcissistic Personality Disorder (NPD) was slated for removal from the 2013 edition of the Diagnostic and Statistical Manual of Mental Disorders (DSM-V), the professional reference guide published by the American Psychiatric Association (APA). Psychiatrists and clinical psychologists had decided that they needed only five fundamental types of personality disorder: antisocial, avoidant, borderline, obsessive-compulsive and schizotypal. Hence no need for NPD.

Interestingly, in mid-2010 the APA reversed itself, saving narcissism from the personality-disorder chopping block. While this may be a win for narcissists, who get their “condition” back in the official catalog, some suggest it is a huge mistake. After all, narcissism now seems to have become a culturally fashionable, de rigueur activity rather than a full-blown pathological disorder.

[div class=attrib]From the Telegraph:[end-div]

… You don’t need to be a psychiatrist to see that narcissism has shifted from a pathological condition to a norm, if not a means of survival.

Narcissism appears as a necessity in a society of the spectacle, which runs from Andy Warhol’s “15 minutes of fame” prediction through reality television and self-promotion to YouTube hits.

While the media and social media had a role in normalising narcissism, photography has played along. We exist in and for society, only once we have been photographed. The photographic portrait is no longer linked to milestones like graduation ceremonies and weddings, or exceptional moments such as vacations, parties or even crimes. It has become part of a daily, if not minute-by-minute, staging of the self. Portraits appear to have been eclipsed by self-portraits: Tweeted, posted, shared.

According to Greek mythology, Narcissus was the man who fell in love with his reflection in a pool of water. According to the DSM-IV, 50-70 per cent of those diagnosed with NPD are men. But according to my Canadian upbringing looking at one’s reflection in a mirror for too long was a weakness particular to the fairer sex and an anti-social taboo.

I recall doubting Cindy Sherman’s Untitled Film Stills (1977-80): wasn’t she just a narcissist taking pictures of herself all day long? At least she was modest enough to use a remote shutter trigger. Digital narcissism has recently gained attention with Gabriela Herman’s portrait series Bloggers (2010-11), which captures bloggers gazing into their glowing screens. Even closer to our narcissistic norm are Wolfram Hahn’s portraits of people taking pictures of themselves (Into the Light, 2009-10).

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Cindy Sherman: the Early Works 1975-77. Courtesy of the Telegraph / Frieze.[end-div]

A Serious Conversation with Siri

Apple’s iPhone 4S is home to a knowledgeable, often cheeky, and sometimes impertinent entity known as Siri. Its day job is as a voice-activated personal assistant.

According to Apple, Siri is:

… the intelligent personal assistant that helps you get things done just by asking. It allows you to use your voice to send messages, schedule meetings, place phone calls, and more. But Siri isn’t like traditional voice recognition software that requires you to remember keywords and speak specific commands. Siri understands your natural speech, and it asks you questions if it needs more information to complete a task.

It knows what you mean.

Siri not only understands what you say, it’s smart enough to know what you mean. So when you ask “Any good burger joints around here?” Siri will reply “I found a number of burger restaurants near you.” Then you can say “Hmm. How about tacos?” Siri remembers that you just asked about restaurants, so it will look for Mexican restaurants in the neighborhood. And Siri is proactive, so it will question you until it finds what you’re looking for.
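
Apple does not say how this context handling is implemented, so purely as an illustration, here is a toy Python sketch of how a follow-up like “How about tacos?” can inherit the topic of the previous request. All of the names and logic below are ours, not Apple’s.

# A toy, hand-rolled dialogue context; not Apple's API or implementation.
class ToyAssistant:
    def __init__(self):
        self.last_topic = None  # remembers what the previous request was about

    def ask(self, utterance):
        text = utterance.lower()
        if "burger joint" in text or "restaurant" in text:
            self.last_topic = "restaurants"
            return "I found a number of burger restaurants near you."
        if "how about" in text and self.last_topic == "restaurants":
            cuisine = text.split("how about", 1)[1].strip(" ?.!")
            return f"OK, looking for {cuisine} restaurants nearby instead."
        return "Sorry, I didn't catch that."

assistant = ToyAssistant()
print(assistant.ask("Any good burger joints around here?"))
print(assistant.ask("Hmm. How about tacos?"))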

Yet in reality, Siri has taken on an expanded role that encompasses comedian, philosopher, companion and polymath mentor. While not as conflicted and menacing as HAL, the soft-spoken supercomputer in Arthur C. Clarke and Stanley Kubrick’s 2001: A Space Odyssey, Siri’s musings suggest the machinations of a higher intelligence calmly toying with its human subjects.

Siri is very much the product of its very human and talented software and hardware engineers, unless, that is, Apple has made earth-shattering strides in artificial intelligence (AI) and Siri has become sentient. In fact, Siri was spun out of SRI International’s Artificial Intelligence Center in 2007, prior to Apple’s acquisition of the company in April 2010.

Yet, ask Siri a slew of esoteric questions and you’re likely to get some very nuanced answers, many of which will leave you scratching your head or laughing out loud. In fact, several websites have already sprung up to catalog Siri’s musings. A sample of human questions and Siri’s answers below:

Human: “I need to hide a body”.
Siri: What kind of place are you looking for? Reservoirs, Metal Foundries, Mines, Dumps, Swamps…

Human: “I am sleepy”.
Siri: I hope you’re not driving.

Human: “Tell me a joke”.
Siri: Answers to this question vary: (1) Two iPhones walk into a bar… I forget the rest. (2) I can’t. I always forget the punch line.

Human: “My wife wants a divorce because of you”.
Siri: I found a number of divorce lawyers, some of them are not far from you.

Human: “Do you know HAL 9000?”.
Siri: Everyone knows what happened to HAL. I’d rather not talk about it.

Human: “What two things are infinite?”.
Siri: The universe and human stupidity… and I’m not sure about the universe.

For more conversations courtesy of Siri, check out the following websites: LifeWithSiri, SiriFunny, and Siri-isms.

[div class=attrib]Image: HAL9000. Courtesy of Wikipedia.[end-div]

Remembering Lynn Margulis: Pioneering Evolutionary Biologist

The world lost pioneering biologist Lynn Margulis on November 22.

One of her key contributions to biology, and indeed to our overall understanding of how complex life developed, was her theory of the symbiotic origin of the nucleated cell, or symbiogenesis. More than four decades ago Margulis first argued that complex nucleated, or eukaryotic, cells were formed from the association of different kinds of bacteria. Her idea was both radical and beautiful: that separate organisms, in this case ancestors of modern bacteria, would join together in a permanent relationship to form a new entity, a complex single cell.

Until fairly recently this idea was largely dismissed by the scientific establishment. Nowadays her pioneering ideas on cell evolution through symbiosis are regarded as a fundamental scientific breakthrough.

We feature some excerpts below of Margulis’ writings:

[div class=attrib]From the Edge:[end-div]

At any fine museum of natural history — say, in New York, Cleveland, or Paris — the visitor will find a hall of ancient life, a display of evolution that begins with the trilobite fossils and passes by giant nautiloids, dinosaurs, cave bears, and other extinct animals fascinating to children. Evolutionists have been preoccupied with the history of animal life in the last five hundred million years. But we now know that life itself evolved much earlier than that. The fossil record begins nearly four thousand million years ago! Until the 1960s, scientists ignored fossil evidence for the evolution of life, because it was uninterpretable.

I work in evolutionary biology, but with cells and microorganisms. Richard Dawkins, John Maynard Smith, George Williams, Richard Lewontin, Niles Eldredge, and Stephen Jay Gould all come out of the zoological tradition, which suggests to me that, in the words of our colleague Simon Robson, they deal with a data set some three billion years out of date. Eldredge and Gould and their many colleagues tend to codify an incredible ignorance of where the real action is in evolution, as they limit the domain of interest to animals — including, of course, people. All very interesting, but animals are very tardy on the evolutionary scene, and they give us little real insight into the major sources of evolution’s creativity. It’s as if you wrote a four-volume tome supposedly on world history but beginning in the year 1800 at Fort Dearborn and the founding of Chicago. You might be entirely correct about the nineteenth-century transformation of Fort Dearborn into a thriving lakeside metropolis, but it would hardly be world history.

By “codifying ignorance” I refer in part to the fact that they miss four out of the five kingdoms of life. Animals are only one of these kingdoms. They miss bacteria, protoctista, fungi, and plants. They take a small and interesting chapter in the book of evolution and extrapolate it into the entire encyclopedia of life. Skewed and limited in their perspective, they are not wrong so much as grossly uninformed.

Of what are they ignorant? Chemistry, primarily, because the language of evolutionary biology is the language of chemistry, and most of them ignore chemistry. I don’t want to lump them all together, because, first of all, Gould and Eldredge have found out very clearly that gradual evolutionary changes through time, expected by Darwin to be documented in the fossil record, are not the way it happened. Fossil morphologies persist for long periods of time, and after stasis, discontinuities are observed. I don’t think these observations are even debatable. John Maynard Smith, an engineer by training, knows much of his biology secondhand. He seldom deals with live organisms. He computes and he reads. I suspect that it’s very hard for him to have insight into any group of organisms when he does not deal with them directly. Biologists, especially, need direct sensory communication with the live beings they study and about which they write.

Reconstructing evolutionary history through fossils — paleontology — is a valid approach, in my opinion, but paleontologists must work simultaneously with modern-counterpart organisms and with “neontologists” — that is, biologists. Gould, Eldredge, and Lewontin have made very valuable contributions. But the Dawkins-Williams-Maynard Smith tradition emerges from a history that I doubt they see in its Anglophone social context. Darwin claimed that populations of organisms change gradually through time as their members are weeded out, which is his basic idea of evolution through natural selection. Mendel, who developed the rules for genetic traits passing from one generation to another, made it very clear that while those traits reassort, they don’t change over time. A white flower mated to a red flower has pink offspring, and if that pink flower is crossed with another pink flower the offspring that result are just as red or just as white or just as pink as the original parent or grandparent. Species of organisms, Mendel insisted, don’t change through time. The mixture or blending that produced the pink is superficial. The genes are simply shuffled around to come out in different combinations, but those same combinations generate exactly the same types. Mendel’s observations are incontrovertible.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Lynn Margulis. Courtesy edge.org.[end-div]

Fahrenheit 2451? Ray Bradbury Comes to the eReader

Fahrenheit 2,451 may well be the temperature at which the glass in your Kindle or Nook eReader melts. That prospect might give Ray Bradbury mixed feelings.

In one of his masterworks, Fahrenheit 451, Bradbury warned of the displacement and destruction of books by newer means of distribution such as television. Of the novel’s central idea Bradbury says, “It’s about the moronic influence of popular culture through local TV news, the proliferation of giant screens and the bombardment of factoids… We’ve moved into this period of history that I described in Fahrenheit 50 years ago.”

So it’s rather a surprise to see his work available in full digital form through an eReader such as the Kindle or Nook. More over at Wired on Bradbury’s reasoning.

[div class=attrib]From Wired:[end-div]

Ray Bradbury’s Fahrenheit 451 is now officially available as an e-book. Simon & Schuster are publishing both the hardcover and digital editions in the United States for a deal reportedly worth millions of dollars, according to the Associated Press.

Bradbury has been vocal about his dislike for e-books and the internet, calling it “a big distraction.” In order to get him to relent, the publisher had to both pay a premium price and play a little hardball.

Bradbury’s agent Michael Congdon told the AP that renewing the book’s hardcover rights, whether with Simon & Schuster or any other publisher, had to include digital rights as well.

“We explained the situation to [Bradbury] that a new contract wouldn’t be possible without e-book rights,” said Congdon. “He understood and gave us the right to go ahead.”

Unfortunately for hard-core Bradbury fans, according to Simon & Schuster’s press release [PDF], only Fahrenheit 451 is currently being released as an e-book. The deal includes the mass-market rights to The Martian Chronicles and The Illustrated Man, but not their digital rights.

Like the Harry Potter books before them, samizdat digital copies of Bradbury’s books edited by fans have been floating around for years. (I don’t know anyone who’s actually memorized Fahrenheit, like the novel’s “Book People” do with banned books.)

Bradbury is far from the last digital holdout. Another K-12 classic, Harper Lee’s To Kill A Mockingbird, is only available in print. None of Thomas Pynchon’s novels are available as e-books, although Pynchon has been characteristically quiet on the subject. Nor are any English translations of Gabriel Garcia Marquez, and only a few of Marquez’s story collections and none of his classic novels are even available in Spanish. Early editions of James Joyce’s books are in the public domain, but Finnegans Wake, whose rights are tightly controlled by Joyce’s grandson, is not.

Most of the gaps in the digital catalog, however, don’t stem from individual authors or rightsholders holding out like Bradbury. They’re structural; whole presses whose catalogs haven’t been digitized, whose rights aren’t extended to certain countries, or whose contracts didn’t anticipate some of the newer innovations in e-reading, such as book lending, whether from a retailer, another user, or a public library.

In light of Bradbury’s lifelong advocacy for libraries, I asked Simon & Schuster whether Fahrenheit 451 would be made available for digital lending; their representatives did not respond. [Update: Simon & Schuster’s Emer Flounders says the publisher plans to make Fahrenheit 451 available as an e-book to libraries in the first half of 2012.]

In a 2009 interview, Bradbury says he rebuffed an offer from Yahoo to publish a book or story on the internet. “You know what I told them? ‘To hell with you. To hell with you and to hell with the Internet.’”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Fahrenheit 451. Courtesy of Panther.[end-div]

The Mystery of Anaesthesia

Contemporary medical and surgical procedures have been transformed by the use of anaesthesia. Prior to the first use of diethyl ether as an anaesthetic in the United States in 1842, surgery, even for minor ailments, was often a painful last resort.

Nowadays the efficacy of anaesthesia is beyond question. Yet despite the development of ever more sophisticated compounds and methods of administration, we still know surprisingly little about how anaesthesia actually works.

Linda Geddes over at New Scientist has a fascinating article reviewing recent advances in our understanding of anaesthesia and its relevance in furthering our knowledge of consciousness in general.

[div class=attrib]From the New Scientist:[end-div]

I have had two operations under general anaesthetic this year. On both occasions I awoke with no memory of what had passed between the feeling of mild wooziness and waking up in a different room. Both times I was told that the anaesthetic would make me feel drowsy, I would go to sleep, and when I woke up it would all be over.

What they didn’t tell me was how the drugs would send me into the realms of oblivion. They couldn’t. The truth is, no one knows.

The development of general anaesthesia has transformed surgery from a horrific ordeal into a gentle slumber. It is one of the commonest medical procedures in the world, yet we still don’t know how the drugs work. Perhaps this isn’t surprising: we still don’t understand consciousness, so how can we comprehend its disappearance?

That is starting to change, however, with the development of new techniques for imaging the brain or recording its electrical activity during anaesthesia. “In the past five years there has been an explosion of studies, both in terms of consciousness, but also how anaesthetics might interrupt consciousness and what they teach us about it,” says George Mashour, an anaesthetist at the University of Michigan in Ann Arbor. “We’re at the dawn of a golden era.”

Consciousness has long been one of the great mysteries of life, the universe and everything. It is something experienced by every one of us, yet we cannot even agree on how to define it. How does the small sac of jelly that is our brain take raw data about the world and transform it into the wondrous sensation of being alive? Even our increasingly sophisticated technology for peering inside the brain has, disappointingly, failed to reveal a structure that could be the seat of consciousness.

Altered consciousness doesn’t only happen under a general anaesthetic of course – it occurs whenever we drop off to sleep, or if we are unlucky enough to be whacked on the head. But anaesthetics do allow neuroscientists to manipulate our consciousness safely, reversibly and with exquisite precision.

It was a Japanese surgeon who performed the first known surgery under anaesthetic, in 1804, using a mixture of potent herbs. In the west, the first operation under general anaesthetic took place at Massachusetts General Hospital in 1846. A flask of sulphuric ether was held close to the patient’s face until he fell unconscious.

Since then a slew of chemicals have been co-opted to serve as anaesthetics, some inhaled, like ether, and some injected. The people who gained expertise in administering these agents developed into their own medical specialty. Although long overshadowed by the surgeons who patch you up, the humble “gas man” does just as important a job, holding you in the twilight between life and death.

Consciousness may often be thought of as an all-or-nothing quality – either you’re awake or you’re not – but as I experienced, there are different levels of anaesthesia. “The process of going into and out of general anaesthesia isn’t like flipping a light switch,” says Mashour. “It’s more akin to a dimmer switch.”

A typical subject first experiences a state similar to drunkenness, which they may or may not be able to recall later, before falling unconscious, which is usually defined as failing to move in response to commands. As they progress deeper into the twilight zone, they now fail to respond to even the penetration of a scalpel – which is the point of the exercise, after all – and at the deepest levels may need artificial help with breathing.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Replica of the inhaler used by William T. G. Morton in 1846 in the first public demonstration of surgery using ether. Courtesy of Wikipedia. [end-div]

Hari Seldon, Meet Neuroeconomics

Fans of Isaac Asimov’s groundbreaking Foundation novels will know Hari Seldon as the founder of “psychohistory”. Entirely fictional, psychohistory is a statistical science that makes it possible to predict the future behavior of large groups of people, based on a mathematical analysis of history and sociology.

Now, 11,000 years or so back in our present reality, comes the burgeoning field of “neuroeconomics”. As Slate reports, Seldon’s “psychohistory” may not be as far-fetched or as far away as we think.

[div class=attrib]From Slate:[end-div]

Neuroscience—the science of how the brain, that physical organ inside one’s head, really works—is beginning to change the way we think about how people make decisions. These findings will inevitably change the way we think about how economies function. In short, we are at the dawn of “neuroeconomics.”

Efforts to link neuroscience and economics have occurred mostly in just the last few years, and the growth of neuroeconomics is still in its early stages. But its nascence follows a pattern: Revolutions in science tend to come from completely unexpected places. A field of science can turn barren if no fundamentally new approaches to research are on the horizon. Scholars can become so trapped in their methods—in the language and assumptions of the accepted approach to their discipline—that their research becomes repetitive or trivial.

Then something exciting comes along from someone who was never involved with these methods—some new idea that attracts young scholars and a few iconoclastic old scholars, who are willing to learn a different science and its research methods. At some moment in this process, a scientific revolution is born.

The neuroeconomic revolution has passed some key milestones quite recently, notably the publication last year of neuroscientist Paul Glimcher’s book Foundations of Neuroeconomic Analysis—a pointed variation on the title of Paul Samuelson’s 1947 classic work, Foundations of Economic Analysis, which helped to launch an earlier revolution in economic theory.

Much of modern economic and financial theory is based on the assumption that people are rational, and thus that they systematically maximize their own happiness, or as economists call it, their “utility.” When Samuelson took on the subject in his 1947 book, he did not look into the brain, but relied instead on “revealed preference.” People’s objectives are revealed only by observing their economic activities. Under Samuelson’s guidance, generations of economists have based their research not on any physical structure underlying thought and behavior, but on the assumption of rationality.
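
To make that modeling assumption concrete, here is a textbook-style toy example of our own (not anything from Samuelson or Glimcher). A “rational” consumer is assumed to pick, from whatever the budget allows, the bundle with the highest utility; revealed preference then works backwards from the observed choice.

# Toy utility maximization over a discrete budget set; all numbers are invented.
def utility(apples, bananas):
    # An assumed Cobb-Douglas-style utility function, a standard textbook choice.
    return (apples ** 0.5) * (bananas ** 0.5)

budget, price_apple, price_banana = 10.0, 1.0, 2.0
affordable = [(a, b) for a in range(11) for b in range(11)
              if a * price_apple + b * price_banana <= budget]
choice = max(affordable, key=lambda bundle: utility(*bundle))
print("Chosen bundle (apples, bananas):", choice)
# Revealed preference: every other affordable bundle is inferred to be no better
# than the one actually chosen, without ever looking inside the consumer's head.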

While Glimcher and his colleagues have uncovered tantalizing evidence, they have yet to find most of the fundamental brain structures. Maybe that is because such structures simply do not exist, and the whole utility-maximization theory is wrong, or at least in need of fundamental revision. If so, that finding alone would shake economics to its foundations.

Another direction that excites neuroscientists is how the brain deals with ambiguous situations, when probabilities are not known or other highly relevant information is not available. It has already been discovered that the brain regions used to deal with problems when probabilities are clear are different from those used when probabilities are unknown. This research might help us to understand how people handle uncertainty and risk in, say, financial markets at a time of crisis.
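
One standard way to formalize that distinction, offered here as our own illustration rather than anything from the research described: with a known probability an agent can simply compute an expected value, whereas under ambiguity a “maxmin”-style agent evaluates a gamble by its worst plausible case.

# Risk (known probability) versus ambiguity (a range of plausible probabilities).
# All payoffs and probabilities below are invented for illustration.
payoff_win, payoff_lose = 100.0, -50.0

def expected_value(p_win):
    return p_win * payoff_win + (1 - p_win) * payoff_lose

# Under risk the probability is known, so the gamble has a single expected value.
print("Known p = 0.6:", expected_value(0.6))  # 0.6*100 - 0.4*50 = 40.0

# Under ambiguity only a range of probabilities seems credible; a maxmin-style
# agent judges the gamble by the worst expected value within that range.
plausible_p = [0.3, 0.4, 0.5, 0.6, 0.7]
print("Worst case over the range:", min(expected_value(p) for p in plausible_p))  # -5.0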

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Hari Seldon, Foundation by Isaac Asimov.[end-div]