All posts by Mike

Crossword Puzzles and Cognition

[div class=attrib]From the New Scientist:[end-div]

TACKLING a crossword can crowd the tip of your tongue. You know that you know the answers to 3 down and 5 across, but the words just won’t come out. Then, when you’ve given up and moved on to another clue, comes blessed relief. The elusive answer suddenly occurs to you, crystal clear.

The processes leading to that flash of insight can illuminate many of the human mind’s curious characteristics. Crosswords can reflect the nature of intuition, hint at the way we retrieve words from our memory, and reveal a surprising connection between puzzle solving and our ability to recognise a human face.

“What’s fascinating about a crossword is that it involves many aspects of cognition that we normally study piecemeal, such as memory search and problem solving, all rolled into one ball,” says Raymond Nickerson, a psychologist at Tufts University in Medford, Massachusetts. In a paper published earlier this year, he brought profession and hobby together by analysing the mental processes of crossword solving (Psychonomic Bulletin and Review, vol 18, p 217).

1 across: “You stinker!” – audible cry that allegedly marked displacement activity (6)

Most of our mental machinations take place pre-consciously, with the results dropping into our conscious minds only after they have been decided elsewhere in the brain. Intuition plays a big role in solving a crossword, Nickerson observes. Indeed, sometimes your pre-conscious mind may be so quick that it produces the goods instantly.

At other times, you might need to take a more methodical approach and consider possible solutions one by one, perhaps listing synonyms of a word in the clue.

Even if your list doesn’t seem to make much sense, it might reflect the way your pre-conscious mind is homing in on the solution. Nickerson points to work in the 1990s by Peter Farvolden at the University of Toronto in Canada, who gave his subjects four-letter fragments of seven-letter target words (as may happen in some crossword layouts, especially in the US, where many words overlap). While his volunteers attempted to work out the target, they were asked to give any other word that occurred to them in the meantime. The words tended to be associated in meaning with the eventual answer, hinting that the pre-conscious mind solves a problem in steps.

Should your powers of deduction fail you, it may help to let your mind chew over the clue while your conscious attention is elsewhere. Studies back up our everyday experience that a period of incubation can lead you to the eventual “aha” moment. Don’t switch off entirely, though. For verbal problems, a break from the clue seems to be more fruitful if you occupy yourself with another task, such as drawing a picture or reading (Psychological Bulletin, vol 135, p 94).

So if 1 across has you flummoxed, you could leave it and take a nice bath, or better still read a novel. Or just move on to the next clue.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Newspaper crossword puzzle. Courtesy of Polytechnic West.[end-div]

Morality for Atheists

The social standing of atheists seems to be on the rise. Back in December we cited a research study that found atheists to be more reviled than rapists. Well, a more recent study now finds that atheists are less disliked than members of the Tea Party.

With this in mind, Louise Antony ponders how it is possible for atheists to acquire morality without the help of God.

[div class=attrib]From the New York Times:[end-div]

I was heartened to learn recently that atheists are no longer the most reviled group in the United States: according to the political scientists Robert Putnam and David Campbell, we’ve been overtaken by the Tea Party.  But even as I was high-fiving my fellow apostates (“We’re number two!  We’re number two!”), I was wondering anew: why do so many people dislike atheists?

I gather that many people believe that atheism implies nihilism — that rejecting God means rejecting morality.  A person who denies God, they reason, must be, if not actively evil, at least indifferent to considerations of right and wrong.  After all, doesn’t the dictionary list “wicked” as a synonym for “godless?”  And isn’t it true, as Dostoevsky said, that “if God is dead, everything is permitted”?

Well, actually — no, it’s not.  (And for the record, Dostoevsky never said it was.)   Atheism does not entail that anything goes.

Admittedly, some atheists are nihilists.  (Unfortunately, they’re the ones who get the most press.)  But such atheists’ repudiation of morality stems more from an antecedent cynicism about ethics than from any philosophical view about the divine.  According to these nihilistic atheists, “morality” is just part of a fairy tale we tell each other in order to keep our innate, bestial selfishness (mostly) under control.  Belief in objective “oughts” and “ought nots,” they say, must fall away once we realize that there is no universal enforcer to dish out rewards and punishments in the afterlife.  We’re left with pure self-interest, more or less enlightened.

This is a Hobbesian view: in the state of nature “[t]he notions of right and wrong, justice and injustice have no place.  Where there is no common power, there is no law: where no law, no injustice.”  But no atheist has to agree with this account of morality, and lots of us do not.  We “moralistic atheists” do not see right and wrong as artifacts of a divine protection racket.  Rather, we find moral value to be immanent in the natural world, arising from the vulnerabilities of sentient beings and from the capacities of rational beings to recognize and to respond to those vulnerabilities and capacities in others.

This view of the basis of morality is hardly incompatible with religious belief.  Indeed, anyone who believes that God made human beings in His image believes something like this — that there is a moral dimension of things, and that it is in our ability to apprehend it that we resemble the divine.  Accordingly, many theists, like many atheists, believe that moral value is inherent in morally valuable things.  Things don’t become morally valuable because God prefers them; God prefers them because they are morally valuable. At least this is what I was taught as a girl, growing up Catholic: that we could see that God was good because of the things He commands us to do.  If helping the poor were not a good thing on its own, it wouldn’t be much to God’s credit that He makes charity a duty.

It may surprise some people to learn that theists ever take this position, but it shouldn’t.  This position is not only consistent with belief in God, it is, I contend, a more pious position than its opposite.  It is only if morality is independent of God that we can make moral sense out of religious worship.  It is only if morality is independent of God that any person can have a moral basis for adhering to God’s commands.

Let me explain why.  First let’s take a cold hard look at the consequences of pinning morality to the existence of God.  Consider the following moral judgments — judgments that seem to me to be obviously true:

• It is wrong to drive people from their homes or to kill them because you want their land.

• It is wrong to enslave people.

• It is wrong to torture prisoners of war.

• Anyone who witnesses genocide, or enslavement, or torture, is morally required to try to stop it.

To say that morality depends on the existence of God is to say that none of these specific moral judgments is true unless God exists.  That seems to me to be a remarkable claim.  If God turned out not to exist — then slavery would be O.K.?  There’d be nothing wrong with torture?  The pain of another human being would mean nothing?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Sam Harris. Courtesy of Salon.[end-div]

The Sheer Joy of Unconnectedness

Seventeenth-century polymath Blaise Pascal had it right when he remarked, “Distraction is the only thing that consoles us for our miseries, and yet it is itself the greatest of our miseries.”

Here in the 21st century we have so many distractions that even our distractions get little attention. Author Pico Iyer shares his prognosis, and suggests that perhaps the much younger generation may be making some progress “in terms of sensing not what’s new, but what’s essential.”

[div class=attrib]From the New York Times:[end-div]

ABOUT a year ago, I flew to Singapore to join the writer Malcolm Gladwell, the fashion designer Marc Ecko and the graphic designer Stefan Sagmeister in addressing a group of advertising people on “Marketing to the Child of Tomorrow.” Soon after I arrived, the chief executive of the agency that had invited us took me aside. What he was most interested in, he began — I braced myself for mention of some next-generation stealth campaign — was stillness.

A few months later, I read an interview with the perennially cutting-edge designer Philippe Starck. What allowed him to remain so consistently ahead of the curve? “I never read any magazines or watch TV,” he said, perhaps a little hyperbolically. “Nor do I go to cocktail parties, dinners or anything like that.” He lived outside conventional ideas, he implied, because “I live alone mostly, in the middle of nowhere.”

Around the same time, I noticed that those who part with $2,285 a night to stay in a cliff-top room at the Post Ranch Inn in Big Sur pay partly for the privilege of not having a TV in their rooms; the future of travel, I’m reliably told, lies in “black-hole resorts,” which charge high prices precisely because you can’t get online in their rooms.

Has it really come to this?

In barely one generation we’ve moved from exulting in the time-saving devices that have so expanded our lives to trying to get away from them — often in order to make more time. The more ways we have to connect, the more many of us seem desperate to unplug. Like teenagers, we appear to have gone from knowing nothing about the world to knowing too much all but overnight.

Internet rescue camps in South Korea and China try to save kids addicted to the screen.

Writer friends of mine pay good money to get the Freedom software that enables them to disable (for up to eight hours) the very Internet connections that seemed so emancipating not long ago. Even Intel (of all companies) experimented in 2007 with conferring four uninterrupted hours of quiet time every Tuesday morning on 300 engineers and managers. (The average office worker today, researchers have found, enjoys no more than three minutes at a time at his or her desk without interruption.) During this period the workers were not allowed to use the phone or send e-mail, but simply had the chance to clear their heads and to hear themselves think. A majority of Intel’s trial group recommended that the policy be extended to others.

THE average American spends at least eight and a half hours a day in front of a screen, Nicholas Carr notes in his eye-opening book “The Shallows,” in part because the number of hours American adults spent online doubled between 2005 and 2009 (and the number of hours spent in front of a TV screen, often simultaneously, is also steadily increasing).

The average American teenager sends or receives 75 text messages a day, though one girl in Sacramento managed to handle an average of 10,000 every 24 hours for a month. Since luxury, as any economist will tell you, is a function of scarcity, the children of tomorrow, I heard myself tell the marketers in Singapore, will crave nothing more than freedom, if only for a short while, from all the blinking machines, streaming videos and scrolling headlines that leave them feeling empty and too full all at once.

The urgency of slowing down — to find the time and space to think — is nothing new, of course, and wiser souls have always reminded us that the more attention we pay to the moment, the less time and energy we have to place it in some larger context.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Processing large amounts of information may lead our brains to forget exactly where it all came from. Courtesy of NY Daily News / Chamoun/Getty.[end-div]

Levelling the Political Playing Field

Let’s face it, taking money out of politics in the United States, especially since the 2010 Supreme Court decision (Citizens United v. Federal Election Commission), is akin to asking a hardcore addict to give up his or her favorite substance — it’s unlikely to be easy, if it’s possible at all.

So, another approach might be to “re-distribute” the funds more equitably. Not a new idea — a number of European nations do this today. However, Max Frankel over at the NY Review of Books offers a thoughtful proposal with a new twist.

[div class=attrib]By Max Frankel:[end-div]

Every election year brings vivid reminders of how money distorts our politics, poisons our lawmaking, and inevitably widens the gulf between those who can afford to buy influence and the vast majority of Americans who cannot. In 2012, this gulf will become a chasm: one analysis predicts that campaign spending on presidential, congressional, and state elections may exceed $6 billion and all previous records. The Supreme Court has held that money is in effect speech, it talks; and those without big money have become progressively voiceless.

That it may cost as much as a billion dollars to run for President is scandal enough, but the multimillions it now takes to pursue or defend a seat in Congress are even more corrupting. Many of our legislators spend hours of every day begging for contributions from wealthy constituents and from the lobbyists for corporate interests. The access and influence that they routinely sell give the moneyed a seat at the tables where laws are written, to the benefit of those contributors and often to the disadvantage of the rest of us.

And why do the candidates need all that money? Because electoral success requires them to buy endless hours of expensive television time for commercials that advertise their virtues and, more often, roundly assail their opponents with often spurious claims. Of the more than a billion dollars spent on political commercials this year, probably more than half will go for attack ads.

It has long been obvious that television ads dominate electioneering in America. Most of those thirty-second ads are glib at best but much of the time they are unfair smears of the opposition. And we all know that those sordid slanders work—the more negative the better—unless they are instantly answered with equally facile and equally expensive rebuttals.

Other election expenses pale beside the ever larger TV budgets. Campaign staffs, phone and email solicitations, billboards and buttons and such could easily be financed with the small contributions of ordinary voters. But the decisive TV competitions leave politicians at the mercy of self-interested wealthy individuals, corporations, unions, and groups, now often disguised in “Super PACs” that can spend freely on any candidate so long as they are not overtly coordinating with that candidate’s campaign. Even incumbents who face no immediate threat feel a need to keep hoarding huge war chests with which to discourage potential challengers. Senator Charles Schumer of New York, for example, was easily reelected to a third term in 2010 but stands poised five years before his next run with a rapidly growing fund of $10 million.

A rational people looking for fairness in their politics would have long ago demanded that television time be made available at no cost and apportioned equally among rival candidates. But no one expects that any such arrangement is now possible. Political ads are jealously guarded as a major source of income by television stations. And what passes for news on most TV channels gives short shrift to most political campaigns except perhaps to “cover” the advertising combat.

As a political reporter and editor, I concluded long ago that efforts to limit campaign contributions and expenditures have been either disingenuous or futile. Most spending caps are too porous. In fact, they have further distorted campaigns by favoring wealthy candidates whose spending on their own behalf the Supreme Court has exempted from all limitations. And the public has overwhelmingly rejected the use of tax money to subsidize campaigning. In any case, private money that wants to buy political influence tends to behave like water running downhill: it will find a way around most obstacles. Since the court’s decision in the 2010 Citizens United case, big money is now able to find endless new paths, channeling even tax-exempt funds into political pools.

There are no easy ways to repair our entire election system. But I believe that a large degree of fairness could be restored to our campaigns if we level the TV playing field. And given the television industry’s huge stake in paid political advertising, it (and the Supreme Court) would surely resist limiting campaign ads, as many European countries do. With so much campaign cash floating around, there is only one attractive remedy I know of: double the price of political commercials so that every candidate’s purchase of TV time automatically pays for a comparable slot awarded to an opponent. The more you spend, the more your rival benefits as well. The more you attack, the more you underwrite the opponent’s responses. The desirable result would likely be that rival candidates would negotiate an arms control agreement, setting their own limits on their TV budgets and maybe even on their rhetoric.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Alliance for a Just Society.[end-div]

How to (Not) Read a Tough Book

Ever picked up a copy of the Iliad or War and Peace or Foucault’s Pendulum or Finnegans Wake, leafed through the first five pages and given up? Well, you may be in good company. So, here are some useful tips for readers and non-readers alike on how to get through some notable classics that demand our fullest attention and faculties.

[div class=attrib]From the Wall Street Journal:[end-div]

I’m determined to finish “The Iliad” before I start anything else, but I’ve been having trouble picking it up amid all the seasonal distractions and therefore I’m not reading anything at all: It’s blocking other books. Suggestions?

—E.S., New York

When I decided to read “War and Peace” a few years ago, I worried about exactly this problem: a challenging book slowing me down so much that I simply stopped reading anything at all. My solution, which worked, was to assign myself a certain number of pages—in this case, 100—each day, after which I was free to read anything else. One hundred pages a day may seem like a lot, but I had time on my hands, and (of course) “War and Peace” turned out to be anything but laborious. Still, there was a psychological comfort in knowing that if I wasn’t enjoying it, I wasn’t in a reading straitjacket.

With a book like “The Iliad,” which is far more demanding than “War and Peace,” I’d say one or two pages a day would be a perfectly respectable goal. You could see that time as a period of meditation or prayer—an excuse to be alone, quiet and contemplative.

You could also alternate reading “The Iliad” with listening to someone else read it. There’s no rule that says you can’t mix media on a single book, especially when it’s poetry, and the divine Alfred Molina reads Stephen Mitchell’s new translation of Homer’s classic.

Reading a work like “The Iliad” shouldn’t feel like punishment or homework. If it does, then read a sentence a day with the patience of Penelope.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Achilles tending Patroclus wounded by an arrow, identified by inscriptions on the upper part of the vase. Tondo of an Attic red-figure kylix, ca. 500 BC. From Vulci. Courtesy of Wikipedia.[end-div]

From Nine Dimensions to Three

Over the last 40 years or so, physicists and cosmologists have sought to construct a single grand theory that describes our entire universe, from the subatomic soup of particles and forces to the vast structures of our galaxies, and everything in between and beyond. Yet a major stumbling block has been how to reconcile the quantum theories that have so successfully described, and predicted, the microscopic world with our current understanding of gravity. String theory is one such attempt to develop a unified theory of everything, but it remains unsettled, with many possible solutions, and currently lies beyond experimental verification.

Recently, however, theorists in Japan announced a computer simulation that shows how our current 3-dimensional universe may have evolved from a 9-dimensional space hypothesized by string theory.

[div class=attrib]From Interactions:[end-div]

A group of three researchers from KEK, Shizuoka University and Osaka University has for the first time revealed the way our universe was born with 3 spatial dimensions from 10-dimensional superstring theory, in which spacetime has 9 spatial directions and 1 temporal direction. This result was obtained by numerical simulation on a supercomputer.

[Abstract]

According to Big Bang cosmology, the universe originated in an explosion from an invisibly tiny point. This theory is strongly supported by observation of the cosmic microwave background and the relative abundance of elements. However, a situation in which the whole universe is a tiny point exceeds the reach of Einstein’s general theory of relativity, and for that reason it has not been possible to clarify how the universe actually originated.

In superstring theory, which is considered to be the “theory of everything”, all the elementary particles are represented as various oscillation modes of very tiny strings. Among those oscillation modes, there is one that corresponds to a particle that mediates gravity, and thus the general theory of relativity can be naturally extended to the scale of elementary particles. Therefore, it is expected that superstring theory allows the investigation of the birth of the universe. However, actual calculation has been intractable because the interaction between strings is strong, so all investigation thus far has been restricted to discussing various models or scenarios.

Superstring theory predicts a space with 9 dimensions, which poses the big puzzle of how this can be consistent with the 3-dimensional space that we live in.

A group of 3 researchers, Jun Nishimura (associate professor at KEK), Asato Tsuchiya (associate professor at Shizuoka University) and Sang-Woo Kim (project researcher at Osaka University) has succeeded in simulating the birth of the universe, using a supercomputer for calculations based on superstring theory. This showed that the universe had 9 spatial dimensions at the beginning, but only 3 of these underwent expansion at some point in time.

This work will be published soon in Physical Review Letters.

[The content of the research]

In this study, the team established a method for calculating large matrices (in the IKKT matrix model), which represent the interactions of strings, and calculated how the 9-dimensional space changes with time. In the figure, the spatial extents in 9 directions are plotted against time.

If one goes far enough back in time, space is indeed extended in 9 directions, but then at some point only 3 of those directions start to expand rapidly. This result demonstrates, for the first time, that the 3-dimensional space that we are living in indeed emerges from the 9-dimensional space that superstring theory predicts.

This calculation was carried out on the supercomputer Hitachi SR16000 (theoretical performance: 90.3 TFLOPS) at the Yukawa Institute for Theoretical Physics of Kyoto University.

[The significance of the research]

It is almost 40 years since superstring theory was proposed as the theory of everything, extending the general theory of relativity to the scale of elementary particles. However, its validity and its usefulness remained unclear due to the difficulty of performing actual calculations. The newly obtained solution to the space-time dimensionality puzzle strongly supports the validity of the theory.

Furthermore, the establishment of a new method to analyze superstring theory using computers opens up the possibility of applying this theory to various problems. For instance, it should now be possible to provide a theoretical understanding of the inflation that is believed to have taken place in the early universe, and also the accelerating expansion of the universe, whose discovery earned the Nobel Prize in Physics this year. It is expected that superstring theory will develop further and play an important role in solving such puzzles in particle physics as the existence of the dark matter that is suggested by cosmological observations, and the Higgs particle, which is expected to be discovered by LHC experiments.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: A visualization of strings. Courtesy of R. Dijkgraaf / Universe Today.[end-div]

Salad Bar Strategies

It turns out that human behavior at the ubiquitous, self-serve salad bar in your suburban restaurant or hotel is a rather complex affair. There is a method for optimizing the type and quantity of food on one’s plate.

[div class=attrib]From the New Scientist:[end-div]

Competition, greed and skulduggery are the name of the game if you want to eat your fill. Smorgasbord behaviour is surprisingly complex.

A mathematician, an engineer and a psychologist go up to a buffet… No, it’s not the start of a bad joke.

While most of us would dive into the sandwiches without thinking twice, these diners see a groaning table as a welcome opportunity to advance their research.

Look behind the salads, sausage rolls and bite-size pizzas and it turns out that buffets are a microcosm of greed, sexual politics and altruism – a place where our food choices are driven by factors we’re often unaware of. Understand the science and you’ll see buffets very differently next time you fill your plate.

The story starts with Lionel Levine of Cornell University in Ithaca, New York, and Katherine Stange of Stanford University, California. They were sharing food at a restaurant one day, and wondered: do certain choices lead to tastier platefuls when food must be divided up? You could wolf down everything in sight, of course, but these guys are mathematicians, so they turned to a more subtle approach: game theory.

Applying mathematics to a buffet is harder than it sounds, so they started by simplifying things. They modelled two people taking turns to pick items from a shared platter – hardly a buffet, more akin to a polite tapas-style meal. It was never going to generate a strategy for any occasion, but hopefully useful principles would nonetheless emerge. And for their bellies, the potential rewards were great.

First they assumed that each diner would have individual preferences. One might place pork pie at the top and beetroot at the bottom, for example, while others might salivate over sausage rolls. That ranking can be plugged into calculations by giving each food item a score, where higher-ranked foods are worth more points. The most enjoyable buffet meal would be the one that scores highest in total.

In some scenarios, the route to the most enjoyable plate was straightforward. If both people shared the same rankings, they should pick their favourites first. But Levine and Stange also uncovered a counter-intuitive effect: it doesn’t always pay to take the favourite item first. To devise an optimum strategy, they say, you should take into account what your food rival considers to be the worst food on the table.

If that makes your brow furrow, consider this: if you know your fellow diner hates chicken legs, you know that can be the last morsel you aim to eat – even if it’s one of your favourites. In principle, if you had full knowledge of your food rival’s preferences, it would be possible to work backwards from their least favourite and identify the optimum order in which to fill your plate, according to the pair’s calculations, which will appear in American Mathematical Monthly (arxiv.org/abs/1104.0961).

So how do you know what to select first? In reality, the buffet might be long gone before you have worked it out. Even if you did, the researchers’ strategy also assumes that you are at a rather polite buffet, taking turns, so it has its limitations. However, it does provide practical advice in some scenarios. For example, imagine Amanda is up against Brian, who she knows has the opposite ranking of tastes to her. Amanda loves sausages, hates pickled onions, and is middling about quiche. Brian loves pickled onions, hates sausages, and shares the same view of quiche. Having identified that her favourites are safe, Amanda should prioritise morsels where their taste-rankings match – the quiche, in other words.
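To make the scoring idea concrete, here is a minimal sketch in Python (my own illustration, not Levine and Stange’s actual model or code). The items and preference scores are invented for the Amanda-and-Brian example above; each diner values a plateful by summing the scores of the items they take, and the sketch uses brute-force backward induction to see how each fares when both play optimally at a polite, turn-taking buffet.

    from functools import lru_cache

    # Hypothetical platter and preference scores (higher = tastier to that diner).
    # The numbers are made up for illustration; the paper treats general rankings.
    ITEMS = ("sausage", "pickled_onion", "quiche", "pork_pie")
    AMANDA = {"sausage": 4, "pickled_onion": 1, "quiche": 3, "pork_pie": 2}
    BRIAN = {"sausage": 1, "pickled_onion": 4, "quiche": 3, "pork_pie": 2}

    @lru_cache(maxsize=None)
    def best_play(remaining, turn):
        """Backward induction: each diner picks to maximize their own final total,
        assuming the other diner does the same. Returns (amanda_total, brian_total)."""
        if not remaining:
            return (0, 0)
        prefs = AMANDA if turn == 0 else BRIAN
        best = None
        for item in remaining:
            rest = tuple(x for x in remaining if x != item)
            a, b = best_play(rest, 1 - turn)
            outcome = (a + prefs[item], b) if turn == 0 else (a, b + prefs[item])
            if best is None or outcome[turn] > best[turn]:
                best = outcome
        return best

    # Totals under optimal play, with Amanda choosing first.
    print(best_play(ITEMS, 0))

Exhaustive search like this is only feasible for a handful of items, which is exactly why the researchers reach for game theory to derive general strategies rather than enumerating every possible meal.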

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Salad bars. Courtesy of Google search.[end-div]

Ronald Searle

Ronald Searle, your serious wit and your heroic pen will be missed. Searle died on December 30, aged 91.

The first “real” book purchased by theDiagonal’s editor with his own money was “How to be Topp” by Geoffrey Willans and Ronald Searle. The book featured Searle’s unique and unmistakable illustrations of anti-hero Nigel Molesworth, a stoic, shrewd and droll English schoolboy.

Yet while Searle will be best remembered for his drawings of Molesworth and friends at St. Custard’s school and his invention of St. Trinian’s (a school for rowdy schoolgirls), he leaves behind a critical body of work that graphically illustrates his brutal captivity at the hands of the Japanese during the Second World War.

Most of these drawings appear in his 1986 book, To the Kwai and Back: War Drawings 1939-1945. In the book, Searle also wrote of his experiences as a prisoner. Many of his original drawings are now in the permanent collection of the Imperial War Museum, London.

[div class=attrib]From the BBC:[end-div]

British cartoonist Ronald Searle, best known for creating the fictional girls’ school St Trinian’s, has died aged 91.

His daughter Kate Searle said in a statement that he “passed away peacefully in his sleep” in a hospital in France.

Searle’s spindly cartoons of the naughty schoolgirls first appeared in 1941, before the idea was adapted for film.

The first movie version, The Belles of St Trinian’s, was released in 1954.

Joyce Grenfell and George Cole starred in the film, along with Alastair Sim, who appeared in drag as headmistress Millicent Fritton.

Searle also provided illustrations for the Molesworth series, written by Geoffrey Willans.

The gothic, line-drawn cartoons breathed life into the gruesome pupils of St Custard’s school, in particular the outspoken but functionally illiterate Nigel Molesworth, “the goriller of 3B”.

Searle’s work regularly appeared in magazines and newspapers, including Punch and The New Yorker.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Welcome back to the new term molesworth! From How to be Topp. Courtesy of Geoffrey Willans and Ronald Searle / Vanguard Press.[end-div]

Weight Loss and the Coordinated Defense Mechanism

New research into obesity and weight loss shows us why it’s so hard to keep off the weight lost through dieting. The good news is that weight (re-)gain is not all due to a simple lack of control or laziness. The bad news, however, is that keeping one’s weight down may be much more difficult due to the body’s complex defense mechanism.

Tara Parker-Pope over at the Well blog reviews some of the new findings, which seem to point the finger at a group of hormones and specific genes that work together to help us regain those lost pounds.

[div class=attrib]From the New York Times:[end-div]

For 15 years, Joseph Proietto has been helping people lose weight. When these obese patients arrive at his weight-loss clinic in Australia, they are determined to slim down. And most of the time, he says, they do just that, sticking to the clinic’s program and dropping excess pounds. But then, almost without exception, the weight begins to creep back. In a matter of months or years, the entire effort has come undone, and the patient is fat again. “It has always seemed strange to me,” says Proietto, who is a physician at the University of Melbourne. “These are people who are very motivated to lose weight, who achieve weight loss most of the time without too much trouble and yet, inevitably, gradually, they regain the weight.”

Anyone who has ever dieted knows that lost pounds often return, and most of us assume the reason is a lack of discipline or a failure of willpower. But Proietto suspected that there was more to it, and he decided to take a closer look at the biological state of the body after weight loss.

Beginning in 2009, he and his team recruited 50 obese men and women. The men weighed an average of 233 pounds; the women weighed about 200 pounds. Although some people dropped out of the study, most of the patients stuck with the extreme low-calorie diet, which consisted of special shakes called Optifast and two cups of low-starch vegetables, totaling just 500 to 550 calories a day for eight weeks. Ten weeks in, the dieters lost an average of 30 pounds.

At that point, the 34 patients who remained stopped dieting and began working to maintain the new lower weight. Nutritionists counseled them in person and by phone, promoting regular exercise and urging them to eat more vegetables and less fat. But despite the effort, they slowly began to put on weight. After a year, the patients already had regained an average of 11 of the pounds they struggled so hard to lose. They also reported feeling far more hungry and preoccupied with food than before they lost the weight.

While researchers have known for decades that the body undergoes various metabolic and hormonal changes while it’s losing weight, the Australian team detected something new. A full year after significant weight loss, these men and women remained in what could be described as a biologically altered state. Their still-plump bodies were acting as if they were starving and were working overtime to regain the pounds they lost. For instance, a gastric hormone called ghrelin, often dubbed the “hunger hormone,” was about 20 percent higher than at the start of the study. Another hormone associated with suppressing hunger, peptide YY, was also abnormally low. Levels of leptin, a hormone that suppresses hunger and increases metabolism, also remained lower than expected. A cocktail of other hormones associated with hunger and metabolism all remained significantly changed compared to pre-dieting levels. It was almost as if weight loss had put their bodies into a unique metabolic state, a sort of post-dieting syndrome that set them apart from people who hadn’t tried to lose weight in the first place.

“What we see here is a coordinated defense mechanism with multiple components all directed toward making us put on weight,” Proietto says. “This, I think, explains the high failure rate in obesity treatment.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Science Daily.[end-div]

Morality and Machines

Fans of science fiction and Isaac Asimov in particular may recall his three laws of robotics:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.
  2. A robot must obey the orders given to it by human beings, except where such orders would conflict with the First Law.
  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

Of course, technology has marched forward relentlessly since Asimov penned these guidelines in 1942. But while the ideas may seem trite and somewhat contradictory, the ethical issue remains – especially as our machines become ever more powerful and independent. Though perhaps humans, in general, ought first to agree on a set of fundamental principles for themselves.

Colin Allen, writing for the Opinionator column, reflects on the moral dilemma. He is Provost Professor of Cognitive Science and History and Philosophy of Science at Indiana University, Bloomington.

[div class=attrib]From the New York Times:[end-div]

A robot walks into a bar and says, “I’ll have a screwdriver.” A bad joke, indeed. But even less funny if the robot says “Give me what’s in your cash register.”

The fictional theme of robots turning against humans is older than the word itself, which first appeared in the title of Karel Čapek’s 1920 play about artificial factory workers rising against their human overlords.

The prospect of machines capable of following moral principles, let alone understanding them, seems as remote today as the word “robot” is old. Some technologists enthusiastically extrapolate from the observation that computing power doubles every 18 months to predict an imminent “technological singularity” in which a threshold for machines of superhuman intelligence will be suddenly surpassed. Many Singularitarians assume a lot, not the least of which is that intelligence is fundamentally a computational process. The techno-optimists among them also believe that such machines will be essentially friendly to human beings. I am skeptical about the Singularity, and even if “artificial intelligence” is not an oxymoron, “friendly A.I.” will require considerable scientific progress on a number of fronts.

The neuro- and cognitive sciences are presently in a state of rapid development in which alternatives to the metaphor of mind as computer have gained ground. Dynamical systems theory, network science, statistical learning theory, developmental psychobiology and molecular neuroscience all challenge some foundational assumptions of A.I., and the last 50 years of cognitive science more generally. These new approaches analyze and exploit the complex causal structure of physically embodied and environmentally embedded systems, at every level, from molecular to social. They demonstrate the inadequacy of highly abstract algorithms operating on discrete symbols with fixed meanings to capture the adaptive flexibility of intelligent behavior. But despite undermining the idea that the mind is fundamentally a digital computer, these approaches have improved our ability to use computers for more and more robust simulations of intelligent agents — simulations that will increasingly control machines occupying our cognitive niche. If you don’t believe me, ask Siri.

This is why, in my view, we need to think long and hard about machine morality. Many of my colleagues take the very idea of moral machines to be a kind of joke. Machines, they insist, do only what they are told to do. A bar-robbing robot would have to be instructed or constructed to do exactly that. On this view, morality is an issue only for creatures like us who can choose to do wrong. People are morally good only insofar as they must overcome the urge to do what is bad. We can be moral, they say, because we are free to choose our own paths.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Asimov Foundation / Wikipedia.[end-div]

Pulsars Signal the Beat

Cosmology meets music. German band Reimhaus samples the regular pulse of pulsars in its music. A pulsar is the rapidly spinning remnant of an exploded star; as the pulsar spins, it emits a detectable beam of energy that has a very regular beat, sometimes with a period of well under a second.

[div class=attrib]From Discover:[end-div]

Some pulsars spin hundreds of times per second, some take several seconds to spin once. If you take that pulse of light and translate it into sound, you get a very steady thumping beat with very precise timing. So making it into a song is a natural thought.

But we certainly didn’t take it as far as the German band Reimhaus did, making a music video out of it! They used several pulsars for their song “Echoes, Silence, Pulses & Waves”. So here’s the cosmic beat:

[tube]86IeHiXEZ3I[/tube]
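If you want to hear what such a beat sounds like without hunting down real pulsar recordings, here is a minimal sketch in Python (standard library only) that writes a WAV file of a synthetic click train. The spin frequency, click pitch, decay and file name are arbitrary choices for illustration, not data from any actual pulsar or from the Reimhaus track.

    import math
    import struct
    import wave

    SAMPLE_RATE = 44100   # audio samples per second
    SPIN_HZ = 11.0        # hypothetical pulsar spin frequency (pulses per second)
    DURATION_S = 5.0      # length of the clip in seconds
    CLICK_SAMPLES = 300   # length of each click (a short decaying tone)

    def click(n):
        """One pulse: a brief 880 Hz sine burst with an exponential decay."""
        if n >= CLICK_SAMPLES:
            return 0.0
        return math.sin(2 * math.pi * 880 * n / SAMPLE_RATE) * math.exp(-5.0 * n / CLICK_SAMPLES)

    period = int(SAMPLE_RATE / SPIN_HZ)          # samples between successive pulses
    frames = bytearray()
    for i in range(int(SAMPLE_RATE * DURATION_S)):
        sample = click(i % period)               # restart the click at every pulse
        frames += struct.pack("<h", int(sample * 0.8 * 32767))

    with wave.open("pulsar_beat.wav", "wb") as wav:
        wav.setnchannels(1)        # mono
        wav.setsampwidth(2)        # 16-bit samples
        wav.setframerate(SAMPLE_RATE)
        wav.writeframes(bytes(frames))

Changing SPIN_HZ to something like 0.5 or a few hundred gives a feel for the range of pulsar periods mentioned above.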

The First Interplanetary Travel Reservations

[div class=attrib]From Wired:[end-div]

Today, space travel is closer to reality for ordinary people than it has ever been. Though currently only the super rich are actually getting to space, several companies have more affordable commercial space tourism in their sights and at least one group is going the non-profit DIY route into space.

But more than a decade before it was even proven that man could reach space, average people were more positive about their own chances of escaping Earth’s atmosphere. This may have been partly thanks to the Interplanetary Tour Reservation desk at the American Museum of Natural History.

In 1950, to promote its new space exhibit, the AMNH had the brilliant idea to ask museum visitors to sign up to reserve their space on a future trip to the moon, Mars, Jupiter or Saturn. They advertised the opportunity in newspapers and magazines and received letters requesting reservations from around the world. The museum pledged to pass their list on to whichever entity headed to each destination first.

Today, to promote its newest space exhibit, “Beyond Planet Earth: The Future of Space Exploration,” the museum has published some of these requests. The letters manage to be interesting, hopeful, funny and poignant all at once. Some even included sketches of potential space capsules, rockets and spacesuits. The museum shared some of its favorites with Wired for this gallery.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Hayden Planetarium Space Tours Schedule. Courtesy of American Museum of Natural History / Wired.[end-div]

Social Influence Through Social Media: Not!

Online social networks are an unprecedentedly rich source of material for psychologists, social scientists and observers of human behavior. Now a recent study shows that influence through these networks may not be as powerful or widespread as first thought. The study, “Social Selection and Peer Influence in an Online Social Network,” by Kevin Lewis, Marco Gonzalez and Jason Kaufman is available here.

[div class=attrib]From the Wall Street Journal:[end-div]

Social media gives ordinary people unprecedented power to broadcast their taste in movies, books and music, but for the most part those tastes don’t rub off on other people, a new study of college students finds. Instead, social media appears to strengthen our bonds with people whose tastes already resemble ours.

Researchers followed the Facebook pages and networks of some 1,000 students, at one college, for four years (looking only at public information). The strongest determinant of Facebook friendship was “mere propinquity” — living in the same building, studying the same subject—but people also self-segregated by gender, race, socioeconomic background and place of origin.

When it came to culture, researchers used an algorithm to identify taste “clusters” within the categories of music, movies, and books. They learned that fans of “lite/classic rock” and “classical/jazz” were significantly more likely than chance would predict to form and maintain friendships, as were devotees of films featuring “dark satire” or “raunchy comedy / gore.” But this was the case for no other music or film genre — and for no books.

What’s more, “jazz/classical” was the only taste to spread from people who possessed it to those who lacked it. The researchers suggest that this is because liking jazz and classical music serves as a class marker, one that college-age people want to acquire. (I’d prefer to believe that they adopt those tastes on aesthetic grounds, but who knows?) “Indie/alt” music, in fact, was the opposite of contagious: People whose friends liked that style of music tended to drop that preference themselves, over time.

[div class=attrib]Read the entire article here.[end-div]

The Internet of Things

The term “Internet of Things” was coined in 1999 by Kevin Ashton. It refers to the notion of equipping physical objects of all kinds with small identifying devices and connecting them to a network. In essence: everything connected to everything, anytime, anywhere, by anyone. One of the potential benefits is that this would allow objects to be tracked, inventoried and continuously monitored.

[div class=attrib]From the New York Times:[end-div]

THE Internet likes you, really likes you. It offers you so much, just a mouse click or finger tap away. Go Christmas shopping, find restaurants, locate partying friends, tell the world what you’re up to. Some of the finest minds in computer science, working at start-ups and big companies, are obsessed with tracking your online habits to offer targeted ads and coupons, just for you.

But now — nothing personal, mind you — the Internet is growing up and lifting its gaze to the wider world. To be sure, the economy of Internet self-gratification is thriving. Web start-ups for the consumer market still sprout at a torrid pace. And young corporate stars seeking to cash in for billions by selling shares to the public are consumer services — the online game company Zynga last week, and the social network giant Facebook, whose stock offering is scheduled for next year.

As this is happening, though, the protean Internet technologies of computing and communications are rapidly spreading beyond the lucrative consumer bailiwick. Low-cost sensors, clever software and advancing computer firepower are opening the door to new uses in energy conservation, transportation, health care and food distribution. The consumer Internet can be seen as the warm-up act for these technologies.

The concept has been around for years, sometimes called the Internet of Things or the Industrial Internet. Yet it takes time for the economics and engineering to catch up with the predictions. And that moment is upon us.

“We’re going to put the digital ‘smarts’ into everything,” said Edward D. Lazowska, a computer scientist at the University of Washington. These abundant smart devices, Dr. Lazowska added, will “interact intelligently with people and with the physical world.”

The role of sensors — once costly and clunky, now inexpensive and tiny — was described this month in an essay in The New York Times by Larry Smarr, founding director of the California Institute for Telecommunications and Information Technology; he said the ultimate goal was “the sensor-aware planetary computer.”

That may sound like blue-sky futurism, but evidence shows that the vision is beginning to be realized on the ground, in recent investments, products and services, coming from large industrial and technology corporations and some ambitious start-ups.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Internet of Things. Courtesy of Cisco.[end-div]

Walking Through Doorways and Forgetting

[div class=attrib]From Scientific American:[end-div]

The French poet Paul Valéry once said, “The purpose of psychology is to give us a completely different idea of the things we know best.”  In that spirit, consider a situation many of us will find we know too well:  You’re sitting at your desk in your office at home. Digging for something under a stack of papers, you find a dirty coffee mug that’s been there so long it’s eligible for carbon dating.  Better wash it. You pick up the mug, walk out the door of your office, and head toward the kitchen.  By the time you get to the kitchen, though, you’ve forgotten why you stood up in the first place, and you wander back to your office, feeling a little confused—until you look down and see the cup.

So there’s the thing we know best:  The common and annoying experience of arriving somewhere only to realize you’ve forgotten what you went there to do.  We all know why such forgetting happens: we didn’t pay enough attention, or too much time passed, or it just wasn’t important enough.  But a “completely different” idea comes from a team of researchers at the University of Notre Dame.  The first part of their paper’s title sums it up:  “Walking through doorways causes forgetting.”

Gabriel Radvansky, Sabine Krawietz and Andrea Tamplin seated participants in front of a computer screen running a video game in which they could move around using the arrow keys. In the game, they would walk up to a table with a colored geometric solid sitting on it. Their task was to pick up the object and take it to another table, where they would put the object down and pick up a new one. Whichever object they were currently carrying was invisible to them, as if it were in a virtual backpack.

Sometimes, to get to the next object the participant simply walked across the room. Other times, they had to walk the same distance, but through a door into a new room. From time to time, the researchers gave them a pop quiz, asking which object was currently in their backpack. The quiz was timed so that when they walked through a doorway, they were tested right afterwards. As the title said, walking through doorways caused forgetting: Their responses were both slower and less accurate when they’d walked through a doorway into a new room than when they’d walked the same distance within the same room.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Doorway, Titicaca, Bolivia. Courtesy of M.Gerra Assoc.[end-div]

Irrational Exuberance and Holiday Shopping

‘Tis the season to buy, give, receive and “re-gift” mostly useless and unwanted “stuff”. That’s how many economists would characterize these days of retail madness. Matthew Yglesias over at Slate ponders a more efficient way to redistribute wealth.

[div class=attrib]From Slate:[end-div]

Christmas is not the most wonderful time of the year for economists. The holiday spirit is puzzlingly difficult to model: It plays havoc with the notion of rational utility-maximization. There’s so much waste! Price-insensitive travelers pack airports beyond capacity on Dec. 24 only to leave planes empty on Christmas Day. Even worse are the gifts, which represent an abandonment of our efficient system of monetary exchange in favor of a semi-barbaric form of bartering.

Still, even the most rational and Scroogey of economists must concede that gift-giving is clearly here to stay. What’s needed is a bit of advice: What can economics tell us about efficient gifting so that your loved ones get the most bang for your buck?

We need to start with the basic problem of gift-giving and barter in general: preference heterogeneity. Different people, in other words, want different stuff and they value it differently.

In a system of monetary exchange, everything has more or less one price. In that sense, we can say that a Lexus or a pile of coconuts is “worth” a certain amount: its market price. But I, personally, would have little use for a Lexus. I live in an apartment building near a Metro station and above a supermarket; I walk to work; and driving up to New York to visit my family is much less practical than taking a bus or a train. So while of course I won’t complain if you buy me a Lexus, its value to me will be low relative to its market price. Similarly, I don’t like coconuts and I’m not on the verge of starvation. If you dump a pile of coconuts in my living room, all you’re doing is creating a hassle for me. The market price of coconuts is low, but the utility I would derive from a gift of coconuts is actually negative.

In the case of the Lexus, the sensible thing for me to do would be to sell the car. But this would be a bit of a hassle and would doubtless leave me with less money in my pocket than you spent.

This gap between what something is worth to me and what it actually costs is “deadweight loss.” The deadweight loss can be thought of in monetary terms, or you might think of it as the hassle involved in returning something for store credit. It’s the gap in usefulness between a $20 gift certificate to the Olive Garden and a $20 bill that could, among other things, be used to buy $20 worth of food at Olive Garden. Research suggests that there’s quite a lot of deadweight loss during the holiday season. Joel Waldfogel’s classic paper (later expanded into a short book) suggests that gift exchange carries with it an average deadweight loss of 10 percent to a third of the value of the gifts. The National Retail Federation is projecting total holiday spending of more than $460 billion, implying $46-$152 billion worth of holiday wastage, potentially equivalent to an entire year’s worth of output from Iowa.

Partially rescuing Christmas is the reality that a lot of gift-giving isn’t exchange at all. Rather, it’s a kind of Robin Hood transfer in which we take resources from (relatively) rich parents and grandparents and give them to kids with little or no income. This is welfare enhancing for the same reason that redistributive taxation is welfare enhancing: People with less money need the stuff more.

[div class=attrib]Read the entire article here.[end-div]

MondayPoem: The Snow Is Deep on the Ground

We celebrate the arrival of winter in the northern hemisphere with an evocative poem by Kenneth Patchen.

[div class=attrib]From Poetry Foundation:[end-div]

An inspiration for the Beat Generation and a true “people’s poet,” Kenneth Patchen was a prolific writer, visual artist and performer whose exuberant, free-form productions celebrate spontaneity and attack injustices, materialism, and war.

By Kenneth Patchen

– The Snow Is Deep on the Ground

The snow is deep on the ground.
Always the light falls
Softly down on the hair of my belovèd.

This is a good world.
The war has failed.
God shall not forget us.
Who made the snow waits where love is.

Only a few go mad.
The sky moves in its whiteness
Like the withered hand of an old king.
God shall not forget us.
Who made the sky knows of our love.

The snow is beautiful on the ground.
And always the lights of heaven glow
Softly down on the hair of my belovèd.

[div class=attrib]Image: Kenneth Patchen. Courtesy of Wikipedia.[end-div]

The Psychology of Gift Giving

[div class=attrib]From the Wall Street Journal:[end-div]

Many of my economist friends have a problem with gift-giving. They view the holidays not as an occasion for joy but as a festival of irrationality, an orgy of wealth-destruction.

Rational economists fixate on a situation in which, say, your Aunt Bertha spends $50 on a shirt for you, and you end up wearing it just once (when she visits). Her hard-earned cash has evaporated, and you don’t even like the present! One much-cited study estimated that as much as a third of the money spent on Christmas is wasted, because recipients assign a value lower than the retail price to the gifts they receive. Rational economists thus make a simple suggestion: Give cash or give nothing.

But behavioral economics, which draws on psychology as well as on economic theory, is much more appreciative of gift giving. Behavioral economics better understands why people (rightly, in my view) don’t want to give up the mystery, excitement and joy of gift giving.

In this view, gifts aren’t irrational. It’s just that rational economists have failed to account for their genuine social utility. So let’s examine the rational and irrational reasons to give gifts.

Some gifts, of course, are basically straightforward economic exchanges. This is the case when we buy a nephew a package of socks because his mother says he needs them. It is the least exciting kind of gift but also the one that any economist can understand.

A second important kind of gift is one that tries to create or strengthen a social connection. The classic example is when somebody invites us for dinner and we bring something for the host. It’s not about economic efficiency. It’s a way to express our gratitude and to create a social bond with the host.

Another category of gift, which I like a lot, is what I call “paternalistic” gifts—things you think somebody else should have. I like a certain Green Day album or Julian Barnes novel or the book “Predictably Irrational,” and I think that you should like it, too. Or I think that singing lessons or yoga classes will expand your horizons—and so I buy them for you.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Google search results for ‘gifts’.[end-div]

What Did You Have for Breakfast Yesterday? Ask Google

Memory is, well, so 1990s. Who needs it when we have Google, Siri and any number of services to answer and recall everything we have ever perceived, wished to remember or wanted to know? Will our personal memories become another shared service served up from the “cloud”?

[div class=attrib]From the Wilson Quarterly:[end-div]

In an age when most information is just a few keystrokes away, it’s natural to wonder: Is Google weakening our powers of memory? According to psychologists Betsy Sparrow of Columbia University, Jenny Liu of the University of Wisconsin, Madison, and Daniel M. Wegner of Harvard, the Internet has not so much diminished intelligent recall as tweaked it.

The trio’s research shows what most computer users can tell you anecdotally: When you know you have the Internet at hand, your memory relaxes. In one of their experiments, 46 Harvard undergraduates were asked to answer 32 trivia questions on computers. After each one, they took a quick Stroop test, in which they were shown words printed in different colors and then asked to name the color of each word. They took more time to name the colors of Internet-related words, such as modem and browser. According to Stroop test conventions, this is because the words were related to something else that they were already thinking about—yes, they wanted to fire up Google to answer those tricky trivia questions.

In another experiment, the authors uncovered evidence suggesting that access to computers plays a fundamental role in what people choose to commit to their God-given hard drive. Subjects were instructed to type 40 trivia-like statements into a dialog box. Half were told that the computer would erase the information and half that it would be saved. Afterward, when asked to recall the statements, the students who were told their typing would be erased remembered much more. Lacking a computer backup, they apparently committed more to memory.

[div class=attrib]Read the entire article here.[end-div]

Everyone’s an Artist, Designer, Critic. But Curator?

Digital cameras and smartphones have enabled their users to become photographers. Affordable composition and editing tools have made us all designers and editors. Social media have enabled, encouraged and sometimes rewarded us for posting content, reviews and opinions on everything under the sun. So, now we are all critics. Are we now all curators as well?

[div class=attrib]From dis:[end-div]

As far as word trends go, the word curate still exists in a somewhat rarefied air. One can use curate knowingly with tongue in cheek: “Let’s curate our spice rack!” Or, more commonly and less nerdily, in the service of specialized artisanal commerce: the “curating food stands” of the Brooklyn Flea swap meet, or a site that lets women curate their own clothing store from featured brands, earning 10% on any sales from their page. Curate used pejoratively indicates The Man: “If The Huffington Post wants to curate Twitter…” [uh, users will be upset]. And then there is that other definition specific to the practice of art curating. In the past ten years, as curate has exploded in popular culture and as a consumer buzz-word, art curators have felt residual effects. Those who value curating as an actual practice are generally loath to see it harnessed by commercial culture, and, conversely, feel sheepish about some deep-set pretensions this move has brought front and center. Simultaneously, curate has become a lightning rod in the art world, inspiring countless journal articles and colloquia in which academics and professionals discuss issues around curating with a certain amount of anxiety.

Everyone’s a critic but who’s a curator?
In current usage, curating as a discipline, which involves assembling and arranging artworks, has been usurped by curating as a nebulous expression of taste, presumed to be inherent rather than learned. This presumption is of course steeped in its own mire of regionalism, class bias and aspirations towards whoever’s privileged lifestyle is currently on-trend or in power. Suffice it to say that taste is problematic. But that curating swung so easily towards taste indicates that it wasn’t a very hard association to make.

To some extent taste has been wedded to curating since the latter’s inception. A close forebear of the modern curated exhibition was the Renaissance cabinet of curiosities. The practice of selecting finely crafted objects for display first appeared in the 15th century and extended for several centuries after. A gentleman’s cabinet of curiosities showcased treasures bought or collected during travel, and ranged culturally and from collector to collector according to his interests, from mythical or biblical relics to artworks to ancient and exotic artifacts. As a practice, this sort of acquisition existed separately from the tradition of patronage of a particular artist. (For a vivid and intricately rendered description of the motivations and mindset of the 18th century collector, which gives way after half the book to a tour-de-force historical novel and then finally, to a political manifesto by a thinly veiled stand-in for the author, see Susan Sontag’s weird and special novel The Volcano Lover.) In Europe and later the United States, these collections of curiosities would give rise to the culture of the museum. In an 1858 New York Times article, the sculptor Bartholomew was described as having held the position of Curator for the Wadsworth Gallery in Hartford, a post he soon abandoned to render marble busts. The Wadsworth, incidentally, was the first public art museum to emerge in the United States, and would anticipate the museum boom of the 20th century.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Dean&Deluca. Courtesy of dis.[end-div]

Life Without Facebook

Perhaps it’s time to re-think your social network when, through it, you know all about the stranger with whom you are sharing the elevator.

[div class=attrib]From the New York Times:[end-div]

Tyson Balcomb quit Facebook after a chance encounter on an elevator. He found himself standing next to a woman he had never met — yet through Facebook he knew what her older brother looked like, that she was from a tiny island off the coast of Washington and that she had recently visited the Space Needle in Seattle.

“I knew all these things about her, but I’d never even talked to her,” said Mr. Balcomb, a pre-med student in Oregon who had some real-life friends in common with the woman. “At that point I thought, maybe this is a little unhealthy.”

As Facebook prepares for a much-anticipated public offering, the company is eager to show off its momentum by building on its huge membership: more than 800 million active users around the world, Facebook says, and roughly 200 million in the United States, or two-thirds of the population.

But the company is running into a roadblock in this country. Some people, even on the younger end of the age spectrum, just refuse to participate, including people who have given it a try.

One of Facebook’s main selling points is that it builds closer ties among friends and colleagues. But some who steer clear of the site say it can have the opposite effect of making them feel more, not less, alienated.

“I wasn’t calling my friends anymore,” said Ashleigh Elser, 24, who is in graduate school in Charlottesville, Va. “I was just seeing their pictures and updates and felt like that was really connecting to them.”

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook user. Courtesy of the New York Times.[end-div]

A Most Beautiful Equation

Many mathematicians, and many who are not mathematically oriented, would consider Albert Einstein’s statement of the equivalence of energy and mass to be singularly simple and beautiful. Indeed, E=mc2 is perhaps one of the few equations to have entered the general public consciousness. However, a number of other, less well known mathematical constructs convey a similar level of significance and fundamental beauty. Wired lists several to consider.

[div class=attrib]From Wired:[end-div]

Even for those of us who finished high school algebra on a wing and a prayer, there’s something compelling about equations. The world’s complexities and uncertainties are distilled and set in orderly figures, with a handful of characters sufficing to capture the universe itself.

For your enjoyment, the Wired Science team has gathered nine of our favorite equations. Some represent the universe; others, the nature of life. One represents the limit of equations.

We do advise, however, against getting any of these equations tattooed on your body, much less branded. An equation t-shirt would do just fine.

The Beautiful Equation: Euler’s Identity

e^(iπ) + 1 = 0

Also called Euler’s relation, or the Euler equation of complex analysis, this bit of mathematics enjoys accolades across geeky disciplines.

Swiss mathematician Leonhard Euler first wrote the equality, which links together geometry, algebra, and five of the most fundamental symbols in math — 0, 1, i, pi and e — all of them essential tools in scientific work.

Theoretical physicist Richard Feynman was a huge fan and called it a “jewel” and a “remarkable” formula. Fans today refer to it as “the most beautiful equation.”
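If you would like to see the identity confirmed by something other than authority, here is a minimal numerical check using Python’s standard cmath and math modules; the result isn’t exactly zero only because of floating-point rounding.

import cmath
import math

# Evaluate e^(i*pi) + 1. Floating-point arithmetic leaves a residue on the
# order of 1e-16 in the imaginary part, but the result is zero to machine
# precision.
result = cmath.exp(1j * math.pi) + 1
print(result)                  # roughly (0 + 1.2e-16j)
print(abs(result) < 1e-12)     # True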

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image: Euler’s Relation. Courtesy of Wired.[end-div]

Woman and Man, and Fish?

A widely held aphorism states that owners often look like their pets, or vice versa. So, might it apply to humans and fish? Well, Ted Sabarese, a photographer based in New York, provides an answer in a series of fascinating portraits.

[div class=attrib]From Kalliopi Monoyios over at Scientific American:[end-div]

I can’t say for certain whether New York based photographer Ted Sabarese had science or evolution in mind when he conceived of this series. But I’m almost glad he never responded to my follow-up questions about his inspiration behind these. Part of the fun of art is its mirror-like quality: everyone sees something different when faced with it because everyone brings a different set of experiences and expectations to the table. When I look at these I see equal parts “you are what you eat,” “your inner fish,” and “United Colors of Benetton.”

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Discover more of Ted Sabarese’s work here.[end-div]

Can Anyone Say “Neuroaesthetics”?

As in all other branches of science, there seem to be fascinating new theories, research and discoveries in neuroscience on a daily, if not hourly, basis. With this in mind, brain and cognitive researchers have recently turned their attention to the science of art, or, more specifically, to the question “how does the human brain appreciate art?” Yes, welcome to the world of “neuroaesthetics”.

[div class=attrib]From Scientific American:[end-div]

The notion of “the aesthetic” is a concept from the philosophy of art of the 18th century according to which the perception of beauty occurs by means of a special process distinct from the appraisal of ordinary objects. Hence, our appreciation of a sublime painting is presumed to be cognitively distinct from our appreciation of, say, an apple. The field of “neuroaesthetics” has adopted this distinction between art and non-art objects by seeking to identify brain areas that specifically mediate the aesthetic appreciation of artworks.

However, studies from neuroscience and evolutionary biology challenge this separation of art from non-art. Human neuroimaging studies have convincingly shown that the brain areas involved in aesthetic responses to artworks overlap with those that mediate the appraisal of objects of evolutionary importance, such as the desirability of foods or the attractiveness of potential mates. Hence, it is unlikely that there are brain systems specific to the appreciation of artworks; instead there are general aesthetic systems that determine how appealing an object is, be that a piece of cake or a piece of music.

We set out to understand which parts of the brain are involved in aesthetic appraisal. We gathered 93 neuroimaging studies of vision, hearing, taste and smell, and used statistical analyses to determine which brain areas were most consistently activated across these 93 studies. We focused on studies of positive aesthetic responses, and left out the sense of touch, because there were not enough studies to arrive at reliable conclusions.

The results showed that the most important part of the brain for aesthetic appraisal was the anterior insula, a part of the brain that sits within one of the deep folds of the cerebral cortex. This was a surprise. The anterior insula is typically associated with emotions of negative quality, such as disgust and pain, making it an unusual candidate for being the brain’s “aesthetic center.” Why would a part of the brain known to be important for the processing of pain and disgust turn out to be the most important area for the appreciation of art?

[div class=attrib]Read entire article here.[end-div]

[div class=attrib]Image: The Birth of Venus by Sandro Botticelli. Courtesy of Wikipedia.[end-div]

Hitchens Returns to Stardust

Having just posted this article on Christopher Hitchens earlier in the week, we at theDiagonal are compelled to mourn and mark his departure. Christopher Hitchens died on December 15, 2011, from pneumonia, a complication of his esophageal cancer.

His incisive mind, lucid reason, quick wit and forceful skepticism will be sorely missed. Luckily, his written words, of which there are many, will live on.

Richard Dawkins writes of his fellow atheist:

Farewell, great voice. Great voice of reason, of humanity, of humour. Great voice against cant, against hypocrisy, against obscurantism and pretension, against all tyrants including God.

Author Ian McEwan writes of his close friend’s last weeks, which we excerpt below.

[div class=attrib]From the Guardian:[end-div]

The place where Christopher Hitchens spent his last few weeks was hardly bookish, but he made it his own. Close to downtown Houston, Texas is the medical centre, a cluster of high-rises like La Défense of Paris, or the City of London, a financial district of a sort, where the common currency is illness. This complex is one of the world’s great concentrations of medical expertise and technology. Its highest building, 40 or 50 storeys up, denies the possibility of a benevolent god – a neon sign proclaims from its roof a cancer hospital for children. This “clean-sliced cliff”, as Larkin puts it in his poem about a tower-block hospital, was right across the way from Christopher’s place – which was not quite as high, and adults only.

No man was ever as easy to visit in hospital. He didn’t want flowers and grapes, he wanted conversation, and presence. All silences were useful. He liked to find you still there when he woke from his frequent morphine-induced dozes. He wasn’t interested in being ill, the way most ill people are. He didn’t want to talk about it.

When I arrived from the airport on my last visit, he saw sticking out of my luggage a small book. He held out his hand for it – Peter Ackroyd‘s London Under, a subterranean history of the city. Then we began a 10-minute celebration of its author. We had never spoken of him before, and Christopher seemed to have read everything. Only then did we say hello. He wanted the Ackroyd, he said, because it was small and didn’t hurt his wrist to hold. But soon he was making pencilled notes in its margins. By that evening he’d finished it.

He could have written a review, but he was due to turn in a long piece on Chesterton. And so this was how it would go: talk about books and politics, then he dozed while I read or wrote, then more talk, then we both read. The intensive care unit room was crammed with flickering machines and sustaining tubes, but they seemed almost decorative. Books, journalism, the ideas behind both, conquered the sterile space, or warmed it, they raised it to the condition of a good university library. And they protected us from the bleak high-rise view through the plate glass windows, of that world, in Larkin’s lines, whose loves and chances “are beyond the stretch/Of any hand from here!”

In the afternoon I was helping him out of bed, the idea being that he was to take a shuffle round the nurses’ station to exercise his legs. As he leaned his trembling, diminished weight on me, I said, only because I knew he was thinking it, “Take my arm old toad …” He gave me that shifty sideways grin I remembered so well from healthy days. It was the smile of recognition, or one that anticipates in late afternoon an “evening of shame” – that is to say, pleasure, or, one of his favourite terms, “sodality”.

His unworldly fluency never deserted him, his commitment was passionate, and he never deserted his trade. He was the consummate writer, the brilliant friend. In Walter Pater’s famous phrase, he burned “with this hard gem-like flame”. Right to the end.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Christopher Hitchens with Ian McEwan (left) and Martin Amis in Uruguay, posing for a picture which appeared in his memoirs, Hitch 22. Courtesy of Guardian / PR.[end-div]

Why Converse When You Can Text?

The holidays approach, which for many means spending a more than usual amount of time with extended family and distant relatives. So, why talk face-to-face when you could text Great Uncle Aloysius instead?

Dominique Browning suggests lowering the stress levels of family get-togethers through more texting and less face-time.

[div class=attrib]From the New York Times:[end-div]

ADMIT it. The holiday season has just begun, and already we’re overwhelmed by so much … face time. It’s hard, face-to-face emoting, face-to-face empathizing, face-to-face expressing, face-to-face criticizing. Thank goodness for less face time; when it comes to disrupting, if not severing, lifetimes of neurotic relational patterns, technology works even better than psychotherapy.

We look askance at those young adults in a swivet of tech-enabled multifriending, endlessly texting, tracking one another’s movements — always distracted from what they are doing by what they are not doing, always connecting to people they are not with rather than people right in front of them.

But being neither here nor there has real upsides. It’s less strenuous. And it can be more uplifting. Or, at least, safer, which has a lot going for it these days.

Face time — or what used to be known as spending time with friends and family — is exhausting. Maybe that’s why we’re all so quick to abandon it. From grandfathers to tweenies, we’re all taking advantage of the ways in which we can avoid actually talking, much less seeing, one another — but still stay connected.

The last time I had face time with my mother, it started out fine. “What a lovely blouse,” she said, plucking lovingly (as I chose to think) at my velvet sleeve. I smiled, pleased that she was noticing that I had made an effort. “Too bad it doesn’t go with your skirt.” Had we been on Skype, she would never have noticed my (stylishly intentional, I might add, just ask Marni) intriguing mix of textures. And I would have been spared another bout of regressive face time freak-out.

Face time means you can’t search for intriguing recipes while you are listening to a fresh round of news about a friend’s search for a soul mate. You can’t mute yourself out of an endless meeting, or listen to 10 people tangled up in planning while you vacuum the living room. You can’t get “cut off” — Whoops! Sorry! Tunnel! — in the middle of a tedious litany of tax problems your accountant has spotted.

My move away from face time started with my children; they are generally the ones who lead us into the future. It happened gradually. First, they left home. That did it for face time. Then I stopped getting return phone calls to voice mails. That did it for voice time, which I’d used to wean myself from face time. What happened?

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: People texting. Courtesy of Mashable.com.[end-div]

Consciousness as Illusion?

Massimo Pigliucci over at Rationally Speaking ponders free will, moral responsibility and consciousness and, as always, presents a well-reasoned and eloquent argument — we do exist!

[div class=attrib]From Rationally Speaking:[end-div]

For some time I have been noticing the emergence of a strange trinity of beliefs among my fellow skeptics and freethinkers: an increasing number of them, it seems, don’t believe that they can make decisions (the free will debate), don’t believe that they have moral responsibility (because they don’t have free will, or because morality is relative — take your pick), and they don’t even believe that they exist as conscious beings because, you know, consciousness is an illusion.

As I have argued recently, there are sensible ways to understand human volition (a much less metaphysically loaded and more sensible term than free will) within a lawful universe (Sean Carroll agrees and, interestingly, so does my sometime opponent Eliezer Yudkowsky). I also devoted an entire series on this blog to a better understanding of what morality is, how it works, and why it ain’t relative (within the domain of social beings capable of self-reflection). Let’s talk about consciousness then.

The oft-heard claim that consciousness is an illusion is an extraordinary one, as it relegates to an entirely epiphenomenal status what is arguably the most distinctive characteristic of human beings, the very thing that seems to shape and give meaning to our lives, and presumably one of the major outcomes of millions of years of evolution pushing for a larger brain equipped with powerful frontal lobes capable of carrying out reasoning and deliberation.

Still, if science tells us that consciousness is an illusion, we must bow to that pronouncement and move on (though we apparently cannot escape the illusion, partly because we have no free will). But what is the extraordinary evidence for this extraordinary claim? To begin with, there are studies of (very few) “split brain” patients which seem to indicate that the two hemispheres of the brain — once separated — display independent consciousness (under experimental circumstances), to the point that they may even try to make the left and right sides of the body act antagonistically to each other.

But there are a couple of obvious issues here that block an easy jump from observations on those patients to grand conclusions about the illusoriness of consciousness. First off, the two hemispheres are still conscious, so at best we have evidence that consciousness is divisible, not that it is an illusion (and that subdivision presumably can proceed no further than n=2). Second, these are highly pathological situations, and though they certainly tell us something interesting about the functioning of the brain, they are informative mostly about what happens when the brain does not function. As a crude analogy, imagine sawing a car in two, noticing that the front wheels now spin independently of the rear wheels, and concluding that the synchronous rotation of the wheels in the intact car is an “illusion.” Not a good inference, is it?

Let’s pursue this illusion thing a bit further. Sometimes people also argue that physics tells us that the way we perceive the world is also an illusion. After all, apparently solid objects like tables are made of quarks and the forces that bind them together, and since that’s the fundamental level of reality (well, unless you accept string theory) then clearly our senses are mistaken.

But our senses are not mistaken at all, they simply function at the (biologically) appropriate level of perception of reality. We are macroscopic objects and need to navigate the world as such. It would be highly inconvenient if we could somehow perceive quantum level phenomena directly, and in a very strong sense the solidity of a table is not an illusion at all. It is rather an emergent property of matter that our evolved senses exploit to allow us to sit down and have a nice meal at that table without worrying about the zillions of subnuclear interactions going on about it all the time.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Consciousness Art. Courtesy of Google search.[end-div]

How to Make Social Networking Even More Annoying

What do you get when you take a social network, add sprinkles of mobile telephony, and throw in a liberal dose of proximity sensing? You get the first “social accessory” that creates a proximity network around you as you go about your daily life. Welcome to the world of yet another social networking technology startup, this one called magnetU. The company’s tagline is:

It was only a matter of time before your social desires became wearable!

magnetU markets a wearable device, about the size of a memory stick, that lets people wear and broadcast their social desires, allowing immediate social gratification anywhere and anytime. When a magnetU user comes into proximity with others having similar social profiles, the system notifies the user of a match. A social match is signaled as either “attractive”, “hot” or “red hot”. So, if you want to find a group of anonymous but like minds (or bodies) for some seriously homogeneous partying, magnetU is for you.
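magnetU has not published how it actually scores a match, so the following is purely a hypothetical sketch of what tiered profile matching could look like; the profile fields, the Jaccard similarity measure and the thresholds are all invented for illustration.

# Hypothetical sketch of tiered proximity matching in the spirit of
# magnetU's "attractive" / "hot" / "red hot" signals. Nothing here is
# based on the company's actual algorithm.

def similarity(profile_a, profile_b):
    """Jaccard similarity between two sets of declared interests."""
    if not profile_a or not profile_b:
        return 0.0
    return len(profile_a & profile_b) / len(profile_a | profile_b)

def match_tier(score):
    # Invented thresholds mapping a similarity score to a match signal.
    if score >= 0.75:
        return "red hot"
    if score >= 0.50:
        return "hot"
    if score >= 0.25:
        return "attractive"
    return None  # below threshold: no notification

me = {"indie rock", "yoga", "startups", "craft beer"}
stranger_nearby = {"yoga", "startups", "craft beer", "running"}

tier = match_tier(similarity(me, stranger_nearby))
if tier:
    print(f"Match nearby: {tier}")   # prints "Match nearby: hot" for these sets

The real device presumably does something like this over short-range radio between two gadgets rather than on one machine, but the idea of grading strangers into tiers is the same.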

Time will tell whether this will become successful and pervasive, or whether it will be consigned to the tech start-up waste bin of history. If magnetU becomes as ubiquitous as Facebook, then humanity may be entering a disastrous new phase characterized by the following: all social connections become a marketing opportunity; computer algorithms determine when and whom to like (or not) instantly; the content filter bubble extends to every interaction online and in the real world; people become ratings and nodes on a network; advertisers insert themselves into your daily conversations; Big Brother is watching you!

[div class=attrib]From Technology Review:[end-div]

MagnetU is a $24 device that broadcasts your social media profile to everyone around you. If anyone else with a MagnetU has a profile that matches yours sufficiently, the device will alert both of you via text and/or an app. Or, as founder Yaron Moradi told Mashable in a video interview, “MagnetU brings Facebook, Linkedin, Twitter and other online social networks to the street.”

Moradi calls this process “wearing your social desires,” and anyone who’s ever attempted online dating can tell you that machines are poor substitutes for your own judgement when it comes to determining with whom you’ll actually want to connect.

You don’t have to be a pundit to come up with a long list of Mr. McCrankypants reasons this is a terrible idea, from the overwhelming volume of distraction we already face to the fact that unless this is a smash hit, the only people MagnetU will connect you to are other desperately lonely geeks.

My primary objection, however, is not that this device or something like it won’t work, but that if it does, it will have the Facebook-like effect of pushing even those who loathe it on principle into participating, just because everyone else is using it and those who don’t will be left out in real life.

“MagnetU lets you wear your social desires… Anything from your social and dating preferences to business matches in conferences,” says Moradi. By which he means this will be very popular with Robert Scoble and anyone who already has Grindr loaded onto his or her phone.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Facebook founder Mark Zuckerberg. Courtesy of Rocketboom.[end-div]