Google’s GDP

According to the infographic below, Google had revenues of $29.3 billion in 2010. Not bad! Interestingly, that’s more than the combined Gross Domestic Product (GDP) of the world’s 28 poorest nations.

[div class=attrib]Infographic courtesy of MBA.org / dailyinfographic.[end-div]


The Debunking Handbook

A valuable resource if you ever find yourself having to counter and debunk myths and misinformation. It applies regardless of the type of myth in debate: Santa, creationism, UFOs, political discourse, climate science denial, science denial in general. You can find the download here.

[div class=attrib]From Skeptical Science:[end-div]

The Debunking Handbook, a guide to debunking misinformation, is now freely available to download. Although there is a great deal of psychological research on misinformation, there’s no summary of the literature that offers practical guidelines on the most effective ways of reducing the influence of myths. The Debunking Handbook boils the research down into a short, simple summary, intended as a guide for communicators in all areas (not just climate) who encounter misinformation.

The Handbook explores the surprising fact that debunking myths can sometimes reinforce the myth in people’s minds. Communicators need to be aware of the various backfire effects and how to avoid them, such as:

  • The Familiarity Backfire Effect
  • The Overkill Backfire Effect
  • The Worldview Backfire Effect

It also looks at a key element of successful debunking: providing an alternative explanation. The Handbook is designed to be useful to all communicators who have to deal with misinformation (e.g., not just climate myths).

[div class=attrib]Read more here.[end-div]

Boost Your Brainpower: Chew Gum

So you wish to boost your brain function? Well, forget the folate, B vitamins, omega-3 fatty acids, ginkgo biloba, and the countless array of other supplements. Researchers have confirmed that chewing gum increases cognitive abilities. However, while gum chewers perform significantly better on a battery of psychological tests, the boost is fleeting, lasting on average only for the first 20 minutes of testing.

[div class=attrib]From Wired:[end-div]

Why do people chew gum? If an anthropologist from Mars ever visited a typical supermarket, they’d be confounded by those shelves near the checkout aisle that display dozens of flavored gum options. Chewing without eating seems like such a ridiculous habit, the oral equivalent of running on a treadmill. And yet, people have been chewing gum for thousands of years, ever since the ancient Greeks began popping wads of mastic tree resin in their mouth to sweeten the breath. Socrates probably chewed gum.

It turns out there’s an excellent rationale for this long-standing cultural habit: Gum is an effective booster of mental performance, conferring all sorts of benefits without any side effects. The latest investigation of gum chewing comes from a team of psychologists at St. Lawrence University. The experiment went like this: 159 students were given a battery of demanding cognitive tasks, such as repeating random numbers backward and solving difficult logic puzzles. Half of the subjects chewed gum (sugar-free and sugar-added) while the other half were given nothing. Here’s where things get peculiar: Those randomly assigned to the gum-chewing condition significantly outperformed those in the control condition on five out of six tests. (The one exception was verbal fluency, in which subjects were asked to name as many words as possible from a given category, such as “animals.”) The sugar content of the gum had no effect on test performance.

While previous studies achieved similar results — chewing gum is often a better test aid than caffeine — this latest research investigated the time course of the gum advantage. It turns out to be rather short-lived, as gum chewers only showed an increase in performance during the first 20 minutes of testing. After that, they performed identically to non-chewers.

What’s responsible for this mental boost? Nobody really knows. It doesn’t appear to depend on glucose, since sugar-free gum generated the same benefits. Instead, the researchers propose that gum enhances performance due to “mastication-induced arousal.” The act of chewing, in other words, wakes us up, ensuring that we are fully focused on the task at hand.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Chewing gum tree, Mexico D.F. Courtesy of mexicolore.[end-div]

Pickled Sharks and All

Regardless of what you may believe about Damien Hirst or think about his art, it would not be stretching the truth to say he single-handedly resurrected the British contemporary art scene over the last 15 years.

Our favorite mainstream blogger on all things art, Jonathan Jones, revisits Hirst and his “pickled shark”.

[div class=attrib]From the Guardian:[end-div]

I had no job and didn’t know where I was going in life when I walked into the Saatchi Gallery in 1992 and saw a tiger shark swimming towards me. Standing in front of Damien Hirst’s The Physical Impossibility of Death in the Mind of Someone Living in its original pristine state was a disconcerting and marvellous experience. The shark, then, did not look pickled, it looked alive. It seemed to move as you moved around the tank that contained it, because the refractions of the liquid inside which it “swam” caused your vision of it to jump as you changed your angle.

There it was: life, or was it death, relentlessly approaching me through deep waters. It was galvanising, energising. It was a great work of art.

I knew what I thought great art looked like. I doted on Leonardo da Vinci, I loved Picasso. I still revere them both. But it was Hirst’s shark that made me believe art made with fish, glass vitrines and formaldehyde – and therefore with anything – can be great. I found his work not just interesting or provocative but genuinely profound. As a memento mori, as an exploration of the limits of art, as a meditation on the power of spectacle, even as a comment on the shark-infested waters of post-Thatcherite Britain, it moved me deeply.

I’m looking forward to Damien Hirst’s retrospective at Tate Modern because it will be a new chance to understand the power I have, in my life, sensed in his imagination and intellect. I think Hirst is a much more exciting modern artist than Marcel Duchamp. To be honest, the word “exciting” just doesn’t go with the word “Duchamp”. Get a load of that exciting urinal!

Picasso is exciting; Duchamp is an academic cult. The readymade as it was deployed by Duchamp gave birth to conceptual forms that are “interesting” but rarely grab you where it matters.

Hirst is more Picasso than Duchamp – the Picasso who put a bicycle seat and handlebars together to create a bull’s head. He’s even more Holbein than Duchamp – the Holbein who painted a skull across a portrait of two Renaissance gentlemen.

He is a giant of modern art.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: The Physical Impossibility of Death in the Mind of Someone Living by Damien Hirst (1991). Courtesy of Wikipedia.[end-div]

It’s Actually 4.74 Degrees of Kevin Bacon

Six degrees of separation is the commonly held urban myth that, on average, everyone on Earth is six or fewer connections away from any other person. That is, through a chain of friend-of-a-friend (of a friend, etc.) relationships you can find yourself linked to the President, the Chinese Premier, a farmer on the steppes of Mongolia, Nelson Mandela, the editor of theDiagonal, and any one of the other 7 billion people on the planet.

The modern notion of degrees of separation stems from original research by Michael Gurevich at the Massachusetts Institute of Technology on the structure of social networks, published in his 1961 doctoral dissertation. Subsequently, the Austrian-born mathematician Manfred Kochen proposed, in his theory of connectedness for a U.S.-sized population, that “it is practically certain that any two individuals can contact one another by means of at least two intermediaries.” In 1967 the psychologist Stanley Milgram and colleagues tested this through acquaintanceship-network experiments on what was then called the Small World Problem. In one example, 296 volunteers were asked to send a message by postcard, through friends and then friends of friends, to a specific person living near Boston. Milgram’s work, published in Psychology Today, showed that people in the United States seemed to be connected by approximately three friendship links, on average. The experiment generated a tremendous amount of publicity, and as a result he is to this day incorrectly credited with originating the idea and quantification of interconnectedness, and even the phrase “six degrees of separation”.

In fact, the statement was originally articulated in 1929 by the Hungarian author Frigyes Karinthy and later popularized by a play written by John Guare. Karinthy believed that the modern world was ‘shrinking’ due to the accelerating interconnectedness of humans, and he hypothesized that any two individuals could be connected through at most five acquaintances. In 1990, Guare unveiled the play (followed by a movie in 1993) titled “Six Degrees of Separation”, which popularized the notion and enshrined it in popular culture. In the play one of the characters reflects on the idea that any two individuals are connected by at most five others:

I read somewhere that everybody on this planet is separated by only six other people. Six degrees of separation between us and everyone else on this planet. The President of the United States, a gondolier in Venice, just fill in the names. I find it A) extremely comforting that we’re so close, and B) like Chinese water torture that we’re so close because you have to find the right six people to make the right connection… I am bound to everyone on this planet by a trail of six people.

Then in 1994 along came the Kevin Bacon trivia game, “Six Degrees of Kevin Bacon”, invented as a play on the original concept. The goal of the game is to link any actor to Kevin Bacon through no more than six connections, where two actors are connected if they have appeared in a movie or commercial together.

Now, in 2011, comes a study of the connectedness of Facebook users. Using Facebook’s population of over 700 million users, researchers found that the average number of links from any arbitrarily selected user to another was 4.74; for Facebook users in the U.S., the average number of links was just 4.37. Facebook posted detailed findings on its site, here.

So, the small world popularized by Milgram and colleagues is actually becoming smaller, as Frigyes Karinthy originally suggested back in 1929. As a result, you may not be as “far” from the Chinese Premier or Nelson Mandela as you may have previously believed.
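The quantity behind these headlines is easy to pin down. Here is a minimal sketch, in Python, of how an average degree of separation can be computed: a breadth-first search finds the shortest friendship chain from one person to every other reachable person, and the reported figure is the mean chain length. The toy network and names below are invented for illustration; Facebook’s researchers used probabilistic approximations to cope with 700 million users, but the quantity being estimated is the same.

```python
from collections import deque

def avg_degrees_of_separation(graph, source):
    """Mean shortest-path length (in hops) from `source` to every
    other reachable person, computed with a breadth-first search."""
    dist = {source: 0}
    queue = deque([source])
    while queue:
        person = queue.popleft()
        for friend in graph[person]:
            if friend not in dist:
                dist[friend] = dist[person] + 1
                queue.append(friend)
    hops = [d for who, d in dist.items() if who != source]
    return sum(hops) / len(hops)

# Toy friendship network: each key lists that person's friends.
friends = {
    "ana": ["bo", "cy"],
    "bo":  ["ana", "dee"],
    "cy":  ["ana", "dee"],
    "dee": ["bo", "cy", "eve"],
    "eve": ["dee"],
}

print(avg_degrees_of_separation(friends, "ana"))  # 1.75
```

Averaging that per-person figure over every possible starting point gives a network-wide number like the 4.74 reported above.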

[div class=attrib]Image: Six Degrees of Separation Poster by James McMullan. Courtesy of Wikipedia.[end-div]

How the World May End: Science Versus Brimstone

Every couple of years a (hell)fire and brimstone preacher floats into the national consciousness and makes the headlines with certain predictions, from the Good Book, of the imminent destruction of our species and our home. Most recently Harold Camping, the radio evangelist, predicted the apocalypse would begin on Saturday, May 21, 2011. His subsequent revision placed the “correct date” at October 21, 2011. Well, we’re still here, so the next apocalyptic date to prepare for, according to watchers of all things Mayan, is December 21, 2012.

So as not to be outdone by prophecy from one particular religion or another, science has come out swinging with its own list of potential apocalyptic end-of-days scenarios. No surprise, many of them may well be of our own making.

[div class=attrib]From the Guardian:[end-div]

Stories of brimstone, fire and gods make good tales and do a decent job of stirring up the requisite fear and jeopardy. But made-up doomsday tales pale into nothing, creatively speaking, when contrasted with what is actually possible. Look through the lens of science and “the end” becomes much more interesting.

Since the beginning of life on Earth, around 3.5 billion years ago, the fragile existence has lived in the shadow of annihilation. On this planet, extinction is the norm – of the 4 billion species ever thought to have evolved, 99% have become extinct. In particular, five times in this past 500 million years the steady background rate of extinction has shot up for a period of time. Something – no one knows for sure what – turned the Earth into exactly the wrong planet for life at these points and during each mass extinction, more than 75% of the existing species died off in a period of time that was, geologically speaking, a blink of the eye.

One or more of these mass extinctions occurred because of what we could call the big, Hollywood-style, potential doomsday scenarios. If a big enough asteroid hit the Earth, for example, the impact would cause huge earthquakes and tsunamis that could cross the globe. There would be enough dust thrown into the air to block out the sun for several years. As a result, the world’s food resources would be destroyed, leading to famine. It has happened before: the dinosaurs (along with more than half the other species on Earth) were wiped out 65 million years ago by a 10km-wide asteroid that smashed into the area around Mexico.

Other natural disasters include sudden changes in climate or immense volcanic eruptions. All of these could cause global catastrophes that would wipe out large portions of the planet’s life, but, given we have survived for several hundreds of thousands of years while at risk of these, it is unlikely that a natural disaster such as that will cause catastrophe in the next few centuries.

In addition, cosmic threats to our existence have always been with us, even though it has taken us some time to notice: the collision of our galaxy, the Milky Way, with our nearest neighbour, Andromeda, for example, or the arrival of a black hole. Common to all of these threats is that there is very little we can do about them even when we know the danger exists, except trying to work out how to survive the aftermath.

But in reality, the most serious risks for humans might come from our own activities. Our species has the unique ability in the history of life on Earth to be the first capable of remaking our world. But we can also destroy it.

All too real are the human-caused threats born of climate change, excess pollution, depletion of natural resources and the madness of nuclear weapons. We tinker with our genes and atoms at our own peril. Nanotechnology, synthetic biology and genetic modification offer much potential in giving us better food to eat, safer drugs and a cleaner world, but they could also go wrong if misapplied or if we charge on without due care.

Some strange ways to go, and their corresponding danger signs, are listed below:

DEATH BY EUPHORIA

Many of us use drugs such as caffeine or nicotine every day. Our increased understanding of physiology brings new drugs that can lift mood, improve alertness or keep you awake for days. How long before we use so many drugs we are no longer in control? Perhaps the end of society will not come with a bang, but fade away in a haze.

Danger sign: Drugs would get too cheap to meter, but you might be too doped up to notice.

VACUUM DECAY

If the Earth exists in a region of space known as a false vacuum, it could collapse into a lower-energy state at any point. This collapse would grow at the speed of light and our atoms would not hold together in the ensuing wave of intense energy – everything would be torn apart.

Danger sign: There would be no signs. It could happen halfway through this…

STRANGELETS

Quantum mechanics contains lots of frightening possibilities. Among them is a particle called a strangelet that can transform any other particle into a copy of itself. In just a few hours, a small chunk of these could turn a planet into a featureless mass of strangelets. Everything that planet was would be no more.

Danger sign: Everything around you starts cooking, releasing heat.

END OF TIME

What if time itself somehow came to a finish because of the laws of physics? In 2007, Spanish scientists proposed an alternative explanation for the mysterious dark energy that accounts for 75% of the mass of the universe and acts as a sort of anti-gravity, pushing galaxies apart. They proposed that the effects we observe are due to time slowing down as it leaked away from our universe.

Danger sign: It could be happening right now. We would never know.

MEGA TSUNAMI

Geologists worry that a future volcanic eruption at La Palma in the Canary Islands might dislodge a chunk of rock twice the volume of the Isle of Man into the Atlantic Ocean, triggering waves a kilometre high that would move at the speed of a jumbo jet with catastrophic effects for the shores of the US, Europe, South America and Africa.

Danger sign: Half the world’s major cities are under water. All at once.

GEOMAGNETIC REVERSAL

The Earth’s magnetic field provides a shield against harmful radiation from our sun that could rip through DNA and overload the world’s electrical systems. Every so often, Earth’s north and south poles switch positions and, during the transition, the magnetic field will weaken or disappear for many years. The last known transition happened almost 780,000 years ago and it is likely to happen again.

Danger sign: Electronics stop working.

GAMMA RAYS FROM SPACE

When a supermassive star is in its dying moments, it shoots out two beams of high-energy gamma rays into space. If these were to hit Earth, the immense energy would tear apart the atmosphere’s air molecules and disintegrate the protective ozone layer.

Danger sign: The sky turns brown and all life on the surface slowly dies.

RUNAWAY BLACK HOLE

Black holes are the most powerful gravitational objects in the universe, capable of tearing Earth into its constituent atoms. Even within a billion miles, a black hole could knock Earth out of the solar system, leaving our planet wandering through deep space without a source of energy.

Danger sign: Increased asteroid activity; the seasons get really extreme.

INVASIVE SPECIES

Invasive species are plants, animals or microbes that turn up in an ecosystem that has no protection against them. The invader’s population surges and the ecosystem quickly destabilises towards collapse. Invasive species are already an expensive global problem: they disrupt local ecosystems, transfer viruses, poison soils and damage agriculture.

Danger sign: Your local species disappear.

TRANSHUMANISM

What if biological and technological enhancements took humans to a level where they radically surpassed anything we know today? “Posthumans” might consist of artificial intelligences based on the thoughts and memories of ancient humans, who uploaded themselves into a computer and exist only as digital information on superfast computer networks. Their physical bodies might be gone but they could access and store endless information and share their thoughts and feelings immediately and unambiguously with other digital humans.

Danger sign: You are outcompeted, mentally and physically, by a cyborg.

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Image: End is Nigh sign. Courtesy of frontporchrepublic.com.[end-div]

MondayPoem: Inferno – Canto I

Dante Alighieri is held in high regard in Italy, where he is often referred to as il Poeta, the poet. He is best known for the monumental poem La Commedia, later renamed La Divina Commedia – The Divine Comedy. Scholars consider it to be the greatest work of literature in the Italian language. Many also consider Dante to be the symbolic father of the Italian language.

[div class=attrib]According to Wikipedia:[end-div]

He wrote the Comedy in a language he called “Italian”, in some sense an amalgamated literary language mostly based on the regional dialect of Tuscany, with some elements of Latin and of the other regional dialects. The aim was to deliberately reach a readership throughout Italy, both laymen, clergymen and other poets. By creating a poem of epic structure and philosophic purpose, he established that the Italian language was suitable for the highest sort of expression. In French, Italian is sometimes nicknamed la langue de Dante. Publishing in the vernacular language marked Dante as one of the first (among others such as Geoffrey Chaucer and Giovanni Boccaccio) to break free from standards of publishing in only Latin (the language of liturgy, history, and scholarship in general, but often also of lyric poetry). This break set a precedent and allowed more literature to be published for a wider audience—setting the stage for greater levels of literacy in the future.

By Dante Alighieri

(translated by the Rev. H. F. Cary)

– Inferno, Canto I

In the midway of this our mortal life,
I found me in a gloomy wood, astray
Gone from the path direct: and e’en to tell
It were no easy task, how savage wild
That forest, how robust and rough its growth,
Which to remember only, my dismay
Renews, in bitterness not far from death.
Yet to discourse of what there good befell,
All else will I relate discover’d there.
How first I enter’d it I scarce can say,
Such sleepy dullness in that instant weigh’d
My senses down, when the true path I left,
But when a mountain’s foot I reach’d, where clos’d
The valley, that had pierc’d my heart with dread,
I look’d aloft, and saw his shoulders broad
Already vested with that planet’s beam,
Who leads all wanderers safe through every way.

Then was a little respite to the fear,
That in my heart’s recesses deep had lain,
All of that night, so pitifully pass’d:
And as a man, with difficult short breath,
Forespent with toiling, ‘scap’d from sea to shore,
Turns to the perilous wide waste, and stands
At gaze; e’en so my spirit, that yet fail’d
Struggling with terror, turn’d to view the straits,
That none hath pass’d and liv’d.  My weary frame
After short pause recomforted, again
I journey’d on over that lonely steep,

The hinder foot still firmer.  Scarce the ascent
Began, when, lo! a panther, nimble, light,
And cover’d with a speckled skin, appear’d,
Nor, when it saw me, vanish’d, rather strove
To check my onward going; that ofttimes
With purpose to retrace my steps I turn’d.

The hour was morning’s prime, and on his way
Aloft the sun ascended with those stars,
That with him rose, when Love divine first mov’d
Those its fair works: so that with joyous hope
All things conspir’d to fill me, the gay skin
Of that swift animal, the matin dawn
And the sweet season.  Soon that joy was chas’d,
And by new dread succeeded, when in view
A lion came, ‘gainst me, as it appear’d,

With his head held aloft and hunger-mad,
That e’en the air was fear-struck.  A she-wolf
Was at his heels, who in her leanness seem’d
Full of all wants, and many a land hath made
Disconsolate ere now.  She with such fear
O’erwhelmed me, at the sight of her appall’d,
That of the height all hope I lost.  As one,
Who with his gain elated, sees the time
When all unwares is gone, he inwardly
Mourns with heart-griping anguish; such was I,
Haunted by that fell beast, never at peace,
Who coming o’er against me, by degrees
Impell’d me where the sun in silence rests.

While to the lower space with backward step
I fell, my ken discern’d the form of one,
Whose voice seem’d faint through long disuse of speech.
When him in that great desert I espied,
“Have mercy on me!”  cried I out aloud,
“Spirit! or living man! what e’er thou be!”

He answer’d: “Now not man, man once I was,
And born of Lombard parents, Mantuans both
By country, when the power of Julius yet
Was scarcely firm.  At Rome my life was past
Beneath the mild Augustus, in the time
Of fabled deities and false.  A bard
Was I, and made Anchises’ upright son
The subject of my song, who came from Troy,
When the flames prey’d on Ilium’s haughty towers.
But thou, say wherefore to such perils past
Return’st thou?  wherefore not this pleasant mount
Ascendest, cause and source of all delight?”
“And art thou then that Virgil, that well-spring,
From which such copious floods of eloquence
Have issued?”  I with front abash’d replied.
“Glory and light of all the tuneful train!
May it avail me that I long with zeal
Have sought thy volume, and with love immense
Have conn’d it o’er.  My master thou and guide!
Thou he from whom alone I have deriv’d
That style, which for its beauty into fame
Exalts me.  See the beast, from whom I fled.
O save me from her, thou illustrious sage!

“For every vein and pulse throughout my frame
She hath made tremble.”  He, soon as he saw
That I was weeping, answer’d, “Thou must needs
Another way pursue, if thou wouldst ‘scape
From out that savage wilderness.  This beast,
At whom thou criest, her way will suffer none
To pass, and no less hindrance makes than death:
So bad and so accursed in her kind,
That never sated is her ravenous will,
Still after food more craving than before.
To many an animal in wedlock vile
She fastens, and shall yet to many more,
Until that greyhound come, who shall destroy
Her with sharp pain.  He will not life support
By earth nor its base metals, but by love,
Wisdom, and virtue, and his land shall be
The land ‘twixt either Feltro.  In his might
Shall safety to Italia’s plains arise,
For whose fair realm, Camilla, virgin pure,
Nisus, Euryalus, and Turnus fell.
He with incessant chase through every town
Shall worry, until he to hell at length
Restore her, thence by envy first let loose.
I for thy profit pond’ring now devise,
That thou mayst follow me, and I thy guide
Will lead thee hence through an eternal space,
Where thou shalt hear despairing shrieks, and see
Spirits of old tormented, who invoke
A second death; and those next view, who dwell
Content in fire, for that they hope to come,
Whene’er the time may be, among the blest,
Into whose regions if thou then desire
T’ ascend, a spirit worthier then I
Must lead thee, in whose charge, when I depart,
Thou shalt be left: for that Almighty King,
Who reigns above, a rebel to his law,
Adjudges me, and therefore hath decreed,
That to his city none through me should come.
He in all parts hath sway; there rules, there holds
His citadel and throne.  O happy those,
Whom there he chooses!” I to him in few:
“Bard! by that God, whom thou didst not adore,
I do beseech thee (that this ill and worse
I may escape) to lead me, where thou saidst,
That I Saint Peter’s gate may view, and those
Who as thou tell’st, are in such dismal plight.”

Onward he mov’d, I close his steps pursu’d.

[div class=attrib]Read the entire poem here.[end-div]

[div class=attrib]Image: Dante Alighieri, engraving after the fresco in Bargello Chapel, painted by Giotto di Bondone. Courtesy of Wikipedia.[end-div]

Viewfinder Replaces the Eye

The ubiquity of point-and-click digital cameras and camera-equipped smartphones seems to be leading us towards an era where it is more common to snap and share a picture of the present via a camera lens than it is to experience the present individually and through one’s own eyes.

Roberta Smith over at the New York Times laments this growing trend, which we label “digitally-assisted Kilroy-was-here” syndrome, particularly evident at art exhibits. Ruth Fremson, a New York Times photographer, chronicled some of the leading offenders.

[div class=attrib]From the New York Times:[end-div]

SCIENTISTS have yet to determine what percentage of art-viewing these days is done through the viewfinder of a camera or a cellphone, but clearly the figure is on the rise. That’s why Ruth Fremson, the intrepid photographer for The New York Times who covered the Venice Biennale this summer, returned with so many images of people doing more or less what she was doing: taking pictures of works of art or people looking at works of art. More or less.

Only two of the people in these pictures are using a traditional full-service camera (similar to the ones Ms. Fremson carried with her) and actually holding it to the eye. Everyone else is wielding either a cellphone or a mini-camera and looking at a small screen, which tends to make the framing process much more casual. It is changing the look of photography.

The ubiquity of cameras in exhibitions can be dismaying, especially when read as proof that most art has become just another photo op for evidence of Kilroy-was-here passing through. More generously, the camera is a way of connecting, participating and collecting fleeting experiences.

For better and for worse, it has become intrinsic to many people’s aesthetic responses. (Judging by the number of pictures Ms. Fremson took of people photographing Urs Fischer’s life-size statue of the artist Rudolf Stingel as a lighted candle, it is one of the more popular pieces at the Biennale, which runs through Nov. 27.) And the camera’s presence in an image can seem part of its strangeness, as with Ms. Fremson’s shot of the gentleman photographing a photo-mural by Cindy Sherman that makes Ms. Sherman, costumed as a circus juggler, appear to be posing just for him. She looks more real than she did in the actual installation.

Of course a photograph of a person photographing an artist’s photograph of herself playing a role is a few layers of an onion, maybe the kind to be found only among picture-takers at an exhibition.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Visitors at the Venice Biennale capture Urs Fischer’s statue. Courtesy of Ruth Fremson / The New York Times.[end-div]

Driving Across the U.S. at 146,700 Miles per Hour

Through the miracle of time-lapse photography, we bring you a journey of 12,225 miles across 32 states in 55 days, compressed into 5 minutes. Brian Defrees snapped an image every five seconds from his car-mounted camera during the adventure, which began and ended in New York, via Washington D.C., Florida, Los Angeles and Washington State, and many points in between.
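(For those checking the headline arithmetic: covering 12,225 miles of road in 5 minutes of video works out to 2,445 miles per minute, or 12,225 ÷ (5/60) = 146,700 miles per hour.)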

[tube]Tt-juyvIWMQ[/tube]

Cool Images of a Hot Star

Astronomers and planetary photographers, both amateur and professional, have been having an inspiring time recently watching the Sun. Some of the most gorgeous images of our nearest star come courtesy of photographer Alan Friedman. One such spectacular image shows several huge, 50,000-mile-high solar flares and groups of active sunspots larger than our planet. See more of Friedman’s captivating images at his personal website.

[div class=attrib]According to MSNBC:[end-div]

For the past couple of weeks, astronomers have been tracking groups of sunspots as they move across the sun’s disk. Those active regions have been shooting off flares and outbursts of electrically charged particles into space — signaling that the sun is ramping up toward the peak of its 11-year activity cycle. Physicists expect that peak, also known as “Solar Max,” to come in 2013.

A full frontal view from New York photographer Alan Friedman shows the current activity in detail, as seen in a particular wavelength known as hydrogen-alpha. The colors have been tweaked to make the sun look like a warm, fuzzy ball, with lacy prominences licking up from the edge of the disk.

Friedman focused on one flare in particular over the weekend: In the picture you see at right, the colors have been reversed to produce a dark sun and dusky prominence against the light background of space.

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Image: Powerful sunspots and gauzy-looking prominences can be seen in Alan Friedman’s photo of the sun, shown in hydrogen-alpha wavelengths. Courtesy of MSNBC / Copyright Alan Friedman, avertedimagination.com.[end-div]

What Exactly is a Person?

The recent “personhood” amendment on the ballot in Mississippi has caused many to scratch their heads and ponder the meaning of “person”. Philosophers through the ages have tackled this thorny question with detailed treatises and little consensus.

Boethius suggested that a person is “the individual substance of a rational nature.” Descartes described a person as an agent, human or otherwise, possessing consciousness, and capable of creating and acting on a plan. John Locke extended this definition to include reason and reflection. Kant looked at a person as a being having a conceptualizing mind capable of purposeful thought. Charles Taylor takes this naturalistic view further, defining a person as an agent driven by matters of significance. Harry Frankfurt characterized a person as an entity enshrining free will driven by a hierarchy of desires. Still others provide their own definition of a person. Peter Singer offers self-awareness as a distinguishing trait; Thomas White suggests that a person is alive, is aware, feels sensations, has emotions, has a sense of self, controls its own behaviour, recognises other persons, and has various cognitive abilities.

Despite the variation in positions, all would seem to agree that a fertilized egg is certainly not a person.

    [div class=attrib]A thoughtful take over at 13.7 Cosmos and Culture blog:[end-div]

    According to Catholic doctrine, the Father, the Son and Holy Spirit are three distinct persons even though they are one essence. Only one of those persons — Jesus Christ — is also a human being whose life had a beginning and an end.

    I am not an expert in Trinitarian theology. But I mention it here because, great mysteries aside, this Catholic doctrine uses the notion of person in what, from our point of view today, is the standard way.

    John Locke called person a forensic concept. What he had in mind is that a person is one to whom credit and blame may be attached, one who is deemed responsible. The concept of a person is the concept of an agent.

    Crucially, Locke argued, persons are not the same as human beings. Dr. Jekyll and Mr. Hyde may be one and the same human being, that is, one and the same continuously existing organic life; they share a birth event; but they are two distinct persons. And this is why we don’t blame the one for the other’s crimes. Multiple personality disorder might be a real world example of this.

    I don’t know whether Locke believed that two distinct persons could actually inhabit the same living human body, but he certainly thought there was nothing contradictory in the possibility. Nor did he think there was anything incoherent in the thought that one person could find existence in multiple distinct animal lives, even if, as a matter of fact, this may not be possible. If you believe in reincarnation, then you think this is a genuine possibility. For Locke, this was no more incoherent than the idea of two actors playing the same role in a play.

    Indeed, the word “person” derives from a Latin (and originally a Greek) word meaning “character in a drama” or “mask” (because actors wore masks). This usage survives today in the phrase “dramatis personae.” To be a person, from this standpoint, is to play a role. The person is the role played, however, not the player.

    From this standpoint, the idea of non-human, non-living person certainly makes sense, even if we find it disturbing. Corporations are persons under current law, and this makes sense. They are actors, after all, and we credit and blame them for the things they do. They play an important role in our society.

    [div class=attrib]Read the whole article here.[end-div]

    [div class=attrib]Image: Abstract painting of a person, titled WI (In Memoriam), by Paul Klee (1879–1940). Courtesy of Wikipedia.[end-div]

    Supercommittee and Innovation: Oxymoron Du Jour

    Today is deadline day for the U.S. Congress’s Joint Select Committee on Deficit Reduction to deliver. Perhaps a little ironically, the committee was commonly mistitled the “Super Committee”. Interestingly, pundits and public alike do not expect the committee to deliver any significant, long-term solution to the United States’ fiscal problems. In fact, many do not believe the committee will deliver anything at all beyond reinforcement of right- and left-leaning ideologies, political posturing, pandering to special interests of all colors and, of course, recriminations and spin.

    Could the Founders have had such dysfunction in mind when they designed the branches of government, with their many checks and balances, to guard against excess and tyranny? Perhaps it’s finally time for the United States Congress to gulp a large dose of corporate-style innovation.

    [div class=attrib]From the Washington Post:[end-div]

    … Fiscal catastrophe has been around the corner, on and off, for 15 years. In that period, Dole and President Bill Clinton, a Democrat, came together to produce a record-breaking $230 billion surplus. That was later depleted by actions undertaken by both sides, bringing us to the tense situation we have today.

    What does this have to do with innovation?

    As the profession of innovation management matures, we are learning a few key things, including that constraints can be a good thing — and the “supercommittee” clock is a big constraint. Given this, what is the best strategy when you need to innovate in a hurry?

    When innovating under the gun, the first thing you must do is assemble a small, diverse team to own and attack the challenge. The “supercommittee” team is handicapped from the start, since it is neither small (think 4-5 people) nor diverse (neither in age nor expertise). Second, successful innovators envision what success looks like and pursue it single-mindedly – failure is not an option.

    Innovators also divide big challenges into smaller challenges that a small team can feel passionate about and assault on an even shorter timeline than the overall challenge. This requires that you put as much (or more) effort into determining the questions that form the challenges as you do into trying to solve them. Innovators ask big questions that challenge the status quo, such as “How could we generate revenue without taxes?” or “What spending could we avoid and how?” or “How would my son or my grandmother approach this?”

    To solve the challenges, successful innovators recruit people not only with expertise most relevant to the challenge, but also people with expertise in distant specialties, which, in innovation, is often where the best solutions come from.

    But probably most importantly, all nine innovation roles — the revolutionary, the conscript, the connector, the artist, customer champion, troubleshooter, judge, magic maker and evangelist — must be filled for an innovation effort to be successful.

    [div class=attrib]Read the entire article here.[end-div]

    What of the Millennials?

    The hippies of the sixties wanted love; the beatniks sought transcendence. Then came the punks, who were all about rage. The slackers and generation X stood for apathy and worry. And now, coming of age, we have generation Y, also known as the “millennials”, whose birthdays fall roughly between 1982 and 2000.

    A fascinating article by William Deresiewicz, excerpted below, posits the millennials as a “post-emotional” generation. Interestingly, while this generation seems to be fragmented, its members are much more focused on their own “brand identity” than previous generations.

    [div class=attrib]From the New York Times:[end-div]

    EVER since I moved three years ago to Portland, Ore., that hotbed of all things hipster, I’ve been trying to get a handle on today’s youth culture. The style is easy enough to describe — the skinny pants, the retro hats, the wall-to-wall tattoos. But style is superficial. The question is, what’s underneath? What idea of life? What stance with respect to the world?

    So what’s the affect of today’s youth culture? Not just the hipsters, but the Millennial Generation as a whole, people born between the late ’70s and the mid-’90s, more or less — of whom the hipsters are a lot more representative than most of them care to admit. The thing that strikes me most about them is how nice they are: polite, pleasant, moderate, earnest, friendly. Rock ’n’ rollers once were snarling rebels or chest-beating egomaniacs. Now the presentation is low-key, self-deprecating, post-ironic, eco-friendly. When Vampire Weekend appeared on “The Colbert Report” last year to plug their album “Contra,” the host asked them, in view of the title, what they were against. “Closed-mindedness,” they said.

    According to one of my students at Yale, where I taught English in the last decade, a colleague of mine would tell his students that they belonged to a “post-emotional” generation. No anger, no edge, no ego.

    What is this about? A rejection of culture-war strife? A principled desire to live more lightly on the planet? A matter of how they were raised — everybody’s special and everybody’s point of view is valid and everybody’s feelings should be taken care of?

    Perhaps a bit of each, but mainly, I think, something else. The millennial affect is the affect of the salesman. Consider the other side of the equation, the Millennials’ characteristic social form. Here’s what I see around me, in the city and the culture: food carts, 20-somethings selling wallets made from recycled plastic bags, boutique pickle companies, techie start-ups, Kickstarter, urban-farming supply stores and bottled water that wants to save the planet.

    Today’s ideal social form is not the commune or the movement or even the individual creator as such; it’s the small business. Every artistic or moral aspiration — music, food, good works, what have you — is expressed in those terms.

    Call it Generation Sell.

    Bands are still bands, but now they’re little businesses, as well: self-produced, self-published, self-managed. When I hear from young people who want to get off the careerist treadmill and do something meaningful, they talk, most often, about opening a restaurant. Nonprofits are still hip, but students don’t dream about joining one, they dream about starting one. In any case, what’s really hip is social entrepreneurship — companies that try to make money responsibly, then give it all away.

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image: Millennial Momentum, Authors: Morley Winograd and Michael D. Hais, Rutgers University Press.[end-div]

    Book Review: Thinking, Fast and Slow. Daniel Kahneman

    Daniel Kahneman brings together for the first time his decades of groundbreaking research and profound thinking in social psychology and cognitive science in his new book, Thinking, Fast and Slow. He presents his current understanding of judgment and decision making and offers insight into how we make choices in our daily lives. Importantly, Kahneman describes how we can identify and overcome the cognitive biases that frequently lead us astray. This is an important work by one of our leading thinkers.

    [div class=attrib]From Skeptic:[end-div]

    The ideas of the Princeton University Psychologist Daniel Kahneman, recipient of the Nobel Prize in Economic Sciences for his seminal work that challenged the rational model of judgment and decision making, have had a profound and widely regarded impact on psychology, economics, business, law and philosophy. Until now, however, he has never brought together his many years of research and thinking in one book. In the highly anticipated Thinking, Fast and Slow, Kahneman introduces the “machinery of the mind.” Two systems drive the way we think and make choices: System One is fast, intuitive, and emotional; System Two is slower, more deliberative, and more logical. Examining how both systems function within the mind, Kahneman exposes the extraordinary capabilities and also the faults and biases of fast thinking, and the pervasive influence of intuitive impressions on our thoughts and our choices. Kahneman shows where we can trust our intuitions and how we can tap into the benefits of slow thinking. He offers practical and enlightening insights into how choices are made in both our business and personal lives, and how we can guard against the mental glitches that often get us into trouble. Kahneman will change the way you think about thinking.

    [div class=attrib]Image: Thinking, Fast and Slow, Daniel Kahneman. Courtesy of Publishers Weekly.[end-div]

    MondayPoem: First Thanksgiving

    A chronicler of the human condition and deeply personal emotion, poet Sharon Olds is no shrinking violet. Her contemporary poems have been both highly praised and condemned for their explicit frankness and intimacy.

    [div class=attrib]From Poetry Foundation:[end-div]

    In her Salon interview, Olds addressed the aims of her poetry. “I think that my work is easy to understand because I am not a thinker. I am not a…How can I put it? I write the way I perceive, I guess. It’s not really simple, I don’t think, but it’s about ordinary things—feeling about things, about people. I’m not an intellectual. I’m not an abstract thinker. And I’m interested in ordinary life.” She added that she is “not asking a poem to carry a lot of rocks in its pockets. Just being an ordinary observer and liver and feeler and letting the experience get through you onto the notebook with the pen, through the arm, out of the body, onto the page, without distortion.”

    Olds has won numerous awards for her work, including fellowships from the Guggenheim Foundation and the National Endowment for the Arts. Widely anthologized, her work has also been published in a number of journals and magazines. She was New York State Poet from 1998 to 2000, and currently teaches in the graduate writing program at New York University.

    By Sharon Olds

    – First Thanksgiving

    When she comes back, from college, I will see
    the skin of her upper arms, cool,
    matte, glossy. She will hug me, my old
    soupy chest against her breasts,
    I will smell her hair! She will sleep in this apartment,
    her sleep like an untamed, good object,
    like a soul in a body. She came into my life the
    second great arrival, after him, fresh
    from the other world—which lay, from within him,
    within me. Those nights, I fed her to sleep,
    week after week, the moon rising,
    and setting, and waxing—whirling, over the months,
    in a slow blur, around our planet.
    Now she doesn’t need love like that, she has
    had it. She will walk in glowing, we will talk,
    and then, when she’s fast asleep, I’ll exult
    to have her in that room again,
    behind that door! As a child, I caught
    bees, by the wings, and held them, some seconds,
    looked into their wild faces,
    listened to them sing, then tossed them back
    into the air—I remember the moment the
    arc of my toss swerved, and they entered
    the corrected curve of their departure.

    [div class=attrib]Image: Sharon Olds. Courtesy of squawvalleywriters.org.[end-div]

    The Adaptive Soundscape: Muzak and the Social Network DJ

    Recollect the piped “Muzak” that once played, and still plays, in many hotel elevators and public waiting rooms. Remember the perfectly designed mood music in restaurants and museums. Now, re-imagine the ambient soundscape dynamically customized for a space based on the music preferences of the people inhabiting that space. Well, there is a growing list of apps for that.

    [div class=attrib]From Wired:[end-div]

    This idea of having environments automatically reflect the predilections of those who inhabit them seems like the stuff of science fiction, but it’s already established fact, though not many people likely realize it yet.

    Let me explain. You know how most of the music services we listen to these days “scrobble” what we hear to Facebook and/or Last.fm? Well, outside developers can access that information — with your permission, of course — in order to shape their software around your taste.

    At the moment, most developers of Facebook-connected apps we’ve spoken with are able to mine your Likes (when you “like” something on Facebook) and profile information (when you add a band, book, movie, etc. as a favorite thing within your Facebook profile).

    However, as we recently confirmed with a Facebook software developer (who was not speaking for Facebook at the time but as an independent developer in his free time), third-party software developers can also access your listening data — each song you’ve played in any Facebook-connected music service and possibly what your friends listened to as well. Video plays and news article reads are also counted, if those sources are connected to Facebook.

    Don’t freak out — you have to give these apps permission to harvest this data. But once you do, they can start building their service using information about what you listened to in another service.

    Right now, this is starting to happen in the world of software (if I listen to “We Ah Wi” by Javelin on MOG, Spotify can find out if I give them permission to do so). Soon, due to mobile devices’ locational awareness — also opt-in — these preferences will leech into the physical world.
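    To make the mechanics concrete, here is a minimal sketch of the kind of opt-in harvesting described above, written against Facebook’s Graph API circa 2011. Treat the details as assumptions rather than gospel: the music.listens endpoint reflects the Open Graph music actions of that era and may have changed, the response shape is a guess for illustration, and ACCESS_TOKEN stands in for whatever token an app obtained through the user’s permission dialog.

```python
import json
from urllib.request import urlopen

# Hypothetical token, obtained through the app's opt-in permission flow.
ACCESS_TOKEN = "USER_GRANTED_TOKEN"

def fetch_listens(token):
    """Pull the user's published listening actions from the Graph API.

    music.listens was the Open Graph action published by connected music
    services (Spotify, MOG, etc.); the response shape assumed below is a
    plausible sketch, not documented fact.
    """
    url = "https://graph.facebook.com/me/music.listens?access_token=" + token
    with urlopen(url) as resp:
        return json.load(resp).get("data", [])

def top_songs(listens, n=10):
    """Tally the most frequently played songs across the listen actions."""
    counts = {}
    for action in listens:
        title = action.get("data", {}).get("song", {}).get("title", "unknown")
        counts[title] = counts.get(title, 0) + 1
    return sorted(counts.items(), key=lambda kv: -kv[1])[:n]

if __name__ == "__main__":
    for title, plays in top_songs(fetch_listens(ACCESS_TOKEN)):
        print(plays, title)
```

    A venue app could run the same tally over every opted-in patron in the room and hand the merged ranking to whatever is driving the speakers.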

    I’m talking about the kids who used to sit around on the quad listening to that station. The more interesting option for mainstream users is music selections that automatically shift in response to the people in the room. The new DJs? Well, they will simply be the social butterflies who are most permissive with their personal information.

    Here are some more apps for real-world locations that can adapt music based on the preferences of these social butterflies:

    Crowdjuke: Winner of an MTV O Music Award for “best music hack,” this web app pulls the preferences of people who have RSVPed to an event and creates the perfect playlist for that group. Attendees can also add specific tracks using a mobile app or even text messaging from a “dumb” phone.

    Automatic DJ: Talk about science fiction; this one lets people DJ a party merely by having their picture taken at it.

    AudioVroom: This iPhone app (also with a new web version) makes a playlist that reflects two users’ tastes when they meet in real life. There’s no venue-specific version of this, but there could be (see also: Myxer).

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image: Elevator Music. A Surreal History of Muzak, Easy-Listening, and Other Moodsong; Revised and Expanded Edition. Courtesy of the University of Michigan Press.[end-div]

    The Nation’s $360 Billion Medical Bill

    The United States spends around $2.5 trillion per year on health care. Approximately 14 percent of this is administrative spending. That’s $360 billion, yes, billion with a ‘b’, annually. And, by all accounts, a significant proportion of this huge sum is duplicate, redundant, wasteful and unnecessary spending — that’s a lot of paperwork.

    [div class=attrib]From the New York Times:[end-div]


    LAST year I had to have a minor biopsy. Every time I went in for an appointment, I had to fill out a form requiring my name, address, insurance information, emergency contact person, vaccination history, previous surgical history and current medical problems, medications and allergies. I must have done it four times in just three days. Then, after my procedure, I received bills — and, even more annoying, statements of charges that said they weren’t bills — almost daily, from the hospital, the surgeon, the primary care doctor, the insurance company.

    Imagine that repeated millions of times daily and you have one of the biggest money wasters in our health care system. Administration accounts for roughly 14 percent of what the United States spends on health care, or about $360 billion per year. About half of all administrative costs — $163 billion in 2009 — are borne by Medicare, Medicaid and insurance companies. The other half pays for the legions employed by doctors and hospitals to fill out billing forms, keep records, apply for credentials and perform the myriad other administrative functions associated with health care.

    The range of expert opinions on how much of this could be saved goes as high as $180 billion, or half of current expenditures. But a more conservative and reasonable estimate comes from David Cutler, an economist at Harvard, who calculates that for the whole system — for insurers as well as doctors and hospitals — electronic billing and credentialing could save $32 billion a year. And United Health comes to a similar estimate, with 20 percent of savings going to the government, 50 percent to physicians and hospitals and 30 percent to insurers. For health care cuts to matter, they have to be above 1 percent of total costs, or $26 billion a year, and this conservative estimate certainly meets that threshold.

    How do we get to these savings? First, electronic health records would eliminate the need to fill out the same forms over and over. An electronic credentialing system shared by all hospitals, insurance companies, Medicare, Medicaid, state licensing boards and other government agencies, like the Drug Enforcement Administration, could reduce much of the paperwork doctors are responsible for that patients never see. Requiring all parties to use electronic health records and an online system for physician credentialing would reduce frustration and save billions.

    But the real savings is in billing. There are at least six steps in the process: 1) determining a patient’s eligibility for services; 2) obtaining prior authorization for specialist visits, tests and treatments; 3) submitting claims by doctors and hospitals to insurers; 4) verifying whether a claim was received and where in the process it is; 5) adjudicating denials of claims; and 6) receiving payment.

    Substantial costs arise from the fact that doctors, hospitals and other care providers must bill multiple insurance companies. Instead of having a unified electronic billing system in which a patient could simply swipe an A.T.M.-like card for automatic verification of eligibility, claims processing and payment, we have a complicated system with lots of expensive manual data entry that produces costly mistakes.

    [div class=attrib]Read more of this article here.[end-div]

    [div class=attrib]Image: Piles of paperwork. Courtesy of the Guardian.[end-div]

    Definition of Technocrat

    The unfolding financial crises and political upheavals in Europe have claimed several casualties, notably the leaders and governments of both Greece and Italy. Both have been replaced by so-called “technocrats”. So, what is a technocrat, and why? Slate explains.

    [div class=attrib]From Slate:[end-div]

    Lucas Papademos was sworn in as the new prime minister of Greece Friday morning. In Italy, it’s expected that Silvio Berlusconi will be replaced by former EU commissioner Mario Monti. Both men have been described as “technocrats” in major newspapers. What, exactly, is a technocrat?

    An expert, not a politician. Technocrats make decisions based on specialized information rather than public opinion. For this reason, they are sometimes called upon when there’s no popular or easy solution to a problem (like, for example, the European debt crisis). The word technocrat derives from the Greek tekhne, meaning skill or craft, and an expert in a field like economics can be as much a technocrat as one in a field more commonly thought to be technological (like robotics). Both Papademos and Monti hold advanced degrees in economics, and have each held appointments at government institutions.

    The word technocrat can also refer to an advocate of a form of government in which experts preside. The notion of a technocracy remains mostly hypothetical, though some nations have been considered as such in the sense of being governed primarily by technical experts. Historian Walter A. McDougall argued that the Soviet Union was the world’s first technocracy, and indeed its Politburo included an unusually high proportion of engineers. Other nations, including Italy and Greece, have undergone some short periods under technocratic regimes. Carlo Azeglio Ciampi, formerly an economist and central banker, served as prime minister of Italy from 1993 to 1994. Economist and former Bank of Greece director Xenophon Zolotas served as Prime Minister of Greece from 1989 to 1990.

    In the United States, technocracy was most popular in the early years of the Great Depression. Inspired in part by the ideas of economist Thorstein Veblen, the movement was led by engineer Howard Scott, who proposed radical utopian ideas and solutions to the economic disaster in scientific language. His movement, founded in 1932, drew national interest—the New York Times was the first major news organization to report the phenomenon, and the Literary Digest declared, “Technocracy is all the rage. All over the country it is being talked about, explained, wondered at, praised, damned. It is found about as easy to explain … as the Einstein theory of relativity.” A year later, it had mostly flamed out. No popular Technocratic party exists in the United States today, but Scott’s organization, called Technocracy Incorporated, persists in drastically reduced form.

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image: Mario Monti. Courtesy of Daily Telegraph.[end-div]

    The Infant Universe

    Long before the first galaxy clusters and the first galaxies appeared in our universe, and before the first stars, came the first basic elements — hydrogen, helium and lithium.

    Results from a just-published study identify these raw materials from what is theorized to be the universe’s first few minutes of existence.

    [div class=attrib]From Scientific American:[end-div]

    By peering into the distance with the biggest and best telescopes in the world, astronomers have managed to glimpse exploding stars, galaxies and other glowing cosmic beacons as they appeared just hundreds of millions of years after the big bang. They are so far away that their light is only now reaching Earth, even though it was emitted more than 13 billion years ago.

    Astronomers have been able to identify those objects in the early universe because their bright glow has remained visible even after a long, universe-spanning journey. But spotting the raw materials from which the first cosmic structures formed—the gas produced as the infant universe expanded and cooled in the first few minutes after the big bang—has not been possible. That material is not itself luminous, and everywhere astronomers have looked they have found not the primordial light-element gases hydrogen, helium and lithium from the big bang but rather material polluted by heavier elements, which form only in stellar interiors and in cataclysms such as supernovae.

    Now a group of researchers reports identifying the first known pockets of pristine gas, two relics of those first minutes of the universe’s existence. The team found a pair of gas clouds that contain no detectable heavy elements whatsoever by looking at distant quasars and the intervening material they illuminate. Quasars are bright objects, each powered by a ravenous black hole, and the spectral quality of their light reveals what it passed through on its way to Earth, in much the same way that the lamp of a projector casts the colors of film onto a screen. The findings appeared online November 10 in Science.

    “We found two gas clouds that show a significant abundance of hydrogen, so we know that they are there,” says lead study author Michele Fumagalli, a graduate student at the University of California, Santa Cruz. One of the clouds also shows traces of deuterium, also known as heavy hydrogen, the nucleus of which contains not only a proton, as ordinary hydrogen does, but also a neutron. Deuterium should have been produced in big bang nucleosynthesis but is easily destroyed, so its presence is indicative of a pristine environment. The amount of deuterium present agrees with theoretical predictions about the mixture of elements that should have emerged from the big bang. “But we don’t see any trace of heavier elements like carbon, oxygen and iron,” Fumagalli says. “That’s what tells us that this is primordial gas.”

    The newfound gas clouds, as Fumagalli and his colleagues see them, existed about two billion years after the big bang, at an epoch of cosmic evolution known as redshift 3. (Redshift is a sort of cosmological distance measure, corresponding to the degree that light waves have been stretched on their trip across an expanding universe.) By that time the first generation of stars, initially comprising only the primordial light elements, had formed and were distributing the heavier elements they forged via nuclear fusion reactions into interstellar space.
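    For readers who want the bookkeeping behind that parenthetical: redshift z is defined by how much a wavelength is stretched in transit, a standard textbook relation rather than anything specific to this study:

    \[ 1 + z = \frac{\lambda_{\mathrm{observed}}}{\lambda_{\mathrm{emitted}}} \]

    So at redshift 3, every wavelength arrives stretched by a factor of four.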

    But the new study shows that some nooks of the universe remained pristine long after stars had begun to spew heavy elements. “They have looked for these special corners of the universe, where things just haven’t been polluted yet,” says Massachusetts Institute of Technology astronomer Rob Simcoe, who did not contribute to the new study. “Everyplace else that we’ve looked in these environments, we do find these heavy elements.”

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image: Simulation by Ceverino, Dekel and Primack. Courtesy of Scientific American.[end-div]

    One Pale Blue Dot, 55 Languages and 11 Billion Miles

    It was Carl Sagan’s birthday last week (November 9, to be precise). He would have been 77 years old — he returned to “star-stuff” in 1996. Thoughts of this charming astronomer and cosmologist reminded us of a project with which he was intimately involved — the Voyager program.

    In 1977, NASA launched two spacecraft to explore Jupiter and Saturn. The spacecraft performed so well that their missions were extended several times: first, to journey farther into the outer reaches of our solar system and explore the planets Uranus and Neptune; and second, to fly beyond our solar system into interstellar space. By all accounts, both craft are now close to this boundary. The farther of the two, Voyager 1, is currently over 11 billion miles away. For a real-time check on its distance, visit JPL’s Voyager site here. JPL is NASA’s Jet Propulsion Lab in Pasadena, CA.
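    To put 11 billion miles in perspective, a short back-of-the-envelope calculation in Python converts the distance into one-way light (and radio) travel time; the mileage is the figure quoted above, and the constants are standard:

    # One-way light travel time from Voyager 1, using the 11-billion-mile figure above.
    MILES_TO_METERS = 1609.344       # meters per international mile
    SPEED_OF_LIGHT = 299_792_458.0   # meters per second

    distance_m = 11e9 * MILES_TO_METERS
    hours = distance_m / SPEED_OF_LIGHT / 3600
    print(f"One-way light time: {hours:.1f} hours")   # roughly 16.4 hours

    In other words, a command sent to Voyager 1 and its acknowledgment take well over a day to make the round trip.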

    Some may recall that Carl Sagan presided over the selection and installation of content from Earth onto a gold-plated disk that each Voyager carries on its continuing mission. The disk contains symbolic explanations of our planet and solar system, as well as images of its inhabitants and greetings spoken in 55 languages. After much wrangling over concerns about damaging Voyager’s imaging instruments by peering back at the Sun, Sagan was instrumental in having NASA reorient Voyager 1’s camera back toward Earth. This enabled the craft to snap one last set of images of our planet from its vantage point in deep space. One poignant image became known as the “Pale Blue Dot”, and Sagan penned some characteristically eloquent and philosophical words about it in his book, Pale Blue Dot: A Vision of the Human Future in Space.

    [div class=attrib]From Carl Sagan:[end-div]

    From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Look again at that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

    [div class=attrib]About the image from NASA:[end-div]

    From Voyager’s great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters – violet, blue and green – and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification.

    To ease identification we have drawn a gray circle around the image of the Earth.

    [div class=attrib]Image courtesy of NASA / JPL.[end-div]

    Growing Complex Organs From Scratch

    In early 2010 a Japanese research team grew retina-like structures from a culture of mouse embryonic stem cells. Now, only a year later, the same team at the RIKEN Center for Developmental Biology has announced success in growing a much more complex structure by a similar process: a mouse pituitary gland. This is seen as another major step toward bioengineering replacement organs for human transplantation.

    [div class=attrib]From Technology Review:[end-div]

    The pituitary gland is a small organ at the base of the brain that produces many important hormones and is a key part of the body’s endocrine system. It’s especially crucial during early development, so the ability to simulate its formation in the lab could help researchers better understand how these developmental processes work. Disruptions in the pituitary have also been associated with growth disorders, such as gigantism, and vision problems, including blindness.

    The study, published in this week’s Nature, moves the medical field even closer to being able to bioengineer complex organs for transplant in humans.

    The experiment wouldn’t have been possible without a three-dimensional cell culture. The pituitary gland is an independent organ, but it can’t develop without chemical signals from the hypothalamus, the brain region that sits just above it. With a three-dimensional culture, the researchers could grow both types of tissue together, allowing the stem cells to self-assemble into a mouse pituitary. “Using this method, we could mimic the early mouse development more smoothly, since the embryo develops in 3-D in vivo,” says Yoshiki Sasai, the lead author of the study.

    The researchers had a vague sense of the signaling factors needed to form a pituitary gland, but they had to figure out the exact components and sequence through trial and error. The winning combination consisted of two main steps, which required the addition of two growth factors and a drug to stimulate a developmental protein called sonic hedgehog (named after the video game character). After about two weeks, the researchers had a structure that resembled a pituitary gland.

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]New gland: After 13 days in culture, mouse embryonic stem cells had self-assembled the precursor pouch, shown here, that gives rise to the pituitary gland. Image courtesy of Technology Review / Nature.[end-div]

    Why Do We Overeat? Supersizing and Social Status

    [div class=attrib]From Wired:[end-div]

    Human beings are notoriously terrible at knowing when we’re no longer hungry. Instead of listening to our stomach – a very stretchy container – we rely on all sorts of external cues, from the circumference of the dinner plate to the dining habits of those around us. If the serving size is twice as large (and American serving sizes have grown 40 percent in the last 25 years), we’ll still polish it off. And then we’ll go have dessert.

    Consider a clever study done by Brian Wansink, a professor of marketing at Cornell. He used a bottomless bowl of soup – there was a secret tube that kept on refilling the bowl with soup from below – to demonstrate that how much people eat is largely dependent on how much you give them. The group with the bottomless bowl ended up consuming nearly 70 percent more than the group with normal bowls. What’s worse, nobody even noticed that they’d just slurped far more soup than normal.

    Or look at this study, done in 2006 by psychologists at the University of Pennsylvania. One day, they left out a bowl of chocolate M&M’s in an upscale apartment building. Next to the bowl was a small scoop. The following day, they refilled the bowl with M&M’s but placed a much larger scoop beside it. The result would not surprise anyone who has ever finished a Big Gulp soda or a supersized serving of McDonald’s fries: when the scoop size was increased, people took 66 percent more M&M’s. Of course, they could have taken just as many candies on the first day; they simply would have had to take a few more scoops. But just as larger serving sizes cause us to eat more, the larger scoop made the residents more gluttonous.

    Serving size isn’t the only variable influencing how much we consume. As M.F.K. Fisher noted, eating is a social activity, intermingled with many of our deeper yearnings and instincts. And this leads me to a new paper by David Dubois, Derek Rucker and Adam Galinsky, psychologists at HEC Paris and the Kellogg School of Management. The question they wanted to answer is why people opt for bigger serving sizes. If we know that we’re going to have a tough time not eating all those French fries, then why do we insist on ordering them? What drives us to supersize?

    The hypothesis of Galinsky et al. is that supersizing is a subtle marker of social status.

    Needless to say, this paper captures a tragic dynamic behind overeating. It appears that one of the factors causing us to consume too much food is a lack of social status, as we try to elevate ourselves by supersizing meals. Unfortunately, this only leads to rampant weight gain which, as the researchers note, “jeopardizes future rank through the accompanying stigma of being overweight.” In other words, it’s a sad feedback loop of obesity, a downward spiral of bigger serving sizes that diminish the very status we’re trying to increase.

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Super Size Me movie. Image courtesy of Wikipedia.[end-div]

    MondayPoem: Voyager

    Poet, essayist and playwright Todd Hearon grew up in North Carolina. He earned a PhD in editorial studies from Boston University. He has won a number of national poetry and playwriting awards, including the 2007 Friends of Literature Prize and a Dobie Paisano Fellowship from the University of Texas at Austin.

    By Todd Hearon

    – Voyager

    We’ve packed our bags, we’re set to fly
    no one knows where, the maps won’t do.
    We’re crossing the ocean’s nihilistic blue
    with an unborn infant’s opal eye.

    It has the clarity of earth and sky
    seen from a spacecraft, once removed,
    as through an amniotic lens, that groove-
    lessness of space, the last star by.

    We have set out to live and die
    into the interstices of a new
    nowhere to be or be returning to

    (a little like an infant’s airborne cry).
    We’ve set our sights on nothing left to lose
    and made of loss itself a lullaby.

    [div class=attrib]Todd Hearon. Image courtesy of Boston University.[end-div]

    Kodak: The Final Picture?

    If you’re over 30, you may still recall shooting roll film with an analog camera. If so, it’s likely you used a product, such as Kodachrome, manufactured by Eastman Kodak. The company was founded by George Eastman in 1892; Eastman invented roll film and helped make photography a mainstream pursuit.

    Kodak had been synonymous with photography for around 100 years. In recent years, however, it failed to change gears during the shift to digital media; it finally ceased production of Kodachrome in 2009. While other companies, such as Nikon and Canon, managed the transition to a digital world, Kodak failed to anticipate and capitalize. Now the company is struggling for survival.

    [div class=attrib]From Wired:[end-div]

    Eastman Kodak Co. is hemorrhaging money, the latest Polaroid to be wounded by the sweeping collapse of the market for analog film.

    In a filing with the Securities and Exchange Commission, Kodak reported that it needs to generate more money from its patent portfolio or raise cash by selling debt.

    Kodak has tried to recalibrate operations around printing as sales of film and cameras steadily decline, but it appears as though its efforts have been fruitless: in Q3 of last year, Kodak reported it had $1.4 billion in cash; it ended the same quarter this year with just $862 million, 10 percent less than the quarter before.
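    The two percentages in play there measure different things; a quick check in Python separates them (the Q2 2011 balance is inferred from the quoted 10 percent figure, not reported in the excerpt):

    # Sanity-checking the cash figures quoted above (all in $ millions).
    q3_2010 = 1400.0
    q3_2011 = 862.0

    yoy_decline = 1 - q3_2011 / q3_2010
    print(f"Year-over-year decline: {yoy_decline:.0%}")       # -> 38%

    # "10 percent less than the quarter before" implies roughly:
    implied_q2_2011 = q3_2011 / 0.9
    print(f"Implied Q2 2011 cash: ~${implied_q2_2011:.0f}M")  # -> ~$958M

    So the cash pile fell about 38 percent year over year, while the 10 percent figure refers to the sequential quarter-to-quarter drop.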

    Recently, the patent suits have been a crutch for the crumbling company, adding reliable revenue to the shrinking pot. But this year the proceeds from this sadly demeaning revenue stream just didn’t pan out. With sales down 17 percent, this money is critical, given the amount of cash being spent on restructuring lawyers and continued production.

    Though the company has no plans to seek bankruptcy, one thing is clear: Kodak’s future depends on its ability to turn its intellectual property into profit, no matter the method.

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image courtesy of Wired.[end-div]

    Lifecycle of a Webpage

    If you’ve ever “stumbled”, as in used the popular and addictive website StumbleUpon, the infographic below is for you. The service is a great way to broaden one’s exposure to related ideas and make serendipitous discoveries.

    Interestingly, the typical attention span of a StumbleUpon user seems to be much longer than that of the average Facebook user.

    [div class=attrib]Infographic courtesy of Column Five Media.[end-div]

    Offshoring and Outsourcing of Innovation

    A fascinating article over at the Wall Street Journal contemplates the demise of innovation in the United States. It’s no surprise where the article suggests innovation is heading: China.

    [div class=attrib]From the Wall Street Journal:[end-div]

    At a recent business dinner, the conversation about intellectual-property theft in China was just getting juicy when an executive with a big U.S. tech company leaned forward and said confidently: “This isn’t such a problem for us because we plan on innovating new products faster than the Chinese can steal the old ones.”

    That’s a solution you often hear from U.S. companies: The U.S. will beat the Chinese at what the U.S. does best—innovation—because China’s bureaucratic, state-managed capitalism can’t master it.

    The problem is, history isn’t on the side of that argument, says Niall Ferguson, an economic historian whose new book, “Civilization: The West and the Rest,” was published this week. Mr. Ferguson, who teaches at Harvard Business School, says China and the rest of Asia have assimilated much of what made the West successful and are now often doing it better.

    “I’ve stopped believing that there’s some kind of cultural defect that makes the Chinese incapable of innovating,” he says. “They’re going to have the raw material of better educated kids that ultimately drives innovation.”

    Andrew Liveris, the chief executive of Dow Chemical, has pounded this drum for years, describing what he sees as a drift in engineering and manufacturing acumen from the West to Asia. “Innovation has followed manufacturing to China,” he told a group at the Wharton Business School recently.

    “Over time, when companies decide where to build R&D facilities, it will make more and more sense to do things like product support, upgrades and next-generation design in the same place where the product is made,” he said. “That is one reason why Dow has 500 Chinese scientists working in China, earning incredibly good money, and who are already generating more patents per scientist than our other locations.”

    For a statistical glimpse of this accretion at work, read the World Economic Forum’s latest annual competitiveness index, which ranks countries by a number of economic criteria. For the third year in a row, the U.S. has slipped and China has crept up. To be sure, the U.S. still ranks fifth in the world and China is a distant 26th, but the gap is slowly closing.

    [div class=attrib]Read the entire article here.[end-div]

    The Evils of Television

    Much has been written on the subject of television. Its effects on our culture in general and on the young minds of our children in particular have been studied and documented for decades. Increased levels of violence, the obesity epidemic, social fragmentation, vulgarity and voyeurism, caustic politics, poor attention span — all of these have been linked, at some time or other, to that little black box in the corner (increasingly, the big flat space above the mantle).

    In his article “A Nation of Vidiots,” Jeffrey D. Sachs weighs in on the subject.

    [div class=attrib]From Project Syndicate:[end-div]

    The past half-century has been the age of electronic mass media. Television has reshaped society in every corner of the world. Now an explosion of new media devices is joining the TV set: DVDs, computers, game boxes, smart phones, and more. A growing body of evidence suggests that this media proliferation has countless ill effects.

    The United States led the world into the television age, and the implications can be seen most directly in America’s long love affair with what Harlan Ellison memorably called “the glass teat.” In 1950, fewer than 8% of American households owned a TV; by 1960, 90% had one. That level of penetration took decades longer to achieve elsewhere, and the poorest countries are still not there.

    True to form, Americans became the greatest TV watchers, which is probably still true today, even though the data are somewhat sketchy and incomplete. The best evidence suggests that Americans watch more than five hours per day of television on average – a staggering amount, given that several hours more are spent in front of other video-streaming devices. Other countries log far fewer viewing hours. In Scandinavia, for example, time spent watching TV is roughly half the US average.

    The consequences for American society are profound, troubling, and a warning to the world – though it probably comes far too late to be heeded. First, heavy TV viewing brings little pleasure. Many surveys show that it is almost like an addiction, with a short-term benefit leading to long-term unhappiness and remorse. Such viewers say that they would prefer to watch less than they do.

    Moreover, heavy TV viewing has contributed to social fragmentation. Time that used to be spent together in the community is now spent alone in front of the screen. Robert Putnam, the leading scholar of America’s declining sense of community, has found that TV viewing is the central explanation of the decline of “social capital,” the trust that binds communities together. Americans simply trust each other less than they did a generation ago. Of course, many other factors are at work, but television-driven social atomization should not be underestimated.

    Certainly, heavy TV viewing is bad for one’s physical and mental health. Americans lead the world in obesity, with roughly two-thirds of the US population now overweight. Again, many factors underlie this, including a diet of cheap, unhealthy fried foods, but the sedentary time spent in front of the TV is an important influence as well.

    At the same time, what happens mentally is as important as what happens physically. Television and related media have been the greatest purveyors and conveyors of corporate and political propaganda in society.

    [div class=attrib]Read more of this article here.[end-div]

    [div class=attrib]Family watching television, c. 1958. Image courtesy of Wikipedia.[end-div]

    The Corporate One Percent of the One Percent

    With the Occupy Wall Street movement and related protests continuing to gather steam, much recent media and public attention has focused on the 1 percent versus the remaining 99 percent of the population. By most accepted estimates, 1 percent of households control around 40 percent of global wealth, and there is a vast discrepancy between the top and bottom of the economic spectrum. While these statistics are telling, a related analysis of corporate wealth, highlighted in New Scientist, shows a much tighter concentration among a very select group of transnational corporations (TNCs).

    [div class=attrib]New Scientist:[end-div]

    An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

    The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

    The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).

    “Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”

    Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.

    The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.

    The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.

    When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
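    In network terms, a “super-entity” whose ownership is held entirely by its own members is a strongly connected component of the ownership graph: every firm can reach every other through a chain of shareholdings. Here is a toy sketch of that idea in Python with networkx; the firm names are invented, and this is not the Orbis dataset or the authors’ actual model:

    # Nodes are firms; an edge A -> B means A holds shares in B.
    import networkx as nx

    G = nx.DiGraph()
    G.add_edges_from([
        ("BankA", "BankB"), ("BankB", "BankC"), ("BankC", "BankA"),  # ownership loops back on itself
        ("BankA", "WidgetCo"), ("BankC", "GadgetCo"),                # the core also owns "real economy" firms
        ("PensionFund", "BankA"),                                    # an outside shareholder
    ])

    # The tightly knit core corresponds to the largest strongly connected component.
    core = max(nx.strongly_connected_components(G), key=len)
    print(sorted(core))   # -> ['BankA', 'BankB', 'BankC']

    The study goes further, weighting each link by the size of the holding and coupling the network with operating revenues to estimate control; the sketch above captures only the connectivity that defines the core.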

    [div class=attrib]Read the entire article here.[end-div]

    [div class=attrib]Image courtesy of New Scientist / PLoS One. The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue.[end-div]