All posts by Mike

The Infant Universe

Long before the first galaxy clusters and the first galaxies appeared in our universe, and before the first stars, came the first basic elements — hydrogen, helium and lithium.

Results from a just-published study identify these raw materials from what is theorized to be the universe’s first few minutes of existence.

[div class=attrib]From Scientific American:[end-div]

By peering into the distance with the biggest and best telescopes in the world, astronomers have managed to glimpse exploding stars, galaxies and other glowing cosmic beacons as they appeared just hundreds of millions of years after the big bang. They are so far away that their light is only now reaching Earth, even though it was emitted more than 13 billion years ago.

Astronomers have been able to identify those objects in the early universe because their bright glow has remained visible even after a long, universe-spanning journey. But spotting the raw materials from which the first cosmic structures formed—the gas produced as the infant universe expanded and cooled in the first few minutes after the big bang—has not been possible. That material is not itself luminous, and everywhere astronomers have looked they have found not the primordial light-element gases hydrogen, helium and lithium from the big bang but rather material polluted by heavier elements, which form only in stellar interiors and in cataclysms such as supernovae.

Now a group of researchers reports identifying the first known pockets of pristine gas, two relics of those first minutes of the universe’s existence. The team found a pair of gas clouds that contain no detectable heavy elements whatsoever by looking at distant quasars and the intervening material they illuminate. Quasars are bright objects powered by a ravenous black hole, and the spectral quality of their light reveals what it passed through on its way to Earth, in much the same way that the lamp of a projector casts the colors of film onto a screen. The findings appeared online November 10 in Science.

“We found two gas clouds that show a significant abundance of hydrogen, so we know that they are there,” says lead study author Michele Fumagalli, a graduate student at the University of California, Santa Cruz. One of the clouds also shows traces of deuterium, also known as heavy hydrogen, the nucleus of which contains not only a proton, as ordinary hydrogen does, but also a neutron. Deuterium should have been produced in big bang nucleosynthesis but is easily destroyed, so its presence is indicative of a pristine environment. The amount of deuterium present agrees with theoretical predictions about the mixture of elements that should have emerged from the big bang. “But we don’t see any trace of heavier elements like carbon, oxygen and iron,” Fumagalli says. “That’s what tells us that this is primordial gas.”

The newfound gas clouds, as Fumagalli and his colleagues see them, existed about two billion years after the big bang, at an epoch of cosmic evolution known as redshift 3. (Redshift is a sort of cosmological distance measure, corresponding to the degree that light waves have been stretched on their trip across an expanding universe.) By that time the first generation of stars, initially comprising only the primordial light elements, had formed and were distributing the heavier elements they forged via nuclear fusion reactions into interstellar space.

But the new study shows that some nooks of the universe remained pristine long after stars had begun to spew heavy elements. “They have looked for these special corners of the universe, where things just haven’t been polluted yet,” says Massachusetts Institute of Technology astronomer Rob Simcoe, who did not contribute to the new study. “Everyplace else that we’ve looked in these environments, we do find these heavy elements.”
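A quick gloss on the redshift figure quoted above, for readers who want the standard definition: redshift measures how much a light wave has been stretched between emission and observation,

$$ 1 + z \;=\; \frac{\lambda_{\text{observed}}}{\lambda_{\text{emitted}}}, $$

so at redshift 3 the light from these pristine clouds arrives with wavelengths four times longer than when it left them. Turning that stretch factor into an age or a distance requires a cosmological model, which is why the excerpt translates redshift 3 into “about two billion years after the big bang” rather than into miles.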

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image: Simulation by Ceverino, Dekel and Primack. Courtesy of Scientific American.[end-div]

One Pale Blue Dot, 55 Languages and 11 Billion Miles

It was Carl Sagan’s birthday last week (November 9, to be precise). He would have been 77 years old — he returned to “star-stuff” in 1996. Thoughts of this charming astronomer and cosmologist reminded us of a project with which he was intimately involved — the Voyager program.

In 1977, NASA launched two spacecraft to explore Jupiter and Saturn. The spacecraft performed so well that their missions were extended several times: first, to journey farther into the outer reaches of our solar system and explore the planets Uranus and Neptune; and second, to fly beyond our solar system into interstellar space. And, by all accounts, both craft are now close to this boundary. The farthest, Voyager 1, is currently over 11 billion miles away. For a real-time check on its distance, visit JPL’s Voyager site here. JPL is NASA’s Jet Propulsion Lab in Pasadena, CA.

Some may recall that Carl Sagan presided over the selection and installation of content from the Earth onto a gold-plated disk that each Voyager carries on its continuing mission. The disk contains symbolic explanations of our planet and solar system, as well as images of its inhabitants and greetings spoken in 55 languages. After much wrangling over concerns about damaging Voyager’s imaging instruments by peering back at the Sun, Sagan was instrumental in having NASA reorient Voyager 1’s camera back towards the Earth. This enabled the craft to snap one last set of images of our planet from its vantage point in deep space. One poignant image became known as the “Pale Blue Dot”, and Sagan penned some characteristically eloquent and philosophical words about this image in his book, Pale Blue Dot: A Vision of the Human Future in Space.

[div class=attrib]From Carl Sagan:[end-div]

From this distant vantage point, the Earth might not seem of any particular interest. But for us, it’s different. Look again at that dot. That’s here, that’s home, that’s us. On it everyone you love, everyone you know, everyone you ever heard of, every human being who ever was, lived out their lives. The aggregate of our joy and suffering, thousands of confident religions, ideologies and economic doctrines, every hunter and forager, every hero and coward, every creator and destroyer of civilization, every king and peasant, every young couple in love, every mother and father, hopeful child, inventor and explorer, every teacher of morals, every corrupt politician, every “superstar,” every “supreme leader,” every saint and sinner in the history of our species lived there – on a mote of dust suspended in a sunbeam.

[div class=attrib]About the image from NASA:[end-div]

From Voyager’s great distance Earth is a mere point of light, less than the size of a picture element even in the narrow-angle camera. Earth was a crescent only 0.12 pixel in size. Coincidentally, Earth lies right in the center of one of the scattered light rays resulting from taking the image so close to the sun. This blown-up image of the Earth was taken through three color filters – violet, blue and green – and recombined to produce the color image. The background features in the image are artifacts resulting from the magnification.

To ease identification we have drawn a gray circle around the image of the Earth.

[div class=attrib]Image courtesy of NASA / JPL.[end-div]

Growing Complex Organs From Scratch

In early 2010 a Japanese research team grew retina-like structures from a culture of mouse embryonic stem cells. Now, only a year later, the same team at the RIKEN Center for Developmental Biology announced their success in growing a much more complex structure following a similar process — a mouse pituitary gland. This is seen as another major step towards bioengineering replacement organs for human transplantation.

[div class=attrib]From Technology Review:[end-div]

The pituitary gland is a small organ at the base of the brain that produces many important hormones and is a key part of the body’s endocrine system. It’s especially crucial during early development, so the ability to simulate its formation in the lab could help researchers better understand how these developmental processes work. Disruptions in the pituitary have also been associated with growth disorders, such as gigantism, and vision problems, including blindness.

The study, published in this week’s Nature, moves the medical field even closer to being able to bioengineer complex organs for transplant in humans.

The experiment wouldn’t have been possible without a three-dimensional cell culture. The pituitary gland is an independent organ, but it can’t develop without chemical signals from the hypothalamus, the brain region that sits just above it. With a three-dimensional culture, the researchers could grow both types of tissue together, allowing the stem cells to self-assemble into a mouse pituitary. “Using this method, we could mimic the early mouse development more smoothly, since the embryo develops in 3-D in vivo,” says Yoshiki Sasai, the lead author of the study.

The researchers had a vague sense of the signaling factors needed to form a pituitary gland, but they had to figure out the exact components and sequence through trial and error. The winning combination consisted of two main steps, which required the addition of two growth factors and a drug to stimulate a developmental protein called sonic hedgehog (named after the video game). After about two weeks, the researchers had a structure that resembled a pituitary gland.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]New gland: After 13 days in culture, mouse embryonic stem cells had self-assembled the precursor pouch, shown here, that gives rise to the pituitary gland. Image courtesy of Technology Review / Nature.[end-div]

Why Do We Overeat? Supersizing and Social Status

[div class=attrib]From Wired:[end-div]

Human beings are notoriously terrible at knowing when we’re no longer hungry. Instead of listening to our stomach – a very stretchy container – we rely on all sorts of external cues, from the circumference of the dinner plate to the dining habits of those around us. If the serving size is twice as large (and American serving sizes have grown 40 percent in the last 25 years), we’ll still polish it off. And then we’ll go have dessert.

Consider a clever study done by Brian Wansink, a professor of marketing at Cornell. He used a bottomless bowl of soup – there was a secret tube that kept on refilling the bowl with soup from below – to demonstrate that how much people eat is largely dependent on how much you give them. The group with the bottomless bowl ended up consuming nearly 70 percent more than the group with normal bowls. What’s worse, nobody even noticed that they’d just slurped far more soup than normal.

Or look at this study, done in 2006 by psychologists at the University of Pennsylvania. One day, they left out a bowl of chocolate M&M’s in an upscale apartment building. Next to the bowl was a small scoop. The following day, they refilled the bowl with M&M’s but placed a much larger scoop beside it. The result would not surprise anyone who has ever finished a Big Gulp soda or a supersized serving of McDonald’s fries: when the scoop size was increased, people took 66 percent more M&M’s. Of course, they could have taken just as many candies on the first day; they simply would have had to take a few more scoops. But just as larger serving sizes cause us to eat more, the larger scoop made the residents more gluttonous.

Serving size isn’t the only variable influencing how much we consume. As M.F.K. Fisher noted, eating is a social activity, intermingled with many of our deeper yearnings and instincts. And this leads me to a new paper by David Dubois, Derek Ruckner and Adam Galinsky, psychologists at HEC Paris and the Kellogg School of Management. The question they wanted to answer is why people opt for bigger serving sizes. If we know that we’re going to have a tough time not eating all those French fries, then why do we insist on ordering them? What drives us to supersize?

The hypothesis of Galinsky et al. is that supersizing is a subtle marker of social status.

Needless to say, this paper captures a tragic dynamic behind overeating. It appears that one of the factors causing us to consume too much food is a lack of social status, as we try to elevate ourselves by supersizing meals. Unfortunately, this only leads to rampant weight gain which, as the researchers note, “jeopardizes future rank through the accompanying stigma of being overweight.” In other words, it’s a sad feedback loop of obesity, a downward spiral of bigger serving sizes that diminish the very status we’re trying to increase.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Super Size Me movie. Image courtesy of Wikipedia.[end-div]

MondayPoem: Voyager

Poet, essayist and playwright Todd Hearon grew up in North Carolina. He earned a PhD in editorial studies from Boston University. He is the winner of a number of national poetry and playwriting awards, including the 2007 Friends of Literature Prize and a Dobie Paisano Fellowship from the University of Texas at Austin.

By Todd Hearon

– Voyager

We’ve packed our bags, we’re set to fly
no one knows where, the maps won’t do.
We’re crossing the ocean’s nihilistic blue
with an unborn infant’s opal eye.

It has the clarity of earth and sky
seen from a spacecraft, once removed,
as through an amniotic lens, that groove-
lessness of space, the last star by.

We have set out to live and die
into the interstices of a new
nowhere to be or be returning to

(a little like an infant’s airborne cry).
We’ve set our sights on nothing left to lose
and made of loss itself a lullaby.

[div class=attrib]Todd Hearon. Image courtesy of Boston University.[end-div]

Kodak: The Final Picture?

If you’re over 30 years old, then you may still recall having used roll film with your analog, chemically based camera. If you did, then it’s likely you would have used a product, such as Kodachrome, manufactured by Eastman Kodak. The company was founded by George Eastman in 1892. Eastman invented roll film and helped make photography a mainstream pursuit.

Kodak had been synonymous with photography for around 100 years. However, in recent years it failed to change gears during the shift to digital media. Indeed, it finally ceased production and processing of Kodachrome in 2009. While other companies, such as Nikon and Canon, managed the transition to a digital world, Kodak failed to anticipate and capitalize. Now, the company is struggling for survival.

[div class=attrib]From Wired:[end-div]

Eastman Kodak Co. is hemorrhaging money, the latest Polaroid to be wounded by the sweeping collapse of the market for analog film.

In a statement to the Securities and Exchange Commission, Kodak reported that it needs to make more money out of its patent portfolio or to raise money by selling debt.

Kodak has tried to recalibrate operations around printing, as the sale of film and cameras steadily decline, but it appears as though its efforts have been fruitless: in Q3 of last year, Kodak reported it had $1.4 billion in cash, ending the same quarter this year with just $862 million — 10 percent less than the quarter before.

Recently, the patent suits have been a crutch for the crumbling company, adding a reliable revenue to the shrinking pot. But this year the proceeds from this sadly demeaning revenue stream just didn’t pan out. With sales down 17 percent, this money is critical, given the amount of cash being spent on restructuring lawyers and continued production.

Though the company has no plans to seek bankruptcy, one thing is clear: Kodak’s future depends on its ability to make its Intellectual Property into a profit, no matter the method.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Wired.[end-div]

Lifecycle of a Webpage

If you’ve ever “stumbled”, as in used the popular and addictive website StumbleUpon, the infographic below is for you. Stumbling is a great way to broaden one’s exposure to related ideas and make serendipitous discoveries.

Interestingly, the typical attention span of a StumbleUpon user seems to be much longer than that of the average Facebook follower.

[div class=attrib]Infographic courtesy of Column Five Media.[end-div]

Offshoring and Outsourcing of Innovation

A fascinating article over at the Wall Street Journal contemplates the demise of innovation in the United States. It’s no surprise where it’s heading — China.

[div class=attrib]From the Wall Street Journal:[end-div]

At a recent business dinner, the conversation about intellectual-property theft in China was just getting juicy when an executive with a big U.S. tech company leaned forward and said confidently: “This isn’t such a problem for us because we plan on innovating new products faster than the Chinese can steal the old ones.”

That’s a solution you often hear from U.S. companies: The U.S. will beat the Chinese at what the U.S. does best—innovation—because China’s bureaucratic, state-managed capitalism can’t master it.

The problem is, history isn’t on the side of that argument, says Niall Ferguson, an economic historian whose new book, “Civilization: The West and the Rest,” was published this week. Mr. Ferguson, who teaches at Harvard Business School, says China and the rest of Asia have assimilated much of what made the West successful and are now often doing it better.

“I’ve stopped believing that there’s some kind of cultural defect that makes the Chinese incapable of innovating,” he says. “They’re going to have the raw material of better educated kids that ultimately drives innovation.”

Andrew Liveris, the chief executive of Dow Chemical, has pounded this drum for years, describing what he sees as a drift in engineering and manufacturing acumen from the West to Asia. “Innovation has followed manufacturing to China,” he told a group at the Wharton Business School recently.

“Over time, when companies decide where to build R&D facilities, it will make more and more sense to do things like product support, upgrades and next-generation design in the same place where the product is made,” he said. “That is one reason why Dow has 500 Chinese scientists working in China, earning incredibly good money, and who are already generating more patents per scientist than our other locations.”

For a statistical glimpse of this accretion at work, read the World Economic Forum’s latest annual competitiveness index, which ranks countries by a number of economic criteria. For the third year in a row, the U.S. has slipped and China has crept up. To be sure, the U.S. still ranks fifth in the world and China is a distant 26th, but the gap is slowly closing.

[div class=attrib]Read the entire article here.[end-div]

The Evils of Television

Much has been written on the subject of television. Its effects on our culture in general and on the young minds of our children in particular have been studied and documented for decades. Increased levels of violence, the obesity epidemic, social fragmentation, vulgarity and voyeurism, caustic politics, poor attention span — all of these have been linked, at some time or other, to that little black box in the corner (increasingly, the big flat space above the mantle).

In his article, A Nation of Vidiots, Jeffrey D. Sachs weighs in on the subject.

[div class=attrib]From Project Syndicate:[end-div]

The past half-century has been the age of electronic mass media. Television has reshaped society in every corner of the world. Now an explosion of new media devices is joining the TV set: DVDs, computers, game boxes, smart phones, and more. A growing body of evidence suggests that this media proliferation has countless ill effects.

The United States led the world into the television age, and the implications can be seen most directly in America’s long love affair with what Harlan Ellison memorably called “the glass teat.” In 1950, fewer than 8% of American households owned a TV; by 1960, 90% had one. That level of penetration took decades longer to achieve elsewhere, and the poorest countries are still not there.

True to form, Americans became the greatest TV watchers, which is probably still true today, even though the data are somewhat sketchy and incomplete. The best evidence suggests that Americans watch more than five hours per day of television on average – a staggering amount, given that several hours more are spent in front of other video-streaming devices. Other countries log far fewer viewing hours. In Scandinavia, for example, time spent watching TV is roughly half the US average.

The consequences for American society are profound, troubling, and a warning to the world – though it probably comes far too late to be heeded. First, heavy TV viewing brings little pleasure. Many surveys show that it is almost like an addiction, with a short-term benefit leading to long-term unhappiness and remorse. Such viewers say that they would prefer to watch less than they do.

Moreover, heavy TV viewing has contributed to social fragmentation. Time that used to be spent together in the community is now spent alone in front of the screen. Robert Putnam, the leading scholar of America’s declining sense of community, has found that TV viewing is the central explanation of the decline of “social capital,” the trust that binds communities together. Americans simply trust each other less than they did a generation ago. Of course, many other factors are at work, but television-driven social atomization should not be understated.

Certainly, heavy TV viewing is bad for one’s physical and mental health. Americans lead the world in obesity, with roughly two-thirds of the US population now overweight. Again, many factors underlie this, including a diet of cheap, unhealthy fried foods, but the sedentary time spent in front of the TV is an important influence as well.

At the same time, what happens mentally is as important as what happens physically. Television and related media have been the greatest purveyors and conveyors of corporate and political propaganda in society.

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Family watching television, c. 1958. Image courtesy of Wikipedia.[end-div]

The Corporate One Percent of the One Percent

With the Occupy Wall Street movement and related protests continuing to gather steam, much recent media and public attention has focused on the 1 percent versus the remaining 99 percent of the population. By most accepted estimates, the top 1 percent of households control around 40 percent of global wealth, and there is a vast discrepancy between the top and bottom of the economic spectrum. While these statistics are telling, a related analysis of corporate wealth, highlighted in the New Scientist, shows a much tighter concentration among a very select group of transnational corporations (TNCs).

[div class=attrib]New Scientist:[end-div]

An analysis of the relationships between 43,000 transnational corporations has identified a relatively small group of companies, mainly banks, with disproportionate power over the global economy.

The study’s assumptions have attracted some criticism, but complex systems analysts contacted by New Scientist say it is a unique effort to untangle control in the global economy. Pushing the analysis further, they say, could help to identify ways of making global capitalism more stable.

The idea that a few bankers control a large chunk of the global economy might not seem like news to New York’s Occupy Wall Street movement and protesters elsewhere (see photo). But the study, by a trio of complex systems theorists at the Swiss Federal Institute of Technology in Zurich, is the first to go beyond ideology to empirically identify such a network of power. It combines the mathematics long used to model natural systems with comprehensive corporate data to map ownership among the world’s transnational corporations (TNCs).

“Reality is so complex, we must move away from dogma, whether it’s conspiracy theories or free-market,” says James Glattfelder. “Our analysis is reality-based.”

Previous studies have found that a few TNCs own large chunks of the world’s economy, but they included only a limited number of companies and omitted indirect ownerships, so could not say how this affected the global economy – whether it made it more or less stable, for instance.

The Zurich team can. From Orbis 2007, a database listing 37 million companies and investors worldwide, they pulled out all 43,060 TNCs and the share ownerships linking them. Then they constructed a model of which companies controlled others through shareholding networks, coupled with each company’s operating revenues, to map the structure of economic power.

The work, to be published in PLoS One, revealed a core of 1318 companies with interlocking ownerships (see image). Each of the 1318 had ties to two or more other companies, and on average they were connected to 20. What’s more, although they represented 20 per cent of global operating revenues, the 1318 appeared to collectively own through their shares the majority of the world’s large blue chip and manufacturing firms – the “real” economy – representing a further 60 per cent of global revenues.

When the team further untangled the web of ownership, it found much of it tracked back to a “super-entity” of 147 even more tightly knit companies – all of their ownership was held by other members of the super-entity – that controlled 40 per cent of the total wealth in the network. “In effect, less than 1 per cent of the companies were able to control 40 per cent of the entire network,” says Glattfelder. Most were financial institutions. The top 20 included Barclays Bank, JPMorgan Chase & Co, and The Goldman Sachs Group.
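For readers curious about the mechanics, the kind of analysis described in the excerpt can be sketched in a few lines of Python with the networkx library. Everything below is illustrative: the company names, holdings and revenue figures are invented, not taken from the Orbis database or the Zurich study. The point is only that a “tightly knit” core, in which every member is partly owned by other members, corresponds to a strongly connected component of a directed ownership graph.

```python
import networkx as nx

# Toy ownership table: (owner, owned, fraction of shares held). Invented numbers.
holdings = [
    ("BankA", "BankB", 0.30), ("BankB", "BankC", 0.40), ("BankC", "BankA", 0.25),
    ("BankA", "ManufacturerX", 0.60), ("FundD", "BankA", 0.10),
]
revenue = {"BankA": 50, "BankB": 40, "BankC": 30, "ManufacturerX": 200, "FundD": 10}

# Build a directed graph: an edge owner -> owned, weighted by the share fraction.
G = nx.DiGraph()
for owner, owned, frac in holdings:
    G.add_edge(owner, owned, weight=frac)

# A core in which ownership circulates among the members is a strongly
# connected component; take the largest one as the candidate "super-entity".
core = max(nx.strongly_connected_components(G), key=len)

# Revenue booked inside the core, plus revenue of firms the core holds shares in.
core_revenue = sum(revenue[c] for c in core)
held_outside = {v for u, v in G.edges() if u in core and v not in core}
downstream_revenue = sum(revenue[c] for c in held_outside)

print(sorted(core), core_revenue, downstream_revenue)
```

The real study goes much further, propagating indirect control along weighted chains of ownership and weighting companies by operating revenue, but the directed-graph representation above is the common starting point.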

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of New Scientist / PLoS One. The 1318 transnational corporations that form the core of the economy. Superconnected companies are red, very connected companies are yellow. The size of the dot represents revenue.[end-div]

MondayPoem: When the World Ended as We Knew It

Joy Harjo is an acclaimed poet, musician and noted teacher. Her poetry is grounded in the United States’ Southwest and often encompasses Native American stories and values.

As Poetry Foundation remarks:

Consistently praised for the depth and thematic concerns in her writings, Harjo has emerged as a major figure in contemporary American poetry.

She once commented, “I feel strongly that I have a responsibility to all the sources that I am: to all past and future ancestors, to my home country, to all places that I touch down on and that are myself, to all voices, all women, all of my tribe, all people, all earth, and beyond that to all beginnings and endings. In a strange kind of sense [writing] frees me to believe in myself, to be able to speak, to have voice, because I have to; it is my survival.” Harjo’s work is largely autobiographical, informed by her love of the natural world and her preoccupation with survival and the limitations of language.

By Joy Harjo

– When the World Ended as We Knew It

We were dreaming on an occupied island at the farthest edge
of a trembling nation when it went down.

Two towers rose up from the east island of commerce and touched
the sky. Men walked on the moon. Oil was sucked dry
by two brothers. Then it went down. Swallowed
by a fire dragon, by oil and fear.
Eaten whole.

It was coming.

We had been watching since the eve of the missionaries in their
long and solemn clothes, to see what would happen.

We saw it
from the kitchen window over the sink
as we made coffee, cooked rice and
potatoes, enough for an army.

We saw it all, as we changed diapers and fed
the babies. We saw it,
through the branches
of the knowledgeable tree
through the snags of stars, through
the sun and storms from our knees
as we bathed and washed
the floors.

The conference of the birds warned us, as they flew over
destroyers in the harbor, parked there since the first takeover.
It was by their song and talk we knew when to rise
when to look out the window
to the commotion going on—
the magnetic field thrown off by grief.

We heard it.
The racket in every corner of the world. As
the hunger for war rose up in those who would steal to be president
to be king or emperor, to own the trees, stones, and everything
else that moved about the earth, inside the earth
and above it.

We knew it was coming, tasted the winds who gathered intelligence
from each leaf and flower, from every mountain, sea
and desert, from every prayer and song all over this tiny universe
floating in the skies of infinite
being.

And then it was over, this world we had grown to love
for its sweet grasses, for the many-colored horses
and fishes, for the shimmering possibilities
while dreaming.

But then there were the seeds to plant and the babies
who needed milk and comforting, and someone
picked up a guitar or ukulele from the rubble
and began to sing about the light flutter
the kick beneath the skin of the earth
we felt there, beneath us

a warm animal
a song being born between the legs of her;
a poem.

[div class=attrib]Image courtesy of PBS.[end-div]

The Hideous Sound of Chalk on a Blackboard

We promise. There is no screeching embedded audio of someone slowly dragging a piece of chalk, or worse, fingernails, across a blackboard! Though even the thought of this sound causes many to shudder. Why? A plausible explanation comes from Wired UK.

[div class=attrib]From Wired:[end-div]

Much time has been spent, over the past century, on working out exactly what it is about the sound of fingernails on a blackboard that’s so unpleasant. A new study pins the blame on psychology and the design of our ear canals.

Previous research on the subject suggested that the sound is acoustically similar to the warning call of a primate, but that theory was debunked after monkeys responded to amplitude-matched white noise and other high-pitched sounds, whereas humans did not. Another study, in 1986, manipulated a recording of blackboard scraping and found that the medium-pitched frequencies are the source of the adverse reaction, rather than the higher pitches (as previously thought). The work won author Randolph Blake an Ig Nobel Prize in 2006.

The latest study, conducted by musicologists Michael Oehler of the Macromedia University for Media and Communication in Cologne, Germany, and Christoph Reuter of the University of Vienna, looked at other sounds that generate a similar reaction — including chalk on slate, styrofoam squeaks, a plate being scraped by a fork, and the ol’ fingernails on blackboard.

Some participants were told the genuine source of the sound, and others were told that the sounds were part of a contemporary music composition. Researchers asked the participants to rank which were the worst, and also monitored physical indicators of distress — heart rate, blood pressure and the electrical conductivity of skin.

They found that disturbing sounds do cause a measurable physical reaction, with skin conductivity changing significantly, and that the frequencies involved with unpleasant sounds also lie firmly within the range of human speech — between 2,000 and 4,000 Hz. Removing those frequencies from the sound made them much easier to listen to. But, interestingly, removing the noisy, scraping part of the sound made little difference.

A powerful psychological component was identified. If the listeners knew that the sound was fingernails on the chalkboard, they rated it as more unpleasant than if they were told it was from a musical composition. Even when they thought it was from music, however, their skin conductivity still changed consistently, suggesting that the physical part of the response remained.
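As an aside, the frequency-removal manipulation mentioned above is easy to approximate. The sketch below assumes NumPy and SciPy are available and applies a Butterworth band-stop filter to suppress the 2,000 to 4,000 Hz band in an audio signal; the filter order and the synthetic test tone are arbitrary choices for illustration, not the procedure Oehler and Reuter used.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def suppress_band(signal, fs, low_hz=2000.0, high_hz=4000.0, order=4):
    """Attenuate the 2-4 kHz band implicated in the study with a band-stop filter."""
    b, a = butter(order, [low_hz, high_hz], btype="bandstop", fs=fs)
    return filtfilt(b, a, signal)  # zero-phase filtering, so no audible time shift

# Toy usage: one second of a 3 kHz tone (inside the band) mixed with 500 Hz (outside).
fs = 44100
t = np.arange(fs) / fs
mixed = np.sin(2 * np.pi * 3000 * t) + np.sin(2 * np.pi * 500 * t)
cleaned = suppress_band(mixed, fs)  # the 3 kHz component is strongly attenuated
```

Running the actual scraping sounds through a filter like this is, in spirit, what the researchers describe: take out the 2,000 to 4,000 Hz content and listeners report the result as far less unpleasant.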

[div class=attrib]Read the full article here.[end-div]

[div class=attrib]Images courtesy of Wired / Flickr.[end-div]

Lights That You Can Print

The lowly incandescent light bulb continues to come under increasing threat. First came the fluorescent tube, then the compact fluorescent. More recently the LED (light-emitting diode) seems to be gaining ground. Now LED technology takes another leap forward with printed LED “light sheets”.

[div class=attrib]From Technology Review:[end-div]

A company called Nth Degree Technologies hopes to replace light bulbs with what look like glowing sheets of paper (as shown in this video). The company’s first commercial product is a two-by-four-foot-square light, which it plans to start shipping to select customers for evaluation by the end of the year.

The technology could allow for novel lighting designs at costs comparable to the fluorescent light bulbs and fixtures used now, says Neil Shotton, Nth Degree’s president and CEO. Light could be emitted over large areas from curved surfaces of unusual shapes. The printing processes used to make the lights also make it easy to vary the color and brightness of the light emitted by a fixture. “It’s a new kind of lighting,” Shotton says.

Nth Degree makes its light sheets by first carving up a wafer of gallium nitride to produce millions of tiny LEDs—one four-inch wafer yields about eight million of them. The LEDs are then mixed with resin and binders, and a standard screen printer is used to deposit the resulting “ink” over a large surface.

In addition to the LED ink, there’s a layer of silver ink for the back electrical contact, a layer of phosphors to change the color of light emitted by the LEDs (from blue to various shades of white), and an insulating layer to prevent short circuits between the front and back. The front electrical contact, which needs to be transparent to let the light out, is made using an ink that contains invisibly small metal wires.

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of Technology Review.[end-div]

The Battle of Evidence and Science versus Belief and Magic

An insightful article over at the Smithsonian ponders the national (U.S.) decline in the trust of science. Regardless of the topic in question — climate change, health supplements, vaccinations, air pollution, “fracking”, evolution — and regardless of the specific position on a particular topic, scientific evidence continues to be questioned, ignored, revised, and politicized. And perhaps it is in this last issue, that of politics, that we may see a possible cause for a growing national pandemic of denialism. The increasingly fractured, fractious and rancorous nature of the U.S. political system threatens to undermine all debate and true skepticism, whether based on personal opinion or scientific fact.

[div class=attrib]From the Smithsonian:[end-div]

A group of scientists and statisticians led by the University of California at Berkeley set out recently to conduct an independent assessment of climate data and determine once and for all whether the planet has warmed in the last century and by how much. The study was designed to address concerns brought up by prominent climate change skeptics, and it was funded by several groups known for climate skepticism. Last week, the group released its conclusions: Average land temperatures have risen by about 1.8 degrees Fahrenheit since the middle of the 20th century. The result matched the previous research.

The skeptics were not happy and immediately claimed that the study was flawed.

Also in the news last week were the results of yet another study that found no link between cell phones and brain cancer. Researchers at the Institute of Cancer Epidemiology in Denmark looked at data from 350,000 cell phone users over an 18-year period and found they were no more likely to develop brain cancer than people who didn’t use the technology.

But those results still haven’t killed the calls for more monitoring of any potential link.

Study after study finds no link between autism and vaccines (and plenty of reason to worry about non-vaccinated children dying from preventable diseases such as measles). But a quarter of parents in a poll released last year said that they believed that “some vaccines cause autism in healthy children” and 11.5 percent had refused at least one vaccination for their child.

Polls say that Americans trust scientists more than, say, politicians, but that trust is on the decline. If we’re losing faith in science, we’ve gone down the wrong path. Science is no more than a process (as recent contributors to our “Why I Like Science” series have noted), and skepticism can be a good thing. But for many people that skepticism has grown to the point that they can no longer accept good evidence when they get it, with the result that “we’re now in an epidemic of fear like one I’ve never seen and hope never to see again,” says Michael Specter, author of Denialism, in his TEDTalk below.

If you’re reading this, there’s a good chance that you think I’m not talking about you. But here’s a quick question: Do you take vitamins? There’s a growing body of evidence that vitamins and dietary supplements are no more than a placebo at best and, in some cases, can actually increase the risk of disease or death. For example, a study earlier this month in the Archives of Internal Medicine found that consumption of supplements, such as iron and copper, was associated with an increased risk of death among older women. In a related commentary, several doctors note that the concept of dietary supplementation has shifted from preventing deficiency (there’s a good deal of evidence for harm if you’re low in, say, folic acid) to one of trying to promote wellness and prevent disease, and many studies are showing that more supplements do not equal better health.

But I bet you’ll still take your pills tomorrow morning. Just in case.

[div class=attrib]Read the entire article here.[end-div]

Texi as the Plural for Texas?

Imagine more than one state of Texas. Or, imagine the division of Texas into a handful of sub-states smaller in size and perhaps more manageable. Frank Jacobs over at Strange Maps ponders a United States where there could be more than one Texas.

[div class=attrib]From Strange Maps:[end-div]

The plural of Texas? My money’s on Texases, even though that sounds almost as wrong as Texae, Texi or whatever alternative you might try to think up. Texas is defiantly singular. It is the Lone Star State, priding itself on its brief independence and distinct culture. Discounting Alaska, it is also the largest state in the Union.

Texas is both a maverick and a behemoth, and as much a claimant to exceptionalism within the US as America itself is on the world stage. Texans are superlative Americans. When other countries reach for an American archetype to caricature (or to demonise), it’s often one they imagine having a Texan drawl: the greedy oil baron, the fundamentalist preacher, the trigger-happy cowboy (1).

Texans will rightly object to being pigeonholed, but they probably won’t mind the implied reference to their tough-guy image. Nobody minds being provided with some room to swagger. See also the popularity of the slogan Don’t Mess With Texas, the state’s unofficial motto. It is less historical than it sounds, beginning life only in 1986 as the tagline of an anti-littering campaign.

You’d have to be crazy to mess with a state that’s this big and fierce. In fact, you’d have to be Texas to mess with Texas. Really. That’s not just a clever put-down. It’s the law. When Texas joined the Union in 1845, voluntarily giving up its independence, it was granted the right by Congress to form “new States of convenient size, not exceeding four in number and in addition to the said State of Texas.”

This would increase the total number of Texases to five, and enhance their political weight – at least in the US Senate, which would have to make room for 10 Senators from all five states combined, as opposed to just the twosome that represents the single state of Texas now.

In 2009, the political blog FiveThirtyEight overlaid their plan on a county-level map of the Obama-McCain presidential election results (showing Texas to be overwhelmingly red, except for a band of blue along the Rio Grande). The five Texases are:

  • (New) Texas, comprising the Austin-San Antonio metropolitan area in central Texas;
  • Trinity, uniting Dallas, Fort Worth and Arlington;
  • Gulfland, along the coast and including Houston;
  • Plainland, from Lubbock all the way up the panhandle (with 40% of Texas’s territory, the largest successor state);
  • El Norte, south of the other states but north of Mexico, where most of the new state’s 85% Hispanics would have their roots.

[div class=attrib]Read the entire article here.[end-div]

A Better Way to Study and Learn

Our current educational process in one sentence: assume student is empty vessel; provide student with content; reward student for remembering and regurgitating content; repeat.

Yet, we have known for a while, and an increasing body of research corroborates our belief, that this method of teaching and learning is not very effective, or stimulating for that matter. It’s simply an efficient mechanism for the mass production of an adequate resource for the job market. Of course, for most it then takes many more decades following high school or college to unlearn the rote trivia and re-learn what is really important.

Mind Hacks reviews some recent studies that highlight better approaches to studying.

[div class=attrib]From Mind Hacks:[end-div]

Decades old research into how memory works should have revolutionised University teaching. It didn’t.

If you’re a student, what I’m about to tell you will let you change how you study so that it is more effective, more enjoyable and easier. If you work at a University, you – like me – should hang your head in shame that we’ve known this for decades but still teach the way we do.

There’s a dangerous idea in education that students are receptacles, and teachers are responsible for providing content that fills them up. This model encourages us to test students by the amount of content they can regurgitate, to focus overly on statements rather than skills in assessment and on syllabuses rather than values in teaching. It also encourages us to believe that we should try and learn things by trying to remember them. Sounds plausible, perhaps, but there’s a problem. Research into the psychology of memory shows that intention to remember is a very minor factor in whether you remember something or not. Far more important than whether you want to remember something is how you think about the material when you encounter it.

A classic experiment by Hyde and Jenkins (1973) illustrates this. These researchers gave participants lists of words, which they later tested recall of, as their memory items. To affect their thinking about the words, half the participants were told to rate the pleasantness of each word, and half were told to check if the word contained the letters ‘e’ or ‘g’. This manipulation was designed to affect ‘depth of processing’. The participants in the rating-pleasantness condition had to think about what the word meant, and relate it to themselves (how they felt about it) – “deep processing”. Participants in the letter-checking condition just had to look at the shape of the letters, they didn’t even have to read the word if they didn’t want to – “shallow processing”. The second, independent, manipulation concerned whether participants knew that they would be tested later on the words. Half of each group were told this – the “intentional learning” condition – and half weren’t told, the test would come as a surprise – the “incidental learning” condition.
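To make the structure of that experiment concrete, here is a minimal sketch of the 2 × 2 design in Python. The condition labels follow the description above; the participant count and the assignment code are illustrative scaffolding only, and no recall results are simulated.

```python
import itertools
import random

# The two crossed factors from Hyde and Jenkins (1973), as described above.
processing = ["deep: rate pleasantness", "shallow: check for 'e' or 'g'"]
intention = ["intentional: told about test", "incidental: surprise test"]
conditions = list(itertools.product(processing, intention))  # four cells

def assign(participant_ids):
    """Randomly assign participants to the four cells, roughly evenly."""
    random.shuffle(participant_ids)
    return {cond: participant_ids[i::4] for i, cond in enumerate(conditions)}

groups = assign(list(range(40)))  # 40 hypothetical participants
for condition, members in groups.items():
    print(condition, len(members))
```

The finding the excerpt builds toward is that the first factor, depth of processing, drives recall far more than the second, the intention to learn.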

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Image courtesy of the Telegraph / AP.[end-div]

The Middleman is Dead; Long Live the Middleman

In another sign of Amazon’s unquenchable thirst for all things commerce, the company is now moving more aggressively into publishing.

[div class=attrib]From the New York Times:[end-div]

Amazon.com has taught readers that they do not need bookstores. Now it is encouraging writers to cast aside their publishers.

Amazon will publish 122 books this fall in an array of genres, in both physical and e-book form. It is a striking acceleration of the retailer’s fledgling publishing program that will place Amazon squarely in competition with the New York houses that are also its most prominent suppliers.

It has set up a flagship line run by a publishing veteran, Laurence Kirshbaum, to bring out brand-name fiction and nonfiction. It signed its first deal with the self-help author Tim Ferriss. Last week it announced a memoir by the actress and director Penny Marshall, for which it paid $800,000, a person with direct knowledge of the deal said.

Publishers say Amazon is aggressively wooing some of their top authors. And the company is gnawing away at the services that publishers, critics and agents used to provide.

Several large publishers declined to speak on the record about Amazon’s efforts. “Publishers are terrified and don’t know what to do,” said Dennis Loy Johnson of Melville House, who is known for speaking his mind.

“Everyone’s afraid of Amazon,” said Richard Curtis, a longtime agent who is also an e-book publisher. “If you’re a bookstore, Amazon has been in competition with you for some time. If you’re a publisher, one day you wake up and Amazon is competing with you too. And if you’re an agent, Amazon may be stealing your lunch because it is offering authors the opportunity to publish directly and cut you out.”

[div class=attrib]Read more here.[end-div]

The World Wide Web of Terrorism

[div class=attrib]From Eurozine:[end-div]

There are clear signs that Internet-radicalization was behind the terrorism of Anders Behring Breivik. Though most research on this points to jihadism, it can teach us a lot about how Internet-radicalization of all kinds can be fought.

On 21 September 2010, Interpol released a press statement on their homepage warning against extremist websites. They pointed out that this is a global threat and that ever more terrorist groups use the Internet to radicalize young people.

“Terrorist recruiters exploit the web to their full advantage as they target young, middle class vulnerable individuals who are usually not on the radar of law enforcement”, said Secretary General Ronald K. Noble. He continued: “The threat is global; it is virtual; and it is on our doorsteps. It is a global threat that only international police networks can fully address.”

Noble pointed out that the Internet has made the radicalization process easier and the war on terror more difficult. Part of the reason, he claimed, is that much of what takes place is not really criminal.

Much research has been done on Internet radicalization over the last few years but the emphasis has been on Islamist terror. The phenomenon can be summarized thus: young boys and men of Muslim background have, via the Internet, been exposed to propaganda, films from war zones, horrifying images of war in Afghanistan, Iraq and Chechnya, and also extreme interpretations of Islam. They are, so to speak, caught in the web, and some have resorted to terrorism, or at least planned it. The BBC documentary Generation Jihad gives an interesting and frightening insight into the phenomenon.

Researchers Tim Stevens and Peter Neumann write in a report focused on Islamist Internet radicalization that Islamist groups are hardly unique in putting the Internet in the service of political extremism:

Although Al Qaeda-inspired Islamist militants represented the most significant terrorist threat to the United Kingdom at the time of writing, Islamist militants are not the only – or even the predominant – group of political extremists engaged in radicalization and recruitment on the internet. Visitor numbers are notoriously difficult to verify, but some of the most popular Islamist militant web forums (for example, Al Ekhlaas, Al Hesbah, or Al Boraq) are easily rivalled in popularity by white supremacist websites such as Stormfront.

Strikingly, Stormfront – an international Internet forum advocating “white nationalism” and dominated by neo-Nazis – is one of the websites visited by the terrorist Anders Behring Breivik, and a forum where he also left comments. In one place he writes about his hope that “the various fractured rightwing movements in Europe and the US reach a common consensus regarding the ‘Islamification of Europe/US’ can try and reach a consensus regarding the issue”. He continues: “After all, we all want the best for our people, and we owe it to them to try to create the most potent alliance which will have the strength to overthrow the governments which support multiculturalism.”

[div class=attrib]Read more of this article here.[end-div]

[div class=attrib]Image courtesy of Eurozine.[end-div]

Corporations As People And the Threat to Truth

In 2010 the U.S. Supreme Court ruled that corporations can be treated as people, assigning companies First Amendment rights under the Constitution. So, it’s probably only a matter of time before a real person legally marries (and divorces) a corporation. And, we’re probably not too far from a future where an American corporate CEO can take the life of a competing company’s boss and “rightfully” declare that it was in competitive self-defense.

In the meantime, the growing, and much needed, debate over corporate power, corporate responsibility and corporate consciousness rolls on. A timely opinion by Gary Gutting over at the New York Times gives us more on which to chew.

[div class=attrib]From the New York Times:[end-div]

The Occupy Wall Street protest movement has raised serious questions about the role of capitalist institutions, particularly corporations, in our society. Well before the first protester set foot in Zuccotti Park, a heckler urged Mitt Romney to tax corporations rather than people. Romney’s response — “Corporations are people” — stirred a brief but intense controversy. Now thousands of demonstrators have in effect joined the heckler, denouncing corporations as “enemies of the people.”

Who’s right? Thinking pedantically, we can see ways in which Romney was literally correct; for example, corporations are nothing other than the people who own, run and work for them, and they are recognized as “persons” in some technical legal sense.  But it is also obvious that corporations are not people in a full moral sense: they cannot, for example, fall in love, write poetry or be depressed.

Far more important than questions about what corporations are (ontological questions, as philosophers say) is the question of what attitude we should have toward them.  Should we, as corporate public relations statements often suggest, think of them as friends (if we buy and are satisfied with their products) or as family (if we work for them)?  Does it make sense to be loyal to a corporation as either a customer or as an employee?  More generally, even granted that corporations are not fully persons in the way that individuals are, do they have some important moral standing in our society?

My answer to all these questions is no, because corporations have no core dedication to fundamental human values.  (To be clear, I am speaking primarily of large, for-profit, publicly owned corporations.)  Such corporations exist as instruments of profit for their shareholders.  This does not mean that they are inevitably evil or that they do not make essential economic contributions to society.  But it does mean that their moral and social value is entirely instrumental.   There are ways we can use corporations as means to achieve fundamental human values, but corporations do not of themselves work for these values. In fact, left to themselves, they can be serious threats to human values that conflict with the goal of corporate profit.

Corporations are a particular threat to truth, a value essential in a democracy, which places a premium on the informed decisions of individual citizens.  The corporate threat is most apparent in advertising, which explicitly aims at convincing us to prefer a product regardless of its actual merit.

[div class=attrib]Read more here.[end-div]

[div class=attrib]Time Saving Truth from Falsehood and Envy by François Lemoyne. Image courtesy of Wikipedia / Wallace Collection, London.[end-div]

The Myth of Bottled Water

In 2010 the world spent around $50 billion on bottled water, with over a third accounted for by the United States alone. During this period the United States House of Representatives spent $860,000 on bottled water for its 435 members. This is close to $2,000 per person per year. (Figures according to Corporate Accountability International).

This is despite the fact that on average bottled water costs around 1,900 times more than its cheaper, less glamorous sibling — tap water. Bottled water has become a truly big business even though science shows no discernible benefit of bottled water over that from the faucet. In fact, around 40 percent of bottled water comes from municipal water supplies anyway.
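The 1,900-fold figure is easy to sanity-check with back-of-the-envelope prices. Assuming, purely for illustration, municipal tap water at about \$2 per 1,000 gallons (roughly \$0.002 per gallon) and bottled water at about \$1 per liter (roughly \$3.79 per gallon):

$$ \frac{\$3.79\ \text{per gallon}}{\$0.002\ \text{per gallon}} \;\approx\; 1{,}900. $$

Different assumed prices move the ratio around, but the three-orders-of-magnitude gap is robust.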

In 2007 Charles Fishman wrote a ground-breaking cover story on the bottled water industry for Fast Company. We excerpt part of the article, Message in a Bottle, below.

[div class=attrib]By Charles Fishman:[end-div]

The largest bottled-water factory in North America is located on the outskirts of Hollis, Maine. In the back of the plant stretches the staging area for finished product: 24 million bottles of Poland Spring water. As far as the eye can see, there are double-stacked pallets packed with half-pint bottles, half-liters, liters, “Aquapods” for school lunches, and 2.5-gallon jugs for the refrigerator.

Really, it is a lake of Poland Spring water, conveniently celled off in plastic, extending across 6 acres, 8 feet high. A week ago, the lake was still underground; within five days, it will all be gone, to supermarkets and convenience stores across the Northeast, replaced by another lake’s worth of bottles.

Looking at the piles of water, you can have only one thought: Americans sure are thirsty.

Bottled water has become the indispensable prop in our lives and our culture. It starts the day in lunch boxes; it goes to every meeting, lecture hall, and soccer match; it’s in our cubicles at work; in the cup holder of the treadmill at the gym; and it’s rattling around half-finished on the floor of every minivan in America. Fiji Water shows up on the ABC show Brothers & Sisters; Poland Spring cameos routinely on NBC’s The Office. Every hotel room offers bottled water for sale, alongside the increasingly ignored ice bucket and drinking glasses. At Whole Foods, the upscale emporium of the organic and exotic, bottled water is the number-one item by units sold.

Thirty years ago, bottled water barely existed as a business in the United States. Last year, we spent more on Poland Spring, Fiji Water, Evian, Aquafina, and Dasani than we spent on iPods or movie tickets–$15 billion. It will be $16 billion this year.

Bottled water is the food phenomenon of our times. We–a generation raised on tap water and water fountains–drink a billion bottles of water a week, and we’re raising a generation that views tap water with disdain and water fountains with suspicion. We’ve come to pay good money–two or three or four times the cost of gasoline–for a product we have always gotten, and can still get, for free, from taps in our homes.

When we buy a bottle of water, what we’re often buying is the bottle itself, as much as the water. We’re buying the convenience–a bottle at the 7-Eleven isn’t the same product as tap water, any more than a cup of coffee at Starbucks is the same as a cup of coffee from the Krups machine on your kitchen counter. And we’re buying the artful story the water companies tell us about the water: where it comes from, how healthy it is, what it says about us. Surely among the choices we can make, bottled water isn’t just good, it’s positively virtuous.

Except for this: Bottled water is often simply an indulgence, and despite the stories we tell ourselves, it is not a benign indulgence. We’re moving 1 billion bottles of water around a week in ships, trains, and trucks in the United States alone. That’s a weekly convoy equivalent to 37,800 18-wheelers delivering water. (Water weighs 8 1/3 pounds a gallon. It’s so heavy you can’t fill an 18-wheeler with bottled water–you have to leave empty space.)

Meanwhile, one out of six people in the world has no dependable, safe drinking water. The global economy has contrived to deny the most fundamental element of life to 1 billion people, while delivering to us an array of water “varieties” from around the globe, not one of which we actually need. That tension is only complicated by the fact that if we suddenly decided not to purchase the lake of Poland Spring water in Hollis, Maine, none of that water would find its way to people who really are thirsty.

[div class=attrib]Please read the entire article here.[end-div]

[div class=attrib]Image courtesy of Wikipedia.[end-div]

Brokering the Cloud

Computer hardware reached (or plummeted to, depending upon your viewpoint) commodity status a while ago. And of course, some types of operating system platforms, software, and applications have followed suit recently — think Platform as a Service (PaaS) and Software as a Service (SaaS). So it should come as no surprise to see new services arise that try to match supply and demand, and profit in the process. Welcome to the “cloud brokerage”.

[div class=attrib]From MIT Technology Review:[end-div]

Cloud computing has already made accessing computer power more efficient. Instead of buying computers, companies can now run websites or software by leasing time at data centers run by vendors like Amazon or Microsoft. The idea behind cloud brokerages is to take the efficiency of cloud computing a step further by creating a global marketplace where computing capacity can be bought and sold at auction.

Such markets offer steeply discounted rates, and they may also offer financial benefits to companies running cloud data centers, some of which are flush with excess capacity. “The more utilized you are as a [cloud services] provider … the faster return on investment you’ll realize on your hardware,” says Reuven Cohen, founder of Enomaly, a Toronto-based firm that last February launched SpotCloud, cloud computing’s first online spot market.

On SpotCloud, computing power can be bought and sold like coffee, soybeans, or any other commodity. But it’s caveat emptor for buyers, since unlike purchasing computer time with Microsoft, buying on SpotCloud doesn’t offer many contractual guarantees. There is no assurance computers won’t suffer an outage, and sellers can even opt to conceal their identity in a blind auction, so buyers don’t always know whether they’re purchasing capacity from an established vendor or a fly-by-night startup.
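
To make the brokerage idea a little more concrete, here is a minimal sketch of how a spot market might clear capacity by matching buy bids against seller offers in price order. The names and fields are hypothetical, and the matching rule is a generic greedy auction; the article does not describe SpotCloud’s actual mechanics.

    from dataclasses import dataclass

    @dataclass
    class Offer:          # a seller's listing: capacity at an asking price
        seller: str       # may be anonymized in a blind auction
        units: int        # e.g., VM-hours on offer
        ask: float        # price per unit

    @dataclass
    class Bid:            # a buyer's request: capacity wanted, up to a maximum price
        buyer: str
        units: int
        max_price: float

    def clear(bids, offers):
        """Greedy clearing: fill each bid from the cheapest acceptable offers."""
        offers = sorted(offers, key=lambda o: o.ask)          # cheapest capacity first
        fills = []
        for bid in sorted(bids, key=lambda b: -b.max_price):  # most willing buyers first
            need = bid.units
            for offer in offers:
                if need == 0:
                    break
                if offer.units == 0 or offer.ask > bid.max_price:
                    continue
                take = min(need, offer.units)
                offer.units -= take
                need -= take
                fills.append((bid.buyer, offer.seller, take, offer.ask))
        return fills

    # Example: one buyer needing 150 units at no more than $0.05/unit, two anonymous sellers.
    fills = clear(
        bids=[Bid("acme-web", 150, 0.05)],
        offers=[Offer("seller-A", 100, 0.03), Offer("seller-B", 200, 0.04)],
    )
    print(fills)  # [('acme-web', 'seller-A', 100, 0.03), ('acme-web', 'seller-B', 50, 0.04)]

A real exchange would add reserve prices, settlement, and identity escrow for blind auctions, and, as noted above, it would still come with few service-level guarantees.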

[div class=attrib]Read more here.[end-div]

[div class=attrib]Image courtesy of MIT Technology Review.[end-div]

In Praise of the Bad Bookstore

Tens of thousands of independent bookstores have disappeared from the United States and Europe over the last decade. Even mega-chains like Borders have fallen prey to monumental shifts in the distribution of ideas and content. The very notion of the physical book is under increasing threat from the accelerating momentum of digitalization.

For bibliophiles, particularly those who crave the feel of physical paper, there is a peculiar attractiveness even to the “bad” bookstore (or, in the UK, bookshop): the airport bookshop of last resort, the pulp-fiction bookstore in a suburban mall. Mark O’Connell over at The Millions tells us there is no such thing as a bad bookstore.

[div class=attrib]From The Millions:[end-div]

Cultural anxieties are currently running high about the future of the book as a physical object, and about the immediate prospects for survival of actual brick and mortar booksellers. When most people think about the (by now very real) possibility of the retail side of the book business disappearing entirely into the online ether, they mostly tend to focus on the idea of their favorite bookshops shutting their doors for the last time. Sub-Borgesian bibliomaniac that I am (or, if you prefer, pathetic nerd), I have a mental image of the perfect bookshop that I hold in my mind. It’s a sort of Platonic ideal of the retail environment, a perfect confluence of impeccable curation and expansive selection, artfully cluttered and with the kind of quietly hospitable ambiance that makes the passage of time seem irrelevant once you start in on browsing the shelves. For me, the actual place that comes closest to embodying this ideal is the London Review Bookshop in Bloomsbury, run by the people behind the London Review of Books. It’s a beautifully laid-out space in a beautiful building, and its selection of books makes it feel less like an actual shop than the personal library of some extremely wealthy and exceptionally well-read individual. It’s the kind of place, in other words, where you don’t so much want to buy half the books in the shop as buy the shop itself, move in straight away and start living in it. The notion that places like this might no longer exist in a decade or so is depressing beyond measure.

But I don’t live in Bloomsbury, or anywhere near it. I live in a suburb of Dublin where the only bookshop within any kind of plausible walking distance is a small and frankly feeble set-up on the second floor of a grim 1970s-era shopping center, above a large supermarket. It’s flanked by two equally moribund concerns, a small record store and a travel agent, thereby forming the centerpiece of a sad triptych of retail obsolescence. It’s one of those places that makes you wonder how it manages to survive at all.

But I have an odd fondness for it anyway, and I’ll occasionally just wander up there in order to get out of the apartment, or to see whether, through some fluke convergence of whim and circumstance, they have something I might actually want to buy. I’ve often bought books there that I would never have thought to pick up in a better bookshop, gravitating toward them purely by virtue of the fact that there’s nothing else remotely interesting to be had.

And this brings me to the point I want to make about bad bookshops, which is that they’re rarely actually as bad as they seem. In a narrow and counterintuitive sense, they’re sometimes better than good bookshops. The way I see it, there are three basic categories of retail bookseller. There’s the vast warehouse that has absolutely everything you could possibly think of (Strand Bookstore in New York’s East Village, for instance, is a fairly extreme representative of this group, or at least it was the last time I was there ten years ago). Then there’s the “boutique” bookshop, where you get a sense of a strong curatorial presence behind the scenes, and which seems to cater for some aspirational ideal of your better intellectual self. The London Review Bookshop is, for me at least, the ultimate instance of this. And then there’s the third — and by far the largest — category, which is the rubbish bookshop. There are lots of subgenii to this grouping. The suburban shopping center fiasco, as discussed above. The chain outlet crammed with celebrity biographies and supernatural teen romances. The opportunistic fly-by-night operation that takes advantage of some short-term lease opening to sell off a random selection of remaindered titles at low prices before shutting down and moving elsewhere. And, of course, the airport bookshop of last resort.

[div class=attrib]Catch more of this essay here.[end-div]

[div class=attrib]Image courtesy of The Millions.[end-div]

Book Review: The Big Thirst, by Charles Fishman

Charles Fishman has a fascinating new book entitled The Big Thirst: The Secret Life and Turbulent Future of Water. In it, Fishman examines the origins of water on our planet and postulates an all too probable future in which water becomes an increasingly limited and precious resource.

[div class=attrib]A brief excerpt from a recent interview, courtesy of NPR:[end-div]

For most of us, even the most basic questions about water turn out to be stumpers.

Where did the water on Earth come from?

Is water still being created or added somehow?

How old is the water coming out of the kitchen faucet?

For that matter, how did the water get to the kitchen faucet?

And when we flush, where does the water in the toilet actually go?

The things we think we know about water — things we might have learned in school — often turn out to be myths.

We think of Earth as a watery planet, indeed, we call it the Blue Planet; but for all of water’s power in shaping our world, Earth turns out to be surprisingly dry. A little water goes a long way.

We think of space as not just cold and dark and empty, but as barren of water. In fact, space is pretty wet. Cosmic water is quite common.

At the most personal level, there is a bit of bad news. Not only don’t you need to drink eight glasses of water every day, you cannot in any way make your complexion more youthful by drinking water. Your body’s water-balance mechanisms are tuned with the precision of a digital chemistry lab, and you cannot possibly “hydrate” your skin from the inside by drinking an extra bottle or two of Perrier. You just end up with pee sourced in France.

In short, we know nothing of the life of water — nothing of the life of the water inside us, around us, or beyond us. But it’s a great story — captivating and urgent, surprising and funny and haunting. And if we’re going to master our relationship to water in the next few decades — really, if we’re going to remaster our relationship to water — we need to understand the life of water itself.

[div class=attrib]Read more of this article and Charles Fishman’s interview with NPR here.[end-div]

Science at its Best: The Universe is Expanding AND Accelerating

The 2011 Nobel Prize in Physics was recently awarded to three scientists: Adam Riess, Saul Perlmutter and Brian Schmidt. Their computations and observations of a very specific type of exploding star upended decades of commonly accepted belief about our universe by showing that its expansion is accelerating.

Prior to their observations, first publicly articulated in 1998, general scientific consensus held that the universe would either expand at a steady rate forever or slow, eventually folding back in on itself in a cosmic Big Crunch.

The discovery by Riess, Perlmutter and Schmidt laid the groundwork for the idea that a mysterious force called “dark energy” is fueling the acceleration. This dark energy is now believed to make up roughly three-quarters of the energy content of the universe. A direct understanding of its nature is still lacking, but most cosmologists now accept that universal expansion is indeed accelerating.

Re-published here are the notes and a page scan from Riess’s logbook that led to this year’s Nobel Prize, which show the value of the scientific process:

[div class=attrib]The original article is courtesy of Symmetry Breaking:[end-div]

In the fall of 1997, I was leading the calibration and analysis of data gathered by the High-z Supernova Search Team, one of two teams of scientists—the other was the Supernova Cosmology Project—trying to determine the fate of our universe: Will it expand forever, or will it halt and contract, resulting in the Big Crunch?

To find the answer, we had to determine the mass of the universe. It can be calculated by measuring how much the expansion of the universe is slowing.

First, we had to find cosmic candles—distant objects of known brightness—and use them as yardsticks. On this page, I checked the reliability of the supernovae, or exploding stars, that we had collected to serve as our candles. I found that the results they yielded for the present expansion rate of the universe (known as the Hubble constant) did not appear to be affected by the age or dustiness of their host galaxies.

Next, I used the data to calculate ΩM, the relative mass of the universe.

It was significantly negative!

The result, if correct, meant that the assumption of my analysis was wrong. The expansion of the universe was not slowing. It was speeding up! How could that be?

I spent the next few days checking my calculation. I found one could explain the acceleration by introducing a vacuum energy, also called the cosmological constant, that pushes the universe apart. In March 1998, we submitted these results, which were published in September 1998.

Today, we know that 74 percent of the universe consists of this dark energy. Understanding its nature remains one of the most pressing tasks for physicists and astronomers alike.

Adam Riess, Johns Hopkins University
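
For readers wondering how supernova brightness constrains ΩM at all, the standard textbook relation (a sketch, not taken from Riess’s logbook) compares each supernova’s apparent and intrinsic brightness through the distance modulus, with the cosmological parameters entering through the luminosity distance. For a flat universe with matter density ΩM and vacuum-energy density ΩΛ:

    \[
    \mu = m - M = 5\,\log_{10}\!\left(\frac{d_L}{10\ \mathrm{pc}}\right),
    \qquad
    d_L(z) = (1+z)\,\frac{c}{H_0}\int_0^{z}\frac{dz'}{\sqrt{\Omega_M (1+z')^{3} + \Omega_\Lambda}}
    \]

Supernovae that look dimmer than expected at a given redshift imply a larger luminosity distance, which pulls the fit away from a matter-dominated, decelerating universe; forced into a matter-only model, the same data return the nonsensical negative ΩM described in the notes above.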

The discovery, and many others like it both great and small, shows the true power of the scientific process. Scientific results are open to constant refinement, re-evaluation, refutation and re-interpretation. The process leads to inexorable progress towards greater and greater knowledge and understanding, and eventually to truth that most skeptics can embrace. That is, until the next and better theory, and corresponding results, come along.

[div class=attrib]Image courtesy of Symmetry Breaking, Adam Riess.[end-div]

MondayPoem: Water

This week, theDiagonal focuses its energies on that most precious of natural resources — water.

In his short poem “Water”, Ralph Waldo Emerson reminds us of its more fundamental qualities.

Emerson published his first book, Nature, in 1836, in which he outlined his transcendentalist philosophy. As Poetry Foundation elaborates:

His manifesto stated that the world consisted of Spirit (thought, ideas, moral laws, abstract truth, meaning itself ) and Nature (all of material reality, all that atoms comprise); it held that the former, which is timeless, is the absolute cause of the latter, which serves in turn to express Spirit, in a medium of time and space, to the senses. In other words, the objective, physical world—what Emerson called the “Not-Me”—is symbolic and exists for no other purpose than to acquaint human beings with its complement—the subjective, ideational world, identified with the conscious self and referred to in Emersonian counterpoint as the “Me.” Food, water, and air keep us alive, but the ultimate purpose for remaining alive is simply to possess the meanings of things, which by definition involves a translation of the attention from the physical fact to its spiritual value.

By Ralph Waldo Emerson

– Water

The water understands
Civilization well;
It wets my foot, but prettily,
It chills my life, but wittily,
It is not disconcerted,
It is not broken-hearted:
Well used, it decketh joy,
Adorneth, doubleth joy:
Ill used, it will destroy,
In perfect time and measure
With a face of golden pleasure
Elegantly destroy.

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

Greatest Literary Suicides

Hot on the heels of our look at literary deaths, we turn specifically to the greatest suicides in literature. Although subject to personal taste and sensibility, the starter list excerpted below is a fine beginning, and it leaves much to ponder.

[div class=attrib]From Flavorpill:[end-div]

1. Ophelia, Hamlet, William Shakespeare

Hamlet’s jilted lover Ophelia drowns in a stream, surrounded by the flowers she had held in her arms. Though Ophelia’s death can be parsed as an accident, her growing madness and the fact that she was, as Gertrude says, “incapable of her own distress,” suggest otherwise. And as far as we’re concerned, Gertrude’s monologue about Ophelia’s drowning is one of the most beautiful descriptions of death in Shakespeare.

2. Anna Karenina, Anna Karenina, Leo Tolstoy

In an extremely dramatic move only befitting the emotional mess that is Anna Karenina, the heroine throws herself under a train in her despair, mirroring the novel’s early depiction of a railway worker’s death by similar means.

3. Cecilia Lisbon, The Virgin Suicides, Jeffrey Eugenides

Eugenides’ entire novel deserves to be on this list for its dreamy horror of five sisters killing themselves in the 1970s Michigan suburbs. But the death of the youngest, Cecilia, is the most brutal and distressing. Having failed to kill herself by cutting her wrists, she leaves her own party to throw herself from her bedroom window, landing impaled on the steel fence below.

4. Emma Bovary, Madame Bovary, Gustave Flaubert

In life, Emma Bovary wished for romance, for intrigue, to escape the banalities of her provincial life as a doctor’s wife. Hoping to expire gracefully, she eats a bowl of arsenic, but is punished by hours of indelicate and public suffering before she finally dies.

5. Edna Pontellier, The Awakening, Kate Chopin

This is the first suicide that many students experience in literature, and it is a strange and calm one: Edna simply walks into the water. We imagine the reality of drowning yourself would be much messier, but Chopin’s version is a relief, a cool compress against the pains of Edna’s psyche in beautiful, fluttering prose.

Rounding out the top 10, we have:

Lily Bart, The House of Mirth, Edith Wharton
Septimus Warren Smith, Mrs. Dalloway, Virginia Woolf
James O. Incandenza, Infinite Jest, David Foster Wallace
Romeo and Juliet, Romeo and Juliet, William Shakespeare
Inspector Javert, Les Misérables, Victor Hugo

[div class=attrib]Read the entire article here.[end-div]

[div class=attrib]Ophelia by John Everett Millais (1829–1896). Image courtesy of Wikipedia / Creative Commons.[end-div]

How Many People Have Died?

Ever wonder how many people have gone before? This succinct infographic, courtesy of Jon Gosier, takes a good stab at answering the question. First, a few assumptions and explanations:

The numbers in this piece are speculative but are as accurate as modern research allows. It’s widely accepted that prior to 2002 there had been somewhere between 106 and 140 billion homo sapiens born to the world. The graphic below uses the conservative number (106 bn) as the basis for a circle graph. The center dot represents how many people are currently living (red) versus the dead (white). The dashed vertical line shows how much time passed between milestones. The spectral graph immediately below this text illustrates the population ‘benchmarks’ that were used to estimate the population over time. Adding the population numbers gets you to 106 billion. The red sphere is then used to compare against other data.
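
Taking the graphic’s conservative 106 billion at face value, and a world population of roughly 7 billion in 2011 (an approximation, not a number from the graphic), the share represented by that central red dot is a one-line calculation:

    EVER_BORN = 106_000_000_000  # conservative estimate of humans ever born, per the graphic
    ALIVE_NOW = 7_000_000_000    # approximate world population in 2011 (assumed)
    print(f"{ALIVE_NOW / EVER_BORN:.1%} of everyone ever born is alive today")  # about 6.6%

In other words, roughly one person in fifteen ever born is alive right now.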

[div class=attrib]Check out the original here.[end-div]

Greatest Literary Deaths

Tim Lott over at the Guardian Book Blog wonders which are the most dramatic literary deaths — characters rather than novelists. Think Heathcliff in Emily Brontë’s Wuthering Heights.

[div class=attrib]From the Guardian:[end-div]

What makes for a great literary death scene? This is the question I and the other four judges of the 2012 Wellcome Trust book prize for medicine in literature have been pondering in advance of an event at the Cheltenham festival.

I find many famous death scenes more ludicrous than lachrymose. As with Oscar Wilde’s comment on the death of Dickens’s Little Nell, you would have to have a heart of stone not to laugh at the passing of the awful Tullivers in Mill on the Floss, dragged down clutching one another as the river deliciously finishes them off. More consciously designed to wring laughter out of tragedy, the suicide of Ronald Nimkin in Roth’s Portnoy’s Complaint takes some beating, with Nimkin’s magnificent farewell note to his mother: “Mrs Blumenthal called. Please bring your mah-jongg rules to the game tonight.”

To write a genuinely moving death scene is a challenge for any author. The temptation to retreat into cliché is powerful. For me, the best and most affecting death is that of Harry “Rabbit” Angstrom in John Updike’s Rabbit at Rest. I remember my wife reading this to me out loud as I drove along a motorway. We were both in tears, as he says his farewell to his errant son, Nelson, and then runs out of words, and life itself – “enough. Maybe. Enough.”

But death is a matter of personal taste. The other judges were eclectic in their choices. Roger Highfield, editor of New Scientist, admired the scenes in Sebastian Junger’s The Perfect Storm. At the end of the chapter that seals the fate of the six men on board, Junger writes: “The body could be likened to a crew that resorts to increasingly desperate measures to keep their vessel afloat. Eventually the last wire has shorted out, the last bit of decking has settled under the water.” “The details of death by drowning,” Highfield says, “are so rich and dispassionately drawn that they feel chillingly true.”

[div class=attrib]Read the entire article here.[end-div]