All posts by Mike

How Beauty? Why Beauty?

A recent study by Tomohiro Ishizu and Semir Zeki from University College London places the seat of our sense of beauty in the medial orbitofrontal cortex (mOFC). Not very romantic, of course, but it is thoroughly reasonable that this compound emotion would be found in an area of the brain linked with reward and pleasure.

[div class=attrib]The results are described over at Not Exactly Rocket Science / Discover:[end-div]

Tomohiro Ishizu and Semir Zeki from University College London watched the brains of 21 volunteers as they looked at 30 paintings and listened to 30 musical excerpts. All the while, they were lying inside an fMRI scanner, a machine that measures blood flow to different parts of the brain and shows which are most active. The recruits rated each piece as “beautiful”, “indifferent” or “ugly”.

The scans showed that one part of their brains lit up more strongly when they experienced beautiful images or music than when they experienced ugly or indifferent ones – the medial orbitofrontal cortex or mOFC.

Several studies have linked the mOFC to beauty, but this is a sizeable part of the brain with many roles. It’s also involved in our emotions, our feelings of reward and pleasure, and our ability to make decisions. Nonetheless, Ishizu and Zeki found that one specific area, which they call “field A1”, consistently lit up when people experienced beauty.

The images and music were accompanied by changes in other parts of the brain as well, but only the mOFC reacted to beauty in both forms. And the more beautiful the volunteers found their experiences, the more active their mOFCs were. That is not to say that the buzz of neurons in this area produces feelings of beauty; merely that the two go hand-in-hand.

Clearly, this is a great start, and as brain scientists get their hands on ever-improving fMRI technology and other tools, our understanding will only get sharper. However, what remains very much a puzzle is the question: why does our sense of beauty exist?

The researchers go on to explain their results, albeit tentatively:

Our proposal shifts the definition of beauty very much in favour of the perceiving subject and away from the characteristics of the apprehended object. Our definition… is also indifferent to what is art and what is not art. Almost anything can be considered to be art, but only creations whose experience has, as a correlate, activity in mOFC would fall into the classification of beautiful art… A painting by Francis Bacon may be executed in a painterly style and have great artistic merit but may not qualify as beautiful to a subject, because the experience of viewing it does not correlate with activity in his or her mOFC.

In proposing this, the researchers certainly seem to have hit on the underlying “how” of beauty, and the effect appears consistent, though the sample was too small to establish statistical significance. However, the researchers conclude that “A beautiful thing is met with the same neural changes in the brain of a wealthy cultured connoisseur as in the brain of a poor, uneducated novice, as long as both of them find it beautiful.”

But what of the “why” of beauty? Why is the perception of beauty socially and cognitively important, and why did it evolve? After all, as Jonah Lehrer over at Wired asks:

But why does beauty exist? What’s the point of marveling at a Rembrandt self portrait or a Bach fugue? To paraphrase Auden, beauty makes nothing happen. Unlike our more primal indulgences, the pleasure of perceiving beauty doesn’t ensure that we consume calories or procreate. Rather, the only thing beauty guarantees is that we’ll stare for too long at some lovely looking thing. Museums are not exactly adaptive.

The answer to this question has stumped the research community for quite some time, and will undoubtedly continue to do so for some time to come. Several recent cognitive research studies hint at possible answers related to reinforcement for curious and inquisitive behavior, reward for and feedback from anticipation responses, and pattern seeking behavior.

[div class=attrib]More from Jonah Lehrer for Wired:[end-div]

What I like about this speculation is that it begins to explain why the feeling of beauty is useful. The aesthetic emotion might have begun as a cognitive signal telling us to keep on looking, because there is a pattern here that we can figure out. In other words, it’s a sort of a metacognitive hunch, a response to complexity that isn’t incomprehensible. Although we can’t quite decipher this sensation – and it doesn’t matter if the sensation is a painting or a symphony – the beauty keeps us from looking away, tickling those dopaminergic neurons and dorsal hairs. Like curiosity, beauty is a motivational force, an emotional reaction not to the perfect or the complete, but to the imperfect and incomplete. We know just enough to know that we want to know more; there is something here, we just don’t know what. That’s why we call it beautiful.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Claude Monet, Water-Lily Pond and Weeping Willow. Image courtesy of Wikipedia / Creative Commons.[end-div]

[div class=attrib]First page of the manuscript of Bach’s lute suite in G Minor. Image courtesy of Wikipedia / Creative Commons.[end-div]

MondayPoem: The Brain — is wider than the Sky

Ushering in this week’s focus on the brain and the cognitive sciences is an Emily Dickinson poem.

Born in Amherst, Massachusetts in 1830, Emily Dickinson is often characterized as having led a very private and eccentric life. While few of her poems were published during her lifetime, Emily Dickinson is now regarded as a major American poet for her innovative, pre-modernist poetry.
[div class=attrib]By Emily Dickinson:[end-div]

The Brain is wider than the Sky

The Brain — is wider than the Sky —
For — put them side by side —
The one the other will contain
With ease — and You — beside —

The Brain is deeper than the sea —
For — hold them — Blue to Blue —
The one the other will absorb —
As Sponges — Buckets — do —

The Brain is just the weight of God —
For — Heft them — Pound for Pound —
And they will differ — if they do —
As Syllable from Sound —

More on Emily Dickinson from the Poetry Foundation.

[div class=attrib]Image courtesy of Todd-Bingham Picture Collection and Family Papers, Yale University Manuscripts & Archives Digital Images Database, Yale University.[end-div]

Tim Berners-Lee’s “Baby” Hits 20 – Happy Birthday World Wide Web

In early 1990 at CERN headquarters in Geneva, Switzerland, Tim Berners-Lee and Robert Cailliau published a formal proposal to build a “Hypertext project” called “WorldWideWeb” as a “web” of “hypertext documents” to be viewed by “browsers”.

Following development work, the pair introduced the proposal to a wider audience in December, and on August 6, 1991, 20 years ago, the World Wide Web officially opened for business on the internet. On that day Berners-Lee posted the first web page — a short summary of the World Wide Web project — on the alt.hypertext newsgroup.

The page authored by Tim Berners-Lee was http://info.cern.ch/hypertext/WWW/TheProject.html. A later version of the page can be found here. The page presented Berners-Lee’s summary of a project for organizing information on a computer network using a web of links. In fact, the effort was originally coined “Mesh”, but later became the “World Wide Web”.

The first photograph on the web was uploaded by Berners-Lee in 1992, an image of the CERN house band Les Horribles Cernettes. Twenty years on, one website alone — Flickr – hosts around 5.75 billion images.

[div class=attrib]Photograph of Les Horribles Cernettes, the very first photo to be published on the world wide web in 1992. Image courtesy of Cernettes / Silvano de Gennaro. Granted under fair use.[end-div]

If Televisions Could See Us

A fascinating and disturbing series of still photographs from Andris Feldmanis shows us what the television “sees” as its viewers stare, seemingly mindlessly, at the box. As Feldmanis describes it,

An average person in Estonia spends several hours a day watching the television. This is the situation reversed, the people portrayed here are posing for their television sets. It is not a critique of mass media and its influence, it is a fictional document of what the TV sees.

Makes one wonder what the viewers were watching. Or does it even matter? More of the series courtesy of Art Fag City, here. All the images show the one-sidedness of the human-television relationship.

[div class=attrib]Image courtesy of Andris Feldmanis.[end-div]

Dawn Over Vesta

More precisely, NASA’s Dawn spacecraft entered orbit around the asteroid Vesta on July 15, 2011. Vesta is the second largest of our solar system’s asteroids and is located in the asteroid belt between Mars and Jupiter.

Now that Dawn is safely in orbit, the spacecraft will circle about 10,000 miles above Vesta’s surface for a year and use two different cameras, a gamma-ray detector and a neutron detector, to study the asteroid.

Then in July 2012, Dawn will depart for a visit to Vesta’s close neighbor and largest object in the asteroid belt, Ceres.

The image of Vesta above was taken from a distance of about 9,500 miles (15,000 kilometers) away.

[div class=attrib]Image courtesy of NASA/JPL-Caltech/UCLA/MPS/DLR/IDA.[end-div]

Seven Sisters Star Cluster

The Seven Sisters star cluster, also known as the Pleiades, consists of many young, bright, hot stars. While the cluster contains hundreds of stars, it is so named because only seven are typically visible to the naked eye. The Seven Sisters is visible from the northern hemisphere and resides in the constellation Taurus.

[div class=attrib]Image and supporting text courtesy of Davide De Martin over at Skyfactory.[end-div]

This image is a composite from black and white images taken with the Palomar Observatory’s 48-inch (1.2-meter) Samuel Oschin Telescope as a part of the second National Geographic Palomar Observatory Sky Survey (POSS II). The images were recorded on two types of glass photographic plates – one sensitive to red light and the other to blue – and were later digitized. Credit: Caltech, Palomar Observatory, Digitized Sky Survey.

In order to produce the color image seen here, I worked with data coming from 2 different photographic plates taken in 1986 and 1989. The original file is 10,252 x 9,735 pixels with a resolution of about 1 arcsec per pixel. The image shows an area of sky 2.7° x 2.7° (for comparison, the full Moon is about 0.5° in diameter).
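As a quick sanity check on that comparison: treating the full Moon as a circle 0.5° across, the quoted 2.7° x 2.7° field covers roughly 37 Moon-sized disks. A few lines of Python make the arithmetic explicit:

```python
import math

FIELD_DEG = 2.7          # the image spans 2.7 x 2.7 degrees of sky
MOON_DIAMETER_DEG = 0.5  # apparent diameter of the full Moon

# Areas in square degrees: a square field versus a circular Moon disk.
field_area = FIELD_DEG ** 2
moon_area = math.pi * (MOON_DIAMETER_DEG / 2) ** 2

print(round(field_area / moon_area))  # ≈ 37 full-Moon disks
```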

[div class=attrib]More from theSource here.[end-div]

MondayPoem: Starlight

Monday’s poem, authored by William Meredith, was selected because it is in keeping with our cosmology theme this week.

William Meredith was born in New York City in 1919. He studied English at Princeton University where he graduated Magna Cum Laude. His senior thesis focused on the poetry of Robert Frost, a major influence for Meredith throughout his career.
[div class=attrib]By William Meredith, courtesy of Poets.org:[end-div]

Going abruptly into a starry night
It is ignorance we blink from, dark, unhoused;
There is a gaze of animal delight
Before the human vision. Then, aroused
To nebulous danger, we may look for easy stars,
Orion and the Dipper; but they are not ours,

These learned fields. Dark and ignorant,
Unable to see here what our forebears saw,
We keep some fear of random firmament
Vestigial in us. And we think, Ah,
If I had lived then, when these stories were made up, I
Could have found more likely pictures in haphazard sky.

But this is not so. Indeed, we have proved fools
When it comes to myths and images. A few
Old bestiaries, pantheons and tools
Translated to the heavens years ago—
Scales and hunter, goat and horologe—are all
That save us when, time and again, our systems fall.

And what would we do, given a fresh sky
And our dearth of image? Our fears, our few beliefs
Do not have shapes. They are like that astral way
We have called milky, vague stars and star-reefs
That were shapeless even to the fecund eye of myth—
Surely these are no forms to start a zodiac with.

To keep the sky free of luxurious shapes
Is an occupation for most of us, the mind
Free of luxurious thoughts. If we choose to escape,
What venial constellations will unwind
Around a point of light, and then cannot be found
Another night or by another man or from other ground.

As for me, I would find faces there,
Or perhaps one face I have long taken for guide;
Far-fetched, maybe, like Cygnus, but as fair,
And a constellation anyone could read
Once it was pointed out; an enlightenment of night,
The way the pronoun you will turn dark verses bright.

And You Thought Being Direct and Precise Was Good

A new psychological study upends our understanding of the benefits of direct and precise information as a motivational tool. Results from the study by Himanshu Mishra and Baba Shiv describe the cognitive benefits of vague and inarticulate feedback over precise information. At first glance this seems counter-intuitive. After all, fuzzy math, blurred reasoning and unclear directives would seem to be the banes of current societal norms that value data in as precise a form as possible. We measure, calibrate, verify, re-measure and report information to the nth degree.

[div class=attrib]Stanford Business:[end-div]

Want to lose weight in 2011? You’ve got a better chance of pulling it off if you tell yourself, “I’d like to slim down and maybe lose somewhere between 5 and 15 pounds this year” instead of, “I’d like to lose 12 pounds by July 4.”

In a paper to be published in an upcoming issue of the journal Psychological Science, business school Professor Baba Shiv concludes that people are more likely to stay motivated and achieve a goal if it’s sketched out in vague terms than if it’s set in stone as a rigid or precise plan.

“For one to be successful, one needs to be motivated,” says Shiv, the Stanford Graduate School of Business Sanwa Bank, Limited, Professor of Marketing. He is coauthor of the paper “In Praise of Vagueness: Malleability of Vague Information as a Performance Booster” with Himanshu Mishra and Arul Mishra, both of the University of Utah. Presenting information in a vague way — for instance using numerical ranges or qualitative descriptions — “allows you to sample from the information that’s in your favor,” says Shiv, whose research includes studying people’s responses to incentives. “You’re sampling and can pick the part you want,” the part that seems achievable or encourages you to keep your expectations upbeat to stay on track, says Shiv.

By comparison, information presented in a more-precise form doesn’t let you view it in a rosy light and so can be discouraging. For instance, Shiv says, a coach could try to motivate a sprinter by reviewing all her past times, recorded down to the thousandths of a second. That would remind her of her good times but also the poor ones, potentially de-motivating her. Or, the coach could give the athlete less-precise but still-accurate qualitative information. “Good coaches get people not to focus on the times but on a dimension that is malleable,” says Shiv. “They’ll say, ‘You’re mentally tough.’ You can’t measure that.” The runner can then zero in on her mental strength to help her concentrate on her best past performances, boosting her motivation and ultimately improving her times. “She’s cherry-picking her memories, and that’s okay, because that’s allowing her to get motivated,” says Shiv.

Of course, Shiv isn’t saying there’s no place for precise information. A pilot needs exact data to monitor a plane’s location, direction, and fuel levels, for instance. But information meant to motivate is different, and people seeking motivation need the chance to focus on just the positive. When it comes to motivation, Shiv said, “negative information outweighs positive. If I give you five pieces of negative information and five pieces of positive information, the brain weighs the negative far more than the positive … It’s a survival mechanism. The brain weighs the negative to keep us secure.”

[div class=attrib]More from theSource here.[end-div]


Morality 1: Good without gods

[div class=attrib]From QualiaSoup:[end-div]

Some people claim that morality is dependent upon religion, that atheists cannot possibly be moral since god and morality are intertwined (well, in their minds). Unfortunately, this is one way that religious people dehumanise atheists, who have a logical way of thinking about what constitutes moral social behaviour. More than simply being an (incorrect) definition in the Oxford dictionary, morality is actually the main subject of many philosophers’ intellectual lives. This video, the first of a multi-part series, begins the discussion by defining morality and then moving on to look at six hypothetical cultures and their beliefs.

[tube]T7xt5LtgsxQ[/tube]

Favela Futurism, Very Chic

[div class=attrib]From BigThink:[end-div]

The future of global innovation is the Brazilian favela, the Mumbai slum and the Nairobi shanty-town. At a time when countries across the world, from Latin America to Africa to Asia, are producing new mega-slums on an epic scale, when emerging mega-cities in China are pushing the limits of urban infrastructure by adding millions of new inhabitants each year, it is becoming increasingly likely that the lowly favela, slum or ghetto may hold the key to the future of human development.

Back in 2009, futurist and science fiction writer Bruce Sterling first introduced Favela Chic as a way of thinking about our modern world. What is favela chic? It’s what happens “when you’ve lost everything materially… but are wired to the gills and are big on Facebook.” Favela chic doesn’t have to be exclusively an emerging market notion, either. As Sterling has noted, it can be a hastily thrown-together high-rise in downtown Miami, covered over with weeds, without any indoor plumbing, filled with squatters.

Flash forward to the end of 2010, when the World Future Society named favela innovation one of the Top 10 trends to watch in 2011: “Dwellers of slums, favelas, and ghettos have learned to use and reuse resources and commodities more efficiently than their wealthier counterparts. The neighborhoods are high-density and walkable, mixing commercial and residential areas rather than segregating these functions. In many of these informal cities, participants play a role in communal commercial endeavors such as growing food or raising livestock.”

What’s fascinating is that the online digital communities we are busy creating in “developed” nations more closely resemble favelas than they do carefully planned urban cities. They are messy, emergent and always in beta. With few exceptions, there are no civil rights and no effective ways to organize. When asked how to define favela chic at this year’s SXSW event in Austin, Sterling referred to Facebook as the poster child of a digital favela. It’s thrown-up, in permanent beta, and easily disposed of quickly. Apps and social games are the corrugated steel of our digital shanty-towns.

[div class=attrib]More from theSource here.[end-div]

Just Another Week at Fermilab

Another day, another particle, courtesy of scientists at Fermilab. The CDF group, working with data from Fermilab’s Tevatron particle collider, announced the finding of a new, neutron-like particle last week. The particle, known as the neutral Xi-sub-b, is a heavy relative of the neutron and is made up of a strange quark, an up quark and a bottom quark, hence the “s-u-b” moniker.

[div class=attrib]Here’s more from Symmetry Breaking:[end-div]

While its existence was predicted by the Standard Model, the observation of the neutral Xi-sub-b is significant because it strengthens our understanding of how quarks form matter. Fermilab physicist Pat Lukens, a member of the CDF collaboration, presented the discovery at Fermilab on Wednesday, July 20.

The neutral Xi-sub-b is the latest entry in the periodic table of baryons. Baryons are particles formed of three quarks, the most common examples being the proton (two up quarks and a down quark) and the neutron (two down quarks and an up quark). The neutral Xi-sub-b belongs to the family of bottom baryons, which are about six times heavier than the proton and neutron because they all contain a heavy bottom quark. The particles are produced only in high-energy collisions, and are rare and very difficult to observe.
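The quark arithmetic here is easy to verify: with charges of +2/3 for the up quark and −1/3 for the down, strange, and bottom quarks, the neutral Xi-sub-b’s strange-up-bottom content sums to zero, just as the name promises. A minimal Python sketch (the helper function is our own, but the quark charges are the standard ones):

```python
from fractions import Fraction

# Electric charge of each quark flavor, in units of the elementary charge e.
QUARK_CHARGE = {
    "up": Fraction(2, 3),
    "down": Fraction(-1, 3),
    "strange": Fraction(-1, 3),
    "bottom": Fraction(-1, 3),
}

def baryon_charge(quarks):
    """Total electric charge of a three-quark baryon, in units of e."""
    return sum(QUARK_CHARGE[q] for q in quarks)

print(baryon_charge(["up", "up", "down"]))         # proton: 1
print(baryon_charge(["down", "down", "up"]))       # neutron: 0
print(baryon_charge(["strange", "up", "bottom"]))  # neutral Xi-sub-b: 0
```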

Although Fermilab’s Tevatron particle collider is not a dedicated bottom quark factory, sophisticated particle detectors and trillions of proton-antiproton collisions have made it a haven for discovering and studying almost all of the known bottom baryons. Experiments at the Tevatron discovered the Sigma-sub-b baryons (Σb and Σb*) in 2006, observed the Xi-b-minus baryon (Ξb−) in 2007, and found the Omega-sub-b (Ωb−) in 2009.

[div class=attrib]Image courtesy of Fermilab/CDF Collaboration.[end-div]

Bad reasoning about reasoning

[div class=attrib]By Massimo Pigliucci at Rationally Speaking:[end-div]

A recent paper on the evolutionary psychology of reasoning has made mainstream news, with extensive coverage by the New York Times, among others. Too bad the “research” is badly flawed, and the lesson drawn by Patricia Cohen’s commentary in the Times is precisely the wrong one.

Readers of this blog and listeners to our podcast know very well that I tend to be pretty skeptical of evolutionary psychology in general. The reason isn’t because there is anything inherently wrong about thinking that (some) human behavioral traits evolved in response to natural selection. That’s just an uncontroversial consequence of standard evolutionary theory. The devil, rather, is in the details: it is next to impossible to test specific evopsych hypotheses because the crucial data are often missing. The fossil record hardly helps (if we are talking about behavior), there are precious few closely related species for comparison (and they are not at all that closely related), and the current ecological-social environment is very different from the “ERE,” the Evolutionarily Relevant Environment (which means that measuring selection on a given trait in today’s humans is pretty much irrelevant).

That said, I was curious about Hugo Mercier and Dan Sperber’s paper, “Why do humans reason? Arguments for an argumentative theory,” published in Behavioral and Brain Sciences (volume 34, pp. 57-111, 2011), which is accompanied by an extensive peer commentary. My curiosity was piqued in particular because of the Times’ headline from the June 14 article: “Reason Seen More as Weapon Than Path to Truth.” Oh crap, I thought.

Mercier and Sperber’s basic argument is that reason did not evolve to allow us to seek truth, but rather to win arguments with our fellow human beings. We are natural lawyers, not natural philosophers. This, according to them, explains why people are so bad at reasoning, for instance why we tend to fall for basic mistakes such as the well known confirmation bias — a tendency to seek evidence in favor of one’s position and discount contrary evidence that is well on display in politics and pseudoscience. (One could immediately raise the obvious “so what?” objection to all of this: language possibly evolved to coordinate hunting and gossip about your neighbor. That doesn’t mean we can’t take writing and speaking courses and dramatically improve on our given endowment, natural selection be damned.)

The first substantive thing to notice about the paper is that there isn’t a single new datum to back up the central hypothesis. It is one (long) argument in which the authors review well known cognitive science literature and simply apply evopsych speculation to it. If that’s the way to get into the New York Times, I better increase my speculation quotient.

[div class=attrib]More from theSource here.[end-div]

Rechargeable Nanotube-Based Solar Energy Storage

[div class=attrib]From Ars Technica:[end-div]

Since the 1970s, chemists have worked on storing solar energy in molecules that change state in response to light. These photoactive molecules could be the ideal solar fuel, as the right material should be transportable, affordable, and rechargeable. Unfortunately, scientists haven’t had much success.

One of the best examples in recent years, tetracarbonyl-diruthenium fulvalene, requires the use of ruthenium, which is rare and expensive. Furthermore, the ruthenium compound has a volumetric energy density (watt-hours per liter) that is several times smaller than that of a standard lithium-ion battery.

Alexie Kolpak and Jeffrey Grossman from the Massachusetts Institute of Technology propose a new type of solar thermal fuel that would be affordable, rechargeable, thermally stable, and more energy-dense than lithium-ion batteries. Their proposed design combines an organic photoactive molecule, azobenzene, with the ever-popular carbon nanotube.

Before we get into the details of their proposal, we’ll quickly go over how photoactive molecules store solar energy. When a photoactive molecule absorbs sunlight, it undergoes a conformational change, moving from the ground energy state into a higher energy state. The higher energy state is metastable (stable for the moment, but highly susceptible to energy loss), so a trigger—voltage, heat, light, etc.—will cause the molecule to fall back to the ground state. The energy difference between the higher energy state and the ground state (termed ΔH) is then discharged. A useful photoactive molecule will be able to go through numerous cycles of charging and discharging.

The challenge in making a solar thermal fuel is finding a material that will have both a large ΔH and a large activation energy. The two factors are not always compatible. To have a large ΔH, you want a big energy difference between the ground and higher energy state. But you don’t want the higher energy state to be too energetic, as it would be unstable. Instability means that the fuel will have a small activation energy and be prone to discharging its stored energy too easily.

Kolpak and Grossman managed to find the right balance between ΔH and activation energy when they examined computational models of azobenzene (azo) bound to carbon nanotubes (CNT) in azo/CNT nanostructures.
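The arithmetic linking a per-molecule stored energy (ΔH) to a volumetric energy density is simple enough to sketch: multiply the energy stored per molecule by how densely the molecules pack. The numbers below are illustrative placeholders, not values from the Kolpak and Grossman paper:

```python
# Convert a per-molecule storage energy (in eV) and a packing density
# (molecules per liter) into a volumetric energy density in Wh/L.
EV_TO_J = 1.602e-19     # joules per electron-volt
J_TO_WH = 1.0 / 3600.0  # watt-hours per joule

def volumetric_energy_density(delta_h_ev, molecules_per_liter):
    """Energy density of a solar thermal fuel, in Wh/L."""
    joules_per_liter = delta_h_ev * EV_TO_J * molecules_per_liter
    return joules_per_liter * J_TO_WH

# Placeholder inputs: ~1 eV stored per photoactive unit, packed at
# ~1e24 units per liter.
print(round(volumetric_energy_density(1.0, 1e24), 1))  # 44.5 Wh/L
```

Raising either factor raises the total, which is why anchoring a photoactive molecule like azobenzene to a dense nanotube scaffold is an attractive route to competitive energy densities.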

[div class=attrib]More from theSource here.[end-div]

Postcards from the Atomic Age

Remember the lowly tourist postcard? Undoubtedly, you will have sent one or two “Wish you were here!” missives to your parents or work colleagues while vacationing in the Caribbean or hiking in Austria. Or, you may still have some in a desk drawer: those you never mailed because you had neither the time nor the local currency to purchase a stamp. If not, someone in your extended family surely has a collection of old postcards with strangely saturated and slightly off-kilter colors, chronicling family travels to interesting and not-so-interesting places.

Then, there are postcards of a different kind, sent from places that wouldn’t normally spring to mind as departure points for a quick and trivial dispatch. Tom Vanderbilt over at Slate introduces us to a new book, Atomic Postcards:

“Having a great time,” reads the archetypical postcard. “Wish you were here.” But what about when the “here” is the blasted, irradiated wastes of Frenchman’s Flat, in the Nevada desert? Or the site of America’s worst nuclear disaster? John O’Brian and Jeremy Borsos’ new book, Atomic Postcards, fuses the almost inherently banal form of the canned tourist dispatch with the incipient peril, and nervously giddy promise, of the nuclear age. Collected within are two-sided curios spanning the vast range of the military-industrial complex—”radioactive messages from the Cold War,” as the book promises. They depict everything from haunting afterimages of atomic incineration on the Nagasaki streets to achingly prosaic sales materials from atomic suppliers to a gauzy homage to the “first atomic research reactor in Israel,” a concrete monolith jutting from the sand, looking at once futuristic and ancient. Taken as a whole, the postcards form a kind of de facto and largely cheery dissemination campaign for the wonder of atomic power (and weapons). And who’s to mind if that sunny tropical beach is flecked with radionuclides?

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image: Marshall Islands, 1955. Image courtesy of Atomic Postcards.[end-div]

“Spectacular nuclear explosion” reads a caption on the back (or “verso,” as postcard geeks would say) of this card—released by “Ray Helberg’s Pacific Service”—of a test in the Marshall Islands. The disembodied cloud—a ferocious funnel of water thrust upward, spreading into a toroid of vapor—recalls a Dutch sea painting with something new and alien in its center. “Quite a site [sic] to watch,” reads a laconic comment on the back. Outside the frame of the stylized blast cloud are its consequences. As Nathan Hodge and Sharon Weinberger write in A Nuclear Family Vacation, “[F]or the people of the Marshall Islands, the consequences of atomic testing in the Pacific were extraordinary. Traditional communities were displaced by the tests; prolonged exposure to radiation created a legacy of illness and disease.”

Is Anyone There?

[div class=attrib]From the New York Times:[end-div]

“WHEN people don’t answer my e-mails, I always think maybe something tragic happened,” said John Leguizamo, the writer and performer, whose first marriage ended when his wife asked him by e-mail for a divorce. “Like maybe they got hit by a meteorite.”

Betsy Rapoport, an editor and life coach, said: “I don’t believe I have ever received an answer from any e-mail I’ve ever sent my children, now 21 and 18. Unless you count ‘idk’ as a response.”

The British linguist David Crystal said that his wife recently got a reply to an e-mail she sent in 2006. “It was like getting a postcard from the Second World War,” he said.

The roaring silence. The pause that does not refresh. The world is full of examples of how the anonymity and remove of the Internet cause us to write and post things that we later regret. But what of the way that anonymity and remove sometimes leave us dangling like a cartoon character that has run off a cliff?

For every fiery screed or gushy, tear-streaked confession in the ethersphere, it seems there’s a big patch of grainy, unresolved black. Though it would comfort us to think that these long silences are the product of technical failure or mishap, the more likely culprits are lack of courtesy and passive aggression.

“The Internet is something very informal that happened to a society that was already very informal,” said P. M. Forni, an etiquette expert and the author of “Choosing Civility.” “We can get away with murder, so to speak. The endless amount of people we can contact means we are not as cautious or kind as we might be. Consciously or unconsciously we think of our interlocutors as disposable or replaceable.”

Judith Kallos, who runs a site on Internet etiquette called netmanners.com, said the No. 1 complaint is that “people feel they’re being ignored.”

[div class=attrib]More from theSource here.[end-div]

The five top regrets of dying people

Social scientists may have already examined the cross-cultural regrets of those nearing the end of life. If not, it would make fascinating reading to explore the differences and similarities. However, despite the many traits and beliefs that divide humanity, it’s likely that many of these regrets are common across cultures.

[div class=attrib]By Massimo Pigliucci at Rationally Speaking:[end-div]

Bronnie Ware is the author (a bit too much on the mystical-touchy-feely side for my taste) of the blog “Inspiration and Chai” (QED). But she has also worked for years in palliative care, thereby having the life-altering experience of sharing people’s last few weeks and listening to what they regretted most about lives now about to end. The result is this list of the “top five” things people wished they had done differently:

1. I wish I’d had the courage to live a life true to myself, not the life others expected of me.
2. I wish I didn’t work so hard.
3. I wish I’d had the courage to express my feelings.
4. I wish I had stayed in touch with my friends.
5. I wish that I had let myself be happier.

This is, of course, anecdotal evidence from a single source, and as such it needs to be taken with a rather large grain of salt. But it is hard to read the list and not begin reflecting on your own life — even if you are (hopefully!) very far from the end.

Ware’s list, of course, is precisely why Socrates famously said that “the unexamined life is not worth living” (in Apology 38a, Plato’s rendition of Socrates’ speech at his trial), and why Aristotle considered the quest for eudaimonia (flourishing) a life-long commitment the success of which can be assessed only at the very end.

Let’s then briefly consider the list and see what we can learn from it. Beginning with the first entry, I’m not sure what it means for someone to be true to oneself, but I take it that the notion attempts to get at the fact that too many of us cave to societal forces early on and do not actually follow our aspirations. The practicalities of life have a way of imposing themselves on us, beginning with parental pressure to enter a remunerative career path and continuing with the fact that no matter what your vocation is you still have to somehow pay the bills and put dinner on the table every evening. And yet, you wouldn’t believe the number of people I’ve met in recent years who — about midway through their expected lifespan — suddenly decided that what they had been doing with their lives during the previous couple of decades was somewhat empty and needed to change. Almost without exception, these friends in their late ‘30s or early ‘40s contemplated — and many actually followed through — going back to (graduate) school and preparing for a new career in areas that they felt augmented the meaningfulness of their lives (often, but not always, that meant teaching). One could argue that such self-examination should have occurred much earlier, but we are often badly equipped, in terms of both education and life experience, to ask ourselves that sort of question when we are entering college. Better midway than at the end, though…

[div class=attrib]More from theSource here.[end-div]

Atomic Poems: Oppenheimer, Ginsberg and Linkin Park

Sixty-six years ago, on July 16, 1945, the world witnessed the first atomic bomb test. The bomb lit up the sky and scorched the earth at the White Sands Proving Ground in the Jornada del Muerto desert in New Mexico. The test of the implosion-design plutonium device was codenamed Trinity, part of the Manhattan Project.

The lead physicist was J. Robert Oppenheimer. He named the atomic test “Trinity” in a conflicted homage to John Donne’s poem, “Holy Sonnet XIV: Batter My Heart, Three-Personed God”:

[div class=attrib]By John Donne:[end-div]

Batter my heart, three-personed God; for you
As yet but knock, breathe, shine, and seek to mend;
That I may rise and stand, o’erthrow me, and bend
Your force to break, blow, burn, and make me new.
I, like an usurped town, to another due,
Labor to admit you, but O, to no end;
Reason, your viceroy in me, me should defend,
But is captived, and proves weak or untrue.
Yet dearly I love you, and would be loved fain,
But am betrothed unto your enemy.
Divorce me, untie or break that knot again;
Take me to you, imprison me, for I,
Except you enthrall me, never shall be free,
Nor even chaste, except you ravish me.

Thirty-three years after the Trinity test, on July 16, 1978, poet Allen Ginsberg published his nuclear protest poem “Plutonian Ode”, excerpted here:

. . .

Radioactive Nemesis were you there at the beginning 
        black dumb tongueless unsmelling blast of Disil-
        lusion?
I manifest your Baptismal Word after four billion years
I guess your birthday in Earthling Night, I salute your
        dreadful presence last majestic as the Gods,
Sabaot, Jehova, Astapheus, Adonaeus, Elohim, Iao, 
        Ialdabaoth, Aeon from Aeon born ignorant in an
        Abyss of Light,
Sophia's reflections glittering thoughtful galaxies, whirl-
        pools of starspume silver-thin as hairs of Einstein!
Father Whitman I celebrate a matter that renders Self
        oblivion!
Grand Subject that annihilates inky hands & pages'
        prayers, old orators' inspired Immortalities,
I begin your chant, openmouthed exhaling into spacious
        sky over silent mills at Hanford, Savannah River,
        Rocky Flats, Pantex, Burlington, Albuquerque
I yell thru Washington, South Carolina, Colorado, 
        Texas, Iowa, New Mexico,
Where nuclear reactors create a new Thing under the
        Sun, where Rockwell war-plants fabricate this death
        stuff trigger in nitrogen baths,
Hanger-Silas Mason assembles the terrified weapon
        secret by ten thousands, & where Manzano Moun-
        tain boasts to store
its dreadful decay through two hundred forty millennia
        while our Galaxy spirals around its nebulous core.
I enter your secret places with my mind, I speak with 
        your presence, I roar your Lion Roar with mortal
        mouth.
One microgram inspired to one lung, ten pounds of 
        heavy metal dust adrift slow motion over grey
        Alps
the breadth of the planet, how long before your radiance
        speeds blight and death to sentient beings?
Enter my body or not I carol my spirit inside you,
        Unapproachable Weight,
O heavy heavy Element awakened I vocalize your con-
        sciousness to six worlds
I chant your absolute Vanity.  Yeah monster of Anger
        birthed in fear O most
Ignorant matter ever created unnatural to Earth! Delusion
        of metal empires!
Destroyer of lying Scientists! Devourer of covetous
        Generals, Incinerator of Armies & Melter of Wars!
Judgement of judgements, Divine Wind over vengeful 
        nations, Molester of Presidents, Death-Scandal of
        Capital politics! Ah civilizations stupidly indus-
        trious!
Canker-Hex on multitudes learned or illiterate! Manu-
        factured Spectre of human reason! O solidified
        imago of practitioner in Black Arts
I dare your reality, I challenge your very being! I 
        publish your cause and effect!
I turn the wheel of Mind on your three hundred tons!
        Your name enters mankind's ear! I embody your
        ultimate powers!
My oratory advances on your vaunted Mystery! This 
        breath dispels your braggart fears! I sing your 
        form at last
behind your concrete & iron walls inside your fortress
        of rubber & translucent silicon shields in filtered
        cabinets and baths of lathe oil,
My voice resounds through robot glove boxes & ingot
        cans and echoes in electric vaults inert of atmo-
        sphere,
I enter with spirit out loud into your fuel rod drums
        underground on soundless thrones and beds of
        lead
O density! This weightless anthem trumpets transcendent 
        through hidden chambers and breaks through 
        iron doors into the Infernal Room!
Over your dreadful vibration this measured harmony        
        floats audible, these jubilant tones are honey and 
        milk and wine-sweet water
Poured on the stone black floor, these syllables are
        barley groats I scatter on the Reactor's core, 
I call your name with hollow vowels, I psalm your Fate
        close by, my breath near deathless ever at your
        side
to Spell your destiny, I set this verse prophetic on your
        mausoleum walls to seal you up Eternally with
        Diamond Truth!  O doomed Plutonium.

. . .

As noted in the Barnes and Noble Review:

Biographies of Oppenheimer portray him as a complex, contradicted man, and something of a poet himself. His love of poetry became well known when, in a 1965 interview, he famously claimed that his first reaction to the bomb test was a recollection of a line from the Bhagavad-Gita: “Now I am become death, the destroyer of worlds.”

Linkin Park’s 2010 concept album “A Thousand Suns” captures Oppenheimer himself reciting these lines from the Bhagavad-Gita as he recalled the Trinity atomic bomb test. He speaks on track 2, “The Radiance”.

[div class=attrib]Image courtesy of Wikipedia / Creative Commons.[end-div]

MondayPoem: August 6th

In keeping with our atoms and all things atomic theme this week, Monday’s poem is by Sankichi Toge, a Japanese poet and peace activist.

Twenty-four-year-old Sankichi Toge was in Hiroshima when the atomic bomb was dropped on his city. He began writing poems as a teenager; his first collection of poetry, “Genbaku shishu” (“Poems of the Atomic Bomb”), was published in 1951. He died at the age of 36 in Hiroshima.

His poem “August 6th” is named for the day in August 1945 on which the atomic bomb was dropped on Hiroshima.

August 6th

How could I ever forget that flash of light!
In a moment thirty thousand people ceased to be
The cries of fifty thousand killed
Through yellow smoke whirling into light
Buildings split, bridges collapsed
Crowded trams burnt as they rolled about
Hiroshima, all full of boundless heaps of embers
Soon after, skin dangling like rags
With hands on breasts
Treading upon the spilt brains
Wearing shreds of burnt cloth round their loins
There came numberless lines of the naked
all crying
Bodies on the parade ground, scattered like
jumbled stone images
Crowds in piles by the river banks
loaded upon rafts fastened to shore
Turned by and by into corpses
under the scorching sun
in the midst of flame
tossing against the evening sky
Round about the street where mother and
brother were trapped alive under the fallen house
The fire-flood shifted on
On beds of filth along the Armory floor
Heaps, God knew who they were….
Heaps of schoolgirls lying in refuse
Pot-bellied, one-eyed
with half their skin peeled off, bald
The sun shone, and nothing moved
but the buzzing flies in the metal basins
Reeking with stagnant odor
How can I forget that stillness
Prevailing over the city of three hundred thousand?
Amidst that calm
How can I forget the entreaties
Of the departed wife and child
Through their orbs of eyes
Cutting through our minds and souls?

Famous for the wrong book

[div class=attrib]From the Guardian:[end-div]

Why is it that the book for which an author is best known is rarely their best? If history is the final judge of literary achievement, why has a title like Louis de Bernières’ Captain Corelli’s Mandolin risen to the top, overshadowing his much better earlier novels such as Señor Vivo and the Coca Lord? It’s not, I hope, the simple snobbery of insisting that the most popular can’t be the finest. (After all, who would dispute that Middlemarch is George Eliot’s peak? … You would? Great, there’s a space for you in the comments below.)

If someone reads Kurt Vonnegut’s most famous book, Slaughterhouse-Five, and doesn’t like it, I’ll want to shout to them, “But it’s rubbish! Cat’s Cradle is much better! That’s the one you want to read!” It’s not just me, I’m sure. Geoff Dyer takes the view that it is John Cheever’s journals, not his stories, which represent his “greatest achievement, his principal claim to literary survival”. Gabriel Josipovici says that it is not Kafka’s The Trial or “Metamorphosis” – not any of his novels or stories – which “form [his] most sustained meditation on life and death, good and evil, and the role of art”, but his aphorisms.

So here I am going to list a few instances of a writer being famous for the wrong book, and my suggestions for where their greatest achievement really lies. Below, you can make your own suggestions (someone, please tell me I’ve just been reading the wrong Peter Carey or Emily Brontë), or let me know just how misguided I am.

Joseph Heller
Catch-22 is too long, messy and takes 100 pages to get going. Heller’s second novel, Something Happened, took even longer to write and justified the time. From its opening line (“I get the willies when I see closed doors”), it is a supremely controlled and meticulous masterpiece, grounded in the horror of daily living. The first time I read it I was overwhelmed. The second time I thought it was hilarious. The third time – getting closer to the age of the horribly honest narrator Bob Slocum – it was terrifying. It’s the book that keeps on giving.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image of Joseph Heller courtesy of Todd Plitt/AP.[end-div]

Learning to learn

[div class=attrib]By George Blecher for Eurozine:[end-div]

Before I learned how to learn, I was full of bullshit. I exaggerate. But like any bright student, I spent a lot of time faking it, pretending to know things about which I had only vague generalizations and a fund of catch-words. Why do bright students need to fake it? I guess because if they’re considered “bright”, they’re caught in a tautology: bright students are supposed to know, so if they risk not knowing, they must not be bright.

In any case, I faked it. I faked it so well that even my teachers were afraid to contradict me. I faked it so well that I convinced myself that I wasn’t faking it. In the darkest corners of the bright student’s mind, the borders between real and fake knowledge are blurred, and he puts so much effort into faking it that he may not even recognize when he actually knows something.

Above all, he dreads that his bluff will be called – that an honest soul will respect him enough to pick apart his faulty reasoning and superficial grasp of a subject, and expose him for the fraud he believes himself to be. So he lives in a state of constant fear: fear of being exposed, fear of not knowing, fear of appearing afraid. No wonder that Plato in The Republic cautions against teaching the “dialectic” to future Archons before the age of 30: he knew that instead of using it to pursue “Truth”, they’d wield it like a weapon to appear cleverer than their fellows.

Sometimes the worst actually happens. The bright student gets caught with his intellectual pants down. I remember taking an exam when I was 12, speeding through it with great cockiness until I realized that I’d left out a whole section. I did what the bright student usually does: I turned it back on the teacher, insisting that the question was misleading, and that I should be granted another half hour to fill in the missing part. (Probably Mr Lipkin just gave in because he knew what a pain in the ass the bright student can be!)

So then I was somewhere in my early 30s. No more teachers or parents to impress; no more exams to ace: just the day-to-day toiling in the trenches, trying to build a life.

[div class=attrib]More from theSource here.[end-div]

NASA Retires Shuttle; France Telecom Guillotines Minitel

The lives of two technological marvels came to a close this week. First, NASA officially concluded the space shuttle program with the final flight of Atlantis.

Then, France Telecom announced the imminent demise of Minitel. Sacré bleu! What next? Will the United Kingdom phase out afternoon tea and the Royal Family?

If you’re under 35 years of age, especially if you have never visited France, you may never have heard of Minitel. About ten years before the mainstream arrival of the World Wide Web and Mosaic, the first widely adopted web browser, there was Minitel. The Minitel network offered France Telecom subscribers a host of internet-like services such as email, white pages, news and information services, message boards, train reservations, airline schedules, stock quotes and online purchases. Subscribers received small, custom terminals for free that connected via a telephone line. Think of a prehistoric internet service: no hyperlinks, no fancy search engines, no rich graphics and no multimedia. That was Minitel.

Though rudimentary, Minitel was clearly ahead of its time and garnered a wide and loyal following in France. France Telecom delivered millions of terminals for free to household and business telephone subscribers. By 2000, France Telecom estimated that almost 9 million terminals, reaching 25 million people or over 41 percent of the French population, still had access to the Minitel network. Deploying the Minitel service allowed France Telecom to replace the printed white-pages directories given to all its customers with a free, online Minitel version.

The Minitel equipment included a basic dumb terminal with a text-based screen, keyboard and modem. The modem transmission speed was a rather slow 75 bits per second upstream and 1,200 bits per second downstream. Compare this with today’s basic broadband speeds of 1 Mbit per second upstream and 4 Mbits per second downstream.
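For a rough sense of that gap, here is a back-of-the-envelope sketch. The 1,000-byte page size is an assumption on my part (roughly one full text screen of a Minitel terminal), and real transfers carried protocol overhead, so treat the numbers as order-of-magnitude only:

```python
# Idealized time to receive one 1,000-byte text page at the quoted
# downstream rates, ignoring protocol overhead and latency.

def transfer_seconds(payload_bytes: int, bits_per_second: float) -> float:
    """Payload size in bytes divided by the raw line rate."""
    return payload_bytes * 8 / bits_per_second

page = 1_000  # bytes; a hypothetical full text screen

minitel = transfer_seconds(page, 1_200)        # Minitel downstream
broadband = transfer_seconds(page, 4_000_000)  # basic broadband downstream

print(f"Minitel:   {minitel:.2f} s")           # prints "Minitel:   6.67 s"
print(f"Broadband: {broadband * 1000:.2f} ms") # prints "Broadband: 2.00 ms"
```

In other words, a single screen of text took several seconds to paint on Minitel; a basic broadband line moves the same payload in a couple of milliseconds.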

In a bow to Minitel’s more attractive siblings, the internet and the World Wide Web, France Telecom finally plans to retire the service on June 30, 2012.

[div class=attrib]Image courtesy of Wikipedia/Creative Commons.[end-div]

First Ever Demonstration of Time Cloaking

[div class=attrib]From the Physics arXiv for Technology Review:[end-div]

Physicists have created a “hole in time” using the temporal equivalent of an invisibility cloak.

Invisibility cloaks are the result of physicists’ newfound ability to distort electromagnetic fields in extreme ways. The idea is to steer light around a volume of space so that anything inside this region is essentially invisible.

The effect has generated huge interest. The first invisibility cloaks worked only at microwave frequencies but in only a few years, physicists have found ways to create cloaks that work for visible light, for sound and for ocean waves. They’ve even designed illusion cloaks that can make one object look like another.

Today, Moti Fridman and buddies at Cornell University in Ithaca go a step further. These guys have designed and built a cloak that hides events in time.

Time cloaking is possible because of a kind of duality between space and time in electromagnetic theory. In particular, the diffraction of a beam of light in space is mathematically equivalent to the temporal propagation of light through a dispersive medium. In other words, diffraction and dispersion are symmetric in spacetime.

That immediately leads to an interesting idea. Just as it’s easy to make a lens that focuses light in space using diffraction, so it is possible to use dispersion to make a lens that focuses in time.

Such a time-lens can be made using an electro-optic modulator, for example, and has a variety of familiar properties. “This time-lens can, for example, magnify or compress in time,” say Fridman and co.

This magnifying and compressing in time is important.

The trick to building a temporal cloak is to place two time-lenses in series and then send a beam of light through them. The first compresses the light in time while the second decompresses it again.

But this leaves a gap. For a short period, there is a kind of hole in time in which any event goes unrecorded.

So to an observer, the light coming out of the second time-lens appears undistorted, as if no event has occurred.

In effect, the space between the two lenses is a kind of spatio-temporal cloak that deletes changes that occur in short periods of time.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Original paper from arXiv.org here.[end-div]

Why Does Time Fly?

[div class=attrib]From Scientific American:[end-div]

Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick to what seems like a life time. Yet, we do not know how the brain “constructs” the experience of subjective time. Would it not be important to know so we can find ways to make moments last, or pass by, more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental setup to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for shorter, longer or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual — i.e. the dot is “looming” — something strange happens. People overestimate the time that the dot lasted on the screen. This overestimation does not happen when the dot seems to move away. Thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Read more from the original study here.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Sawayasu Tsuji.[end-div]

Lucian Freud dies aged 88

[div class=attrib]From the Guardian:[end-div]

Lucian Freud, widely acknowledged as one of the greatest, most influential and yet most controversial British painters of his era, has died at his London home.

News of his death, at the age of 88, was released by his New York art dealer, William Acquavella. The realist painter, who was a grandson of the psychoanalyst Sigmund Freud, had watched his works soar in value over recent years and, in 2008, his portrayal of a large, naked woman on a couch – Benefits Supervisor Sleeping – sold at auction for £17.2m, a record price for the work of a living artist.

Born in Berlin, Freud came to Britain in 1933 with his family when he was 10 years old and developed his passion for drawing. After studying at art school, he had a self-portrait accepted for Horizon magazine and, by the age of 21, his talent had been recognised in a solo show. He returned to Britain after the war years to teach at the Slade School of Art in London.

Over a career that spanned 50 years, Freud became famous for his intense and unsettling nude portraits. A naturalised British subject, he spent most of his working life in London and was frequently seen at the most salubrious bars and restaurants, often in the company of beautiful young women such as Kate Moss, whom he once painted. A tweet from the writer Polly Samson last night reported that Freud’s regular table in The Wolseley restaurant was laid with a black tablecloth and a single candle in his honour.

The director of the Tate gallery, Nicholas Serota, said last night: “The vitality of [Freud’s] nudes, the intensity of the still life paintings and the presence of his portraits of family and friends guarantee Lucian Freud a unique place in the pantheon of late 20th century art.

[div class=attrib]More from theSource here.[end-div]

Face (Recognition) Time

If you’ve traveled or lived in the UK, then you may well have been filmed and recorded by one of Britain’s 4.2 million security cameras (and that’s the count as of 2009). That’s one for every 14 people.

While it’s encouraging that the United States and other nations have not followed a similarly dubious path, there are reports that facial recognition systems will soon be mobile, and in the hands of police departments across the nation.

[div class=attrib]From Slate:[end-div]

According to the Wall Street Journal, police departments across the nation will soon adopt handheld facial-recognition systems that will let them identify people with a snapshot. These new capabilities are made possible by BI2 Technologies, a Massachusetts company that has developed a small device that attaches to officers’ iPhones. The police departments who spoke to the Journal said they plan to use the device only when officers suspect criminal activity and have no other way to identify a person—for instance, when they stop a driver who isn’t carrying her license. Law enforcement officials also seemed wary about civil liberties concerns. Is snapping someone’s photo from five feet away considered a search? Courts haven’t decided the issue, but sheriffs who spoke to the paper say they plan to exercise caution.

Don’t believe it. Soon, face recognition will be ubiquitous. While the police may promise to tread lightly, the technology is likely to become so good, so quickly that officers will find themselves reaching for their cameras in all kinds of situations. The police will still likely use traditional ID technologies like fingerprinting—or even iris scanning—as these are generally more accurate than face-scanning, but face-scanning has an obvious advantage over fingerprints: It works from far away. Bunch of guys loitering on the corner? Scantily clad woman hanging around that run-down motel? Two dudes who look like they’re smoking a funny-looking cigarette? Why not snap them all just to make sure they’re on the up-and-up?

Sure, this isn’t a new worry. Early in 2001, police scanned the faces of people going to the Super Bowl, and officials rolled out the technology at Logan Airport in Boston after 9/11. Those efforts raised a stink, and the authorities decided to pull back. But society has changed profoundly in the last decade, and face recognition is now set to go mainstream. What’s more, the police may be the least of your worries. In the coming years—if not months—we’ll see a slew of apps that allow your friends and neighbors to snap your face and get your name and other information you’ve put online. This isn’t a theoretical worry; the technology exists, now, to do this sort of thing crudely, and the only thing stopping companies from deploying it widely is a fear of public outcry. That fear won’t last long. Face recognition for everyone is coming. Get used to it.

[div class=attrib]More from theSource here.[end-div]

Saluting a Fantastic Machine and Courageous Astronauts

[div class=attrib]From the New York Times:[end-div]

The last space shuttle flight rolled to a stop just before 6 a.m. on Thursday, closing an era of the nation’s space program.

“Mission complete, Houston,” said Capt. Christopher J. Ferguson of the Navy, commander of the shuttle Atlantis for the last flight. “After serving the world for over 30 years, the space shuttle has earned its place in history, and it’s come to a final stop.”

It was the 19th night landing at the Kennedy Space Center in Florida to end the 135th space shuttle mission. For Atlantis, the final tally of its 26-year career is 33 missions, accumulating just short of 126 million miles during 307 days in space, circumnavigating the Earth 4,848 times.

A permanent marker will be placed on the runway to indicate the final resting spot of the space shuttle program.

The last day in space went smoothly. Late on Wednesday night, the crew awoke to the Kate Smith version of “God Bless America.” With no weather or technical concerns, the crew closed the payload doors at 2:09 a.m. on Thursday.

At 4:13 a.m., Barry E. Wilmore, an astronaut at mission control in Houston, told the Atlantis crew, “Everything is looking fantastic, there you are go for the deorbit burn, and you can maneuver on time.”

“That’s great, Butch,” replied Captain Ferguson. “Go on the deorbit maneuver, on time.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Philip Scott Andrews/The New York Times.[end-div]

Book Review: America Pacifica

Classic dystopian novels from the likes of Aldous Huxley, George Orwell, Philip K. Dick, Ursula K. Le Guin, and Margaret Atwood appeal for their fantastic narrative journeys. More than that, they resonate because it often seems that contemporary society is precariously close to this fictional chaos, dysfunction and destruction; one small step in the wrong direction and over the precipice we go. America Pacifica continues this tradition.

[div class=attrib]From The Barnes & Noble Review:[end-div]

Anna North is both a graduate of the Iowa Writers’ Workshop and a writer for the feminist Web site Jezebel. It’s no surprise, then, that her debut novel, America Pacifica, is overflowing with big ideas about revolution, ecology, feminism, class, and poverty. But by the end of page one, when a teenage daughter, Darcy, watches her beloved mother, Sarah, emerge from a communal bathroom down the hall carrying “their” toothbrush, one also knows that this novel, like, say, the dystopic fiction of Margaret Atwood or Ursula K. Le Guin, aims not only to transmit those ideas in the form of an invented narrative, but also to give them the animating, detailed, and less predictable life of literature.

The “America Pacifica” of the title is an unnamed island upon which a generation of North American refugees have attempted to create a simulacra of their old home–complete with cities named Manhattanville and Little Los Angeles–after an environmental calamity rendered “the mainland” too frigid for human life. Daniel, a mainland scientist, argued that the humans should adapt themselves to the changing climate, while a man named Tyson insisted that they look for a warmer climate and use technology and dirty industrial processes to continue human life as it was once lived. The island’s population is comprised entirely of those who took Tyson’s side of the argument.

But this haven can only sustain enough luxuries for a tiny few. Every aspect of island life is governed by a brutal caste system which divides people into rigid hierarchies based on the order in which they and their families arrived by boat. The rich eat strawberries and fresh tomatoes, wear real fiber, and live in air-conditioned apartments. The poor subsist on meat products fabricated from jellyfish and seaweed, wear synthetic SeaFiber clothing, and dream of somehow getting into college (which isn’t open to them) so they can afford an apartment with their own bathroom and shower.

[div class=attrib]More from theSource here.[end-div]

Equation: How GPS Bends Time

[div class=attrib]From Wired:[end-div]

Einstein knew what he was talking about with that relativity stuff. For proof, just look at your GPS. The global positioning system relies on 24 satellites that transmit time-stamped information on where they are. Your GPS unit registers the exact time at which it receives that information from each satellite and then calculates how long it took for the individual signals to arrive. By multiplying the elapsed time by the speed of light, it can figure out how far it is from each satellite, compare those distances, and calculate its own position.

For accuracy to within a few meters, the satellites’ atomic clocks have to be extremely precise—plus or minus 10 nanoseconds. Here’s where things get weird: Those amazingly accurate clocks never seem to run quite right. One second as measured on the satellite never matches a second as measured on Earth—just as Einstein predicted.

According to Einstein’s special theory of relativity, a clock that’s traveling fast will appear to run slowly from the perspective of someone standing still. Satellites move at about 9,000 mph—enough to make their onboard clocks slow down by 8 microseconds per day from the perspective of a GPS gadget and totally screw up the location data. To counter this effect, the GPS system adjusts the time it gets from the satellites by using the equation here. (Don’t even get us started on the impact of general relativity.)
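The special-relativity part of that correction can be checked with the article’s own round numbers. The sketch below is an illustration, not the actual correction used by GPS receivers, and it deliberately ignores the general-relativity term the article sets aside:

```python
import math

c = 299_792_458.0        # speed of light, m/s
v = 9_000 * 0.44704      # the article's 9,000 mph, converted to m/s (~4,023 m/s)

# Lorentz factor: a clock moving at speed v ticks slower by a factor of 1/gamma.
gamma = 1.0 / math.sqrt(1.0 - (v / c) ** 2)

# Seconds the satellite clock falls behind per Earth day (86,400 s).
lag_per_day = 86_400 * (1.0 - 1.0 / gamma)

print(f"{lag_per_day * 1e6:.1f} microseconds per day")  # prints "7.8 microseconds per day"
```

That lands within rounding distance of the 8 microseconds per day quoted above; the small difference comes from the article’s rounded orbital speed. (The general-relativity effect, which runs the other way and is larger, is the part the article doesn’t get started on.)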

[div class=attrib]More from theSource here.[end-div]

How the Great White Egret Spurred Bird Conservation

The infamous Dead Parrot Sketch from Monty Python’s Flying Circus continues to resonate several generations removed from its creators. One of the most treasured exchanges, between a shady pet shop owner and prospective customer included two immortal comedic words, “Beautiful plumage”, followed by the equally impressive retort, “The plumage don’t enter into it. It’s stone dead.”

Though utterly silly, this conversation points towards a deeper and very ironic truth: that humans, eager to express status among their peers, often do so by exploiting another species. Thus the stunning white plumage of the Great White Egret almost proved to be its undoing. So sought after were the egrets’ feathers that both males and females were hunted close to extinction. And, in a final ironic twist, the near extinction of these great birds inspired the Audubon campaigns and drove the legislation that curbed the era of fancy feathers.

[div class=attrib]More courtesy of the Smithsonian[end-div]

I’m not the only one who has been dazzled by the egret’s feathers, though. At the turn of the 20th century, these feathers were a huge hit in the fashion world, to the detriment of the species, as Thor Hanson explains in his new book Feathers: The Evolution of a Natural Miracle:

One particular group of birds suffered near extermination at the hands of feather hunters, and their plight helped awaken a conservation ethic that still resonates in the modern environmental movement. With striking white plumes and crowded, conspicuous nesting colonies, Great Egrets and Snowy Egrets faced an unfortunate double jeopardy: their feathers fetched a high price, and their breeding habits made them an easy mark. To make matters worse, both sexes bore the fancy plumage, so hunters didn’t just target the males; they decimated entire rookeries. At the peak of the trade, an ounce of egret plume fetched the modern equivalent of two thousand dollars, and successful hunters could net a cool hundred grand in a single season. But every ounce of breeding plumes represented six dead adults, and each slain pair left behind three to five starving nestlings. Millions of birds died, and by the turn of the century this once common species survived only in the deep Everglades and other remote wetlands.

This slaughter inspired Audubon members to campaign for environmental protections and bird preservation, at the state, national and international levels.

[div class=attrib]Image courtesy of Antonio Soto for the Smithsonian.[end-div]