Learning to learn

[div class=attrib]By George Blecher for Eurozine:[end-div]

Before I learned how to learn, I was full of bullshit. I exaggerate. But like any bright student, I spent a lot of time faking it, pretending to know things about which I had only vague generalizations and a fund of catch-words. Why do bright students need to fake it? I guess because if they’re considered “bright”, they’re caught in a tautology: bright students are supposed to know, so if they risk not knowing, they must not be bright.

In any case, I faked it. I faked it so well that even my teachers were afraid to contradict me. I faked it so well that I convinced myself that I wasn’t faking it. In the darkest corners of the bright student’s mind, the borders between real and fake knowledge are blurred, and he puts so much effort into faking it that he may not even recognize when he actually knows something.

Above all, he dreads that his bluff will be called – that an honest soul will respect him enough to pick apart his faulty reasoning and superficial grasp of a subject, and expose him for the fraud he believes himself to be. So he lives in a state of constant fear: fear of being exposed, fear of not knowing, fear of appearing afraid. No wonder that Plato in The Republic cautions against teaching the “dialectic” to future Archons before the age of 30: he knew that instead of using it to pursue “Truth”, they’d wield it like a weapon to appear cleverer than their fellows.

Sometimes the worst actually happens. The bright student gets caught with his intellectual pants down. I remember taking an exam when I was 12, speeding through it with great cockiness until I realized that I’d left out a whole section. I did what the bright student usually does: I turned it back on the teacher, insisting that the question was misleading, and that I should be granted another half hour to fill in the missing part. (Probably Mr Lipkin just gave in because he knew what a pain in the ass the bright student can be!)

So then I was somewhere in my early 30s. No more teachers or parents to impress; no more exams to ace: just the day-to-day toiling in the trenches, trying to build a life.

[div class=attrib]More from theSource here.[end-div]

NASA Retires Shuttle; France Telecom Guillotines Minitel

The lives of two technological marvels came to a close this week. First, NASA officially concluded the space shuttle program with the final flight of Atlantis.

Then, France Telecom announced the imminent demise of Minitel. Sacré bleu! What next? Will the United Kingdom phase out afternoon tea and the Royal Family?

If you’re under 35 years of age, especially if you have never visited France, you may never have heard of Minitel. About ten years before the mainstream arrival of the World Wide Web and Mosaic, the first widely adopted web browser, there was Minitel. The Minitel network offered France Telecom subscribers a host of internet-like services such as email, white pages, news and information services, message boards, train reservations, airline schedules, stock quotes and online purchases. Users received small, custom terminals free of charge that connected via telephone line. Think prehistoric internet services: no hyperlinks, no fancy search engines, no rich graphics and no multimedia — that was Minitel.

Though rudimentary, Minitel was clearly ahead of its time and garnered a wide and loyal following in France. France Telecom delivered millions of terminals for free to household and business telephone subscribers. By 2000, France Telecom estimated that almost 9 million terminals, covering 25 million people or over 41 percent of the French population, still had access to the Minitel network. Deploying the Minitel service allowed France Telecom to replace the printed white-pages directories given to all its customers with a free, online Minitel version.

The Minitel equipment included a basic dumb terminal with a text-based screen, keyboard and modem. The modem transmission speed was a rather slow 75 bits per second (upstream) and 1,200 bits per second (downstream). This compares with today’s basic broadband speeds of 1 Mbit per second (upstream) and 4 Mbits per second (downstream).
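To put those speeds in perspective, here is a rough, hypothetical comparison. Assuming a full Minitel screen held about 1 kilobyte of text (an illustrative assumption, not a figure from the article), the arithmetic works out as follows:

```python
# Rough page-load comparison using the speeds quoted above.
# The ~1 KB screen size is an assumption for illustration only.
page_bits = 1_000 * 8        # one screen of text: ~1 KB = 8,000 bits

minitel_bps = 1_200          # Minitel downstream: 1,200 bits per second
broadband_bps = 4_000_000    # basic broadband downstream: 4 Mbits per second

print(page_bits / minitel_bps)    # ~6.7 seconds to fill a screen on Minitel
print(page_bits / broadband_bps)  # ~0.002 seconds on basic broadband
```

A screen that took several seconds to paint over Minitel arrives effectively instantaneously today.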

In a bow to Minitel’s more attractive siblings, the internet and the World Wide Web, France Telecom finally plans to retire the service on June 30, 2012.

[div class=attrib]Image courtesy of Wikipedia/Creative Commons.[end-div]

First Ever Demonstration of Time Cloaking

[div class=attrib]From the Physics arXiv for Technology Review:[end-div]

Physicists have created a “hole in time” using the temporal equivalent of an invisibility cloak.

Invisibility cloaks are the result of physicists’ newfound ability to distort electromagnetic fields in extreme ways. The idea is to steer light around a volume of space so that anything inside this region is essentially invisible.

The effect has generated huge interest. The first invisibility cloaks worked only at microwave frequencies, but in just a few years physicists have found ways to create cloaks that work for visible light, for sound and for ocean waves. They’ve even designed illusion cloaks that can make one object look like another.

Today, Moti Fridman and buddies, at Cornell University in Ithaca, go a step further. These guys have designed and built a cloak that hides events in time.

Time cloaking is possible because of a kind of duality between space and time in electromagnetic theory. In particular, the diffraction of a beam of light in space is mathematically equivalent to the temporal propagation of light through a dispersive medium. In other words, diffraction and dispersion are symmetric in spacetime.
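For the mathematically inclined, the duality can be sketched as follows. This is the standard starting point of temporal imaging theory, stated here up to sign and normalization conventions rather than as the authors’ exact formulation. Paraxial diffraction of a beam envelope $A$ propagating along $z$, and dispersion of a pulse envelope in a medium with group-velocity dispersion $\beta_2$, obey equations of the same form:

$$\frac{\partial A}{\partial z} = \frac{i}{2k}\,\frac{\partial^2 A}{\partial x^2} \quad \text{(diffraction in space)}, \qquad \frac{\partial A}{\partial z} = -\frac{i\beta_2}{2}\,\frac{\partial^2 A}{\partial \tau^2} \quad \text{(dispersion in time)}.$$

The transverse coordinate $x$ in the first equation plays the role of the retarded time $\tau$ in the second, which is why optical tricks that work in space, such as lenses, have temporal counterparts.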

That immediately leads to an interesting idea. Just as it’s easy to make a lens that focuses light in space using diffraction, so it is possible to use dispersion to make a lens that focuses in time.

Such a time-lens can be made using an electro-optic modulator, for example, and has a variety of familiar properties. “This time-lens can, for example, magnify or compress in time,” say Fridman and co.

This magnifying and compressing in time is important.

The trick to building a temporal cloak is to place two time-lenses in series and then send a beam of light through them. The first compresses the light in time while the second decompresses it again.

But this leaves a gap. For a short period, there is a kind of hole in time in which any event goes unrecorded.

So to an observer, the light coming out of the second time-lens appears undistorted, as if no event has occurred.

In effect, the space between the two lenses is a kind of spatio-temporal cloak that deletes changes that occur in short periods of time.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Original paper from arXiv.org here.[end-div]

Why Does Time Fly?

[div class=attrib]From Scientific American:[end-div]

Everybody knows that the passage of time is not constant. Moments of terror or elation can stretch a clock tick to what seems like a lifetime. Yet we do not know how the brain “constructs” the experience of subjective time. Would it not be useful to know, so that we could find ways to make moments linger, or pass by more quickly?

A recent study by van Wassenhove and colleagues is beginning to shed some light on this problem. This group used a simple experimental setup to measure the “subjective” experience of time. They found that people accurately judge whether a dot appears on the screen for a shorter, longer or the same amount of time as another dot. However, when the dot increases in size so as to appear to be moving toward the individual — i.e. the dot is “looming” — something strange happens. People overestimate the time that the dot lasted on the screen. This overestimation does not happen when the dot seems to move away. Thus, the overestimation is not simply a function of motion. Van Wassenhove and colleagues conducted this experiment during functional magnetic resonance imaging, which enabled them to examine how the brain reacted differently to looming and receding.

The brain imaging data revealed two main findings. First, structures in the middle of the brain were more active during the looming condition. These brain areas are also known to activate in experiments that involve the comparison of self-judgments to the judgments of others, or when an experimenter does not tell the subject what to do. In both cases, the prevailing idea is that the brain is busy wondering about itself, its ongoing plans and activities, and relating oneself to the rest of the world.

Read more from the original study here.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Sawayasu Tsuji.[end-div]

Book Review: Linchpin by Seth Godin

Phew! Another heartfelt call to action from business blogger Seth Godin to become indispensable.

Author, public speaker, orthogonal thinker and internet marketing maven, Seth Godin makes a compelling case to the artist within us all to get off our backsides, ignore the risk-averse “lizard brain”, as he puts it, get creative, and give the gift of art. After all, there is no way to win the “race to the bottom” wrought by the commoditization of both product and labor.

Bear in mind, Godin uses “art” in its broadest sense, not merely a canvas or a sculpture. Here, art is anything its maker creates; it may be a service just as well as an object. Just as importantly, to be art it must be given with the right intent — as a gift (a transcendent, unexpected act that surpasses expectation).

Critics maintain that his latest bestseller is short on specifics, but indeed it should be. After all, if the process of creating art could be decomposed into an instruction manual, it wouldn’t deliver art; it would deliver a Big Mac. So while we do not get a “7-point plan” that leads to creative nirvana, Godin does a good job, through his tireless combination of anecdote, repetition, historical analysis and social science, of convincing the “anonymous cogs in the machine” to think and act more like the insightful innovators that we can all become.

Godin rightly believes that the new world of work is rife with opportunity to add value through creativity, human connection and generosity, and this is where the indispensable artist gets to create his or her art, and to become a linchpin in the process. Godin’s linchpin is a rule-breaker, not a follower; a map-maker, not an order-taker; a doer, not a whiner.

In reading Linchpin we are reminded of the other side of the economy, in which we all unfortunately participate as well: the domain of commoditization, homogeneity and anonymity. This is the domain that artists do their utmost to avoid, and better still, subvert. Of course, this economy provides a benefit too – lower prices. However, a “Volkswagen-sized jar of pickles for $3” can only go so far. Commoditization undermines our very social fabric: it undermines our desire for uniqueness and special connection in a service or product that we purchase; it removes our dignity and respect when we allow ourselves to become a disposable part, a human cog, in the job machine. So, jettison the bland, the average, and the subservient; learn to take risks, face fear, and become an indispensable, passionate, discerning artist – one who creates and one who gives.

Lucian Freud dies aged 88

[div class=attrib]From the Guardian:[end-div]

Lucian Freud, widely acknowledged as one of the greatest, most influential and yet most controversial British painters of his era, has died at his London home.

News of his death, at the age of 88, was released by his New York art dealer, William Acquavella. The realist painter, who was a grandson of the psychoanalyst Sigmund Freud, had watched his works soar in value over recent years and, in 2008, his portrayal of a large, naked woman on a couch – Benefits Supervisor Sleeping – sold at auction for £17.2m, a record price for the work of a living artist.

Born in Berlin, Freud came to Britain in 1933 with his family when he was 10 years old and developed his passion for drawing. After studying at art school, he had a self-portrait accepted for Horizon magazine and, by the age of 21, his talent had been recognised in a solo show. He returned to Britain after the war years to teach at the Slade School of Art in London.

Over a career that spanned 50 years, Freud became famous for his intense and unsettling nude portraits. A naturalised British subject, he spent most of his working life in London and was frequently seen at the most salubrious bars and restaurants, often in the company of beautiful young women such as Kate Moss, whom he once painted. A tweet from the writer Polly Samson last night reported that Freud’s regular table in The Wolseley restaurant was laid with a black tablecloth and a single candle in his honour.

The director of the Tate gallery, Nicholas Serota, said last night: “The vitality of [Freud’s] nudes, the intensity of the still life paintings and the presence of his portraits of family and friends guarantee Lucian Freud a unique place in the pantheon of late 20th century art.”

[div class=attrib]More from theSource here.[end-div]

Face (Recognition) Time

If you’ve traveled or lived in the UK then you may well have been filmed and recorded by one of Britain’s 4.2 million security cameras (and that’s the count as of 2009). That’s one for every 14 people.

While it’s encouraging that the United States and other nations have not followed a similarly dubious path, there are reports that facial recognition systems will soon be mobile, and in the hands of police departments across the nation.

[div class=attrib]From Slate:[end-div]

According to the Wall Street Journal, police departments across the nation will soon adopt handheld facial-recognition systems that will let them identify people with a snapshot. These new capabilities are made possible by BI2 Technologies, a Massachusetts company that has developed a small device that attaches to officers’ iPhones. The police departments who spoke to the Journal said they plan to use the device only when officers suspect criminal activity and have no other way to identify a person—for instance, when they stop a driver who isn’t carrying her license. Law enforcement officials also seemed wary about civil liberties concerns. Is snapping someone’s photo from five feet away considered a search? Courts haven’t decided the issue, but sheriffs who spoke to the paper say they plan to exercise caution.

Don’t believe it. Soon, face recognition will be ubiquitous. While the police may promise to tread lightly, the technology is likely to become so good, so quickly that officers will find themselves reaching for their cameras in all kinds of situations. The police will still likely use traditional ID technologies like fingerprinting—or even iris scanning—as these are generally more accurate than face-scanning, but face-scanning has an obvious advantage over fingerprints: It works from far away. Bunch of guys loitering on the corner? Scantily clad woman hanging around that run-down motel? Two dudes who look like they’re smoking a funny-looking cigarette? Why not snap them all just to make sure they’re on the up-and-up?

Sure, this isn’t a new worry. Early in 2001, police scanned the faces of people going to the Super Bowl, and officials rolled out the technology at Logan Airport in Boston after 9/11. Those efforts raised a stink, and the authorities decided to pull back. But society has changed profoundly in the last decade, and face recognition is now set to go mainstream. What’s more, the police may be the least of your worries. In the coming years—if not months—we’ll see a slew of apps that allow your friends and neighbors to snap your face and get your name and other information you’ve put online. This isn’t a theoretical worry; the technology exists, now, to do this sort of thing crudely, and the only thing stopping companies from deploying it widely is a fear of public outcry. That fear won’t last long. Face recognition for everyone is coming. Get used to it.

[div class=attrib]More from theSource here.[end-div]

Saluting a Fantastic Machine and Courageous Astronauts

[div class=attrib]From the New York Times:[end-div]

The last space shuttle flight rolled to a stop just before 6 a.m. on Thursday, closing an era of the nation’s space program.

“Mission complete, Houston,” said Capt. Christopher J. Ferguson of the Navy, commander of the shuttle Atlantis for the last flight. “After serving the world for over 30 years, the space shuttle has earned its place in history, and it’s come to a final stop.”

It was the 19th night landing at the Kennedy Space Center in Florida to end the 135th space shuttle mission. For Atlantis, the final tally of its 26-year career is 33 missions, accumulating just short of 126 million miles during 307 days in space, circumnavigating the Earth 4,848 times.

A permanent marker will be placed on the runway to indicate the final resting spot of the space shuttle program.

The last day in space went smoothly. Late on Wednesday night, the crew awoke to the Kate Smith version of “God Bless America.” With no weather or technical concerns, the crew closed the payload doors at 2:09 a.m. on Thursday.

At 4:13 a.m., Barry E. Wilmore, an astronaut at mission control in Houston, told the Atlantis crew, “Everything is looking fantastic there; you are go for the deorbit burn, and you can maneuver on time.”

“That’s great, Butch,” replied Captain Ferguson. “Go on the deorbit maneuver, on time.”

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Philip Scott Andrews/The New York Times.[end-div]

Book Review: America Pacifica

Classic dystopian novels from the likes of Aldous Huxley, George Orwell, Philip K. Dick, Ursula K. Le Guin, and Margaret Atwood appeal for their fantastic narrative journeys. Even more, they resonate because contemporary society often seems precariously close to their fictional chaos, dysfunction and destruction; one small step in the wrong direction and over the precipice we go. America Pacifica continues this tradition.

[div class=attrib]From The Barnes & Noble Review:[end-div]

Anna North is both a graduate of the Iowa Writers’ Workshop and a writer for the feminist Web site Jezebel. It’s no surprise, then, that her debut novel, America Pacifica, is overflowing with big ideas about revolution, ecology, feminism, class, and poverty. But by the end of page one, when a teenage daughter, Darcy, watches her beloved mother, Sarah, emerge from a communal bathroom down the hall carrying “their” toothbrush, one also knows that this novel, like, say, the dystopic fiction of Margaret Atwood or Ursula K. Le Guin, aims not only to transmit those ideas in the form of an invented narrative, but also to give them the animating, detailed, and less predictable life of literature.

The “America Pacifica” of the title is an unnamed island upon which a generation of North American refugees have attempted to create a simulacrum of their old home–complete with cities named Manhattanville and Little Los Angeles–after an environmental calamity rendered “the mainland” too frigid for human life. Daniel, a mainland scientist, argued that the humans should adapt themselves to the changing climate, while a man named Tyson insisted that they look for a warmer climate and use technology and dirty industrial processes to continue human life as it was once lived. The island’s population is composed entirely of those who took Tyson’s side of the argument.

But this haven can only sustain enough luxuries for a tiny few. Every aspect of island life is governed by a brutal caste system which divides people into rigid hierarchies based on the order in which they and their families arrived by boat. The rich eat strawberries and fresh tomatoes, wear real fiber, and live in air-conditioned apartments. The poor subsist on meat products fabricated from jellyfish and seaweed, wear synthetic SeaFiber clothing, and dream of somehow getting into college (which isn’t open to them) so they can afford an apartment with their own bathroom and shower.

[div class=attrib]More from theSource here.[end-div]

Equation: How GPS Bends Time

[div class=attrib]From Wired:[end-div]

Einstein knew what he was talking about with that relativity stuff. For proof, just look at your GPS. The global positioning system relies on 24 satellites that transmit time-stamped information on where they are. Your GPS unit registers the exact time at which it receives that information from each satellite and then calculates how long it took for the individual signals to arrive. By multiplying the elapsed time by the speed of light, it can figure out how far it is from each satellite, compare those distances, and calculate its own position.
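Here is a minimal sketch of that calculation in Python — a toy version with made-up satellite positions, ignoring the receiver clock bias and other corrections that a real GPS unit must also solve for:

```python
import numpy as np
from scipy.optimize import least_squares

C = 299_792_458.0  # speed of light, m/s

# Hypothetical satellite positions in metres (illustrative, not real ephemeris data)
sats = np.array([
    [15_600e3,  7_540e3, 20_140e3],
    [18_760e3,  2_750e3, 18_610e3],
    [17_610e3, 14_630e3, 13_480e3],
    [19_170e3,    610e3, 18_390e3],
])
true_pos = np.array([6_371e3, 0.0, 0.0])  # a receiver on the Earth's surface

# Each signal's travel time is the satellite-receiver distance divided by c
travel_times = np.linalg.norm(sats - true_pos, axis=1) / C

# The receiver multiplies each elapsed time by the speed of light to get a range...
ranges = C * travel_times

# ...then finds the position whose distances to all satellites best match those ranges
def residuals(pos):
    return np.linalg.norm(sats - pos, axis=1) - ranges

fit = least_squares(residuals, x0=np.zeros(3))
print(fit.x)  # recovers roughly (6.371e6, 0, 0)
```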

For accuracy to within a few meters, the satellites’ atomic clocks have to be extremely precise—plus or minus 10 nanoseconds. Here’s where things get weird: Those amazingly accurate clocks never seem to run quite right. One second as measured on the satellite never matches a second as measured on Earth—just as Einstein predicted.

According to Einstein’s special theory of relativity, a clock that’s traveling fast will appear to run slowly from the perspective of someone standing still. Satellites move at about 9,000 mph—enough to make their onboard clocks slow down by 8 microseconds per day from the perspective of a GPS gadget and totally screw up the location data. To counter this effect, the GPS system adjusts the time it gets from the satellites by using the equation here. (Don’t even get us started on the impact of general relativity.)
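The equation itself does not survive in this excerpt, but the standard special-relativity time dilation formula reproduces the quoted figure:

$$\Delta t_{\text{satellite}} = \Delta t_{\text{Earth}}\sqrt{1 - \frac{v^2}{c^2}} \approx \Delta t_{\text{Earth}}\left(1 - \frac{v^2}{2c^2}\right).$$

With $v \approx 9{,}000$ mph $\approx 4.0 \times 10^3$ m/s and $c \approx 3.0 \times 10^8$ m/s, a satellite clock falls behind a ground clock by about $86{,}400\ \text{s} \times v^2/2c^2 \approx 8 \times 10^{-6}$ s per day — the 8 microseconds quoted above.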

[div class=attrib]More from theSource here.[end-div]

How the Great White Egret Spurred Bird Conservation

The infamous Dead Parrot Sketch from Monty Python’s Flying Circus continues to resonate several generations removed from its creators. One of the most treasured exchanges, between a shady pet shop owner and a prospective customer, included two immortal comedic words, “Beautiful plumage”, followed by the equally impressive retort, “The plumage don’t enter into it. It’s stone dead.”

Though utterly silly, this conversation points toward a deeper and very ironic truth: humans, so eager to express their status among their peers, do so by exploiting other species. Thus, the stunning white plumage of the Great White Egret proved to be its undoing, almost. So sought after were the egrets’ feathers that both males and females were hunted close to extinction. And, in a final ironic twist, the near extinction of these great birds inspired the Audubon campaigns and drove legislation to curb the era of fancy feathers.

[div class=attrib]More courtesy of the Smithsonian:[end-div]

I’m not the only one who has been dazzled by the egret’s feathers, though. At the turn of the 20th century, these feathers were a huge hit in the fashion world, to the detriment of the species, as Thor Hanson explains in his new book Feathers: The Evolution of a Natural Miracle:

One particular group of birds suffered near extermination at the hands of feather hunters, and their plight helped awaken a conservation ethic that still resonates in the modern environmental movement. With striking white plumes and crowded, conspicuous nesting colonies, Great Egrets and Snowy Egrets faced an unfortunate double jeopardy: their feathers fetched a high price, and their breeding habits made them an easy mark. To make matters worse, both sexes bore the fancy plumage, so hunters didn’t just target the males; they decimated entire rookeries. At the peak of the trade, an ounce of egret plume fetched the modern equivalent of two thousand dollars, and successful hunters could net a cool hundred grand in a single season. But every ounce of breeding plumes represented six dead adults, and each slain pair left behind three to five starving nestlings. Millions of birds died, and by the turn of the century this once common species survived only in the deep Everglades and other remote wetlands.

This slaughter inspired Audubon members to campaign for environmental protections and bird preservation at the state, national and international levels.

[div class=attrib]Image courtesy of Antonio Soto for the Smithsonian.[end-div]

The Pervasive Threat of Conformity: Peer Pressure Is Here to Stay

[div class=attrib]From BigThink:[end-div]

Today, I’d like to revisit one of the most well-known experiments in social psychology: Solomon Asch’s lines study. Let’s look once more at his striking findings on the power of group conformity and consider what they mean now, more than 50 years later, in a world that is much changed from Asch’s 1950s America.

How long are these lines? I don’t know until you tell me.

In the 1950s, Solomon Asch conducted a series of studies to examine the effects of peer pressure in as clear-cut a setting as possible: visual perception. The idea was to see if, when presented with lines of differing lengths and asked questions about the lines (Which was the longest? Which corresponded to a reference line of a certain length?), participants would answer with the choice that was obviously correct – or would fall under the sway of a group that gave an incorrect response. Here is a sample stimulus from one of the studies:

Which line matches the reference line? It seems obvious, no? Now, imagine that you were in a group with six other people – and they all said that it was, in fact, Line B. You would have no idea that you were the only actual participant and that the group was carefully arranged with confederates, who were instructed to give that answer and were seated in such a way that they would answer before you. You’d think that they, like you, were participants in the study – and that they all gave what appeared to you to be a patently wrong answer. Would you call their bluff and say, no, the answer is clearly Line A? Are you all blind? Or, would you start to question your own judgment? Maybe it really is Line B. Maybe I’m just not seeing things correctly. How could everyone else be wrong and I be the only person who is right?

We don’t like to be the lone voice of dissent

While we’d all like to imagine that we fall into the first camp, statistically speaking, we are three times more likely to be in the second: over 75% of Asch’s subjects (and far more in the actual condition given above) gave the wrong answer, going along with the group opinion.

[div class=attrib]More from theSource here.[end-div]

Richard Feynman on the Ascendant

Genius – The Life and Science of Richard Feynman by James Gleick was a good first course for those fascinated by Richard Feynman’s significant contributions to physics, cosmology (and percussion).

Now, nearly two decades later, come two more biographies that observe Richard Feynman from very different perspectives, reviewed in the New York Review of Books. The first, Lawrence Krauss’s book, Quantum Man, is the weighty main course; the second, by Jim Ottaviani and artist Leland Myrick, is a graphic-book (as in comic) biography, and a delicious dessert.

In his review — The ‘Dramatic Picture’ of Richard Feynman — Freeman Dyson rightly posits that Richard Feynman’s star may now, or soon, be in the same exalted sphere as Einstein and Hawking. Though, type “Richard” into Google search, wait for its predictive text to fill in the rest, and you’ll find that Richard Nixon, Richard Dawkins and Richard Branson rank higher than this giant of physics.

[div class=attrib]Freeman Dyson for the New York Review of Books:[end-div]

In the last hundred years, since radio and television created the modern worldwide mass-market entertainment industry, there have been two scientific superstars, Albert Einstein and Stephen Hawking. Lesser lights such as Carl Sagan and Neil Tyson and Richard Dawkins have a big public following, but they are not in the same class as Einstein and Hawking. Sagan, Tyson, and Dawkins have fans who understand their message and are excited by their science. Einstein and Hawking have fans who understand almost nothing about science and are excited by their personalities.

On the whole, the public shows good taste in its choice of idols. Einstein and Hawking earned their status as superstars, not only by their scientific discoveries but by their outstanding human qualities. Both of them fit easily into the role of icon, responding to public adoration with modesty and good humor and with provocative statements calculated to command attention. Both of them devoted their lives to an uncompromising struggle to penetrate the deepest mysteries of nature, and both still had time left over to care about the practical worries of ordinary people. The public rightly judged them to be genuine heroes, friends of humanity as well as scientific wizards.

Two new books now raise the question of whether Richard Feynman is rising to the status of superstar. The two books are very different in style and in substance. Lawrence Krauss’s book, Quantum Man, is a narrative of Feynman’s life as a scientist, skipping lightly over the personal adventures that have been emphasized in earlier biographies. Krauss succeeds in explaining in nontechnical language the essential core of Feynman’s thinking.

… The other book, by writer Jim Ottaviani and artist Leland Myrick, is very different. It is a comic-book biography of Feynman, containing 266 pages of pictures of Feynman and his legendary adventures. In every picture, bubbles of text record Feynman’s comments, mostly taken from stories that he and others had told and published in earlier books.

[div class=attrib]More from theSource here.[end-div]

[div class=attrib]Image courtesy of Shelley Gazin/Corbis.[end-div]

The Worst of States, the Best of States

Following on from our recent article showing the best of these United States, it’s time to look at the worst.

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

The United States of Shame again gets most of its data from health stats, detailing the deplorable firsts of 14 states (9). Eight states get worst marks for crime, from white-collar to violent (10), while four lead in road accidents (11). Six can be classed as economic worst cases (12), five as moral nadirs (13), two as environmental basket cases (14). In a category of one are states like Ohio (‘Nerdiest’), Maine (‘Dumbest’) and North Dakota (‘Ugliest’).

All claims are neatly backed up by references, some of them to reliable statistics, others to less scientific straw polls. In at least one case, to paraphrase Dickens, the best of stats really is the worst of stats. Ohio’s ‘shameful’ status as nerdiest state is based on its top ranking in library visits. Yet on the ‘awesome’ map, Ohio is listed as the state with… most library visits.

Juxtaposing each state’s best and worst leads to interesting statistical pairings. But with data as haphazardly corralled together as this, causal linkage should be avoided. Otherwise it could be concluded that:

A higher degree of equality leads to an increase in suicides (Alaska);
Sunny weather induces alcoholism (Arizona);
Breastfeeding raises the risk of homelessness (Oregon).
Yet in some cases, some kind of link can be inferred. New Yorkers use more public transit than other Americans, but are also stuck with the longest commutes.

[div class=attrib]More from theSource here.[end-div]

MondayPoem: The Enigma of the Infinitesimal

Monday’s poem comes from Mark Strand over at Slate. Strand was United States Poet Laureate during 1990-91. He won the 1999 Pulitzer Prize for Poetry for “Blizzard of One”.

The poem is austere and spare, yet quietly evocative. Is Strand conjuring the spirits and ghosts of our imagination? Perhaps not. These “[l]overs of the in-between” are more likely the creative misfits who shy away from attention and who don’t conform to our societal norms. Bloggers perhaps?

[div class=attrib]By Mark Strand for Slate:[end-div]

You’ve seen them at dusk, walking along the shore, seen them standing in doorways, leaning from windows, or straddling the slow moving edge of a shadow. Lovers of the in-between, they are neither here nor there, neither in nor out. Poor souls, they are driven to experience the impossible. Even at night, they lie in bed with one eye closed and the other open, hoping to catch the last second of consciousness and the first of sleep, to inhabit that no man’s land, that beautiful place, to behold as only a god might, the luminous conjunction of nothing and all.

[div class=attrib]Listen to the author read his poem at theSource here.[end-div]

Book Review: The First Detective

A new book by James Morton examines the life and times of cross-dressing burglar, prison-escapee and snitch turned super-detective Eugène-François Vidocq.

[div class=attrib]From The Barnes & Noble Review:[end-div]

The daring costumed escapes and bedsheet-rope prison breaks of the old romances weren’t merely creaky plot devices; they were also the objective correlatives of the lost politics of early modern Europe. Not yet susceptible to legislative amelioration, rules and customs that seemed both indefensible and unassailable had to be vaulted over like collapsing bridges or tunneled under like manor walls. Not only fictional musketeers but such illustrious figures as the young Casanova and the philosopher Jean-Jacques Rousseau spent their early years making narrow escapes from overlapping orthodoxies, swimming moats to marriages of convenience and digging their way  out of prisons of privilege by dressing in drag or posing as noblemen’s sons. If one ran afoul of the local clergy or some aristocratic cuckold, there were always new bishops and magistrates to charm in the next diocese or département.

In 1775–roughly a generation after the exploits of Rousseau and Casanova–a prosperous baker’s son named Eugène-François Vidocq was born in Arras, in northern France. Indolent and adventuresome, he embarked upon a career that in its early phase looked even more hapless and disastrous than those of his illustrious forebears. An indifferent soldier in the chaotic, bloody interregnum of revolutionary France, Vidocq quickly fell into petty crime (at one point he assumed the name Rousseau as an alias and nom de guerre). A hapless housebreaker and a credulous co-conspirator, he had criminal misadventures equaled only by his skill in escaping from the dungeons and bagnes that passed for a penal system in the pre-Napoleonic era.

By 1809, his canniness as an informer landed him a job with the police; with his old criminal comrades as willing foot soldiers, Vidocq organized a brigade de sûreté, a unit of plainclothes police, which in 1813 Napoleon made an official organ of state security. Throughout his subsequent career he would lay much of the foundation of modern policing, and may be considered a forebear not only of the Dupins and Holmeses of modern detective literature but also of swashbuckling, above-the-law policemen like Eliot Ness and J. Edgar Hoover.

[div class=attrib]More from theSource here.[end-div]

When the multiverse and many-worlds collide

[div class=attrib]From the New Scientist:[end-div]

Two of the strangest ideas in modern physics – that the cosmos constantly splits into parallel universes in which every conceivable outcome of every event happens, and the notion that our universe is part of a larger multiverse – have been unified into a single theory. This solves a bizarre but fundamental problem in cosmology and has set physics circles buzzing with excitement, as well as some bewilderment.

The problem is the observability of our universe. While most of us simply take it for granted that we should be able to observe our universe, it is a different story for cosmologists. When they apply quantum mechanics – which successfully describes the behaviour of very small objects like atoms – to the entire cosmos, the equations imply that it must exist in many different states simultaneously, a phenomenon called a superposition. Yet that is clearly not what we observe.

Cosmologists reconcile this seeming contradiction by assuming that the superposition eventually “collapses” to a single state. But they tend to ignore the problem of how or why such a collapse might occur, says cosmologist Raphael Bousso at the University of California, Berkeley. “We’ve no right to assume that it collapses. We’ve been lying to ourselves about this,” he says.

In an attempt to find a more satisfying way to explain the universe’s observability, Bousso, together with Leonard Susskind at Stanford University in California, turned to the work of physicists who have puzzled over the same problem but on a much smaller scale: why tiny objects such as electrons and photons exist in a superposition of states but larger objects like footballs and planets apparently do not.

This problem is captured in the famous thought experiment of Schrödinger’s cat. This unhappy feline is inside a sealed box containing a vial of poison that will break open when a radioactive atom decays. Being a quantum object, the atom exists in a superposition of states – so it has both decayed and not decayed at the same time. This implies that the vial must be in a superposition of states too – both broken and unbroken. And if that’s the case, then the cat must be both dead and alive as well.

[div class=attrib]More from theSource here.[end-div]

Dark energy spotted in the cosmic microwave background

[div class=attrib]From Institute of Physics:[end-div]

Astronomers studying the cosmic microwave background (CMB) have uncovered new direct evidence for dark energy – the mysterious substance that appears to be accelerating the expansion of the universe. Their findings could also help map the structure of dark matter on the universe’s largest length scales.

The CMB is the faint afterglow of the universe’s birth in the Big Bang. Around 400,000 years after its creation, the universe had cooled sufficiently to allow electrons to bind to atomic nuclei. This “recombination” set the CMB radiation free from the dense fog of plasma that was containing it. Space telescopes such as WMAP and Planck have charted the CMB and found its presence in all parts of the sky, with a temperature of 2.7 K. However, measurements also show tiny fluctuations in this temperature on the scale of one part in a million. These fluctuations follow a Gaussian distribution.

In the first of two papers, a team of astronomers including Sudeep Das at the University of California, Berkeley, has uncovered fluctuations in the CMB that deviate from this Gaussian distribution. The deviations, observed with the Atacama Cosmology Telescope in Chile, are caused by interactions with large-scale structures in the universe, such as galaxy clusters. “On average, a CMB photon will have encountered around 50 large-scale structures before it reaches our telescope,” Das told physicsworld.com. “The gravitational influence of these structures, which are dominated by massive clumps of dark matter, will each deflect the path of the photon,” he adds. This process, called “lensing”, eventually adds up to a total deflection of around 3 arc minutes – one-20th of a degree.

Dark energy versus structure

In the second paper Das, along with Blake Sherwin of Princeton University and Joanna Dunkley of Oxford University, looks at how lensing could reveal dark energy. Dark energy acts to counter the emergence of structures within the universe. A universe with no dark energy would have a lot of structure. As a result, the CMB photons would undergo greater lensing and the fluctuations would deviate more from the original Gaussian distribution.

[div class=attrib]More from theSource here.[end-div]

Green Bootleggers and Baptists

[div class=attrib]Bjørn Lomborg for Project Syndicate:[end-div]

In May, the United Nations’ Intergovernmental Panel on Climate Change made media waves with a new report on renewable energy. As in the past, the IPCC first issued a short summary; only later would it reveal all of the data. So it was left up to the IPCC’s spin-doctors to present the take-home message for journalists.

The first line of the IPCC’s press release declared, “Close to 80% of the world’s energy supply could be met by renewables by mid-century if backed by the right enabling public policies.” That story was repeated by media organizations worldwide.

Last month, the IPCC released the full report, together with the data behind this startlingly optimistic claim. Only then did it emerge that it was based solely on the most optimistic of 164 modeling scenarios that researchers investigated. And this single scenario stemmed from a single study that was traced back to a report by the environmental organization Greenpeace. The author of that report – a Greenpeace staff member – was one of the IPCC’s lead authors.

The claim rested on the assumption of a large reduction in global energy use. Given the number of people climbing out of poverty in China and India, that is a deeply implausible scenario.

When the IPCC first made the claim, global-warming activists and renewable-energy companies cheered. “The report clearly demonstrates that renewable technologies could supply the world with more energy than it would ever need,” boasted Steve Sawyer, Secretary-General of the Global Wind Energy Council.

This sort of behavior – with activists and big energy companies uniting to applaud anything that suggests a need for increased subsidies to alternative energy – was famously captured by the so-called “bootleggers and Baptists” theory of politics.

The theory grew out of the experience of the southern United States, where many jurisdictions required stores to close on Sunday, thus preventing the sale of alcohol. The regulation was supported by religious groups for moral reasons, but also by bootleggers, because they had the market to themselves on Sundays. Politicians would adopt the Baptists’ pious rhetoric, while quietly taking campaign contributions from the criminals.

Of course, today’s climate-change “bootleggers” are not engaged in any illegal behavior. But the self-interest of energy companies, biofuel producers, insurance firms, lobbyists, and others in supporting “green” policies is a point that is often missed.

Indeed, the “bootleggers and Baptists” theory helps to account for other developments in global warming policy over the past decade or so. For example, the Kyoto Protocol would have cost trillions of dollars, but would have achieved a practically indiscernible difference in stemming the rise in global temperature. Yet activists claimed that there was a moral obligation to cut carbon-dioxide emissions, and were cheered on by businesses that stood to gain.

[div class=attrib]More from theSource here[end-div]

Hello Internet; Goodbye Memory

Imagine a world without books; you’d have to commit useful experiences, narratives and data to handwritten form and memory. Imagine a world without the internet and real-time search; you’d have to rely on a trusted expert or a printed dictionary to find answers to your questions. Imagine a world without the written word; you’d have to revert to memory and oral tradition to pass on meaningful life lessons and stories.

Technology is a wonderfully double-edged mechanism. It brings convenience. It helps in most aspects of our lives. Yet it also brings fundamental cognitive change that brain scientists have only recently begun to fathom. Recent studies, including the one cited below from Columbia University, explore this in detail.

[div class=attrib]From Technology Review:[end-div]

A study says that we rely on external tools, including the Internet, to augment our memory.

The flood of information available online with just a few clicks and finger-taps may be subtly changing the way we retain information, according to a new study. But this doesn’t mean we’re becoming less mentally agile or thoughtful, say the researchers involved. Instead, the change can be seen as a natural extension of the way we already rely upon social memory aids—like a friend who knows a particular subject inside out.

Researchers and writers have debated over how our growing reliance on Internet-connected computers may be changing our mental faculties. The constant assault of tweets and YouTube videos, the argument goes, might be making us more distracted and less thoughtful—in short, dumber. However, there is little empirical evidence of the Internet’s effects, particularly on memory.

Betsy Sparrow, assistant professor of psychology at Columbia University and lead author of the new study, put college students through a series of four experiments to explore this question.

One experiment involved participants reading and then typing out a series of statements, like “Rubber bands last longer when refrigerated,” on a computer. Half of the participants were told that their statements would be saved, and the other half were told they would be erased. Additionally, half of the people in each group were explicitly told to remember the statements they typed, while the other half were not. Participants who believed the statements would be erased were better at recalling them, regardless of whether they were told to remember them.

[div class=attrib]More from theSource here.[end-div]

The Good, the Bad and the Ugly – 40 years on

One of the most fascinating and (in)famous experiments in social psychology began in the bowels of Stanford University 40 years ago next month. The experiment was intended to evaluate how people react to being powerless; by its conclusion, it had become a broader look at role assignment and reactions to authority.

The Stanford Prison Experiment incarcerated male college student volunteers in a mock prison for six fateful days. Some of the students were selected to be prison guards; the remainder would be prisoners. The researchers, led by psychology professor Philip Zimbardo, encouraged the guards to think of themselves as actual guards in a real prison. What happened during these six days in “prison” is the stuff of social science legend. The results continue to shock psychologists to this day; many were not prepared for the outcome, which saw guards take their roles to the extreme, becoming harshly authoritarian and mentally abusive, and prisoners become downtrodden and eventually rebellious. A whistle-blower eventually brought the experiment to an abrupt end (it was to have continued for two weeks).

Forty years on, researchers went back to interview professor Zimbardo and some of the participating guards and prisoners to probe their feelings today. Recollections from one of the guards are below.

[div class=attrib]From Stanford Magazine:[end-div]

I was just looking for some summer work. I had a choice of doing this or working at a pizza parlor. I thought this would be an interesting and different way of finding summer employment.

The only person I knew going in was John Mark. He was another guard and wasn’t even on my shift. That was critical. If there were prisoners in there who knew me before they encountered me, then I never would have been able to pull off anything I did. The act that I put on—they would have seen through it immediately.

What came over me was not an accident. It was planned. I set out with a definite plan in mind, to try to force the action, force something to happen, so that the researchers would have something to work with. After all, what could they possibly learn from guys sitting around like it was a country club? So I consciously created this persona. I was in all kinds of drama productions in high school and college. It was something I was very familiar with: to take on another personality before you step out on the stage. I was kind of running my own experiment in there, by saying, “How far can I push these things and how much abuse will these people take before they say, ‘knock it off?'” But the other guards didn’t stop me. They seemed to join in. They were taking my lead. Not a single guard said, “I don’t think we should do this.”

The fact that I ramped up the intimidation and the mental abuse without any real sense as to whether I was hurting anybody— I definitely regret that. But in the long run, no one suffered any lasting damage. When the Abu Ghraib scandal broke, my first reaction was, this is so familiar to me. I knew exactly what was going on. I could picture myself in the middle of that and watching it spin out of control. When you have little or no supervision as to what you’re doing, and no one steps in and says, “Hey, you can’t do this”—things just keep escalating. You think, how can we top what we did yesterday? How do we do something even more outrageous? I felt a deep sense of familiarity with that whole situation.

Sometimes when people know about the experiment and then meet me, it’s like, My God, this guy’s a psycho! But everyone who knows me would just laugh at that.

[div class=attrib]More from theSource here.[end-div]

3D Printing – A demonstration

Three-dimensional “printing” has been around for a few years now, but the technology continues to advance by leaps and bounds. It has already progressed to such an extent that some 3D printers can now “print” objects with moving parts, and in color as well. And we all thought those cool replicator machines in Star Trek were the stuff of science fiction.

[tube]LQfYm4ZVcVI[/tube]

Book Review: “Millennium People”: J.G. Ballard’s last hurrah

[div class=attrib]From Salon:[end-div]

In this, his last novel, the darkly comic “Millennium People,” J.G. Ballard returns to many of the themes that have established him as one of the 20th century’s principal chroniclers of modernity as dystopia. Throughout his career Ballard, who died in 2009, wrote many different variations on the same theme: A random act of violence propels a somewhat affectless protagonist into a violent pathology lurking just under the tissue-thin layer of postmodern civilization. As in “Crash” (1973) and “Concrete Island” (1974), the car parks, housing estates, motorways and suburban sprawl of London in “Millennium People” form a psychological geography. At its center, Heathrow Airport — a recurrent setting for Ballard — exerts its subtly malevolent pull on the bored lives and violent dreams of the alienated middle class.

“Millennium People” begins with the explosion of a bomb at Heathrow, which kills the ex-wife of David Markham, an industrial psychologist. The normally passive Markham sets out to investigate the anonymous bombing and the gated community of Chelsea Marina, a middle-class neighborhood that has become ground zero for a terrorist group and a burgeoning rebellion of London’s seemingly docile middle class. Exploited not so much for their labor as for their deeply ingrained and self-policing sense of social responsibility and good manners, the educated and professional residents of Chelsea Marina regard themselves as the “new proletariat,” with their exorbitant maintenance and parking fees as the new form of oppression, their careers, cultured tastes and education the new gulag.

In the company of a down-and-out priest and a film professor turned Che Guevara of the Volvo set, Markham quickly discovers that the line between amateur detective and amateur terrorist is not so clear, as he is drawn deeper into acts of sabotage and violence against the symbols and institutions of his own safe and sensible life. Targets include travel agencies, video stores, the Tate Modern, the BBC and National Film Theater — all “soporifics” designed to con people into believing their lives are interesting or going somewhere.

[div class=attrib]More from theSource here.[end-div]

Happy Birthday Neptune

One hundred and sixty-four years ago, or one Neptunian year, Neptune was first observed by telescope. Significantly, it was the first planet to be discovered deliberately; the existence and location of the gas giant was calculated mathematically. Subsequently, it was located by telescope, on 24 September 1846, and found to be within one degree of the mathematically predicted location. Astronomers hypothesized Neptune’s existence due to perturbations in the orbit of its planetary neighbor, Uranus, around the sun, which could only be explained by the presence of another object in nearby orbit. A triumph for the scientific method, and besides, it’s beautiful too.

[div class=attrib]Image courtesy of NASA.[end-div]

Culturally Specific Mental Disorders: A Bad Case of the Brain Fags

[div class=attrib]Is this man buff enough? Image courtesy of Slate.[end-div]

If you happen to have just read The Psychopath Test by Jon Ronson, this article in Slate is appropriately timely, and presents new fodder for continuing research (and a sequel). It would therefore come as no surprise to find Mr. Ronson trekking through Newfoundland in search of “Old Hag Syndrome”, a type of sleep paralysis, visiting art museums in Italy for “Stendhal Syndrome”, a delusional disorder experienced by Italians after studying artistic masterpieces, and checking on Nigerian college students afflicted by “Brain Fag Syndrome”. Then there is “Wild Man Syndrome” from New Guinea (a syndrome combining hyperactivity, clumsiness and forgetfulness), “Koro Syndrome” (a delusion of disappearing protruding body parts), first described in China over 2,000 years ago, “Jiko-shisen-kyofu” from Japan (a fear of offending others by glancing at them), and, here in the west, “Muscle Dysmorphia Syndrome” (a delusion common in weight-lifters that one’s body is insufficiently ripped).

All of these and more can be found in the current version of the manual, the DSM-IV (Diagnostic and Statistical Manual of Mental Disorders, Fourth Edition).

[div class=attrib]From Slate:[end-div]

In 1951, Hong Kong psychiatrist Pow-Meng Yap authored an influential paper in the Journal of Mental Sciences on the subject of “peculiar psychiatric disorders”—those that did not fit neatly into the dominant disease-model classification scheme of the time and yet appeared to be prominent, even commonplace, in certain parts of the world. Curiously these same conditions—which include “amok” in Southeast Asia and bouffée délirante in French-speaking countries—were almost unheard of outside particular cultural contexts. The American Psychiatric Association has conceded that certain mysterious mental afflictions are so common, in some places, that they do in fact warrant inclusion as “culture-bound syndromes” in the official Diagnostic and Statistical Manual of Mental Disorders.

The working version of this manual, the DSM-IV, specifies 25 such syndromes. Take “Old Hag Syndrome,” a type of sleep paralysis in Newfoundland in which one is visited by what appears to be a rather unpleasant old hag sitting on one’s chest at night. (If I were a bitter, divorced straight man, I’d probably say something diabolical about my ex-wife here.) Then there’s gururumba, or “Wild Man Syndrome,” in which New Guinean males become hyperactive, clumsy, kleptomaniacal, and conveniently amnesic, “Brain Fag Syndrome” (more on that in a moment), and “Stendhal Syndrome,” a delusional disorder experienced mostly by Italians after gazing upon artistic masterpieces. The DSM-IV defines culture-bound syndromes as “recurrent, locality-specific patterns of aberrant behavior and troubling experience that may or may not be linked to a particular diagnostic category.”
And therein lies the nosological pickle: The symptoms of culture-bound syndromes often overlap with more general, known psychiatric conditions that are universal in nature, such as schizophrenia, body dysmorphia, and social anxiety. What varies across cultures, and is presumably moulded by them, is the unique constellation of symptoms, or “idioms of distress.”

Some scholars believe that many additional distinct culture-bound syndromes exist. One that’s not in the manual but could be, argue psychiatrists Gen Kanayama and Harrison Pope in a short paper published earlier this year in the Harvard Review of Psychiatry, is “muscle dysmorphia.” The condition is limited to Western males, who suffer the delusion that they are insufficiently ripped. “As a result,” write the authors, “they may lift weights compulsively in the gym, often gain large amounts of muscle mass, yet still perceive themselves as too small.” Within body-building circles, in fact, muscle dysmorphia has long been recognized as a sort of reverse anorexia nervosa. But it’s almost entirely unheard of among Asian men. Unlike hypermasculine Western heroes such as Hercules, Thor, and the chiseled Arnold of yesteryear, the Japanese and Chinese have tended to prefer their heroes fully clothed, mentally acute, and lithe, argue Kanayama and Pope. In fact, they say anabolic steroid use is virtually nonexistent in Asian countries, even though the drugs are considerably easier to obtain, being available without a prescription at most neighborhood drugstores.

[div class=attrib]More from theSource here.[end-div]

Disconnected?

[div class=attrib]From Slate:[end-div]

Have you heard that divorce is contagious? A lot of people have. Last summer a study claiming to show that break-ups can propagate from friend to friend to friend like a marriage-eating bacillus spread across the news agar from CNN to CBS to ABC with predictable speed. “Think of this ‘idea’ of getting divorced, this ‘option’ of getting divorced like a virus, because it spreads more or less the same way,” explained University of California-San Diego professor James Fowler to the folks at Good Morning America.

It’s a surprising, quirky, and seemingly plausible finding, which explains why so many news outlets caught the bug. But one weird thing about the media outbreak was that the study on which it was based had never been published in a scientific journal. The paper had been posted to the Social Science Research Network web site, a sort of academic way station for working papers whose tagline is “Tomorrow’s Research Today.” But tomorrow had not yet come for the contagious divorce study: It had never actually passed peer review, and still hasn’t. “It is under review,” Fowler explained last week in an email. He co-authored the paper with his long-time collaborator, Harvard’s Nicholas Christakis, and lead author Rose McDermott.

A few months before the contagious divorce story broke, Slate ran an article I’d written based on a related, but also unpublished, scientific paper. The mathematician Russell Lyons had posted a dense treatise on his website suggesting that the methods employed by Christakis and Fowler in their social network studies were riddled with statistical errors at many levels. The authors were claiming—in the New England Journal of Medicine, in a popular book, in TED talks, in snappy PR videos—that everything from obesity to loneliness to poor sleep could spread from person to person to person like a case of the galloping crud. But according to Lyons and several other experts, their arguments were shaky at best. “It’s not clear that the social contagionists have enough evidence to be telling people that they owe it to their social network to lose weight,” I wrote last April. As for the theory that obesity and divorce and happiness contagions radiate from human beings through three degrees of friendship, I concluded “perhaps it’s best to flock away for now.”

The case against Christakis and Fowler has grown since then. The Lyons paper passed peer review and was published in the May issue of the journal Statistics, Politics, and Policy. Two other recent papers raise serious doubts about their conclusions. And now something of a consensus is forming within the statistics and social-networking communities that Christakis and Fowler’s headline-grabbing contagion papers are fatally flawed. Andrew Gelman, a professor of statistics at Columbia, wrote a delicately worded blog post in June noting that he’d “have to go with Lyons” and say that the claims of contagious obesity, divorce and the like “have not been convincingly demonstrated.” Another highly respected social-networking expert, Tom Snijders of Oxford, called the mathematical model used by Christakis and Fowler “not coherent.” And just a few days ago, Cosma Shalizi, a statistician at Carnegie Mellon, declared, “I agree with pretty much everything Snijders says.”

[div class=attrib]More from theSource here.[end-div]

MondayPoem: If You Forget Me

Pablo Neruda (1904–1973)

[div class=attrib]If You Forget Me, Pablo Neruda[end-div]

I want you to know
one thing.

You know how this is:
if I look
at the crystal moon, at the red branch
of the slow autumn at my window,
if I touch
near the fire
the impalpable ash
or the wrinkled body of the log,
everything carries me to you,
as if everything that exists,
aromas, light, metals,
were little boats
that sail
toward those isles of yours that wait for me.

Well, now,
if little by little you stop loving me
I shall stop loving you little by little.

If suddenly
you forget me
do not look for me,
for I shall already have forgotten you.

If you think it long and mad,
the wind of banners
that passes through my life,
and you decide
to leave me at the shore
of the heart where I have roots,
remember
that on that day,
at that hour,
I shall lift my arms
and my roots will set off
to seek another land.

But
if each day,
each hour,
you feel that you are destined for me
with implacable sweetness,
if each day a flower
climbs up to your lips to seek me,
ah my love, ah my own,
in me all that fire is repeated,
in me nothing is extinguished or forgotten,
my love feeds on your love, beloved,
and as long as you live it will be in your arms
without leaving mine.

The Allure of Steampunk Videotelephony and the Telephonoscope

Video telephony as imagined in 1910

A concept for the videophone surfaced just a couple of years after the telephone was patented in the United States. The telephonoscope, as it was called, first appeared in Victorian journals and early French science fiction in 1878.

In 1891 Alexander Graham Bell recorded his concept of an electrical radiophone, discussing “…the possibility of seeing by electricity”. He later went on to predict that “…the day would come when the man at the telephone would be able to see the distant person to whom he was speaking”.

The world’s first videophone entered service in 1934, in Germany. The service was offered in select post offices linking several major German cities, and provided bi-directional voice and image on 8-inch square displays. In the U.S., AT&T launched the Picturephone in the mid-1960s. However, the costly equipment, the high cost per call, and inconveniently located public video-telephone booths ensured that the service would never gain public acceptance. Mirroring the U.S. experience, major telephone companies in France, Japan and Sweden had limited success with video-telephony during the 1970s-80s.

Major improvements in video technology, telecommunications deregulation and increases in bandwidth during the 1980s-90s brought the price point down considerably. However, significant usage remained mostly within major corporations, owing to the still-substantial investment in equipment and the cost of bandwidth.

Fast forward to the 21st century. Skype and other IP (internet protocol) based services have made videochat commonplace and affordable, and in most cases free. It now seems that videochat has become almost ubiquitous. Recent moves into this space by tech heavyweights like Apple with Facetime, Microsoft with its acquisition of Skype, Google with the video calling component of its Google Plus social network, and Facebook with its new video calling service will in all likelihood add further momentum.

Of course, while videochat is an effective communication tool, it carries personal and social costs that its non-video cousin, the telephone, does not. Next time you videochat rather than make a telephone call, you will surely pay greater attention to your bad hair and poor grooming, your crumpled clothes, uncoordinated pajamas or lack thereof, the unwanted visitors in the background shot, and the not-so-subtle back-lighting that focuses attention on the clutter in your office or bedroom. Doesn’t it make you hark back to the days of the simple telephone? Either that, or perhaps you are drawn to the more alluring and elegant steampunk form of videochat as imagined by the Victorians, in the image above.

The Best of States, the Worst of States

[div class=attrib]From Frank Jacobs / BigThink:[end-div]

Are these maps cartograms or mere infographics?

An ‘information graphic’ is defined as any graphic representation of data. It follows from that definition that infographics are less determined by type than by purpose, which is to represent complex information in a readily graspable graphic format. Common formats include, but are not limited to: diagrams, flow charts, and maps.

Although one definition of maps – the graphic representation of spatial data – is very similar to that of infographics, the two are easily distinguished by, among other things, the context of the latter, which are usually confined to and embedded in technical and journalistic writing.

Cartograms are a subset of infographics, limited to one type of graphic representation: maps. On these maps, one set of quantitative information (usually surface or distance) is replaced by another (often demographic data or electoral results). The result is an informative distortion of the map (1).

The distortion on these maps is not of the distance-bending or surface-stretching kind. It merely substitutes the names of US states with statistical information relevant to each of them (2). This substitution is non-quantitative, affecting the toponymy rather than the topography of the map. So is this a mere infographic? As the information presented is statistical (each label describes each state as first or last in a Top 50), I’d say this is – if you’ll excuse the pun – a borderline case.

What’s more relevant, from this blog’s perspective, is that it is an atypical, curious and entertaining use of cartography.

The first set of maps labels each and every one of the states as best and worst at something. All of those distinctions, both the favourable and the unfavourable kind, are backed up by some sort of evidence.

The first map, the United States of Awesome, charts fifty things that each state of the Union is best at. Most of those indicators, 12 in all, are related to health and well-being (3). Ten are economic (4), six environmental (5), five educational (6). Three can be classified as ‘moral’, even if these particular distinctions make for strange bedfellows (7).

The best thing that can be said about Missouri and Illinois, apparently, is that they’re extremely average (8). While that may excite few people, it will greatly interest political pollsters and anyone in need of a focus group. Virginia and Indiana are the states with the most birthplaces of presidents and vice-presidents, respectively. South Carolinians prefer to spend their time golfing, Pennsylvanians hunting. Violent crime is lowest in Maine, public corruption in Nebraska. The most bizarre distinctions, finally, are reserved for New Mexico (Spaceport Home), Oklahoma (Best Licence Plate) and Missouri (Bromine Production). If that’s the best thing about those states, what might be the worst?

[div class=attrib]More from theSource here.[end-div]

Cy Twombly, Idiosyncratic Painter, Dies at 83

Cy Twombly. Image courtesy of Sundance Channel

[div class=attrib]From the New York Times:[end-div]

Cy Twombly, whose spare childlike scribbles and poetic engagement with antiquity left him stubbornly out of step with the movements of postwar American art even as he became one of the era’s most important painters, died in Rome Tuesday. He was 83.

The cause was not immediately known, although Mr. Twombly had suffered from cancer. His death was announced by the Gagosian Gallery, which represents his work.

In a career that slyly subverted Abstract Expressionism, toyed briefly with Minimalism, seemed barely to acknowledge Pop Art and anticipated some of the concerns of Conceptualism, Mr. Twombly was a divisive artist almost from the start. The curator Kirk Varnedoe, on the occasion of a 1994 retrospective at the Museum of Modern Art, wrote that his work was “influential among artists, discomfiting to many critics and truculently difficult not just for a broad public, but for sophisticated initiates of postwar art as well.” The critic Robert Hughes called him “the Third Man, a shadowy figure, beside that vivid duumvirate of his friends Jasper Johns and Robert Rauschenberg.”

Mr. Twombly’s decision to settle permanently in southern Italy in 1957 as the art world shifted decisively in the other direction, from Europe to New York, was only the most symbolic of his idiosyncrasies. He avoided publicity throughout his life and mostly ignored his critics, who questioned constantly whether his work deserved a place at the forefront of 20th-century abstraction, though he lived long enough to see it arrive there. It didn’t help that his paintings, because of their surface complexity and whirlwinds of tiny detail – scratches, erasures, drips, penciled fragments of Italian and classical verse amid scrawled phalluses and buttocks – lost much of their power in reproduction.

But Mr. Twombly, a tall, rangy Virginian who once practiced drawing in the dark to make his lines less purposeful, steadfastly followed his own program and looked to his own muses: often literary ones like Catullus, Rumi, Pound and Rilke. He seemed to welcome the privacy that came with unpopularity.

“I had my freedom and that was nice,” he said in a rare interview, with Nicholas Serota, the director of the Tate, before a 2008 survey of his career at the Tate Modern.

The critical low point probably came after a 1964 exhibition at the Leo Castelli Gallery in New York that was widely panned. The artist and writer Donald Judd, hostile toward painting in general, was even so especially damning, calling the show a fiasco. “There are a few drips and splatters and an occasional pencil line,” he wrote in a review. “There isn’t anything to these paintings.”

[div class=attrib]More from theSource here.[end-div]