Dinosaurs of Retail

moa

Shopping malls in the United States were in their prime in the 1970s and ’80s. Many had positioned themselves as a bright, clean, utopian alternative to inner-city blight and decay. A quarter of a century on, while the mega-malls may be thriving, their numerous smaller suburban brethren are seeing lower sales. As internet shopping and retailing pervades all reaches of our society, many midsize malls are decaying or shutting down completely. Documentary photographer Seph Lawless captures this fascinating transition in a new book: Black Friday: The Collapse of the American Shopping Mall.

From the Guardian:

It is hard to believe there has ever been any life in this place. Shattered glass crunches under Seph Lawless’s feet as he strides through its dreary corridors. Overhead lights attached to ripped-out electrical wires hang suspended in the stale air and fading wallpaper peels off the walls like dead skin.

Lawless sidesteps debris as he passes from plot to plot in this retail graveyard called Rolling Acres Mall in Akron, Ohio. The shopping centre closed in 2008, and its largest retailers, which had tried to make it as standalone stores, emptied out by the end of last year. When Lawless stops to overlook a two-storey opening near the mall’s once-bustling core, only an occasional drop of water, dribbling through missing ceiling tiles, breaks the silence.

“You came, you shopped, you dressed nice – you went to the mall. That’s what people did,” says Lawless, a pseudonymous photographer who grew up in a suburb of nearby Cleveland. “It was very consumer-driven and kind of had an ugly side, but there was something beautiful about it. There was something there.”

Gazing down at the motionless escalators, dead plants and empty benches below, he adds: “It’s still beautiful, though. It’s almost like ancient ruins.”

Dying shopping malls are speckled across the United States, often in middle-class suburbs wrestling with socioeconomic shifts. Some, like Rolling Acres, have already succumbed. Estimates on the share that might close or be repurposed in coming decades range from 15 to 50%. Americans are returning downtown; online shopping is taking a 6% bite out of brick-and-mortar sales; and to many iPhone-clutching, city-dwelling and frequently jobless young people, the culture that spawned satire like Mallrats seems increasingly dated, even cartoonish.

According to longtime retail consultant Howard Davidowitz, numerous midmarket malls, many of them born during the country’s suburban explosion after the second world war, could very well share Rolling Acres’ fate. “They’re going, going, gone,” Davidowitz says. “They’re trying to change; they’re trying to get different kinds of anchors, discount stores … [But] what’s going on is the customers don’t have the fucking money. That’s it. This isn’t rocket science.”

Shopping culture follows housing culture. Sprawling malls were therefore a natural product of the postwar era, as Americans with cars and fat wallets sprawled to the suburbs. They were thrown up at a furious pace as shoppers fled cities, peaking at a few hundred per year at one point in the 1980s, according to Paco Underhill, an environmental psychologist and author of Call of the Mall: The Geography of Shopping. Though construction has since tapered off, developers left a mall overstock in their wake.

Currently, the US contains around 1,500 of the expansive “malls” of suburban consumer lore. Most share a handful of bland features. Brick exoskeletons usually contain two storeys of inward-facing stores separated by tile walkways. Food courts serve mediocre pizza. Parking lots are big enough to easily misplace a car. And to anchor them economically, malls typically depend on department stores: huge vendors offering a variety of products across interconnected sections.

For mid-century Americans, these gleaming marketplaces provided an almost utopian alternative to the urban commercial district, an artificial downtown with less crime and fewer vermin. As Joan Didion wrote in 1979, malls became “cities in which no one lives but everyone consumes”. Peppered throughout disconnected suburbs, they were a place to see and be seen, something shoppers have craved since the days of the Greek agora. And they quickly matured into a self-contained ecosystem, with their own species – mall rats, mall cops, mall walkers – and an annual feeding frenzy known as Black Friday.

“Local governments had never dealt with this sort of development and were basically bamboozled [by developers],” Underhill says of the mall planning process. “In contrast to Europe, where shopping malls are much more a product of public-private negotiation and funding, here in the US most were built under what I call ‘cowboy conditions’.”

Shopping centres in Europe might contain grocery stores or childcare centres, while those in Japan are often built around mass transit. But the suburban American variety is hard to get to and sells “apparel and gifts and damn little else”, Underhill says.

Nearly 700 shopping centres are “super-regional” megamalls, retail leviathans usually of at least 1 million square feet and upward of 80 stores. Megamalls typically outperform their 800 slightly smaller, “regional” counterparts, though size and financial health don’t overlap entirely. It’s clearer, however, that luxury malls in affluent areas are increasingly forcing the others to fight for scraps. Strip malls – up to a few dozen tenants conveniently lined along a major traffic artery – are retail’s bottom feeders and so well-suited to the new environment. But midmarket shopping centres have begun dying off alongside the middle class that once supported them. Regional malls have suffered at least three straight years of declining profit per square foot, according to the International Council of Shopping Centres (ICSC).

Read the entire story here.

Image: Mall of America. Courtesy of Wikipedia.

Your Tax Dollars At Work — Leetspeak

US-FBI-ShadedSeal

It’s fascinating to see what our government agencies are doing with some of our hard-earned tax dollars.

In this head-scratching example, the FBI — the FBI’s Intelligence Research Support Unit, no less — has just completed an 83-page glossary of Internet slang, or “leetspeak”. LOL and Ugh! (the latter is not an acronym).

Check out the document via MuckRock here — they obtained the “secret” document through the Freedom of Information Act.

From the Washington Post:

The Internet is full of strange and bewildering neologisms, which anyone but a text-addled teen would struggle to understand. So the fine, taxpayer-funded people of the FBI — apparently not content to trawl Urban Dictionary, like the rest of us — compiled a glossary of Internet slang.

An 83-page glossary. Containing nearly 3,000 terms.

The glossary was recently made public through a Freedom of Information request by the group MuckRock, which posted the PDF, called “Twitter shorthand,” online. Despite its name, this isn’t just Twitter slang: As the FBI’s Intelligence Research Support Unit explains in the introduction, it’s a primer on shorthand used across the Internet, including in “instant messages, Facebook and Myspace.” As if that Myspace reference wasn’t proof enough that the FBI’s a tad out of touch, the IRSU then promises the list will prove useful both professionally and “for keeping up with your children and/or grandchildren.” (Your tax dollars at work!)

All of these minor gaffes could be forgiven, however, if the glossary itself was actually good. Obviously, FBI operatives and researchers need to understand Internet slang — the Internet is, increasingly, where crime goes down these days. But then we get things like ALOTBSOL (“always look on the bright side of life”) and AMOG (“alpha male of group”) … within the first 10 entries.

ALOTBSOL has, for the record, been tweeted fewer than 500 times in the entire eight-year history of Twitter. AMOG has been tweeted far more often, but usually in Spanish … as a misspelling, it would appear, of “amor” and “amigo.”

Among the other head-scratching terms the FBI considers can’t-miss Internet slang:

  1. AYFKMWTS (“are you f—— kidding me with this s—?”) — 990 tweets
  2. BFFLTDDUP (“best friends for life until death do us part”) — 414 tweets
  3. BOGSAT (“bunch of guys sitting around talking”) — 144 tweets
  4. BTDTGTTSAWIO (“been there, done that, got the T-shirt and wore it out”) — 47 tweets
  5. BTWITIAILWY (“by the way, I think I am in love with you”) — 535 tweets
  6. DILLIGAD (“does it look like I give a damn?”) — 289 tweets
  7. DITYID (“did I tell you I’m depressed?”) — 69 tweets
  8. E2EG (“ear-to-ear grin”) — 125 tweets
  9. GIWIST (“gee, I wish I said that”) — 56 tweets
  10. HCDAJFU (“he could do a job for us”) — 25 tweets
  11. IAWTCSM (“I agree with this comment so much”) — 20 tweets
  12. IITYWIMWYBMAD (“if I tell you what it means will you buy me a drink?”) — 250 tweets
  13. LLTA (“lots and lots of thunderous applause”) — 855 tweets
  14. NIFOC (“naked in front of computer”) — 1,065 tweets, most of them referring to acronym guides like this one.
  15. PMYMHMMFSWGAD (“pardon me, you must have mistaken me for someone who gives a damn”) — 128 tweets
  16. SOMSW (“someone over my shoulder watching”) — 170 tweets
  17. WAPCE (“women are pure concentrated evil”) — 233 tweets, few relating to women
  18. YKWRGMG (“you know what really grinds my gears?”) — 1,204 tweets

In all fairness to the FBI, they do get some things right: “crunk” is helpfully defined as “crazy and drunk,” FF is “a recommendation to follow someone referenced in the tweet,” and a whole range of online patois is translated to its proper English equivalent: hafta is “have to,” ima is “I’m going to,” kewt is “cute.”

Read the entire article here.

Image: FBI Seal. Courtesy of U.S. Government.

Goostman Versus Turing

eugene-goostman

Some computer scientists believe that “Eugene Goostman” may have overcome the famous hurdle proposed by Alan Turing by cracking the eponymous Turing test. Eugene is a 13-year-old Ukrainian “boy” constructed from computer algorithms designed to feign intelligence and mirror human thought processes. During a text-based exchange, Eugene managed to convince his human interrogators that he was a real boy — and thus his creators claim to have broken the previously impenetrable Turing barrier.

Other researchers and philosophers disagree: they claim that it is easier to construct an artificial intelligence that converses in good but limited English — Eugene is Ukrainian, after all — than one that passes as a native anglophone adult. So the Turing test barrier may yet stand.

From the Guardian:

From 2001: a Space Odyssey to Her, the idea of an intelligent computer that can hold conversations with humans has long been a dream of science-fiction writers, but that fantasy may just have taken a step closer to becoming reality. On Saturday, a computer algorithm claiming to be a 13-year-old boy called Eugene Goostman passed the Turing test, it has been claimed. However, some commentators have suggested that pretending to be a 13-year-old Ukrainian child is not as difficult as pretending to be an Anglophone adult, and that the result should not count.

The program convinced 33% of the human judges at an event at the Royal Society that it was actually a human. The Turing test, which was devised in 1950 by the pioneering computer scientist and second world war codebreaker Alan Turing, is a classic benchmark in artificial intelligence – it determines a machine’s ability to show intelligent behaviour equivalent to, or indistinguishable from, that of a human.

Transcripts of the conversations from the latest Royal Society test are not yet available but below are transcripts from a previous event at Bletchley Park on 23 June 2012, with the same program.

Can you tell Eugene isn’t human?

Test 1

[12:43:37] Judge: Hi, I’m Cathy.

[12:43:43] Eugene: Cathy! Nice to meet you! You’re a woman, if I’m not mistaken.

[12:44:20] Judge: Right. What is your name?

[12:44:29] Eugene: People call me Eugene. Or “Zhenya”. It is a short name for Eugene in Russia and Ukraine.

[12:45:17] Judge: Nice name. Are you from either area?

[12:45:41] Eugene: Agree 🙂 Oh, what a fruitful conversation;-)

[12:45:52] Judge: Do you really think so?

[12:46:04] Eugene: No, your guess was wrong. I don’t think so. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

[12:46:56] Judge: What speciality? What’s yours?

[12:47:01] Eugene: Huh? And I forgot to ask you where you are from.

[12:47:51] Judge: The universe and you?

[12:48:01] Eugene: Is it your hobby to ask little poor Jewish boys such tricky questions they don’t have an answer for?

Test 2

[16:11:25] Judge: what music do you like?

[16:11:32] Eugene: I like Eminem – because he doesn’t have a thing to do with “polite correctness”.

[16:11:46] Judge: OMG i love eminem, he’s my favourite artist!

[16:12:00] Eugene: I wonder what Eminem’s nick mean. I associate it only with a word “enema”. But he is a cool guy, I like his The Real Slim Shady.

[16:12:55] Judge: i know what his nickname means, his real name is Marshall Mathers. From the initials M&M phynetically sounding like Eminem.

[16:13:06] Eugene: Eminem is a famous crapper. No, raper… ripper… Yeah, I mean – rapper. I like his Stan and The Real Slim Shady.

[16:13:32] Judge: I like Hailies Song.

[16:13:46] Eugene: Interesting tastes. I like science-fiction books, by the way. Where do you came from, by the way? Could you tell me about the place where you live?

[16:14:22] Judge: Epsom, home of the derby. yourself?

[16:14:27] Eugene: My guinea pig says that name Derby sounds very nice.

[16:14:41] Judge: i shopuld [sic] talk with him sometime.

[16:14:48] Eugene: I think you should take some typing lessons. Or just type more accurately.

[16:15:04] Judge: That’s very rude.

[16:15:09] Eugene: Ummm … Frankly, I didn’t get your question 🙁

[16:15:49] Judge: “Ummm” isnt a word. i think you need typing lessons

[16:15:58] Eugene: My thoughts are same. By the way, I still don’t know your specialty – or, possibly, I’ve missed it?

Test 3

[16:06:31] Judge: Hi there, do you prefer to wear a dress or trousers to work?

[16:06:37] Eugene: I am a scholar. I’m too young to make money.

[16:07:32] Judge: What was the last film you saw at the cinema?

[16:07:38] Eugene: Huh? Could you tell me what are you? I mean your profession.

[16:08:11] Judge: I am an IT manager, and you?

[16:08:16] Eugene: Manager? So how is it to work as manager?

[16:09:13] Judge: Hard work but rewarding, what do you do?

[16:09:18] Eugene: I am only 13, so I’m attending school so far.

[16:09:51] Judge: Have you seen teletubbies?

[16:10:04] Eugene: No, I haven’t seen it, unfortunately. But I hope it is better than these crappy Star Wars. And I forgot to ask you where you are from …

[16:10:15] Judge: Brighton, and you?

Read the entire story and Eugene’s conversation with real humans here.

Image: A conversation with Eugene Goostman. Courtesy of BBC.

Nuts About Nutella

nutella

Connoisseurs of Nutella — that wonderful concoction of hazelnuts and chocolate — are celebrating 50 years of the iconic Italian spread. Here’s to the next 50 bites, sorry, years! Say no more.

From the Guardian:

In Piedmont they have been making gianduiotto, a confectionery combining hazelnuts and cocoa sold in a pretty tinfoil wrapper, since the mid-18th century. They realised long ago that the nuts, which are plentiful in the surrounding hills, are a perfect match for chocolate. But no one had any idea that their union would prove so harmonious, lasting and fruitful. Only after the second world war was this historic marriage finally sealed.

Cocoa beans are harder to come by and, consequently, more expensive. Pietro Ferrero, an Alba-based pastry cook, decided to turn the problem upside down. Chocolate should not be allowed to dictate its terms. By using more nuts and less cocoa, one could obtain a product that was just as good and not as costly. What is more, it would be spread.

Nutella, one of the world’s best-known brands, celebrated its 50th anniversary in Alba last month. In telling the story of this chocolate spread, it’s difficult to avoid cliches: a success story emblematic of Italy’s postwar recovery, the tale of a visionary entrepreneur and his perseverance, a business model driven by a single product.

The early years were spectacular. In 1946 the Ferrero brothers produced and sold 300kg of their speciality; nine months later output had reached 10 tonnes. Pietro stayed at home making the spread. Giovanni went to market across Italy in his little Fiat. In 1948 Ferrero, now a limited company, moved into a 5,000 sq metre factory equipped to produce 50 tonnes of gianduiotto a month.

By 1949 the process was nearing perfection, with the launch of the “supercrema” version, which was smoother and stuck more to the bread than the knife. It was also the year Pietro died. He did not live long enough to savour his triumph.

His son Michele was driven by the same obsession with greater spreadability. Under his leadership Ferrero became an empire. But it would take another 15 years of hard work and endless experiments before finally, in 1964, Nutella was born.

The firm now sells 365,000 tonnes of Nutella a year worldwide, the biggest consumers being the Germans, French, Italians and Americans. The anniversary was, of course, the occasion for a big promotional operation. At a gathering in Rome last month, attended by two government ministers, journalists received a 1kg jar marked with the date and a commemorative Italian postage stamp. It is an ideal opportunity for Ferrero – which also owns the Tic Tac, Ferrero Rocher, Kinder and Estathé brands, among others – to affirm its values and rehearse its well-established narrative.

There are no recent pictures of the patriarch Michele, who divides his time between Belgium and Monaco. According to Forbes magazine he was worth $9.5bn in 2009, making him the richest person in Italy. He avoids the media and making public appearances, even eschewing the boards of leading Italian firms.

His son Giovanni, who has managed the company on his own after the early death of his brother Pietro in 2011, only agreed to a short interview on Italy’s main public TV channel. He abides by the same rule as his father: “Only on two occasions should the papers mention one’s name – birth and death.”

In contrast, Ferrero executives have plenty to say about both products and the company, with its 30,000-strong workforce at 14 locations, its €8bn ($10bn) revenue, 72% share of the chocolate-spreads market, 5 million friends on Facebook, 40m Google references, its hazelnut plantations in both hemispheres securing it a round-the-year supply of fresh ingredients and, of course, its knowhow.

“The recipe for Nutella is not a secret like Coca-Cola,” says marketing manager Laurent Cremona. “Everyone can find out the ingredients. We simply know how to combine them better than other people.”

Be that as it may, the factory in Alba is as closely guarded as Fort Knox and visits are not allowed. “It’s not a company, it’s an oasis of happiness,” says Francesco Paolo Fulci, a former ambassador and president of the Ferrero foundation. “In 70 years, we haven’t had a single day of industrial action.”

Read the entire article here.

Image: Never enough Nutella. Courtesy of secret Nutella fans the world over / Ferrero, S.P.A

theDiagonal is Dislocating to The Diagonal

Flatirons_Winter_Sunrise

Dear readers, theDiagonal is in the midst of a major dislocation in May-June 2014. Thus, your friendly editor would like to apologize for the recent intermittent service. While theDiagonal lives online, its human-powered (currently) editor is physically relocating with family to Boulder, CO. Normal daily service from theDiagonal will resume in July.

The city of Boulder intersects Colorado State Highway 119 as it sweeps on a SW-to-NE track from the Front Range towards the Central Plains. Coincidentally, or not, Highway 119 is more affectionately known as The Diagonal.

Image: The Flatirons, mountain formations in Boulder, Colorado. Courtesy of Jesse Varner / AzaToth / Wikipedia.

Images: Go Directly To Jail or…

open-door

If you live online and write or share images, it’s likely that you’ve been, or will soon be, sued by the predatory Getty Images. Your kindly editor at theDiagonal uses images found to be in the public domain or references them as fair use in this blog, and yet has fallen prey to this extortionate nuisance of a company.

Getty, with its army of fee-extortion collectors — many of them not even legally trained or accredited — will find reason to send you numerous legalistic and threatening letters demanding hundreds of dollars in compensation and damages. It will do this without sound proof, relying on threats to cajole unwary citizens into parting with significant sums. This is such a big market for Getty that numerous services, such as this one, have sprung up over the years to help writers and bloggers combat the Getty extortion.

With that in mind, it’s refreshing to see the Metropolitan Museum of Art in New York taking a rather different stance: the venerable institution is doing us all a wonderful service by making many hundreds of thousands of classic images available online for free. Getty, take that!

From WSJ:

This month, the Metropolitan Museum of Art released for download about 400,000 digital images of works that are in the public domain. The images, which are free to use for non-commercial use without permission or fees, may now be downloaded from the museum’s website. The museum will continue to add images to the collection as they digitize files as part of the initiative Open Access for Scholarly Content (OASC). 

When asked about the impact of the initiative, Sree Sreenivasan, Chief Digital Officer, said the new program would provide increased access and streamline the process of obtaining these images. “In keeping with the Museum’s mission, we hope the new image policy will stimulate new scholarship in a variety of media, provide greater access to our vast collection, and broaden the reach of the Museum to researchers world-wide. By providing open access, museums and scholars will no longer have to request permission to use our public domain images, they can download the images directly from our website.”

Thomas P. Campbell, director and chief executive of the Metropolitan Museum of Art, said the Met joins a growing number of museums using an open-access policy to make available digital images of public domain works. “I am delighted that digital technology can open the doors to this trove of images from our encyclopedic collection,” Mr. Campbell said in his May 16 announcement. Other New York institutions that have initiated similar programs include the New York Public Library (map collection),  the Brooklyn Academy of Music and the New York Philharmonic. 

See more images here.

Image: “The Open Door,” earlier than May 1844. Courtesy of William Henry Fox Talbot/The Metropolitan Museum of Art, New York.

I Think, Therefore I am, Not Robot

Robbie_the_Robot_2006

A sentient robot is the long-held dream of artificial intelligence researchers and science fiction authors alike. Yet some leading mathematicians theorize that it may never happen, despite our accelerating technological prowess.

From New Scientist:

So long, robot pals – and robot overlords. Sentient machines may never exist, according to a variation on a leading mathematical model of how our brains create consciousness.

Over the past decade, Giulio Tononi at the University of Wisconsin-Madison and his colleagues have developed a mathematical framework for consciousness that has become one of the most influential theories in the field. According to their model, the ability to integrate information is a key property of consciousness. They argue that in conscious minds, integrated information cannot be reduced into smaller components. For instance, when a human perceives a red triangle, the brain cannot register the object as a colourless triangle plus a shapeless patch of red.

But there is a catch, argues Phil Maguire at the National University of Ireland in Maynooth. He points to a computational device called the XOR logic gate, which involves two inputs, A and B. The output of the gate is “0” if A and B are the same and “1” if A and B are different. In this scenario, it is impossible to predict the output based on A or B alone – you need both.

Memory edit

Crucially, this type of integration requires loss of information, says Maguire: “You have put in two bits, and you get one out. If the brain integrated information in this fashion, it would have to be continuously haemorrhaging information.”
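To make that two-bits-in, one-bit-out point concrete, here is a minimal sketch (my own illustration, not the researchers’ code) of the XOR behaviour described above:

```python
# XOR: two input bits collapse into one output bit.
def xor(a: int, b: int) -> int:
    """Return 0 if the inputs match, 1 if they differ."""
    return 0 if a == b else 1

# Distinct input pairs map onto the same output, so the original pair
# cannot be recovered from the output alone: the integration is lossy.
for a in (0, 1):
    for b in (0, 1):
        print(f"A={a}, B={b} -> output={xor(a, b)}")
# A=0, B=0 -> output=0
# A=0, B=1 -> output=1
# A=1, B=0 -> output=1
# A=1, B=1 -> output=0
```

Seeing an output of 1, for instance, you cannot tell whether the inputs were (0, 1) or (1, 0); that irrecoverable bit is the loss Maguire refers to.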

Maguire and his colleagues say the brain is unlikely to do this, because repeated retrieval of memories would eventually destroy them. Instead, they define integration in terms of how difficult information is to edit.

Consider an album of digital photographs. The pictures are compiled but not integrated, so deleting or modifying individual images is easy. But when we create memories, we integrate those snapshots of information into our bank of earlier memories. This makes it extremely difficult to selectively edit out one scene from the “album” in our brain.

Based on this definition, Maguire and his team have shown mathematically that computers can’t handle any process that integrates information completely. If you accept that consciousness is based on total integration, then computers can’t be conscious.

Open minds

“It means that you would not be able to achieve the same results in finite time, using finite memory, using a physical machine,” says Maguire. “It doesn’t necessarily mean that there is some magic going on in the brain that involves some forces that can’t be explained physically. It is just so complex that it’s beyond our abilities to reverse it and decompose it.”

Disappointed? Take comfort – we may not get Rosie the robot maid, but equally we won’t have to worry about the world-conquering Agents of The Matrix.

Neuroscientist Anil Seth at the University of Sussex, UK, applauds the team for exploring consciousness mathematically. But he is not convinced that brains do not lose information. “Brains are open systems with a continual turnover of physical and informational components,” he says. “Not many neuroscientists would claim that conscious contents require lossless memory.”

Read the entire story here.

Image: Robbie the Robot, Forbidden Planet. Courtesy of San Diego Comic Con, 2006 / Wikipedia.

c² = e/m

Feynmann_Diagram_Gluon_Radiation

Particle physicists will soon attempt to reverse the direction of Einstein’s famous equation delineating energy-matter equivalence, e = mc². Next year, they plan to crash quanta of light into each other to create matter. Cool, or what!

From the Guardian:

Researchers have worked out how to make matter from pure light and are drawing up plans to demonstrate the feat within the next 12 months.

The theory underpinning the idea was first described 80 years ago by two physicists who later worked on the first atomic bomb. At the time they considered the conversion of light into matter impossible in a laboratory.

But in a report published on Sunday, physicists at Imperial College London claim to have cracked the problem using high-powered lasers and other equipment now available to scientists.

“We have shown in principle how you can make matter from light,” said Steven Rose at Imperial. “If you do this experiment, you will be taking light and turning it into matter.”

The scientists are not on the verge of a machine that can create everyday objects from a sudden blast of laser energy. The kind of matter they aim to make comes in the form of subatomic particles invisible to the naked eye.

The original idea was written down by two US physicists, Gregory Breit and John Wheeler, in 1934. They worked out that – very rarely – two particles of light, or photons, could combine to produce an electron and its antimatter equivalent, a positron. Electrons are particles of matter that form the outer shells of atoms in the everyday objects around us.

But Breit and Wheeler had no expectations that their theory would be proved any time soon. In their study, the physicists noted that the process was so rare and hard to produce that it would be “hopeless to try to observe the pair formation in laboratory experiments”.

Oliver Pike, the lead researcher on the study, said the process was one of the most elegant demonstrations of Einstein’s famous relationship that shows matter and energy are interchangeable currencies. “The Breit-Wheeler process is the simplest way matter can be made from light and one of the purest demonstrations of E=mc²,” he said.

Writing in the journal Nature Photonics, the scientists describe how they could turn light into matter through a number of separate steps. The first step fires electrons at a slab of gold to produce a beam of high-energy photons. Next, they fire a high-energy laser into a tiny gold capsule called a hohlraum, from the German for “empty room”. This produces light as bright as that emitted from stars. In the final stage, they send the first beam of photons into the hohlraum where the two streams of photons collide.

The scientists’ calculations show that the setup squeezes enough particles of light with high enough energies into a small enough volume to create around 100,000 electron-positron pairs.
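For a rough sense of the energies involved, here is a back-of-the-envelope check (my own sketch, not part of the study): two photons can only yield an electron-positron pair if their centre-of-mass energy reaches twice the electron rest energy of about 511 keV.

```python
# Kinematic threshold for two-photon (Breit-Wheeler) pair production, head-on collision.
# For photons of energies E1 and E2 meeting head-on, the invariant mass squared
# is s = 4 * E1 * E2, which must reach (2 * m_e * c^2)^2.
ELECTRON_REST_ENERGY_MEV = 0.511  # m_e c^2

def pair_production_possible(e1_mev: float, e2_mev: float) -> bool:
    return 4 * e1_mev * e2_mev >= (2 * ELECTRON_REST_ENERGY_MEV) ** 2

print(pair_production_possible(0.511, 0.511))  # True: exactly at threshold
print(pair_production_possible(0.3, 0.3))      # False: too little energy
print(pair_production_possible(0.1, 5.0))      # True: one energetic photon can compensate
```

This is only the kinematic floor; the yield quoted above also depends on how many photons are squeezed into the hohlraum and on the very small Breit-Wheeler cross-section.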

The process is one of the most spectacular predictions of a theory called quantum electrodynamics (QED) that was developed in the run up to the second world war. “You might call it the most dramatic consequence of QED and it clearly shows that light and matter are interchangeable,” Rose told the Guardian.

The scientists hope to demonstrate the process in the next 12 months. There are a number of sites around the world that have the technology. One is the huge Omega laser in Rochester, New York. But another is the Orion laser at Aldermaston, the atomic weapons facility in Berkshire.

A successful demonstration will encourage physicists who have been eyeing the prospect of a photon-photon collider as a tool to study how subatomic particles behave. “Such a collider could be used to study fundamental physics with a very clean experimental setup: pure light goes in, matter comes out. The experiment would be the first demonstration of this,” Pike said.

Read the entire story here.

Image: Feynman diagram for gluon radiation. Courtesy of Wikipedia.

95.5 Percent is Made Up and It’s Dark

Petrarch_by_Bargilla

Physicists and astronomers observe the very small and the very big. Although they are focused on very different areas of scientific endeavor and discovery, they tend to agree on one key observation: 95.5 percent of the cosmos is currently invisible to us. That is, only around 4.5 percent of our physical universe is made up of matter or energy that we can see or sense directly through experimental interaction. The rest, well, it’s all dark — so-called dark matter and dark energy. But nobody really knows what, or how, or why. Effectively, despite tremendous progress in our understanding of our world, we are still in a global “Dark Age”.

From the New Scientist:

TO OUR eyes, stars define the universe. To cosmologists they are just a dusting of glitter, an insignificant decoration on the true face of space. Far outweighing ordinary stars and gas are two elusive entities: dark matter and dark energy. We don’t know what they are… except that they appear to be almost everything.

These twin apparitions might be enough to give us pause, and make us wonder whether all is right with the model universe we have spent the past century so carefully constructing. And they are not the only thing. Our standard cosmology also says that space was stretched into shape just a split second after the big bang by a third dark and unknown entity called the inflaton field. That might imply the existence of a multiverse of countless other universes hidden from our view, most of them unimaginably alien – just to make models of our own universe work.

Are these weighty phantoms too great a burden for our observations to bear – a wholesale return of conjecture out of a trifling investment of fact, as Mark Twain put it?

The physical foundation of our standard cosmology is Einstein’s general theory of relativity. Einstein began with a simple observation: that any object’s gravitational mass is exactly equal to its resistance to acceleration, or inertial mass. From that he deduced equations that showed how space is warped by mass and motion, and how we see that bending as gravity. Apples fall to Earth because Earth’s mass bends space-time.

In a relatively low-gravity environment such as Earth, general relativity’s effects look very like those predicted by Newton’s earlier theory, which treats gravity as a force that travels instantaneously between objects. With stronger gravitational fields, however, the predictions diverge considerably. One extra prediction of general relativity is that large accelerating masses send out tiny ripples in the weave of space-time called gravitational waves. While these waves have never yet been observed directly, a pair of dense stars called pulsars, discovered in 1974, are spiralling in towards each other just as they should if they are losing energy by emitting gravitational waves.

Gravity is the dominant force of nature on cosmic scales, so general relativity is our best tool for modelling how the universe as a whole moves and behaves. But its equations are fiendishly complicated, with a frightening array of levers to pull. If you then give them a complex input, such as the details of the real universe’s messy distribution of mass and energy, they become effectively impossible to solve. To make a working cosmological model, we make simplifying assumptions.

The main assumption, called the Copernican principle, is that we are not in a special place. The cosmos should look pretty much the same everywhere – as indeed it seems to, with stuff distributed pretty evenly when we look at large enough scales. This means there’s just one number to put into Einstein’s equations: the universal density of matter.

Einstein’s own first pared-down model universe, which he filled with an inert dust of uniform density, turned up a cosmos that contracted under its own gravity. He saw that as a problem, and circumvented it by adding a new term into the equations by which empty space itself gains a constant energy density. Its gravity turns out to be repulsive, so adding the right amount of this “cosmological constant” ensured the universe neither expanded nor contracted. When observations in the 1920s showed it was actually expanding, Einstein described this move as his greatest blunder.

It was left to others to apply the equations of relativity to an expanding universe. They arrived at a model cosmos that grows from an initial point of unimaginable density, and whose expansion is gradually slowed down by matter’s gravity.

This was the birth of big bang cosmology. Back then, the main question was whether the expansion would ever come to a halt. The answer seemed to be no; there was just too little matter for gravity to rein in the fleeing galaxies. The universe would coast outwards forever.

Then the cosmic spectres began to materialise. The first emissary of darkness put a foot in the door as long ago as the 1930s, but was only fully seen in the late 1970s when astronomers found that galaxies are spinning too fast. The gravity of the visible matter would be too weak to hold these galaxies together according to general relativity, or indeed plain old Newtonian physics. Astronomers concluded that there must be a lot of invisible matter to provide extra gravitational glue.

The existence of dark matter is backed up by other lines of evidence, such as how groups of galaxies move, and the way they bend light on its way to us. It is also needed to pull things together to begin galaxy-building in the first place. Overall, there seems to be about five times as much dark matter as visible gas and stars.

Dark matter’s identity is unknown. It seems to be something beyond the standard model of particle physics, and despite our best efforts we have yet to see or create a dark matter particle on Earth (see “Trouble with physics: Smashing into a dead end”). But it changed cosmology’s standard model only slightly: its gravitational effect in general relativity is identical to that of ordinary matter, and even such an abundance of gravitating stuff is too little to halt the universe’s expansion.

The second form of darkness required a more profound change. In the 1990s, astronomers traced the expansion of the universe more precisely than ever before, using measurements of explosions called type Ia supernovae. They showed that the cosmic expansion is accelerating. It seems some repulsive force, acting throughout the universe, is now comprehensively trouncing matter’s attractive gravity.

This could be Einstein’s cosmological constant resurrected, an energy in the vacuum that generates a repulsive force, although particle physics struggles to explain why space should have the rather small implied energy density. So imaginative theorists have devised other ideas, including energy fields created by as-yet-unseen particles, and forces from beyond the visible universe or emanating from other dimensions.

Whatever it might be, dark energy seems real enough. The cosmic microwave background radiation, released when the first atoms formed just 370,000 years after the big bang, bears a faint pattern of hotter and cooler spots that reveals where the young cosmos was a little more or less dense. The typical spot sizes can be used to work out to what extent space as a whole is warped by the matter and motions within it. It appears to be almost exactly flat, meaning all these bending influences must cancel out. This, again, requires some extra, repulsive energy to balance the bending due to expansion and the gravity of matter. A similar story is told by the pattern of galaxies in space.

All of this leaves us with a precise recipe for the universe. The average density of ordinary matter in space is 0.426 yoctograms per cubic metre (a yoctogram is 10⁻²⁴ grams, and 0.426 of one equates to about a quarter of a proton mass), making up 4.5 per cent of the total energy density of the universe. Dark matter makes up 22.5 per cent, and dark energy 73 per cent. Our model of a big-bang universe based on general relativity fits our observations very nicely – as long as we are happy to make 95.5 per cent of it up.
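Those figures are easy to sanity-check. The sketch below (my own arithmetic, not the article’s, and sensitive to the assumed Hubble constant of roughly 70 km/s/Mpc) reproduces both the quarter-of-a-proton figure and the 4.5 per cent share:

```python
# Sanity check on the quoted density of ordinary matter.
import math

PROTON_MASS_G = 1.6726e-24            # grams
ordinary_matter_g_per_m3 = 0.426e-24  # 0.426 yoctograms per cubic metre

# About a quarter of a proton mass per cubic metre.
print(ordinary_matter_g_per_m3 / PROTON_MASS_G)            # ~0.25

# Compare with 4.5% of the critical density for H0 ~ 70 km/s/Mpc.
G = 6.674e-11                          # gravitational constant, m^3 kg^-1 s^-2
H0 = 70e3 / 3.086e22                   # Hubble constant in s^-1
critical_density_kg_per_m3 = 3 * H0**2 / (8 * math.pi * G)
print(0.045 * critical_density_kg_per_m3 * 1000)           # ~4e-25 g/m^3, i.e. ~0.4 yoctograms
```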

Arguably, we must invent even more than that. To explain why the universe looks so extraordinarily uniform in all directions, today’s consensus cosmology contains a third exotic element. When the universe was just 10⁻³⁶ seconds old, an overwhelming force took over. Called the inflaton field, it was repulsive like dark energy, but far more powerful, causing the universe to expand explosively by a factor of more than 10²⁵, flattening space and smoothing out any gross irregularities.

When this period of inflation ended, the inflaton field transformed into matter and radiation. Quantum fluctuations in the field became slight variations in density, which eventually became the spots in the cosmic microwave background, and today’s galaxies. Again, this fantastic story seems to fit the observational facts. And again it comes with conceptual baggage. Inflation is no trouble for general relativity – mathematically it just requires an add-on term identical to the cosmological constant. But at one time this inflaton field must have made up 100 per cent of the contents of the universe, and its origin poses as much of a puzzle as either dark matter or dark energy. What’s more, once inflation has started it proves tricky to stop: it goes on to create a further legion of universes divorced from our own. For some cosmologists, the apparent prediction of this multiverse is an urgent reason to revisit the underlying assumptions of our standard cosmology (see “Trouble with physics: Time to rethink cosmic inflation?”).

The model faces a few observational niggles, too. The big bang makes much more lithium-7 in theory than the universe contains in practice. The model does not explain the possible alignment in some features in the cosmic background radiation, or why galaxies along certain lines of sight seem biased to spin left-handedly. A newly discovered supergalactic structure 4 billion light years long calls into question the assumption that the universe is smooth on large scales.

Read the entire story here.

Image: Petrarch, who first conceived the idea of a European “Dark Age”, by Andrea di Bartolo di Bargilla, c1450. Courtesy of Galleria degli Uffizi, Florence, Italy / Wikipedia.

Building a Memory Palace

Feats of memory have long been a staple of human endeavor — for instance, memorizing and recalling pi to hundreds of decimal places. Nowadays, however, memorization is a competitive sport replete with grand prizes, worthy of a place in an X-Games tournament.

From the NYT:

The last match of the tournament had all the elements of a classic showdown, pitting style versus stealth, quickness versus deliberation, and the world’s foremost card virtuoso against its premier numbers wizard.

If not quite Ali-Frazier or Williams-Sharapova, the duel was all the audience of about 100 could ask for. They had come to the first Extreme Memory Tournament, or XMT, to see a fast-paced, digitally enhanced memory contest, and that’s what they got.

The contest, an unusual collaboration between industry and academic scientists, featured one-minute matches between 16 world-class “memory athletes” from all over the world as they met in a World Cup-like elimination format. The grand prize was $20,000; the potential scientific payoff was large, too.

One of the tournament’s sponsors, the company Dart NeuroScience, is working to develop drugs for improved cognition. The other, Washington University in St. Louis, sent a research team with a battery of cognitive tests to determine what, if anything, sets memory athletes apart. Previous research was sparse and inconclusive.

Yet as the two finalists, both Germans, prepared to face off — Simon Reinhard, 35, a lawyer who holds the world record in card memorization (a deck in 21.19 seconds), and Johannes Mallow, 32, a teacher with the record for memorizing digits (501 in five minutes) — the Washington group had one preliminary finding that wasn’t obvious.

“We found that one of the biggest differences between memory athletes and the rest of us,” said Henry L. Roediger III, the psychologist who led the research team, “is in a cognitive ability that’s not a direct measure of memory at all but of attention.”

The Memory Palace

The technique the competitors use is no mystery.

People have been performing feats of memory for ages, scrolling out pi to hundreds of digits, or phenomenally long verses, or word pairs. Most store the studied material in a so-called memory palace, associating the numbers, words or cards with specific images they have already memorized; then they mentally place the associated pairs in a familiar location, like the rooms of a childhood home or the stops on a subway line.

The Greek poet Simonides of Ceos is credited with first describing the method, in the fifth century B.C., and it has been vividly described in popular books, most recently “Moonwalking With Einstein,” by Joshua Foer.

Each competitor has his or her own variation. “When I see the eight of diamonds and the queen of spades, I picture a toilet, and my friend Guy Plowman,” said Ben Pridmore, 37, an accountant in Derby, England, and a former champion. “Then I put those pictures on High Street in Cambridge, which is a street I know very well.”

As these images accumulate during memorization, they tell an increasingly bizarre but memorable story. “I often use movie scenes as locations,” said James Paterson, 32, a high school psychology teacher in Ascot, near London, who competes in world events. “In the movie ‘Gladiator,’ which I use, there’s a scene where Russell Crowe is in a field, passing soldiers, inspecting weapons.”

Mr. Paterson uses superheroes to represent combinations of letters or numbers: “I might have Batman — one of my images — playing Russell Crowe, and something else playing the horse, and so on.”

The material that competitors attempt to memorize falls into several standard categories. Shuffled decks of cards. Random words. Names matched with faces. And numbers, either binary (ones and zeros) or integers. They are given a set amount of time to study — up to one minute in this tournament, an hour or more in others — before trying to reproduce as many cards, words or digits in the order presented.
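As a toy illustration of the pairing-and-placing the competitors describe (a sketch of the general idea only, not any athlete’s actual system, with invented images):

```python
# Method of loci, in miniature: pair each item with a pre-learned image,
# then attach the pairs, in order, to stops along a familiar route.
images = {"8D": "a toilet", "QS": "a queen on horseback", "KH": "a gladiator"}  # card -> image
route = ["front door", "hallway", "kitchen", "stairs"]                          # familiar loci

def build_palace(items, images, route):
    """Attach each item's image to the next stop on the route, preserving order."""
    return {locus: (item, images[item]) for locus, item in zip(route, items)}

palace = build_palace(["QS", "8D", "KH"], images, route)
for locus, (item, image) in palace.items():
    print(f"At the {locus}: picture {image} (which recalls {item})")
```

Recall then amounts to mentally walking the route in order and reading the items back off the images.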

Now and then, a challenger boasts online of having discovered an entirely new method, and shows up at competitions to demonstrate it.

“Those people are easy to find, because they come in last, or close to it,” said another world-class competitor, Boris Konrad, 29, a German postdoctoral student in neuroscience. “Everyone here uses this same type of technique.”

Anyone can learn to construct a memory palace, researchers say, and with practice remember far more detail of a particular subject than before. The technique is accessible enough that preteens pick it up quickly, and Mr. Paterson has integrated it into his teaching.

“I’ve got one boy, for instance, he has no interest in academics really, but he knows the Premier League, every team, every player,” he said. “I’m working with him, and he’s using that knowledge as scaffolding to help remember what he’s learning in class.”

Experts in Forgetting

The competitors gathered here for the XMT are not just anyone, however. This is the all-world team, an elite club of laser-smart types who take a nerdy interest in stockpiling facts and pushing themselves hard.

In his doctoral study of 30 world-class performers (most from Germany, which has by far the highest concentration because there are more competitions), Mr. Konrad has found as much. The average I.Q.: 130. Average study time: 1,000 to 2,000 hours and counting. The top competitors all use some variation of the memory-palace system and test, retest and tweak it.

“I started with my own system, but now I use his,” said Annalena Fischer, 20, pointing to her boyfriend, Christian Schäfer, 22, whom she met at a 2010 memory competition in Germany. “Except I don’t use the distance runners he uses; I don’t know anything about the distance runners.” Both are advanced science students and participants in Mr. Konrad’s study.

One of the Washington University findings is predictable, if still preliminary: Memory athletes score very highly on tests of working memory, the mental sketchpad that serves as a shopping list of information we can hold in mind despite distractions.

One way to measure working memory is to have subjects solve a list of equations (5 + 4 = x; 8 + 9 = y; 7 + 2 = z; and so on) while keeping the middle numbers in mind (4, 9 and 2 in the above example). Elite memory athletes can usually store seven items, the top score on the test the researchers used; the average for college students is around two.

“And college students tend to be good at this task,” said Dr. Roediger, a co-author of the new book “Make It Stick: The Science of Successful Learning.” “What I’d like to do is extend the scoring up to, say, 21, just to see how far the memory athletes can go.”
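A bare-bones version of that operation-span task might look like the following sketch (a simplified illustration, not the researchers’ actual instrument):

```python
# Operation span, simplified: verify each sum (the processing load) while holding
# the middle number of every equation in mind (the storage load).
import random

def operation_span_trial(n_items: int) -> list[int]:
    to_remember = []
    for _ in range(n_items):
        a, b = random.randint(1, 9), random.randint(1, 9)
        reply = input(f"{a} + {b} = ? ")
        if reply.strip() != str(a + b):
            print("(arithmetic slip)")
        to_remember.append(b)  # the middle number, to be recalled later
    return to_remember

# Scoring: afterwards, ask for the middle numbers in order and count how many
# match to_remember; the seven-item version is the ceiling mentioned above.
```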

Yet this finding raises another question: Why don’t the competitors’ memory palaces ever fill up? Players usually have many favored locations to store studied facts, but they practice and compete repeatedly. They use and reuse the same blueprints hundreds of times, and the new images seem to overwrite the old ones — virtually without error.

“Once you’ve remembered the words or cards or whatever it is, and reported them, they’re just gone,” Mr. Paterson said.

Many competitors say the same: Once any given competition is over, the numbers or words or facts are gone. But this is one area in which they have less than precise insight.

In its testing, which began last year, the Washington University team has given memory athletes surprise tests on “old” material — lists of words they’d been tested on the day before. On Day 2, they recalled an average of about three-quarters of the words they memorized on Day 1 (college students remembered fewer than 5 percent). That is, despite what competitors say, the material is not gone; far from it.

Yet to install a fresh image-laden “story” in any given memory palace, a memory athlete must clear away the old one in its entirety. The same process occurs when we change a password: The old one must be suppressed, so it doesn’t interfere with the new one.

One term for that skill is “attentional control,” and psychologists have been measuring it for years with standardized tests. In the best known, the Stroop test, people see words flash by on a computer screen and name the color in which a word is presented. Answering is nearly instantaneous when the color and the word match — “red” displayed in red — but slower when there’s a mismatch, like “red” displayed in blue.
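A minimal Stroop-style trial could be sketched like this (illustrative only; the standardized test controls stimuli and timing far more carefully):

```python
# One Stroop trial: name the ink colour; responses are typically slower
# when the printed word and the ink colour mismatch.
import random
import time

COLOURS = ["red", "blue", "green"]

def stroop_trial() -> float:
    word = random.choice(COLOURS)
    ink = random.choice(COLOURS)   # congruent when word == ink
    start = time.monotonic()
    response = input(f"The word '{word.upper()}' appears in {ink} ink. Ink colour? ")
    elapsed = time.monotonic() - start
    print("correct" if response.strip().lower() == ink else "wrong", f"({elapsed:.2f}s)")
    return elapsed
```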

Read the entire article here.

Life and Death: Sharing Startups

The great cycle of reinvention spawned by the Internet and mobile technologies continues apace. This time it’s entrepreneurial businesses laying the foundations for the sharing economy — whether that be beds, rooms, clothes, tuition, bicycles or cars. A few succeed and become great new businesses; most fail.

From the WSJ:

A few high-profile “sharing-economy” startups are gaining quick traction with users, including those that let consumers rent apartments and homes like Airbnb Inc., or get car rides, such as Uber Technologies Inc.

Both Airbnb and Uber are valued in the billions of dollars, a sign that investors believe the segment is hot—and a big reason why more entrepreneurs are embracing the business model.

At MassChallenge, a Boston-based program to help early-stage entrepreneurs, about 9% of participants in 2013 were starting companies to connect consumers or businesses with products and services that would otherwise go unused. That compares with about 5% in 2010, for instance.

“We’re bullish on the sharing economy, and we’ll definitely make more investments in it,” said Sam Altman, president of Y Combinator, a startup accelerator in Mountain View, Calif., and one of Airbnb’s first investors.

Yet at least a few dozen sharing-economy startups have failed since 2012, including BlackJet, a Florida-based service that touted itself as the “Uber for jet travel,” and Tutorspree, a New York service dubbed the “Airbnb for tutors.” Most ran out of money, following struggles that ranged from difficulties building a critical mass of supply and demand, to higher-than-expected operating costs.

“We ended up being unable to consistently produce a level of demand on par with what we needed to scale rapidly,” said Aaron Harris, co-founder of Tutorspree, which launched in January 2011 and shuttered in August 2013.

“If you have to reacquire the customer every six months, they’ll forget you,” said Howard Morgan, co-founder of First Round Capital, which was an investor in BlackJet. “A private jet ride isn’t something you do every day. If you’re very wealthy, you have your own plane.” By comparison, he added that he recently used Uber’s ride-sharing service three times in one day.

Consider carpooling startup Ridejoy, for example. During its first year in 2011, its user base was growing by about 30% a month, with more than 25,000 riders and drivers signed up, and an estimated 10,000 rides completed, said Kalvin Wang, one of its three founders. But by the spring of 2013, Ridejoy, which had raised $1.3 million from early-stage investors like Freestyle Capital, was facing ferocious competition from free alternatives, such as carpooling forums on college websites.

Also, some riders could—and did—begin to sidestep the middleman. Many skipped paying its 10% transaction fee by handing their drivers cash instead of paying by credit card on Ridejoy’s website or mobile app. Others just didn’t get it, and even 25,000 users wasn’t sufficient to sustain the business. “You never really have enough inventory,” said Mr. Wang.

After it folded in the summer of 2013, Ridejoy returned about half of its funding to investors, according to Mr. Wang. Alexis Ohanian, an entrepreneur in Brooklyn, N.Y., who was an investor in Ridejoy, said it “could just be the timing or execution that was off.” He cited the success so far of Lyft Inc., the two-year-old San Francisco company that is valued at more than $700 million and offers a short-distance ride-sharing service. “It turned out the short rides are what the market really wanted,” Mr. Ohanian said.

One drawback is that because much of the revenue a sharing business generates goes directly back to the suppliers—of bedrooms, parking spots, vehicles or other “shared” assets—the underlying business may be continuously strapped for cash.

Read the entire article here.

The (Space) Explorers Club

clangers

Thirteen private companies recently met in New York City to present their plans and ideas for commercial space operations. Ranging from space tourism to private exploration of the Moon and asteroid mining, the companies gathered at the Explorers Club to herald a new phase of human exploration.

From Technology Review:

It was a rare meeting of minds. Representatives from 13 commercial space companies gathered on May 1 at a place dedicated to going where few have gone before: the Explorers Club in New York.

Amid the mansions and high-end apartment buildings just off Central Park, executives from space-tourism companies, rocket-making startups, and even a business that hopes to make money by mining asteroids for useful materials showed off displays and gave presentations.

The Explorers Club event provided a snapshot of what may be a new industry in the making. In an era when NASA no longer operates manned space missions and government funding for unmanned missions is tight, a host of startups—most funded by space enthusiasts with very deep pockets—have stepped up in hope of filling the gap. In the past few years, several have proved themselves. Elon Musk’s SpaceX, for example, delivers cargo to the International Space Station for NASA. Both Richard Branson’s Virgin Galactic and rocket-plane builder XCOR Aerospace plan to perform demonstrations this year that will help catapult commercial spaceflight from the fringe into the mainstream.

The advancements being made by space companies could matter to more than the few who can afford tickets to space. SpaceX has already shaken incumbents in the $190 billion satellite launch industry by offering cheaper rides into space for communications, mapping, and research satellites.

However, space tourism also looks set to become significantly cheaper. “People don’t have to actually go up for it to impact them,” says David Mindell, an MIT professor of aeronautics and astronautics and a specialist in the history of engineering. “At $200,000 you’ll have a lot more ‘space people’ running around, and over time that could have a big impact.” One direct result, says Mindell, may be increased public support for human spaceflight, especially “when everyone knows someone who’s been into space.”

Along with reporters, Explorer Club members, and members of the public who had paid the $75 to $150 entry fee, several former NASA astronauts were in attendance to lend their endorsements—including the MC for the evening, Michael López-Alegría, veteran of the space shuttle and the ISS. Also on hand, highlighting the changing times with his very presence, was the world’s first second-generation astronaut, Richard Garriott. Garriott’s father flew missions on Skylab and the space shuttle in the 1970s and 1980s, respectively. However, Garriott paid his own way to the International Space Station in 2008 as a private citizen.

The evening was a whirlwind of activity, with customer testimonials and rapid-fire displays of rocket launches, spacecraft in orbit, and space ships under construction and being tested. It all painted a picture of an industry on the move, with multiple companies offering services from suborbital experiences and research opportunities to flights to Earth orbit and beyond.

The event also offered a glimpse at the plans of several key players.

Lauren De Niro Pipher, head of astronaut relations at Virgin Galactic, revealed that the company’s founder plans to fly with his family aboard the Virgin Galactic SpaceShipTwo rocket plane in November or December of this year. The flight will launch the company’s suborbital spaceflight business, for which De Niro Pipher said more than 700 customers have so far put down deposits on tickets costing $200,000 to $250,000.

The director of business development for Blue Origin, Bretton Alexander, announced his company’s intention to begin test flights of its first full-scale vehicle within the next year. “We have not publicly started selling rides in space as others have,” said Alexander during his question-and-answer session. “But that is our plan to do that, and we look forward to doing that, hopefully soon.”

Blue Origin is perhaps the most secretive of the commercial spaceflight companies, typically revealing little of its progress toward the services it plans to offer: suborbital manned spaceflight and, later, orbital flight. Like Virgin, it was founded by a wealthy entrepreneur, in this case Amazon founder Jeff Bezos. The company, which is headquartered in Kent, Washington, has so far conducted at least one supersonic test flight and a test of its escape rocket system, both at its West Texas test center.

Also on hand was the head of Planetary Resources, Chris Lewicki, a former spacecraft engineer and manager for Mars programs at NASA. He showed off a prototype of his company’s Arkyd 100, an asteroid-hunting space telescope the size of a toaster oven. If all goes according to plan, a fleet of Arkyd 100s will first scan the skies from Earth orbit in search of nearby asteroids that might be rich in mineral wealth and water, to be visited by the next generation of Arkyd probes. Water is potentially valuable for future space-based enterprises as rocket fuel (split into its constituent elements of hydrogen and oxygen) and for use in life support systems. Planetary Resources plans to “launch early, launch often,” Lewicki told me after his presentation. To that end, the company is building a series of CubeSat-size spacecraft dubbed Arkyd 3s, to be launched from the International Space Station by the end of this year.

Andrew Antonio, experience manager at a relatively new company, World View Enterprises, showed a computer-generated video of his company’s planned balloon flights to the edge of space. A manned capsule will ascend to 100,000 feet, or about 20 miles up, from which the curvature of Earth and the black sky of space are visible. At $75,000 per ticket (reduced to $65,000 for Explorers Club members), the flight will be more affordable than competing rocket-powered suborbital experiences but won’t go as high. Antonio said his company plans to launch a small test vehicle “in about a month.”

XCOR’s director of payload sales and operations, Khaki Rodway, showed video clips of the company’s Lynx suborbital rocket plane coming together in Mojave, California, as well as a profile of an XCOR spaceflight customer. Hangared just down the flight line at the same air and space port where Virgin Galactic’s SpaceShipTwo is undergoing flight testing, the Lynx offers seating for one paying customer per flight at $95,000. XCOR hopes the Lynx will begin flying by the end of this year.

Read the entire article here.

Image: Still from the Clangers TV show. Courtesy of BBC / Smallfilms.

Intimate Anonymity

A new mobile app lets you share all your intimate details with a stranger for 20 days. The fascinating part of this social experiment is that the stranger remains anonymous throughout. The app known as 20 Day Stranger is brought to us by the venerable MIT Media Lab. It may never catch on, but you can be sure that psychologists are gleefully awaiting some data.

From Slate:

Social media is all about connecting with people you know, people you sort of know, or people you want to know. But what about all those people you didn’t know you wanted to know? They’re out there, too, and the new iPhone app 20 Day Stranger wants to put you in touch with them. Created by the MIT Media Lab’s Playful Systems research group, the app connects strangers and allows them to update each other about any and every detail of their lives for 20 days. But the people are totally anonymous and can interact directly only at the end of their 20 days together, when they can exchange one message each.

20 Day Stranger uses information from the iPhone’s sensors to alert your stranger-friend when you wake up (and start moving the phone), when you’re in a car or bus (from GPS tracking), and where you are. But it isn’t totally privacy-invading: The app also takes steps to keep both people anonymous. When it shows your stranger-friend that you’re walking around somewhere, it accompanies the notification with images from a half-mile radius of where you actually are on Google Maps. Your stranger-friend might be able to figure out what area you’re in, or they might not.
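
(An aside: the half-mile blurring described above is, at heart, a simple location-obfuscation step. The app’s actual implementation is not public, so the Python sketch below is purely illustrative; the function name, the radius handling and the example coordinates are assumptions, not anything taken from 20 Day Stranger.)

import math
import random

EARTH_RADIUS_M = 6_371_000  # mean Earth radius, in metres
HALF_MILE_M = 804.67        # half a mile, in metres

def fuzz_location(lat, lon, radius_m=HALF_MILE_M):
    """Return a point chosen uniformly at random within radius_m of (lat, lon)."""
    # Sampling the distance with sqrt() spreads points evenly over the disc,
    # rather than clustering the decoys around the user's true position.
    distance = radius_m * math.sqrt(random.random())
    bearing = random.uniform(0, 2 * math.pi)

    # Convert the metre offsets into degrees (small-distance approximation).
    d_lat = (distance * math.cos(bearing)) / EARTH_RADIUS_M
    d_lon = (distance * math.sin(bearing)) / (EARTH_RADIUS_M * math.cos(math.radians(lat)))
    return lat + math.degrees(d_lat), lon + math.degrees(d_lon)

# Example: report a decoy point somewhere within half a mile of the MIT campus.
print(fuzz_location(42.3601, -71.0942))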

Kevin Slavin, the director of Playful Systems, explained to Fast Company that the app’s goal is to introduce people online in a positive and empathetic way, rather than one that’s filled with suspicion or doubt. Though 20 Day Stranger is currently being beta tested, Playful Systems’ goal is to generally release it in the App Store. But the group is worried about getting people to adopt it all over instead of building up user bases in certain geographic areas. “There’s no one type of person that will make it useful,” Slavin said. “It’s the heterogeneous quality of everyone in aggregate. Which is a bad [promotional] strategy if you’re making commercial software.”

At this point it’s not that rare to interact frequently on social media with someone you’ve never met in person. What’s unusual is not to know their name or anything about who they are. But an honest window into another person’s life without the pressure of identity could expand your worldview and maybe even stimulate introspection. It sounds like a step up from Secret, that’s for sure.

Read the entire article here.

Measuring a Life

stephen-sutton

“I don’t see the point in measuring life in time any more… I would rather measure it in terms of what I actually achieve. I’d rather measure it in terms of making a difference, which I think is a much more valid and pragmatic measure.”

These are the inspiring and insightful words of 19-year-old Stephen Sutton, from Birmingham in Britain, spoken about a week before he died of bowel cancer. His upbeat attitude and selflessness during his last days captured the hearts and minds of the nation, and he raised around $5½ million for cancer charities in the process.

From the Guardian:

Few scenarios can seem as cruel or as bleak as a 19-year-old boy dying of cancer. And yet, in the case of Stephen Sutton, who died peacefully in his sleep in the early hours of Wednesday morning, it became an inspiring, uplifting tale for millions of people.

Sutton was already something of a local hero in Birmingham, where he was being treated, but it was an extraordinary Facebook update in April that catapulted him into the national spotlight.

“It’s a final thumbs up from me,” he wrote, accompanied by a selfie of him lying in a sickbed, covered in drips, smiling cheerfully with his thumbs in the air. “I’ve done well to blag things as well as I have up till now, but unfortunately I think this is just one hurdle too far.”

It was an extraordinary moment: many would have forgiven him being full of rage and misery. And yet here was a simple, understated display of cheerful defiance.

Sutton had originally set a fundraising target of £10,000 for the Teenage Cancer Trust. But the emotional impact of that selfie was so profound that, in a matter of days, more than £3m was donated.

He made a temporary recovery that baffled doctors; he explained that he had “coughed up” a tumour. And so began an extraordinary dialogue with his well-wishers.

To his astonishment, nearly a million people liked his Facebook page and tens of thousands followed him on Twitter. It is fashionable to be downbeat about social media: to dismiss it as being riddled with the banal and the narcissistic, or for stripping human interaction of warmth as conversations shift away from the “real world” to the online sphere.

But it was difficult not to be moved by the online response to Stephen’s story: a national wave of emotion that is not normally forthcoming for those outside the world of celebrity.

His social-media updates were relentlessly upbeat, putting those of us who have tweeted moaning about a cold to shame. “Just another update to let everyone know I am still doing and feeling very well,” he reassured followers less than a week before his death. “My disease is very advanced and will get me eventually, but I will try my damn hardest to be here as long as possible.”

Sutton was diagnosed with bowel cancer in September 2010 when he was 15; tragically, he had been misdiagnosed and treated for constipation months earlier.

But his response was unabashed positivity from the very beginning, even describing his diagnosis as a “good thing” and a “kick up the backside”.

The day he began chemotherapy, he attended a party dressed as a granny – he was so thin and pale, he said, that he was “quite convincing”. He refused to take time off school, where he excelled.

When he was diagnosed as terminally ill two years later, he set up a Facebook page with a bucket list of things he wanted to achieve, including sky-diving, crowd-surfing in a rubber dinghy, and hugging an animal bigger than him (an elephant, it turned out).

But it was his fundraising for cancer research that became his passion, and his efforts will undoubtedly transform the lives of some of the 2,200 teenagers and young adults diagnosed with cancer each year.

The Teenage Cancer Trust on Wednesday said it was humbled and hugely grateful for his efforts, with donations still ticking up and reaching £3.34m by mid-afternoon.

His dream had been to become a doctor. With that ambition taken from him, he sought and found new ways to help people. “Spreading positivity” was another key aim. Four days ago, he organised a National Good Gestures Day, in Birmingham, giving out “free high-fives, hugs, handshakes and fist bumps”.

Indeed, it was not just money for cancer research that Sutton was after. He became an evangelist for a new approach to life.

“I don’t see the point in measuring life in time any more,” he told one crowd. “I would rather measure it in terms of what I actually achieve. I’d rather measure it in terms of making a difference, which I think is a much more valid and pragmatic measure.”

By such a measure, Sutton could scarcely have lived a longer, richer and more fulfilling life.

Read the entire story here.

Image: Stephen Sutton. Courtesy of Google Search.

Thwaites

thwaits_icebridge_2012

Over the coming years the words “Thwaites Glacier” will become known to many people, especially those who make their home near the world’s oceans. The thawing of Antarctic ice and the accelerating melting of its glaciers — of which Thwaites is a prime example — pose an increasing threat to our coasts and, ultimately, imperil us all.

Thwaites is one of six mega-glaciers that drain into West Antarctica’s Amundsen Sea. If all were to melt completely, as they are on course to do, global sea level is projected to rise by an average of 4½ feet. Astonishingly, this catastrophe in the making has already passed a tipping point — climatologists and glaciologists now tend to agree that the melting is irreversible and accelerating.

From ars technica:

Today, researchers at UC Irvine and the Jet Propulsion Laboratory have announced results indicating that glaciers across a large area of West Antarctica have been destabilized and that there is little that will stop their continuing retreat. These glaciers are all that stand between the ocean and a massive basin of ice that sits below sea level. Should the sea invade this basin, we’d be committed to several meters of sea level rise.

Even in the short term, the new findings should increase our estimates for sea level rise by the end of the century, the scientists suggest. But the ongoing process of retreat and destabilization will mean that the area will contribute to rising oceans for centuries.

The press conference announcing these results is ongoing. We will have a significant update on this story later today.

UPDATE (2:05pm CDT):

The glaciers in question are in West Antarctica, and drain into the Amundsen Sea. On the coastal side, the ends of the glacier are actually floating on ocean water. Closer to the coast, there’s what’s called a “grounding line,” where the weight of the ice above sea level pushes the bottom of the glacier down against the sea bed. From there on, back to the interior of Antarctica, all of the ice is directly in contact with the Earth.

That’s a rather significant fact, given that, just behind a range of coastal hills, all of the ice is sitting in a huge basin that’s significantly below sea level. In total, the basin contains enough ice to raise sea levels approximately four meters, largely because the ice piled in there rises significantly above sea level.

Because of this configuration, the grounding line of the glaciers that drain this basin act as a protective barrier, keeping the sea back from the base of the deeper basin. Once ocean waters start infiltrating the base of a glacier, the glacier melts, flows faster, and thins. This lessens the weight holding the glacier down, ultimately causing it to float, which hastens its break up. Since the entire basin is below sea level (in some areas by over a kilometer), water entering the basin via any of the glaciers could destabilize the entire thing.

Thus, understanding the dynamics of the grounding lines is critical. Today’s announcements have been driven by two publications. One of them models the behavior of one of these glaciers, and shows that it has likely reached a point where it will be prone to a sudden retreat sometime in the next few centuries. The second examines every glacier draining this basin, and shows that all but one of them are currently losing contact with their grounding lines.

Ungrounded

The data come from two decades’ worth of observations by the ESA’s Earth Remote Sensing satellites. These carry radar that performs two key functions: it peers through the ice to map the terrain buried beneath the glacier near the grounding line, and, through interferometry, it tracks the dynamics of the ice sheet’s flow in the area, as well as its thinning and the location of the grounding line itself. The study tracks a number of glaciers that all drain into the region: Pine Island, Thwaites, Haynes, and Smith/Kohler.

As we’ve covered previously, the Pine Island Glacier came ungrounded in the second half of the past decade, retreating up to 31km in the process. Although this was the one that made headlines, all the glaciers in the area are in retreat. Thwaites saw areas retreat up to 14km over the course of the study, Haynes retreated by 10km, and the Smith/Kohler glaciers retreated by 35km.

The retreating was accompanied by thinning of the glaciers, as ice that had been held back above sea level in the interior spread forward and thinned out. This contributed to sea level rise, and the speakers at the press conference agreed that the new data show that the recently released IPCC estimates for sea level rise are out of date; even by the end of this century, the continuation of this process will significantly increase the rate of sea level rise we can expect.

The real problem, however, comes later. Glaciers can establish new grounding lines if there’s a feature in the terrain, such as a hill that rises above sea level, that provides a new anchoring point. The authors see none: “Upstream of the 2011 grounding line positions, we find no major bed obstacle that would prevent the glaciers from further retreat and draw down the entire basin.” In fact, several of the existing grounding lines are close to points where the terrain begins to slope downward into the basin.

For some of the glaciers, the problems are already starting. At Pine Island, the bottom of the glacier is now sitting on terrain that’s 400 meters deeper than where the end rested in 1992, and there are no major hills between there and the basin. As for the Smith/Kohler glaciers, the grounding line is 800 meters deeper and “its ice shelf pinning points are vanishing.”

What’s next?

As a result, the authors concluded that these glaciers are essentially destabilized—unless something changes radically, they’re destined for retreat into the indefinite future. But what will the trajectory of that retreat look like? In this case, the data doesn’t directly help. It needs to be fed into a model that projects the current melting into the future. Conveniently, a different set of scientists has already done this modeling.

The work focuses on the Thwaites glacier, which appears to be the most stable: there are 60-80km between the existing terminus and the deep basin, and two or three ridges within that distance that will allow the formation of new grounding lines.

The authors simulated the behavior of Thwaites using a number of different melting rates. These ranged from a low that approximated the behavior typical in the early 90s to a high rate of melt similar to what was observed in recent years. Every one of these scenarios saw Thwaites retreat into the deep basin within the next 1,000 years. In the higher melt scenarios—the ones most reflective of current conditions—this typically took only a few centuries.

The other worrisome behavior is that there appeared to be a tipping point. In every simulation that saw an extensive retreat, rates of melting shifted from under 80 gigatonnes of ice per year to 150 gigatonnes or more, all within the span of a couple of decades. In the latter conditions, this glacier alone contributed half a centimeter to sea level rise—every year.

Read the entire article here.

Image: Thwaites Glacier, Antarctica, 2012. Courtesy of NASA Earth Observatory.

DarwinTunes

Charles_Darwin

Researchers at Imperial College London recently posed an intriguing question and have since developed a cool experiment to test it. Does artistic endeavor, such as music, follow the same principles of evolutionary selection that Darwin described in biology? That is, does the funkiest survive? Though one has to wonder what the eminent scientist would have thought about some recent fusions of rap, dubstep and classical.

From the Guardian:

There were some funky beats at Imperial College London on Saturday at its annual science festival. As well as opportunities to create bogeys, see robots dance and try to get physics PhD students to explain their wacky world, this fascinating event included the chance to participate in a public game-like experiment called DarwinTunes.

Participants select tunes and “mate” them with other tunes to create musical offspring: if the offspring are in turn selected by other players, they “survive” and get the chance to reproduce their musical DNA. The experiment is online – you too can try to immortalise your selfish musical genes.
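
(As an aside, the “survival of the funkiest” loop described here is, structurally, a genetic algorithm in which human listeners supply the fitness function. DarwinTunes itself evolves actual audio loops; the toy Python sketch below is purely illustrative, and its numeric “genome” and stand-in scoring function are invented for the example rather than taken from the project.)

import random

POPULATION_SIZE = 8
GENOME_LENGTH = 16  # a "tune" here is just a list of numbers standing in for musical parameters

def random_tune():
    return [random.randint(0, 11) for _ in range(GENOME_LENGTH)]

def mate(parent_a, parent_b):
    # One-point crossover, plus a small chance of mutation per gene.
    cut = random.randrange(1, GENOME_LENGTH)
    child = parent_a[:cut] + parent_b[cut:]
    return [g if random.random() > 0.05 else random.randint(0, 11) for g in child]

def listeners_select(population):
    # Stand-in for the audience: pretend "funkier" means a higher average value.
    # In DarwinTunes the audience itself is the fitness function.
    return sorted(population, key=lambda t: sum(t) / len(t), reverse=True)[:POPULATION_SIZE // 2]

population = [random_tune() for _ in range(POPULATION_SIZE)]
for generation in range(50):
    survivors = listeners_select(population)
    # Survivors "mate" until the population is back to full strength.
    offspring = [mate(*random.sample(survivors, 2)) for _ in range(POPULATION_SIZE - len(survivors))]
    population = survivors + offspring

print("Fittest tune after 50 generations:", max(population, key=sum))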

It is a model of evolution in practice that raises fascinating questions about culture and nature. These questions apply to all the arts, not just to dance beats. How does “cultural evolution” work? How close is the analogy between Darwin’s well-proven theory of evolution in nature and the evolution of art, literature and music?

The idea of cultural evolution was boldly defined by Jacob Bronowski as our fundamental human ability “not to accept the environment but to change it”. The moment the first stone tools appeared in Africa, about 2.5m years ago, a new, faster evolution, that of human culture, became visible on Earth: from cave paintings to the Renaissance, from Galileo to the 3D printer, this cultural evolution has advanced at breathtaking speed compared with the massive periods of time it takes nature to evolve new forms.

In DarwinTunes, cultural evolution is modelled as what the experimenters call “the survival of the funkiest”. Pulsing dance beats evolve through selections made by participants, and the music (it is claimed) becomes richer through this process of selection. Yet how does the model really correspond to the story of culture?

One way Darwin’s laws of nature apply to visual art is in the need for every successful form to adapt to its environment. In the forests of west and central Africa, wood carving was until recent times a flourishing art form. In the islands of Greece, where marble could be quarried easily, stone sculpture was more popular. In the modern technological world, the things that easily come to hand are not wood or stone but manufactured products and media images – so artists are inclined to work with the readymade.

At first sight, the thesis of DarwinTunes is a bit crude. Surely it is obvious that artists don’t just obey the selections made by their audience – that is, their consumers. To think they do is to apply the economic laws of our own consumer society across all history. Culture is a lot funkier than that.

Yet just because the laws of evolution need some adjustment to encompass art, that does not mean art is a mysterious spiritual realm impervious to scientific study. In fact, the evolution of evolution – the adjustments made by researchers to Darwin’s theory since it was unveiled in the Victorian age – offers interesting ways to understand culture.

One useful analogy between art and nature is the idea of punctuated equilibrium, introduced by some evolutionary scientists in the 1970s. Just as species may evolve not through a constant smooth process but by spectacular occasional leaps, so the history of art is punctuated by massively innovative eras followed by slower, more conventional periods.

Read the entire story here.

Image: Charles Darwin, 1868, photographed by Julia Margaret Cameron. Courtesy of Wikipedia.

Plastic, Heal Thyself!

[tube]sybsT1_0qwQ[/tube]

Blood is a remarkable substance: it transports vital oxygen to nourish our cells, it carries signalling chemicals that control our actions, and it delivers armies of protective substances, at a moment’s notice, to ward off bodily infection and injury. Now imagine a similar, biomimetic process in plastic, one that remarkably allows the material to heal itself.

From New Scientist:

If you prick it, does it not bleed? Puncture this plastic and it will heal itself with oozing fluids, in a process that mimics the way blood clots form to repair wounds. The plastic could one day be used to automatically patch holes in distant spacecraft or repair fighter jets on the fly.

So far, efforts to develop materials that fix themselves the way biological tissue mends itself have been limited. Scott White at the University of Illinois at Urbana-Champaign and his colleagues developed one of the first versions in 2001, but that material could only heal microscopic cracks.

Now his team have created a plastic lined with a type of artificial vascular system that can heal damage large enough to be visible to the naked eye.

The key is a pair of liquids that react when they are mixed. One fluid contains long, thin molecules and the other contains three-sided molecules. When the fluids mix, the molecules join together to create a scaffold, similar to the way blood platelets and fibrin proteins join to form a clot.

After a few minutes of contact, the liquids turn into a thick gel that fills the damaged area. Over a few hours, other ingredients within the fluids cause the gel to harden.

Strength from weakness

To test the concept, the team ran separate channels of each liquid through a plastic square and punctured it, creating a 4-millimetre hole with 35 millimetres of surrounding cracks. This also tore open the fluid channels.

Pumps on the edge of the plastic square squirted the fluids into the channels, where they oozed out and mixed, filling the hole and the radiating cracks within 20 minutes. The material hardened in about 3 hours, and the resulting patch was around 60 per cent as strong as the original plastic.

Holes larger than 8 millimetres proved more difficult to fill, as gravity caused the gel to sag before it could harden. The team thinks using foams in place of fluids would fill larger gaps, but they haven’t tested that idea yet.

Eventually, White and his team envision plastics with multiple criss-crossing channels, to ensure that the fluids always overlap with a damaged area. Embedding this synthetic vascular network would weaken the original material, but not by much, they say.

“You pay the price for being able to repair this damage, but it is certainly one that nature has figured out how to tolerate,” says team member Jeff Moore, also at the University of Illinois. “If you just look to things like bone or trees, they are all vascularised.”

Read the entire article here.

Image: Self-healing materials fix large-scale damage. Courtesy of University of Illinois at Urbana-Champaign.

The Rise of McLiterature

Will-Self-2007

A sad symptom of our expanding media binge culture and our ever-shortening, fragmented attention spans is the demise of literary fiction. Author Will Self believes the novel, and narrative prose in general, is on a slow but accelerating death spiral. His eloquent views, presented in a May 6, 2014 lecture, are excerpted below.

From the Guardian:

If you happen to be a writer, one of the great benisons of having children is that your personal culture-mine is equipped with its own canaries. As you tunnel on relentlessly into the future, these little harbingers either choke on the noxious gases released by the extraction of decadence, or they thrive in the clean air of what we might call progress. A few months ago, one of my canaries, who’s in his mid-teens and harbours a laudable ambition to be the world’s greatest ever rock musician, was messing about on his electric guitar. Breaking off from a particularly jagged and angry riff, he launched into an equally jagged diatribe, the gist of which was already familiar to me: everything in popular music had been done before, and usually those who’d done it first had done it best. Besides, the instant availability of almost everything that had ever been done stifled his creativity, and made him feel it was all hopeless.

A miner, if he has any sense, treats his canary well, so I began gently remonstrating with him. Yes, I said, it’s true that the web and the internet have created a permanent Now, eliminating our sense of musical eras; it’s also the case that the queered demographics of our longer-living, lower-birthing population means that the middle-aged squat on top of the pyramid of endeavour, crushing the young with our nostalgic tastes. What’s more, the decimation of the revenue streams once generated by analogues of recorded music has put paid to many a musician’s income. But my canary had to appreciate this: if you took the long view, the advent of the 78rpm shellac disc had also been a disaster for musicians who in the teens and 20s of the last century made their daily bread by live performance. I repeated one of my favourite anecdotes: when the first wax cylinder recording of Feodor Chaliapin singing “The Song of the Volga Boatmen” was played, its listeners, despite a lowness of fidelity that would seem laughable to us (imagine a man holding forth from a giant bowl of snapping, crackling and popping Rice Krispies), were nonetheless convinced the portly Russian must be in the room, and searched behind drapes and underneath chaise longues for him.

So recorded sound blew away the nimbus of authenticity surrounding live performers – but it did worse things. My canaries have often heard me tell how back in the 1970s heyday of the pop charts, all you needed was a writing credit on some loathsome chirpy-chirpy-cheep-cheeping ditty in order to spend the rest of your born days lying by a guitar-shaped pool in the Hollywood Hills hoovering up cocaine. Surely if there’s one thing we have to be grateful for it’s that the web has put paid to such an egregious financial multiplier being applied to raw talentlessness. Put paid to it, and also returned musicians to the domain of live performance and, arguably, reinvigorated musicianship in the process. Anyway, I was saying all of this to my canary when I was suddenly overtaken by a great wave of noxiousness only I could smell. I faltered, I fell silent, then I said: sod you and your creative anxieties, what about me? How do you think it feels to have dedicated your entire adult life to an art form only to see the bloody thing dying before your eyes?

My canary is a perceptive songbird – he immediately ceased his own cheeping, except to chirrup: I see what you mean. The literary novel as an art work and a narrative art form central to our culture is indeed dying before our eyes. Let me refine my terms: I do not mean narrative prose fiction tout court is dying – the kidult boywizardsroman and the soft sadomasochistic porn fantasy are clearly in rude good health. And nor do I mean that serious novels will either cease to be written or read. But what is already no longer the case is the situation that obtained when I was a young man. In the early 1980s, and I would argue throughout the second half of the last century, the literary novel was perceived to be the prince of art forms, the cultural capstone and the apogee of creative endeavour. The capability words have when arranged sequentially to both mimic the free flow of human thought and investigate the physical expressions and interactions of thinking subjects; the way they may be shaped into a believable simulacrum of either the commonsensical world, or any number of invented ones; and the capability of the extended prose form itself, which, unlike any other art form, is able to enact self-analysis, to describe other aesthetic modes and even mimic them. All this led to a general acknowledgment: the novel was the true Wagnerian Gesamtkunstwerk.

This is not to say that everyone walked the streets with their head buried in Ulysses or To the Lighthouse, or that popular culture in all its forms didn’t hold sway over the psyches and imaginations of the great majority. Nor do I mean to suggest that in our culture perennial John Bull-headed philistinism wasn’t alive and snorting: “I don’t know much about art, but I know what I like.” However, what didn’t obtain is the current dispensation, wherein those who reject the high arts feel not merely entitled to their opinion, but wholly justified in it. It goes further: the hallmark of our contemporary culture is an active resistance to difficulty in all its aesthetic manifestations, accompanied by a sense of grievance that conflates it with political elitism. Indeed, it’s arguable that tilting at this papery windmill of artistic superiority actively prevents a great many people from confronting the very real economic inequality and political disenfranchisement they’re subject to, exactly as being compelled to chant the mantra “choice” drowns out the harsh background Muzak telling them they have none.

Just because you’re paranoid it doesn’t mean they aren’t out to get you. Simply because you’ve remarked a number of times on the concealed fox gnawing its way into your vitals, it doesn’t mean it hasn’t at this moment swallowed your gall bladder. Ours is an age in which omnipresent threats of imminent extinction are also part of the background noise – nuclear annihilation, terrorism, climate change. So we can be blinkered when it comes to tectonic cultural shifts. The omnipresent and deadly threat to the novel has been imminent now for a long time – getting on, I would say, for a century – and so it’s become part of culture. During that century, more books of all kinds have been printed and read by far than in the entire preceding half millennium since the invention of movable-type printing. If this was death it had a weird, pullulating way of expressing itself. The saying is that there are no second acts in American lives; the novel, I think, has led a very American sort of life: swaggering, confident, brash even – and ever aware of its world-conquering manifest destiny. But unlike Ernest Hemingway or F Scott Fitzgerald, the novel has also had a second life. The form should have been laid to rest at about the time of Finnegans Wake, but in fact it has continued to stalk the corridors of our minds for a further three-quarters of a century. Many fine novels have been written during this period, but I would contend that these were, taking the long view, zombie novels, instances of an undead art form that yet wouldn’t lie down.

Literary critics – themselves a dying breed, a cause for considerable schadenfreude on the part of novelists – make all sorts of mistakes, but some of the most egregious ones result from an inability to think outside of the papery prison within which they conduct their lives’ work. They consider the codex. They are – in Marshall McLuhan’s memorable phrase – the possessors of Gutenberg minds.

There is now an almost ceaseless murmuring about the future of narrative prose. Most of it is at once Panglossian and melioristic: yes, experts assert, there’s no disputing the impact of digitised text on the whole culture of the codex; fewer paper books are being sold, newspapers fold, bookshops continue to close, libraries as well. But … but, well, there’s still no substitute for the experience of close reading as we’ve come to understand and appreciate it – the capacity to imagine entire worlds from parsing a few lines of text; the ability to achieve deep and meditative levels of absorption in others’ psyches. This circling of the wagons comes with a number of public-spirited campaigns: children are given free books; book bags are distributed with slogans on them urging readers to put books in them; books are hymned for their physical attributes – their heft, their appearance, their smell – as if they were the bodily correlates of all those Gutenberg minds, which, of  course, they are.

The seeming realists among the Gutenbergers say such things as: well, clearly, books are going to become a minority technology, but the beau livre will survive. The populist Gutenbergers prate on about how digital texts linked to social media will allow readers to take part in a public conversation. What none of the Gutenbergers are able to countenance, because it is quite literally – for once the intensifier is justified – out of their minds, is that the advent of digital media is not simply destructive of the codex, but of the Gutenberg mind itself. There is one question alone that you must ask yourself in order to establish whether the serious novel will still retain cultural primacy and centrality in another 20 years. This is the question: if you accept that by then the vast majority of text will be read in digital form on devices linked to the web, do you also believe that those readers will voluntarily choose to disable that connectivity? If your answer to this is no, then the death of the novel is sealed out of your own mouth.

Read the entire excerpt here.

Image: Will Self, 2007. Courtesy of Wikipedia / Creative Commons.

Expanding Binge Culture

The framers of the U.S. Declaration of Independence could not have known. They could not have foreseen how commoditization, consumerism, globalization and an always-on media culture would come to transform our society. They did well to insert “Life, Liberty and the pursuit of Happiness”.

But they failed to consider our collective evolution — if you wish to call it that — towards a sophisticated culture of bingeing. Significant numbers of us have long binged on physical goods, money, natural resources, food and drink. Media, however, has lagged somewhat. But no longer. Now we have at our instantaneous whim entire libraries of all-you-can-eat infotainment. Time will tell whether this signals the demise of quality as it is replaced by overwhelming quantity. One area shows where we may be heading — witness the “fastfoodification” of our news.

From NYT:

When Beyoncé released, without warning, 17 videos around midnight on Dec. 13, millions of fans rejoiced. As a more casual listener of Ms. Knowles, I balked at the onslaught of new material and watched a few videos before throwing in the towel.

Likewise, when Netflix, in one fell swoop, made complete seasons of “House of Cards” and “Orange Is the New Black” available for streaming, I quailed at the challenge, though countless others happily immersed themselves in their worlds of Washington intrigue and incarcerated women.

Then there is the news, to which floodgates are now fully open thanks to the Internet and cable TV: Flight 370, Putin, Chris Christie, Edward Snowden, Rob Ford, Obamacare, “Duck Dynasty,” “bossy,” #CancelColbert, conscious uncoupling. When presented with 24/7 coverage of these ongoing narratives from an assortment of channels — traditional journalism sites, my Facebook feed, the log-out screen of my email — I followed some closely and very consciously uncoupled from others.

Had these content providers released their offerings in the old-media landscape, à la carte rather than in an all-you-can-eat buffet, the prospect of a seven-course meal might not have seemed so daunting. I could handle a steady drip of one article a day about Mr. Ford in a newspaper. But after two dozen, updated every 10 minutes, plus scores of tweets, videos and GIFs that keep on giving, I wanted to forget altogether about Toronto’s embattled mayor.

While media technology is now catching up to Americans’ penchant for overdoing it and finding plenty of willing indulgers, there are also those like me who recoil from the abundance of binge culture.

In the last decade, media entertainment has given far more freedom to consumers: watch, listen to and read anything at anytime. But Barry Schwartz’s 2004 book, “The Paradox of Choice,” argues that our surfeit of consumer choices engenders anxiety, not satisfaction, and sometimes even a kind of paralysis.

His thesis (which has its dissenters) applies mostly to the profusion of options within a single set: for instance, the challenge of picking out salad dressing from 175 varieties in a supermarket. Nevertheless, it is also germane to the concept of bingeing, when 62 episodes of “Breaking Bad” wait overwhelmingly in a row like bottles of Newman’s Own on a shelf.

Alex Quinlan, 31, a first-year Ph.D. student in poetry at Florida State University, said he used to spend at least an hour every morning reading the news and “putting off my responsibilities,” as well as binge-watching shows. He is busier now, and last fall had trouble installing an Internet connection in his home, which effectively “rewired my media-consumption habits,” he said. “I’m a lot more disciplined. Last night I watched one episode of ‘House of Cards’ and went to bed. A year ago, I probably would’ve watched one, gotten another beer, then watched two more.”

Even shorter-term bingeing can seem like a major commitment, because there is a distorting effect of receiving a large chunk of content at once rather than getting it piecemeal. To watch one Beyoncé video a week would eat as much time as watching all in one day, but their unified dissemination makes them seem intimidatingly movie-length (which they are, approximately) rather than like a series of four-minute clips.

I also experienced some first-world anxiety last year with the release of the fourth season of “Arrested Development.” I had devoured the show’s first three seasons, parceled out in 22-minute weekly installments on Fox as well as on DVD, where I would watch episodes I had already seen (in pre-streaming days, binge-watching required renting or owning a copy, which was more like a contained feast). But when Netflix uploaded 15 new episodes totaling 8.5 hours on May 26, I was not among those queuing up for it. It took me some time to get around to the show, and once I had started, the knowledge of how many episodes stretched in front of me, at my disposal whenever I wanted, proved off-putting.

This despite the keeping-up-with-the-Joneses quality to binge-viewing. If everyone is quickly exhausting every new episode of a show, and writing and talking about it the next day, it’s easy to feel left out of the conversation if you haven’t kept pace. And sometimes when you’re late to the party, you decide to stay home instead.

Because we frequently gorge when left to our own Wi-Fi-enabled devices, the antiquated methods of “scheduling our information consumption” may have been healthier, if less convenient, said Clay Johnson, 36, the author of “The Information Diet.” He recalled rushing home after choir practice when he was younger to catch “Northern Exposure” on TV.

“That idea is now preposterous,” he said. “We don’t have appointment television anymore. Just because we can watch something all the time doesn’t mean we should. Maybe we should schedule it in a way that makes sense around our daily lives.”

“It’s a lot like food,” he added. “You see some people become info-anorexic, who say the answer is to unplug and not consume anything. Much like an eating disorder, it’s just as unhealthy a decision as binge-watching the news and media. There’s a middle ground of people who are saying, ‘I need to start treating this form of input in my life like a conscious decision and to be informed in the right way.’ ”

Read the entire story here.

You May Be Living Inside a Simulation

real-and-simulated-cosmos

Some theorists posit that we are living inside a simulation, that the entire universe is one giant, evolving model inside a grander reality. This is a fascinating idea, but may never be experimentally verifiable. So just relax — you and I may not be real, but we’ll never know.

On the other hand, but in a similar vein, researchers have themselves developed the broadest and most detailed simulation of the universe to date. Now, there are no “living” things yet inside this computer model, but it’s probably only a matter of time before our increasingly sophisticated simulations start wondering if they are simulations as well.

From the BBC:

An international team of researchers has created the most complete visual simulation of how the Universe evolved.

The computer model shows how the first galaxies formed around clumps of a mysterious, invisible substance called dark matter.

It is the first time that the Universe has been modelled so extensively and to such great resolution.

The research has been published in the journal Nature.

The simulation will provide a test bed for emerging theories of what the Universe is made of and what makes it tick.

One of the world’s leading authorities on galaxy formation, Professor Richard Ellis of the California Institute of Technology (Caltech) in Pasadena, described the simulation as “fabulous”.

“Now we can get to grips with how stars and galaxies form and relate it to dark matter,” he told BBC News.

The computer model draws on the theories of Professor Carlos Frenk of Durham University, UK, who said he was “pleased” that a computer model should come up with such a good result assuming that it began with dark matter.

“You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots”.

Cosmologists have been creating computer models of how the Universe evolved for more than 20 years. It involves entering details of what the Universe was like shortly after the Big Bang, developing a computer program which encapsulates the main theories of cosmology and then letting the programme run.
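
(As an aside, the recipe described here, specify initial conditions, encode the physics, then step the model forward in time, can be illustrated with a toy, gravity-only N-body integrator. The real simulation uses the moving-mesh code Arepo, full hydrodynamics and billions of resolution elements; every number in the Python sketch below, from the particle count to the units, is arbitrary and purely for illustration.)

import numpy as np

G = 1.0           # gravitational constant in arbitrary code units
SOFTENING = 0.05  # avoids infinite forces when two particles get very close
N, STEPS, DT = 200, 500, 0.01

rng = np.random.default_rng(42)
pos = rng.normal(size=(N, 3))   # initial conditions: a rough Gaussian blob of particles
vel = np.zeros((N, 3))          # everything starts at rest
mass = np.full(N, 1.0 / N)      # equal-mass particles

def accelerations(pos):
    # Brute-force pairwise Newtonian gravity with softening.
    diff = pos[np.newaxis, :, :] - pos[:, np.newaxis, :]   # diff[i, j] = pos[j] - pos[i]
    dist2 = (diff ** 2).sum(axis=-1) + SOFTENING ** 2
    inv_d3 = dist2 ** -1.5
    np.fill_diagonal(inv_d3, 0.0)                           # no self-attraction
    return G * (diff * inv_d3[..., np.newaxis] * mass[np.newaxis, :, np.newaxis]).sum(axis=1)

# Leapfrog (kick-drift-kick) time integration: the "letting the programme run" step.
acc = accelerations(pos)
for _ in range(STEPS):
    vel += 0.5 * DT * acc
    pos += DT * vel
    acc = accelerations(pos)
    vel += 0.5 * DT * acc

print("Final spread of the particle cloud:", pos.std(axis=0))

Real cosmological codes replace the brute-force pairwise sum, which scales as N², with tree or mesh methods, which is part of what makes runs with billions of elements tractable.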

The simulated Universe that comes out at the other end is usually a very rough approximation of what astronomers really see.

The latest simulation, however, comes up with a Universe that is strikingly like the real one.

Immense computing power has been used to recreate this virtual Universe. It would take a normal laptop nearly 2,000 years to run the simulation. However, using state-of-the-art supercomputers and clever software called Arepo, researchers were able to crunch the numbers in three months.
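
As a rough back-of-the-envelope aside, using only the figures quoted above, the speed-up works out to

\[ \frac{2000\ \text{years}}{3\ \text{months}} = \frac{2000\ \text{years}}{0.25\ \text{years}} \approx 8000, \]

that is, the supercomputers running Arepo compressed roughly eight millennia of laptop time into a single quarter.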

Cosmic tree

In the beginning, it shows strands of mysterious material which cosmologists call “dark matter” sprawling across the emptiness of space like branches of a cosmic tree. As millions of years pass by, the dark matter clumps and concentrates to form seeds for the first galaxies.

Then emerges the non-dark matter, the stuff that will in time go on to form stars, planets and life.

But early on there is a series of cataclysmic explosions when it gets sucked into black holes and then spat out: a chaotic period which regulated the formation of stars and galaxies. Eventually, the simulation settles into a Universe that is similar to the one we see around us.

According to Dr Mark Vogelsberger of Massachusetts Institute of Technology (MIT), who led the research, the simulations back many of the current theories of cosmology.

“Many of the simulated galaxies agree very well with the galaxies in the real Universe. It tells us that the basic understanding of how the Universe works must be correct and complete,” he said.

In particular, it backs the theory that dark matter is the scaffold on which the visible Universe is hanging.

“If you don’t include dark matter (in the simulation) it will not look like the real Universe,” Dr Vogelsberger told BBC News.

Read the entire article here.

Image: On the left: the real universe imaged via the Hubble telescope. On the right: a view of what emerges from the computer simulation. Courtesy of BBC / Illustris Collaboration.

Paper is the Next Big Thing

Da-Vinci-Hammer-Codex

Luddites and technophobes rejoice: paper-bound books may be with us for quite some time. And there may be some genuinely scientific reasons why physical books will remain. Recent research shows that people learn more effectively when reading from paper than from its digital offspring.

From Wired:

Paper books were supposed to be dead by now. For years, information theorists, marketers, and early adopters have told us their demise was imminent. Ikea even redesigned a bookshelf to hold something other than books. Yet in a world of screen ubiquity, many people still prefer to do their serious reading on paper.

Count me among them. When I need to read deeply—when I want to lose myself in a story or an intellectual journey, when focus and comprehension are paramount—I still turn to paper. Something just feels fundamentally richer about reading on it. And researchers are starting to think there’s something to this feeling.

To those who see dead tree editions as successors to scrolls and clay tablets in history’s remainder bin, this might seem like literary Luddism. But I e-read often: when I need to copy text for research or don’t want to carry a small library with me. There’s something especially delicious about late-night sci-fi by the light of a Kindle Paperwhite.

What I’ve read on screen seems slippery, though. When I later recall it, the text is slightly translucent in my mind’s eye. It’s as if my brain better absorbs what’s presented on paper. Pixels just don’t seem to stick. And often I’ve found myself wondering, why might that be?

The usual explanation is that internet devices foster distraction, or that my late-thirty-something brain isn’t that of a true digital native, accustomed to screens since infancy. But I have the same feeling when I am reading a screen that’s not connected to the internet and Twitter or online Boggle can’t get in the way. And research finds that kids these days consistently prefer their textbooks in print rather than pixels. Whatever the answer, it’s not just about habit.

Another explanation, expressed in a recent Washington Post article on the decline of deep reading, blames a sweeping change in our lifestyles: We’re all so multitasked and attention-fragmented that our brains are losing the ability to focus on long, linear texts. I certainly feel this way, but if I don’t read deeply as often or easily as I used to, it does still happen. It just doesn’t happen on screen, and not even on devices designed specifically for that experience.

Maybe it’s time to start thinking of paper and screens another way: not as an old technology and its inevitable replacement, but as different and complementary interfaces, each stimulating particular modes of thinking. Maybe paper is a technology uniquely suited for imbibing novels and essays and complex narratives, just as screens are for browsing and scanning.

“Reading is human-technology interaction,” says literacy professor Anne Mangen of Norway’s University of Stavanger. “Perhaps the tactility and physical permanence of paper yields a different cognitive and emotional experience.” This is especially true, she says, for “reading that can’t be done in snippets, scanning here and there, but requires sustained attention.”

Mangen is among a small group of researchers who study how people read on different media. It’s a field that goes back several decades, but yields no easy conclusions. People tended to read slowly and somewhat inaccurately on early screens. The technology, particularly e-paper, has improved dramatically, to the point where speed and accuracy aren’t now problems, but deeper issues of memory and comprehension are not yet well-characterized.

Complicating the scientific story further, there are many types of reading. Most experiments involve short passages read by students in an academic setting, and for this sort of reading, some studies have found no obvious differences between screens and paper. Those don’t necessarily capture the dynamics of deep reading, though, and nobody’s yet run the sort of experiment, involving thousands of readers in real-world conditions who are tracked for years on a battery of cognitive and psychological measures, that might fully illuminate the matter.

In the meantime, other research does suggest possible differences. A 2004 study found that students more fully remembered what they’d read on paper. Those results were echoed by an experiment that looked specifically at e-books, and another by psychologist Erik Wästlund at Sweden’s Karlstad University, who found that students learned better when reading from paper.

Wästlund followed up that study with one designed to investigate screen reading dynamics in more detail. He presented students with a variety of on-screen document formats. The most influential factor, he found, was whether they could see pages in their entirety. When they had to scroll, their performance suffered.

According to Wästlund, scrolling had two impacts, the most basic being distraction. Even the slight effort required to drag a mouse or swipe a finger requires a small but significant investment of attention, one that’s higher than flipping a page. Text flowing up and down a page also disrupts a reader’s visual attention, forcing eyes to search for a new starting point and re-focus.

Read the entire electronic article here.

Image: Leicester or Hammer Codex, by Leonardo da Vinci (1452-1519). Courtesy of Wikipedia / Public domain.

Clothing Design by National Sub-Committee

North-Korean-Military

It’s probably safe to assume that clothing designed by committee will be more utilitarian and drab than that from the colored pencils of, say, Yves Saint Laurent, Tom Ford, Giorgio Armani or Coco Chanel.

So, imagine what clothing would look like if it were designed by the Apparel Research Centre, a sub-subcommittee of the Clothing Industry Department, itself a subcommittee of the National Light Industry Committee. Yes, welcome to the strange, centrally planned and tightly controlled world of our favorite rogue nation, North Korea. Imagine no more, as Paul French takes us on a journey through daily life in North Korea, excerpted from his new book, North Korea: State of Paranoia. It makes for sobering reading.

From the Guardian:

6am The day starts early in Pyongyang, the city described by the North Korean government as the “capital of revolution”. Breakfast is usually corn or maize porridge, possibly a boiled egg and sour yoghurt, with perhaps powdered milk for children.

Then it is time to get ready for work. North Korea has a large working population: approximately 59% of the total in 2010. A growing number of women work in white-collar office jobs; they make up around 90% of workers in light industry and 80% of the rural workforce. Many women are now the major wage-earner in the family – though still housewife, mother and cook as well as a worker, or perhaps a soldier.

Makeup is increasingly common in Pyongyang, though it is rarely worn until after college graduation. Chinese-made skin lotions, foundation, eyeliner and lipstick are available and permissible in the office. Many women suffer from blotchy skin caused by the deteriorating national diet, so are wearing more makeup. Long hair is common, but untied hair is frowned upon.

Men’s hairstyles could not be described as radical. In the 1980s, when Kim Jong-il first came to public prominence, his trademark crewcut, known as a “speed battle cut”, became popular, while the more bouffant style favoured by Kim Il-sung, and then Kim Jong-il, in their later years, is also popular. Kim Jong-un’s trademark short-back-and-sides does not appear to have inspired much imitation so far. Hairdressers and barbers are run by the local Convenience Services Management Committee; at many, customers can wash their hair themselves.

Fashion is not really an applicable term in North Korea, as the Apparel Research Centre under the Clothing Industry Department of the National Light Industry Committee designs most clothing. However, things have loosened up somewhat, with bright colours now permitted as being in accordance with a “socialist lifestyle”. Pyongyang offers some access to foreign styles. A Japanese watch denotes someone in an influential position; a foreign luxury watch indicates a very senior position. The increasing appearance of Adidas, Disney and other brands (usually fake) indicates that access to goods smuggled from China is growing. Jeans have at times been fashionable, though risky – occasionally they have been banned as “decadent”, along with long hair on men, which can lead to arrest and a forced haircut.

One daily ritual of all North Koreans is making sure they have their Kim Il-sung badge attached to their lapel. The badges have been in circulation since the late 1960s, when the Mansudae Art Studio started producing them for party cadres. Desirable ones can change hands on the black market for several hundred NKW. In a city where people rarely carry cash, jewellery or credit cards, Kim badges are one of the most prized targets of Pyongyang’s pickpockets.

Most streets are boulevards of utilitarian high-rise blocks. Those who live on higher floors may have to set out for work or school a little earlier than those lower down. Due to chronic power cuts, many elevators work only intermittently, if at all. Many buildings are between 20 and 40 storeys tall – there are stories of old people who have never been able to leave. Even in the better blocks elevators can be sporadic and so people just don’t take the chance. Families make great efforts to relocate older relatives on lower floors, but this is difficult and a bribe is sometimes required. With food shortages now constant, many older people share their meagre rations with their grandchildren, weakening themselves further and making the prospect of climbing stairs even more daunting.

Some people do drive to work, but congestion is not a major problem. Despite the relative lack of cars, police enforce traffic regulations strictly and issue tickets. Fines can be equivalent to two weeks’ salary. Most cars belong to state organisations, but are often used as if they were privately owned. All vehicles entering Pyongyang must be clean; owners of dirty cars may be fined. Those travelling out of Pyongyang require a travel certificate. There are few driving regulations; however, on hills ascending vehicles have the right of way, and trucks cannot pass passenger cars under any circumstances. Drunk-driving is punished with hard labour. Smoking while driving is banned on the grounds that a smoking driver cannot smell a problem with the car.

Those who have a bicycle usually own a Sea Gull, unless they are privileged and own an imported second-hand Japanese bicycle. But even a Sea Gull costs several months’ wages and requires saving.

7.30am For many North Koreans the day starts with a 30-minute reading session and exercises before work begins. The reading includes receiving instructions and studying the daily editorial in the party papers. This is followed by directives on daily tasks and official announcements.

For children, the school day starts with exercises to a medley of populist songs before a session of marching on the spot and saluting the image of the leader. The curriculum is based on Kim Il-sung’s 1977 Thesis on Socialist Education, emphasising the political role of education in developing revolutionary spirit. All children study Kim Il-sung’s life closely. Learning to read means learning to read about Kim Il-sung; music class involves singing patriotic songs. Rote learning and memorising political tracts is integral and can bring good marks, which help in getting into university – although social rank is a more reliable determinant of college admission. After graduation, the state decides where graduates will work.

8am Work begins. Pyongyang is the centre of the country’s white-collar workforce, though a Pyongyang office would appear remarkably sparse to most outsiders. Banks, industrial enterprises and businesses operate almost wholly without computers, photocopiers and modern office technology. Payrolls and accounting are done by hand.

12pm Factories, offices and workplaces break for lunch for an hour. Many workers bring a packed lunch, or, if they live close by, go home to eat. Larger workplaces have a canteen serving cheap lunches, such as corn soup, corn cake and porridge. The policy of eating in work canteens, combined with the lack of food shops and restaurants, means that Pyongyang remains strangely empty during the working day with no busy lunchtime period, as seen in other cities around the world.

Shopping is an as-and-when activity. If a shop has stock, then returning later is not an option as it will be sold out. According to defectors, North Koreans want “five chests and seven appliances”. The chests are a quilt chest, wardrobe, bookshelf, cupboard and shoe closet, while the appliances comprise a TV, refrigerator, washing machine, electric fan, sewing machine, tape recorder and camera. Most ordinary people only have a couple of appliances, usually a television and a sewing machine.

Food shopping is equally problematic. Staples such as soy sauce, soybean paste, salt and oil, as well as toothpaste, soap, underwear and shoes, sell out fast. The range of food items available is highly restricted. White cabbage, cucumber and tomato are the most common; meat is rare, and eggs increasingly so. Fruit is largely confined to apples and pears. The main staple of the North Korean diet is rice, though bread is sometimes available, accompanied by a form of butter that is often rancid. Corn, maize and mushrooms also appear sometimes.

Read the entire excerpt here.

Image: Soldiers from the Korean People’s Army look south while on duty in the Joint Security Area, 2008. Courtesy of U.S. government.


Metabolism Without Life


A remarkable chance discovery in a Cambridge University research lab shows that a number of life-sustaining metabolic processes can occur spontaneously and outside of living cells. This opens a rich, new vein of theories and approaches to studying the origin of life.

From the New Scientist:

Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.

“People have said that these pathways look so complex they couldn’t form by environmental chemistry alone,” says Markus Ralser at the University of Cambridge who supervised the research.

But his findings suggest that many of these reactions could have occurred spontaneously in Earth’s early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.

The origin of metabolism is a major gap in our understanding of the emergence of life. “If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how,” says Ralser.

Happy accident

One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first, perhaps even generating the molecules needed to make RNA, and that cells later incorporated these processes – but there was little evidence to support this.

“This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA,” Ralser says.

Remarkably, the discovery was an accident, stumbled on during routine quality control testing of the medium used to culture cells at Ralser’s laboratory. As a shortcut, one of his students decided to run unused media through a mass spectrometer, which spotted a signal for pyruvate – an end product of a metabolic pathway called glycolysis.

To test whether the same processes could have helped spark life on Earth, they approached colleagues in the Earth sciences department who had been working on reconstructing the chemistry of the Archean Ocean, which covered the planet almost 4 billion years ago. This was an oxygen-free world, predating photosynthesis, when the waters were rich in iron, as well as other metals and phosphate. All these substances could potentially facilitate chemical reactions like the ones seen in modern cells.

Metabolic backbone

Ralser’s team took early ocean solutions and added substances known to be starting points for modern metabolic pathways, before heating the samples to between 50°C and 70°C – the sort of temperatures you might have found near a hydrothermal vent – for 5 hours. Ralser then analysed the solutions to see what molecules were present.

“In the beginning we had hoped to find one reaction or two maybe, but the results were amazing,” says Ralser. “We could reconstruct two metabolic pathways almost entirely.”

The pathways they detected were glycolysis and the pentose phosphate pathway, “reactions that form the core metabolic backbone of every living cell,” Ralser adds. Together these pathways produce some of the most important materials in modern cells, including ATP – the molecule cells use to drive their machinery, the sugars that form DNA and RNA, and the molecules needed to make fats and proteins.
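
For reference, and purely as standard textbook biochemistry rather than a result reported in the study, the net reaction of the first of those pathways, glycolysis, can be written as:

```latex
% Net reaction of glycolysis (textbook form; not taken from the article)
\[
  \mathrm{Glucose} + 2\,\mathrm{NAD^{+}} + 2\,\mathrm{ADP} + 2\,\mathrm{P_i}
  \;\longrightarrow\;
  2\,\mathrm{Pyruvate} + 2\,\mathrm{NADH} + 2\,\mathrm{H^{+}} + 2\,\mathrm{ATP} + 2\,\mathrm{H_2O}
\]
```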

If these metabolic pathways were occurring in the early oceans, then the first cells could have enveloped them as they developed membranes.

In all, 29 metabolism-like chemical reactions were spotted, seemingly catalysed by iron and other metals that would have been found in early ocean sediments. The metabolic pathways aren’t identical to modern ones; some of the chemicals made by intermediate steps weren’t detected. However, “if you compare them side by side it is the same structure and many of the same molecules are formed,” Ralser says. These pathways could have been refined and improved once enzymes evolved within cells.

Read the entire article here.

Image: Glycolysis metabolic pathway. Courtesy of Wikipedia.

Lost Treasures


A small proportion of classic movies remain in circulation and in our memories. Most are quickly forgotten. And some simply go missing. How does an old movie go missing? It’s not very difficult: a temperamental, perfectionist director may demand the original be buried; a fickle movie studio may wish to erase all traces of last season’s flop; or old reels, printed on flammable nitrate stock, may simply burn. But every once in a while a lost movie turns up in a dusty attic or a damp basement, or, as with one recent find, in a dumpster (if you’re a Brit, that’s a “skip”). Two such discoveries shed new light on the developing comedic talent of Peter Sellers.

From the Guardian:

In the mid-1950s, Peter Sellers was young and ambitious and still largely unseen. He wanted to break out of his radio ghetto and achieve big-screen success, so he played a bumbling crook in The Ladykillers and a bumbling everyman in a series of comedy shorts for an independent production company called Park Lane Films. The Ladykillers endured and is cherished to this day. The shorts came and then went and were quickly forgotten. To all intents and purposes, they never existed at all.

I’m fascinated by the idea of the films that get lost; that vast, teeming netherworld where the obscure and the unloved rub shoulders, in the dark, with the misplaced and the mythic. Martin Scorsese’s Film Foundation estimates that as many as 50% of the American movies made before 1950 are now gone for good, while the British film archive is similarly holed like Swiss cheese. Somewhere out there, languishing in limbo, are missing pictures from directors including Orson Welles, Michael Powell and Alfred Hitchcock. Most of these orphans will surely never be found. Yet sometimes, against the odds, one will abruptly surface.

In his duties as facilities manager at an office block in central London, Robert Farrow would occasionally visit the basement area where the janitors parked their mops, brooms and vacuum cleaners. Nestled amid this equipment was a stack of 21 canisters, which Farrow assumed contained polishing pads for the cleaning machines. Years later, during an office refurbishment, Farrow saw that these canisters had been removed from the basement and dumped outside in a skip. “You don’t expect to find anything valuable in a skip,” Farrow says ruefully. But inside the canisters he found the lost Sellers shorts.

It’s a blustery spring day when we gather at a converted water works in Southend-on-Sea to meet the movie orphans. Happily the comedies – Dearth of a Salesman and Insomnia is Good For You – have been brushed up in readiness. They have been treated to a spick-and-span Telecine scan and look none the worse for their years in the basement. Each will now premiere (or perhaps re-premiere) at the Southend film festival, nestled amid the screenings of The Great Beauty and Wadjda and a retrospective showing of Sellers’ 1969 fantasy The Magic Christian. In the meantime, festival director Paul Cotgrove has hailed their reappearance as the equivalent of “finding the Dead Sea Scrolls”.

I think that might be overselling it, although one can understand his excitement. Instead, the films might best be viewed as crucial stepping stones, charting a bright spark’s evolution into a fully fledged film star. At the time they were made, Sellers was a big fish in a small pond, flushed from the success of The Goon Show and half-wondering whether he had already peaked. “By this point he had hardly done anything on screen,” Cotgrove explains. “He was obsessed with breaking away from radio and getting into film. You can see the early styles in these films that he would then use later on.”

To the untrained eye, he looks to be adapting rather well. Dearth of a Salesman and Insomnia is Good For You both run 29 minutes and come framed as spoof information broadcasts, installing Sellers in the role of lowly Herbert Dimwitty. In the first, Dimwitty attempts to strike out as a go-getting entrepreneur, peddling print dresses and dishwashers and regaling his clients with a range of funny accents. “I’m known as the Peter Ustinov of East Acton,” he informs a harried suburban housewife.

Dearth, it must be said, feels a little faded and cosy; its line in comedy too thinly spread. But Insomnia is terrific. Full of spark, bite and invention, the film chivvies Sellers’s sleep-deprived employee through a “good night’s wake”, thrilling to the “tone poem” of nocturnal noises from the street outside and replaying awkward moments from the office until they bloom into full-on waking nightmares. Who cares if Dimwitty is little more than a low-rent archetype, the kind of bumbling sitcom staple that has been embodied by everyone from Tony Hancock to Terry Scott? Sellers keeps the man supple and spiky. It’s a role the actor would later reprise, with a few variations, in the 1962 Kingsley Amis adaptation Only Two Can Play.

But what were these pictures and where did they go? Cotgrove and Farrow’s research can only take us so far. Dearth and Insomnia were probably shot in 1956, or possibly 1957, for Park Lane Films, which then later went bust. They would have played in British cinemas ahead of the feature presentation, folded in among the cartoons and the news, and may even have screened in the US and Canada as well. Records suggest that Sellers was initially contracted to shoot 12 movies in total, but may well have wriggled out of the deal after The Ladykillers was released. Only three have been found: Dearth, Insomnia and the below-par Cold Comfort, which was already in circulation. Conceivably there might be more Sellers shorts out there somewhere, either idling in skips or buried in basements. But there is no way of knowing; it’s akin to proving a negative. Cotgrove and Farrow aren’t even sure who owns the copyright. “If you find something on the street, it’s not yours,” Farrow points out. “You only have guardianship.”

As it is, the Sellers shorts can be safely filed away among other reclaimed items, plucked out of a skip and brought in from the cold. They take their place alongside such works as Carl Dreyer’s silent-screen classic The Passion of Joan of Arc, which turned up (unaccountably) at a Norwegian psychiatric hospital, or the vital lost footage from Fritz Lang’s Metropolis, found in Buenos Aires back in 2008. But these happy few are just the tip of the iceberg. Thousands of movies have simply vanished from view.

Read the entire article here.

Image: Still from newly discovered movie Dearth of a Salesman, featuring a young Peter Sellers. Courtesy of Southend Film Festival / Guardian.


Neuromorphic Chips

Neuromorphic chips are here. But don’t worry: these are not the brain implants you might expect to find in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function and to learn or mimic certain human processes, such as sensory perception, image processing and object recognition. The field is making tremendous advances, with companies like Qualcomm, better known for its mobile and wireless chips, leading the charge. Until recently, such complex sensory and mimetic processing had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
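
To make that learning loop concrete, here is a minimal, hypothetical Python sketch. It is not Qualcomm’s Zeroth architecture or any real neuromorphic hardware; it only shows a handful of simulated neurons whose connection weights strengthen when input and output are active together, a crude Hebbian stand-in for connections changing in response to what they are shown.

```python
import numpy as np

# Hypothetical toy model: NOT Qualcomm's Zeroth chip or any real
# neuromorphic hardware. A few simulated "neurons" whose connection
# weights strengthen when input and output are active together
# (a Hebbian update), so repeated exposure to a pattern tunes the
# neurons to it without any task-specific programming.

rng = np.random.default_rng(0)
n_inputs, n_neurons = 16, 4
weights = rng.normal(scale=0.1, size=(n_neurons, n_inputs))

def present(x, lr=0.05):
    """Show one input pattern and nudge the connection weights toward it."""
    global weights
    y = np.tanh(weights @ x)                    # neuron activations
    weights += lr * np.outer(y, x)              # co-activity strengthens links
    weights /= np.linalg.norm(weights, axis=1, keepdims=True)  # keep weights bounded
    return y

pattern = rng.normal(size=n_inputs)             # the "toy" the network keeps seeing
for _ in range(30):
    present(pattern)

# Raw responses: much larger in magnitude for the familiar pattern
# than for one the network has never seen.
print("familiar:", np.round(weights @ pattern, 2))
print("novel:   ", np.round(weights @ rng.normal(size=n_inputs), 2))
```

Actual neuromorphic designs typically implement spiking neurons and synaptic connectivity directly in silicon; the sketch is only meant to illustrate learning by exposure rather than explicit programming.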

Read the entire article here.

Kids With Guns

If you were asked to picture a contemporary scene populated with gun-toting children, it’s possible your first thoughts might lean toward child soldiers in Chad, Burma, Central African Republic, Afghanistan or South Sudan. You’d be partially correct: this abhorrent violation of children does go on in the world today, and it is incomprehensible and morally repugnant. Yet you’d also be partially wrong.

So, think closer to home, think Louisiana, Texas, Alabama and Kentucky in the United States. A recent series of portraits titled “My First Rifle” by photographer An-Sofie Kesteleyn showcases children posing with their guns. See more of her fascinating and award-winning images here.

From Wired:

Approaching strangers at gun ranges across America and asking to photograph their children holding guns isn’t likely to get you the warmest reception. But that’s exactly what photographer An-Sofie Kesteleyn did last June for her series My First Rifle. “One of the only things I had going for me was that I’m not some weird-looking guy,” she says.

Kesteleyn lives in Amsterdam but visited the United States to meet gun owners about a month after reading a news story about a 5-year-old boy in Kentucky who killed his 2-year-old sister with his practice rifle. She was taken aback by the death, which was deemed an accident. Not only because it was a tragic story, but also because in the Netherlands, few people if any own guns and it was unheard of to give a 5-year-old his own firearm.

“I really wanted to know what parents and kids thought about having the guns,” she says. “For me it was hard to understand because we don’t have a gun culture at all. The only people with guns [in the Netherlands] are the police.”

Thinking Texas would be cliché, Kesteleyn started her project in Ohio and worked her way through Tennessee, Alabama, Mississippi, and Louisiana before ending in the Lone Star State. Most of the time, it was rough going. Many people didn’t want to talk about their gun ownership. More often than not, she ended up talking to gun shop or shooting range owners, the most outspoken proponents.

During the three weeks she was on the ground, about 15 people were willing to let her photograph their children with Crickett rifles, which come in a variety of colors, including hot pink. She always asked to visit people at home, because photos at the gun range were too expected and Kesteleyn wanted to reveal more details about the child and the parents.

“At home it was a lot more personal,” she says.

She spent time following one young girl who owned a Crickett and tried to develop a traditional documentary story, but that didn’t pan out, so she switched, mid-project, to portraits. If the parents were OK with the idea, she’d ask children to pose in their rooms, in whatever way they felt comfortable.

“By photographing them in their bedroom I thought it helped remind us that they’re kids,” she says.

Kesteleyn also had the children write down what they were most scared of and what they might use the gun to defend themselves against (zombies, dinosaurs, bears). She then photographed those letters and turned the portrait and letter into a diptych.

So far the project has been well received in Europe. But Kesteleyn has yet to show it in many places in the United States because she worries about how people might react. Though she tried coming to the story with an open mind and didn’t develop a strong opinion one way or another, she knows some viewers might assume she has an agenda.

Kesteleyn says that the majority of parents give their kids guns to educate them and ensure they know how to properly use a firearm when they get older. At the same time, she never could shake how odd she felt standing next to a child with a gun.

“I don’t want to be like I’m against guns or pro guns, but I do think giving a child a gun is sort of like giving your kids car keys,” she says.

Read the entire article here.

Image: Lily, 6. Courtesy of An-Sofie Kesteleyn / Wired.

The Arrow of Time

Einstein’s “spooky action at a distance” and quantum information theory (QIT) may help explain the so-called arrow of time — specifically, why it seems to flow in only one direction. Astronomer Arthur Eddington first described this asymmetry in 1927, and it has stumped theoreticians ever since.

At the macro level the classic example is an egg breaking on your kitchen floor: repeat this over and over, and the egg will always end up a scrambled mess on your clean tiles; it will never rise up from the floor and spontaneously reassemble in your slippery hand. Yet at the micro level, physicists know that the underlying laws of physics apply equally in both directions. Enter two features of the quantum world that may help us better understand this perplexing one-way flow of time: entanglement and QIT.

From Wired:

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”

If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
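
A standard textbook illustration of that last point, not drawn from the article: a pair of spins prepared in the singlet state is pure as a whole, yet either spin on its own is maximally mixed.

```latex
% Two spins in the singlet state: the pair as a whole is in a definite (pure) state
\[
  \lvert\psi\rangle \;=\; \frac{1}{\sqrt{2}}\bigl(\lvert\uparrow\downarrow\rangle - \lvert\downarrow\uparrow\rangle\bigr)
\]
% ...yet tracing out particle B leaves particle A in a 50/50 "mixed" state,
% with no definite spin of its own:
\[
  \rho_A \;=\; \mathrm{Tr}_B\,\lvert\psi\rangle\langle\psi\rvert
        \;=\; \frac{1}{2}\bigl(\lvert\uparrow\rangle\langle\uparrow\rvert
              + \lvert\downarrow\rangle\langle\downarrow\rvert\bigr)
\]
```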

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.
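
One way to make “the correlations contained all the information” precise, offered here as a gloss in standard quantum-information notation rather than as Lloyd’s own formulation, is to track the von Neumann entropy of each subsystem and the mutual information between them:

```latex
% Von Neumann entropy of a subsystem, and the mutual information
% shared between subsystems A and B:
\[
  S(\rho) = -\mathrm{Tr}\,\rho\log\rho,
  \qquad
  I(A\!:\!B) = S(A) + S(B) - S(AB)
\]
% For a globally pure state S(AB) = 0, so as entanglement builds up the
% subsystem entropies S(A) and S(B) grow, and the information migrates
% into the correlations I(A:B): an "arrow of increasing correlations".
```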

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

Read the entire article here.

Image: English astrophysicist Sir Arthur Stanley Eddington (1882–1944). Courtesy: George Grantham Bain Collection (Library of Congress).

Nuclear Codes and Floppy Disks

Sometimes a good case can be made for remaining a technological Luddite; sometimes eschewing the latest-and-greatest technical gizmo may actually work for you.


Take the case of the United States’ nuclear deterrent. A recent report on CBS 60 Minutes showed how part of the computer system responsible for launch control of US intercontinental ballistic missiles (ICBMs) still uses antiquated 8-inch floppy disks. This part of the national defense is so old and arcane that it’s arguably more secure than most contemporary computing systems and communications infrastructure. So, next time your internet-connected, cloud-based tablet or laptop gets hacked, consider reverting to a pre-1980s device.

From ars technica:

In a report that aired on April 27, CBS 60 Minutes correspondent Leslie Stahl expressed surprise that part of the computer system responsible for controlling the launch of the Minuteman III intercontinental ballistic missiles relied on data loaded from 8-inch floppy disks. Most of the young officers stationed at the launch control center had never seen a floppy disk before they became “missileers.”

An Air Force officer showed Stahl one of the disks, marked “Top Secret,” which is used with the computer that handles what was once called the Strategic Air Command Digital Network (SACDIN), a communication system that delivers launch commands to US missile forces. Beyond the floppies, a majority of the systems in the Wyoming US Air Force launch control center (LCC) Stahl visited dated back to the 1960s and 1970s, offering the Air Force’s missile forces an added level of cyber security, ICBM forces commander Major General Jack Weinstein told 60 Minutes.

“A few years ago we did a complete analysis of our entire network,” Weinstein said. “Cyber engineers found out that the system is extremely safe and extremely secure in the way it’s developed.”

However, not all of the Minuteman launch control centers’ aging hardware is an advantage. The analog phone systems, for example, often make it difficult for the missileers to communicate with each other or with their base. The Air Force commissioned studies on updating the ground-based missile force last year, and it’s preparing to spend $19 million this year on updates to the launch control centers. The military has also requested $600 million next year for further improvements.

Read the entire article here.

Image: Various floppy disks. Courtesy: George Chernilevsky, 2009 / Wikipedia.

Zentai Coming to a City Near You


The latest Japanese export may not become as ubiquitous as Pokemon or the Toyota Camry. However, aficionados of Zentai seem to be growing in number and appearing outside the typical esoteric haunts, such as clubs and Halloween parties. Still, it may be a while before Zentai outfits appear around the office.

From the Washington Post:

They meet on clandestine Internet forums. Or in clubs. Or sometimes at barbecue parties, where as many as 10 adherents gather every month to eat meat and frolic in an outfit that falls somewhere between a Power Ranger’s tunic and Spider-Man’s digs.

It’s called “zentai.” And in Japan, it can mean a lot of things. To 20-year-old Hokkyoku Nigo, it means liberation from the judgment and opinions of others. To a 22-year-old named Hanaka, it represents her lifelong fascination with superheroes. To a 36-year-old teacher named Nezumiko, it elicits something sexual. “I like to touch and stroke others and to be touched and stroked like this,” she told the AFP’s Harumi Ozawa.

But to most outsiders, zentai means exactly what it looks like: spandex body suits.

Where did this phenomenon come from and what does it mean? In a culture of unique displays — from men turning trucks into glowing light shows to women wearing Victorian-era clothing — zentai appears to be yet another oddity in a country well accustomed to them.

The trend can take on elements of prurience, however, and groups with names such as “zentai addict” and “zentai fetish” teem on Facebook. There are zentai ninjas. There are zentai Pokemon. There are zentai British flags and zentai American flags.

An organization called the Zentai Project, based in England, explains it as “a tight, colorful suit that transforms a normal person into amusement for all who see them. … The locals don’t know what to make of us, but the tourists love us and we get onto lots of tourist snaps — sometimes we can hardly walk 3 steps down the street before being stopped to pose for another picture.”

Though the trend is now apparently global, it was once just a group of Japanese climbing into skintight latex for unknown reasons.

“With my face covered, I cannot eat or drink like other customers,” Hokkyoku Nigo says in the AFP story. “I have led my life always worrying about what other people think of me. They say I look cute, gentle, childish or naive. I have always felt suffocated by that. But wearing this, I am just a person in a full body suit.”

Ikuo Daibo, a professor at Tokyo Mirai University, says wearing full body suits may reflect a sense of societal abandonment. People are acting out to define their individuality.

“In Japan,” he said, “many people feel lost; they feel unable to find their role in society. They have too many role models and cannot choose which one to follow.”

Read the entire article here.

Image courtesy of Google Search.