You May Be Living Inside a Simulation

Some theorists posit that we are living inside a simulation: that the entire universe is one giant, evolving model inside a grander reality. This is a fascinating idea, but it may never be experimentally verifiable. So just relax — you and I may not be real, but we’ll never know.

In a similar vein, researchers have now developed the broadest and most detailed simulation of the universe to date. There are no “living” things inside this computer model yet, but it’s probably only a matter of time before our increasingly sophisticated simulations start wondering whether they are simulations as well.

From the BBC:

An international team of researchers has created the most complete visual simulation of how the Universe evolved.

The computer model shows how the first galaxies formed around clumps of a mysterious, invisible substance called dark matter.

It is the first time that the Universe has been modelled so extensively and to such great resolution.

The research has been published in the journal Nature.

The simulation will provide a test bed for emerging theories of what the Universe is made of and what makes it tick.

One of the world’s leading authorities on galaxy formation, Professor Richard Ellis of the California Institute of Technology (Caltech) in Pasadena, described the simulation as “fabulous”.

“Now we can get to grips with how stars and galaxies form and relate it to dark matter,” he told BBC News.

The computer model draws on the theories of Professor Carlos Frenk of Durham University, UK, who said he was “pleased” that a computer model should come up with such a good result assuming that it began with dark matter.

“You can make stars and galaxies that look like the real thing. But it is the dark matter that is calling the shots”.

Cosmologists have been creating computer models of how the Universe evolved for more than 20 years. It involves entering details of what the Universe was like shortly after the Big Bang, developing a computer program which encapsulates the main theories of cosmology and then letting the program run.

The simulated Universe that comes out at the other end is usually a very rough approximation of what astronomers really see.

The latest simulation, however, comes up with a Universe that is strikingly like the real one.

Immense computing power has been used to recreate this virtual Universe. It would take a normal laptop nearly 2,000 years to run the simulation. However, using state-of-the-art supercomputers and clever software called Arepo, researchers were able to crunch the numbers in three months.
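A quick back-of-the-envelope check of those figures (the arithmetic here is this editor’s, not the BBC’s): three months is a quarter of a year, so the implied speedup from laptop to supercomputer is roughly

$$
\frac{2000\ \text{laptop-years}}{0.25\ \text{years}} \approx 8000\times .
$$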

Cosmic tree

In the beginning, it shows strands of mysterious material which cosmologists call “dark matter” sprawling across the emptiness of space like branches of a cosmic tree. As millions of years pass by, the dark matter clumps and concentrates to form seeds for the first galaxies.

Then the non-dark matter emerges, the stuff that will in time go on to form stars, planets and life.

But early on there is a series of cataclysmic explosions, as this matter gets sucked into black holes and then spat out: a chaotic period that regulates the formation of stars and galaxies. Eventually, the simulation settles into a Universe that is similar to the one we see around us.

According to Dr Mark Vogelsberger of Massachusetts Institute of Technology (MIT), who led the research, the simulations back many of the current theories of cosmology.

“Many of the simulated galaxies agree very well with the galaxies in the real Universe. It tells us that the basic understanding of how the Universe works must be correct and complete,” he said.

In particular, it backs the theory that dark matter is the scaffold on which the visible Universe is hanging.

“If you don’t include dark matter (in the simulation) it will not look like the real Universe,” Dr Vogelsberger told BBC News.

Read the entire article here.

Image: On the left: the real universe imaged via the Hubble telescope. On the right: a view of what emerges from the computer simulation. Courtesy of BBC / Illustris Collaboration.

Paper is the Next Big Thing

Luddites and technophobes, rejoice: paper-bound books may be with us for quite some time. And there may be some genuinely scientific reasons why physical books will endure. Recent research shows that people learn more effectively when reading from paper than from its digital offspring.

From Wired:

Paper books were supposed to be dead by now. For years, information theorists, marketers, and early adopters have told us their demise was imminent. Ikea even redesigned a bookshelf to hold something other than books. Yet in a world of screen ubiquity, many people still prefer to do their serious reading on paper.

Count me among them. When I need to read deeply—when I want to lose myself in a story or an intellectual journey, when focus and comprehension are paramount—I still turn to paper. Something just feels fundamentally richer about reading on it. And researchers are starting to think there’s something to this feeling.

To those who see dead tree editions as successors to scrolls and clay tablets in history’s remainder bin, this might seem like literary Luddism. But I e-read often: when I need to copy text for research or don’t want to carry a small library with me. There’s something especially delicious about late-night sci-fi by the light of a Kindle Paperwhite.

What I’ve read on screen seems slippery, though. When I later recall it, the text is slightly translucent in my mind’s eye. It’s as if my brain better absorbs what’s presented on paper. Pixels just don’t seem to stick. And often I’ve found myself wondering, why might that be?

The usual explanation is that internet devices foster distraction, or that my late-thirty-something brain isn’t that of a true digital native, accustomed to screens since infancy. But I have the same feeling when I am reading a screen that’s not connected to the internet and Twitter or online Boggle can’t get in the way. And research finds that kids these days consistently prefer their textbooks in print rather than pixels. Whatever the answer, it’s not just about habit.

Another explanation, expressed in a recent Washington Post article on the decline of deep reading, blames a sweeping change in our lifestyles: We’re all so multitasked and attention-fragmented that our brains are losing the ability to focus on long, linear texts. I certainly feel this way, but if I don’t read deeply as often or easily as I used to, it does still happen. It just doesn’t happen on screen, and not even on devices designed specifically for that experience.

Maybe it’s time to start thinking of paper and screens another way: not as an old technology and its inevitable replacement, but as different and complementary interfaces, each stimulating particular modes of thinking. Maybe paper is a technology uniquely suited for imbibing novels and essays and complex narratives, just as screens are for browsing and scanning.

“Reading is human-technology interaction,” says literacy professor Anne Mangen of Norway’s University of Stavanger. “Perhaps the tactility and physical permanence of paper yields a different cognitive and emotional experience.” This is especially true, she says, for “reading that can’t be done in snippets, scanning here and there, but requires sustained attention.”

Mangen is among a small group of researchers who study how people read on different media. It’s a field that goes back several decades, but yields no easy conclusions. People tended to read slowly and somewhat inaccurately on early screens. The technology, particularly e-paper, has improved dramatically, to the point where speed and accuracy aren’t now problems, but deeper issues of memory and comprehension are not yet well-characterized.

Complicating the scientific story further, there are many types of reading. Most experiments involve short passages read by students in an academic setting, and for this sort of reading, some studies have found no obvious differences between screens and paper. Those don’t necessarily capture the dynamics of deep reading, though, and nobody’s yet run the sort of experiment, involving thousands of readers in real-world conditions who are tracked for years on a battery of cognitive and psychological measures, that might fully illuminate the matter.

In the meantime, other research does suggest possible differences. A 2004 study found that students more fully remembered what they’d read on paper. Those results were echoed by an experiment that looked specifically at e-books, and another by psychologist Erik Wästlund at Sweden’s Karlstad University, who found that students learned better when reading from paper.

Wästlund followed up that study with one designed to investigate screen reading dynamics in more detail. He presented students with a variety of on-screen document formats. The most influential factor, he found, was whether they could see pages in their entirety. When they had to scroll, their performance suffered.

According to Wästlund, scrolling had two impacts, the most basic being distraction. Even the slight effort required to drag a mouse or swipe a finger requires a small but significant investment of attention, one that’s higher than flipping a page. Text flowing up and down a page also disrupts a reader’s visual attention, forcing eyes to search for a new starting point and re-focus.

Read the entire electronic article here.

Image: Leicester or Hammer Codex, by Leonardo da Vinci (1452-1519). Courtesy of Wikipedia / Public domain.

Clothing Design by National Sub-Committee

It’s probably safe to assume that clothing designed by committee will be more utilitarian and drab than that from the colored pencils of, say, Yves Saint Laurent, Tom Ford, Giorgio Armani or Coco Chanel.

So, imagine what clothing would look like if it were designed by the Apparel Research Center, a sub-subcommittee of the Clothing Industry Department, itself a sub-committee of the National Light Industry Committee. Yes, welcome to the strange, centrally planned and tightly controlled world of our favorite rogue nation, North Korea. Imagine no more, as Paul French takes us on a journey through daily life in North Korea, excerpted from his new book, North Korea: State of Paranoia. It makes for sobering reading.

From the Guardian:

6am The day starts early in Pyongyang, the city described by the North Korean government as the “capital of revolution”. Breakfast is usually corn or maize porridge, possibly a boiled egg and sour yoghurt, with perhaps powdered milk for children.

Then it is time to get ready for work. North Korea has a large working population: approximately 59% of the total in 2010. A growing number of women work in white-collar office jobs; they make up around 90% of workers in light industry and 80% of the rural workforce. Many women are now the major wage-earner in the family – though still housewife, mother and cook as well as a worker, or perhaps a soldier.

Makeup is increasingly common in Pyongyang, though it is rarely worn until after college graduation. Chinese-made skin lotions, foundation, eyeliner and lipstick are available and permissible in the office. Many women suffer from blotchy skin caused by the deteriorating national diet, so are wearing more makeup. Long hair is common, but untied hair is frowned upon.

Men’s hairstyles could not be described as radical. In the 1980s, when Kim Jong-il first came to public prominence, his trademark crewcut, known as a “speed battle cut”, became popular, while the more bouffant style favoured by Kim Il-sung, and then Kim Jong-il, in their later years, is also popular. Kim Jong-un’s trademark short-back-and-sides does not appear to have inspired much imitation so far. Hairdressers and barbers are run by the local Convenience Services Management Committee; at many, customers can wash their hair themselves.

Fashion is not really an applicable term in North Korea, as the Apparel Research Centre under the Clothing Industry Department of the National Light Industry Committee designs most clothing. However, things have loosened up somewhat, with bright colours now permitted as being in accordance with a “socialist lifestyle”. Pyongyang offers some access to foreign styles. A Japanese watch denotes someone in an influential position; a foreign luxury watch indicates a very senior position. The increasing appearance of Adidas, Disney and other brands (usually fake) indicates that access to goods smuggled from China is growing. Jeans have at times been fashionable, though risky – occasionally they have been banned as “decadent”, along with long hair on men, which can lead to arrest and a forced haircut.

One daily ritual of all North Koreans is making sure they have their Kim Il-sung badge attached to their lapel. The badges have been in circulation since the late 1960s, when the Mansudae Art Studio started producing them for party cadres. Desirable ones can change hands on the black market for several hundred NKW. In a city where people rarely carry cash, jewellery or credit cards, Kim badges are one of the most prized targets of Pyongyang’s pickpockets.

Most streets are boulevards of utilitarian high-rise blocks. Those who live on higher floors may have to set out for work or school a little earlier than those lower down. Due to chronic power cuts, many elevators work only intermittently, if at all. Many buildings are between 20 and 40 storeys tall – there are stories of old people who have never been able to leave. Even in the better blocks elevators can be sporadic and so people just don’t take the chance. Families make great efforts to relocate older relatives on lower floors, but this is difficult and a bribe is sometimes required. With food shortages now constant, many older people share their meagre rations with their grandchildren, weakening themselves further and making the prospect of climbing stairs even more daunting.

Some people do drive to work, but congestion is not a major problem. Despite the relative lack of cars, police enforce traffic regulations strictly and issue tickets. Fines can be equivalent to two weeks’ salary. Most cars belong to state organisations, but are often used as if they were privately owned. All vehicles entering Pyongyang must be clean; owners of dirty cars may be fined. Those travelling out of Pyongyang require a travel certificate. There are few driving regulations; however, on hills ascending vehicles have the right of way, and trucks cannot pass passenger cars under any circumstances. Drunk-driving is punished with hard labour. Smoking while driving is banned on the grounds that a smoking driver cannot smell a problem with the car.

Those who have a bicycle usually own a Sea Gull, unless they are privileged and own an imported second-hand Japanese bicycle. But even a Sea Gull costs several months’ wages and requires saving.

7.30am For many North Koreans the day starts with a 30-minute reading session and exercises before work begins. The reading includes receiving instructions and studying the daily editorial in the party papers. This is followed by directives on daily tasks and official announcements.

For children, the school day starts with exercises to a medley of populist songs before a session of marching on the spot and saluting the image of the leader. The curriculum is based on Kim Il-sung’s 1977 Thesis on Socialist Education, emphasising the political role of education in developing revolutionary spirit. All children study Kim Il-sung’s life closely. Learning to read means learning to read about Kim Il-sung; music class involves singing patriotic songs. Rote learning and memorising political tracts is integral and can bring good marks, which help in getting into university – although social rank is a more reliable determinant of college admission. After graduation, the state decides where graduates will work.

8am Work begins. Pyongyang is the centre of the country’s white-collar workforce, though a Pyongyang office would appear remarkably sparse to most outsiders. Banks, industrial enterprises and businesses operate almost wholly without computers, photocopiers and modern office technology. Payrolls and accounting are done by hand.

12pm Factories, offices and workplaces break for lunch for an hour. Many workers bring a packed lunch, or, if they live close by, go home to eat. Larger workplaces have a canteen serving cheap lunches, such as corn soup, corn cake and porridge. The policy of eating in work canteens, combined with the lack of food shops and restaurants, means that Pyongyang remains strangely empty during the working day with no busy lunchtime period, as seen in other cities around the world.

Shopping is an as-and-when activity. If a shop has stock, then returning later is not an option as it will be sold out. According to defectors, North Koreans want “five chests and seven appliances”. The chests are a quilt chest, wardrobe, bookshelf, cupboard and shoe closet, while the appliances comprise a TV, refrigerator, washing machine, electric fan, sewing machine, tape recorder and camera. Most ordinary people only have a couple of appliances, usually a television and a sewing machine.

Food shopping is equally problematic. Staples such as soy sauce, soybean paste, salt and oil, as well as toothpaste, soap, underwear and shoes, sell out fast. The range of food items available is highly restricted. White cabbage, cucumber and tomato are the most common; meat is rare, and eggs increasingly so. Fruit is largely confined to apples and pears. The main staple of the North Korean diet is rice, though bread is sometimes available, accompanied by a form of butter that is often rancid. Corn, maize and mushrooms also appear sometimes.

Read the entire excerpt here.

Image: Soldiers from the Korean People’s Army look south while on duty in the Joint Security Area, 2008. Courtesy of U.S. government.

Metabolism Without Life

A remarkable chance discovery in a Cambridge University research lab shows that a number of life-sustaining metabolic processes can occur spontaneously and outside of living cells. This opens a rich, new vein of theories and approaches to studying the origin of life.

From the New Scientist:

Metabolic processes that underpin life on Earth have arisen spontaneously outside of cells. The serendipitous finding that metabolism – the cascade of reactions in all cells that provides them with the raw materials they need to survive – can happen in such simple conditions provides fresh insights into how the first life formed. It also suggests that the complex processes needed for life may have surprisingly humble origins.

“People have said that these pathways look so complex they couldn’t form by environmental chemistry alone,” says Markus Ralser at the University of Cambridge, who supervised the research.

But his findings suggest that many of these reactions could have occurred spontaneously in Earth’s early oceans, catalysed by metal ions rather than the enzymes that drive them in cells today.

The origin of metabolism is a major gap in our understanding of the emergence of life. “If you look at many different organisms from around the world, this network of reactions always looks very similar, suggesting that it must have come into place very early on in evolution, but no one knew precisely when or how,” says Ralser.

Happy accident

One theory is that RNA was the first building block of life because it helps to produce the enzymes that could catalyse complex sequences of reactions. Another possibility is that metabolism came first; perhaps even generating the molecules needed to make RNA, and that cells later incorporated these processes – but there was little evidence to support this.

“This is the first experiment showing that it is possible to create metabolic networks in the absence of RNA,” Ralser says.

Remarkably, the discovery was an accident, stumbled on during routine quality control testing of the medium used to culture cells at Ralser’s laboratory. As a shortcut, one of his students decided to run unused media through a mass spectrometer, which spotted a signal for pyruvate – an end product of a metabolic pathway called glycolysis.

To test whether the same processes could have helped spark life on Earth, they approached colleagues in the Earth sciences department who had been working on reconstructing the chemistry of the Archean Ocean, which covered the planet almost 4 billion years ago. This was an oxygen-free world, predating photosynthesis, when the waters were rich in iron, as well as other metals and phosphate. All these substances could potentially facilitate chemical reactions like the ones seen in modern cells.

Metabolic backbone

Ralser’s team took early ocean solutions and added substances known to be starting points for modern metabolic pathways, before heating the samples to between 50°C and 70°C – the sort of temperatures you might have found near a hydrothermal vent – for 5 hours. Ralser then analysed the solutions to see what molecules were present.

“In the beginning we had hoped to find one reaction or two maybe, but the results were amazing,” says Ralser. “We could reconstruct two metabolic pathways almost entirely.”

The pathways they detected were glycolysis and the pentose phosphate pathway, “reactions that form the core metabolic backbone of every living cell,” Ralser adds. Together these pathways produce some of the most important materials in modern cells, including ATP – the molecule cells use to drive their machinery, the sugars that form DNA and RNA, and the molecules needed to make fats and proteins.

If these metabolic pathways were occurring in the early oceans, then the first cells could have enveloped them as they developed membranes.

In all, 29 metabolism-like chemical reactions were spotted, seemingly catalysed by iron and other metals that would have been found in early ocean sediments. The metabolic pathways aren’t identical to modern ones; some of the chemicals made by intermediate steps weren’t detected. However, “if you compare them side by side it is the same structure and many of the same molecules are formed,” Ralser says. These pathways could have been refined and improved once enzymes evolved within cells.

Read the entire article here.

Image: Glycolysis metabolic pathway. Courtesy of Wikipedia.

Lost Treasures

A small proportion of classic movies remain in circulation and in our memories. Most are quickly forgotten. And some simply go missing. How could an old movie go missing? Well, it’s not very difficult: a temperamental, perfectionist director may demand the original be buried; a fickle movie studio may wish to hide and remove all traces of last season’s flop; or some old reels, printed on flammable nitrate stock, may just burn, literally. But every once in a while an old movie is found in a dusty attic or damp basement. Or, as with a more recent find, in a dumpster (if you’re a Brit, that’s a “skip”). Two recent discoveries shed more light on the developing comedic talent of Peter Sellers.

From the Guardian:

In the mid-1950s, Peter Sellers was young and ambitious and still largely unseen. He wanted to break out of his radio ghetto and achieve big-screen success, so he played a bumbling crook in The Ladykillers and a bumbling everyman in a series of comedy shorts for an independent production company called Park Lane Films. The Ladykillers endured and is cherished to this day. The shorts came and then went and were quickly forgotten. To all intents and purposes, they never existed at all.

I’m fascinated by the idea of the films that get lost; that vast, teeming netherworld where the obscure and the unloved rub shoulders, in the dark, with the misplaced and the mythic. Martin Scorsese’s Film Foundation estimates that as many as 50% of the American movies made before 1950 are now gone for good, while the British film archive is similarly holed like Swiss cheese. Somewhere out there, languishing in limbo, are missing pictures from directors including Orson Welles, Michael Powell and Alfred Hitchcock. Most of these orphans will surely never be found. Yet sometimes, against the odds, one will abruptly surface.

In his duties as facilities manager at an office block in central London, Robert Farrow would occasionally visit the basement area where the janitors parked their mops, brooms and vacuum cleaners. Nestled amid this equipment was a stack of 21 canisters, which Farrow assumed contained polishing pads for the cleaning machines. Years later, during an office refurbishment, Farrow saw that these canisters had been removed from the basement and dumped outside in a skip. “You don’t expect to find anything valuable in a skip,” Farrow says ruefully. But inside the canisters he found the lost Sellers shorts.

It’s a blustery spring day when we gather at a converted water works in Southend-on-Sea to meet the movie orphans. Happily the comedies – Dearth of a Salesman and Insomnia is Good For You – have been brushed up in readiness. They have been treated to a spick-and-span Telecine scan and look none the worse for their years in the basement. Each will now premiere (or perhaps re-premiere) at the Southend film festival, nestled amid the screenings of The Great Beauty and Wadjda and a retrospective showing of Sellers’ 1969 fantasy The Magic Christian. In the meantime, festival director Paul Cotgrove has hailed their reappearance as the equivalent of “finding the Dead Sea Scrolls”.

I think that might be overselling it, although one can understand his excitement. Instead, the films might best be viewed as crucial stepping stones, charting a bright spark’s evolution into a fully fledged film star. At the time they were made, Sellers was a big fish in a small pond, flushed from the success of The Goon Show and half-wondering whether he had already peaked. “By this point he had hardly done anything on screen,” Cotgrove explains. “He was obsessed with breaking away from radio and getting into film. You can see the early styles in these films that he would then use later on.”

To the untrained eye, he looks to be adapting rather well. Dearth of a Salesman and Insomnia is Good For You both run 29 minutes and come framed as spoof information broadcasts, installing Sellers in the role of lowly Herbert Dimwitty. In the first, Dimwitty attempts to strike out as a go-getting entrepreneur, peddling print dresses and dishwashers and regaling his clients with a range of funny accents. “I’m known as the Peter Ustinov of East Acton,” he informs a harried suburban housewife.

Dearth, it must be said, feels a little faded and cosy; its line in comedy too thinly spread. But Insomnia is terrific. Full of spark, bite and invention, the film chivvies Sellers’s sleep-deprived employee through a “good night’s wake”, thrilling to the “tone poem” of nocturnal noises from the street outside and replaying awkward moments from the office until they bloom into full-on waking nightmares. Who cares if Dimwitty is little more than a low-rent archetype, the kind of bumbling sitcom staple that has been embodied by everyone from Tony Hancock to Terry Scott? Sellers keeps the man supple and spiky. It’s a role the actor would later reprise, with a few variations, in the 1962 Kingsley Amis adaptation Only Two Can Play.

But what were these pictures and where did they go? Cotgrove and Farrow’s research can only take us so far. Dearth and Insomnia were probably shot in 1956, or possibly 1957, for Park Lane Films, which then later went bust. They would have played in British cinemas ahead of the feature presentation, folded in among the cartoons and the news, and may even have screened in the US and Canada as well. Records suggest that Sellers was initially contracted to shoot 12 movies in total, but may well have wriggled out of the deal after The Ladykillers was released. Only three have been found: Dearth, Insomnia and the below-par Cold Comfort, which was already in circulation. Conceivably there might be more Sellers shorts out there somewhere, either idling in skips or buried in basements. But there is no way of knowing; it’s akin to proving a negative. Cotgrove and Farrow aren’t even sure who owns the copyright. “If you find something on the street, it’s not yours,” Farrow points out. “You only have guardianship.”

As it is, the Sellers shorts can be safely filed away among other reclaimed items, plucked out of a skip and brought in from the cold. They take their place alongside such works as Carl Dreyer’s silent-screen classic The Passion of Joan of Arc, which turned up (unaccountably) at a Norwegian psychiatric hospital, or the vital lost footage from Fritz Lang’s Metropolis, found in Buenos Aires back in 2008. But these happy few are just the tip of the iceberg. Thousands of movies have simply vanished from view.

Read the entire article here.

Image: Still from newly discovered movie Dearth of a Salesman, featuring a young Peter Sellers. Courtesy of Southend Film Festival / Guardian.

Neuromorphic Chips

Neuromorphic chips are here. But don’t worry, these are not the brain implants you might expect to see in a William Gibson or Iain Banks novel. Neuromorphic processors are designed to simulate brain function, and to learn or mimic certain types of human processes such as sensory perception, image processing and object recognition. The field is making tremendous advances, with companies like Qualcomm — better known for its mobile and wireless chips — leading the charge. Until recently such complex sensory and mimetic processing had been the exclusive realm of supercomputers.

From Technology Review:

A pug-size robot named Pioneer slowly rolls up to the Captain America action figure on the carpet. They’re facing off inside a rough model of a child’s bedroom that the wireless-chip maker Qualcomm has set up in a trailer. The robot pauses, almost as if it is evaluating the situation, and then corrals the figure with a snowplow-like implement mounted in front, turns around, and pushes it toward three squat pillars representing toy bins. Qualcomm senior engineer Ilwoo Chang sweeps both arms toward the pillar where the toy should be deposited. Pioneer spots that gesture with its camera and dutifully complies. Then it rolls back and spies another action figure, Spider-Man. This time Pioneer beelines for the toy, ignoring a chessboard nearby, and delivers it to the same pillar with no human guidance.

This demonstration at Qualcomm’s headquarters in San Diego looks modest, but it’s a glimpse of the future of computing. The robot is performing tasks that have typically needed powerful, specially programmed computers that use far more electricity. Powered by only a smartphone chip with specialized software, Pioneer can recognize objects it hasn’t seen before, sort them by their similarity to related objects, and navigate the room to deliver them to the right location—not because of laborious programming but merely by being shown once where they should go. The robot can do all that because it is simulating, albeit in a very limited fashion, the way a brain works.

Later this year, Qualcomm will begin to reveal how the technology can be embedded into the silicon chips that power every manner of electronic device. These “neuromorphic” chips—so named because they are modeled on biological brains—will be designed to process sensory data such as images and sound and to respond to changes in that data in ways not specifically programmed. They promise to accelerate decades of fitful progress in artificial intelligence and lead to machines that are able to understand and interact with the world in humanlike ways. Medical sensors and devices could track individuals’ vital signs and response to treatments over time, learning to adjust dosages or even catch problems early. Your smartphone could learn to anticipate what you want next, such as background on someone you’re about to meet or an alert that it’s time to leave for your next meeting. Those self-driving cars Google is experimenting with might not need your help at all, and more adept Roombas wouldn’t get stuck under your couch. “We’re blurring the boundary between silicon and biological systems,” says Qualcomm’s chief technology officer, Matthew Grob.

Qualcomm’s chips won’t become available until next year at the earliest; the company will spend 2014 signing up researchers to try out the technology. But if it delivers, the project—known as the Zeroth program—would be the first large-scale commercial platform for neuromorphic computing. That’s on top of promising efforts at universities and at corporate labs such as IBM Research and HRL Laboratories, which have each developed neuromorphic chips under a $100 million project for the Defense Advanced Research Projects Agency. Likewise, the Human Brain Project in Europe is spending roughly 100 million euros on neuromorphic projects, including efforts at Heidelberg University and the University of Manchester. Another group in Germany recently reported using a neuromorphic chip and software modeled on insects’ odor-processing systems to recognize plant species by their flowers.

Today’s computers all use the so-called von Neumann architecture, which shuttles data back and forth between a central processor and memory chips in linear sequences of calculations. That method is great for crunching numbers and executing precisely written programs, but not for processing images or sound and making sense of it all. It’s telling that in 2012, when Google demonstrated artificial-intelligence software that learned to recognize cats in videos without being told what a cat was, it needed 16,000 processors to pull it off.

Continuing to improve the performance of such processors requires their manufacturers to pack in ever more, ever faster transistors, silicon memory caches, and data pathways, but the sheer heat generated by all those components is limiting how fast chips can be operated, especially in power-stingy mobile devices. That could halt progress toward devices that effectively process images, sound, and other sensory information and then apply it to tasks such as face recognition and robot or vehicle navigation.

No one is more acutely interested in getting around those physical challenges than Qualcomm, maker of wireless chips used in many phones and tablets. Increasingly, users of mobile devices are demanding more from these machines. But today’s personal-assistant services, such as Apple’s Siri and Google Now, are limited because they must call out to the cloud for more powerful computers to answer or anticipate queries. “We’re running up against walls,” says Jeff Gehlhaar, the Qualcomm vice president of technology who heads the Zeroth engineering team.

Neuromorphic chips attempt to model in silicon the massively parallel way the brain processes information as billions of neurons and trillions of synapses respond to sensory inputs such as visual and auditory stimuli. Those neurons also change how they connect with each other in response to changing images, sounds, and the like. That is the process we call learning. The chips, which incorporate brain-inspired models called neural networks, do the same. That’s why Qualcomm’s robot—even though for now it’s merely running software that simulates a neuromorphic chip—can put Spider-Man in the same location as Captain America without having seen Spider-Man before.
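As an editorial aside, the idea of neurons “changing how they connect” can be made concrete with a toy Hebbian learning rule, in which a synapse strengthens whenever the neurons on both ends are active together. The sketch below is purely illustrative; it is not Qualcomm’s Zeroth design or any real neuromorphic hardware, whose internals the article does not describe.

```typescript
// Toy Hebbian learning: synapses strengthen when pre- and post-synaptic
// neurons are active together ("cells that fire together, wire together").
// A small decay term keeps weights bounded and lets unused connections fade.

type Synapse = { pre: number; post: number; weight: number };

function hebbianStep(
  synapses: Synapse[],
  activity: number[],   // firing rate of each neuron, in [0, 1]
  learningRate = 0.1,
  decay = 0.1
): void {
  for (const s of synapses) {
    const coActivity = activity[s.pre] * activity[s.post];
    s.weight += learningRate * coActivity - decay * s.weight;
  }
}

// Neurons 0 and 1 keep firing together; neurons 2 and 3 stay silent.
const synapses: Synapse[] = [
  { pre: 0, post: 1, weight: 0.0 },
  { pre: 2, post: 3, weight: 0.5 },
];
for (let t = 0; t < 100; t++) {
  hebbianStep(synapses, [1, 1, 0, 0]);
}
console.log(synapses.map((s) => s.weight.toFixed(2))); // ≈ ["1.00", "0.00"]
```

Run over many steps, the co-active pair’s connection climbs toward a stable value while the idle pair’s connection decays away. Learning by correlation of this general kind is what neuromorphic hardware aims to do massively in parallel and at very low power.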

Read the entire article here.

Kids With Guns

If you were asked to picture a contemporary scene populated with gun-toting children, it’s possible your first thoughts might lean toward child soldiers in Chad, Burma, the Central African Republic, Afghanistan or South Sudan. You’d be partially correct — this abhorrent violation of children does go on in the world today, and it is incomprehensible and morally repugnant. Yet you’d also be partially wrong.

So, think closer to home, think Louisiana, Texas, Alabama and Kentucky in the United States. A recent series of portraits titled “My First Rifle” by photographer An-Sofie Kesteleyn showcases children posing with their guns. See more of her fascinating and award-winning images here.

From Wired:

Approaching strangers at gun ranges across America and asking to photograph their children holding guns isn’t likely to get you the warmest reception. But that’s exactly what photographer An-Sofie Kesteleyn did last June for her series My First Rifle. “One of the only things I had going for me was that I’m not some weird-looking guy,” she says.

Kesteleyn lives in Amsterdam but visited the United States to meet gun owners about a month after reading a news story about a 5-year-old boy in Kentucky who killed his 2-year-old sister with his practice rifle. She was taken aback by the death, which was deemed an accident. Not only because it was a tragic story, but also because in the Netherlands, few people if any own guns and it was unheard of to give a 5-year-old his own firearm.

“I really wanted to know what parents and kids thought about having the guns,” she says. “For me it was hard to understand because we don’t have a gun culture at all. The only people with guns [in the Netherlands] are the police.”

Thinking Texas would be cliché, Kesteleyn started her project in Ohio and worked her way through Tennessee, Alabama, Mississippi, and Louisiana before ending in the Lone Star State. Most of the time, it was rough going. Many people didn’t want to talk about their gun ownership. More often than not, she ended up talking to gun shop or shooting range owners, the most outspoken proponents.

During the three weeks she was on the ground, about 15 people were willing to let her photograph their children with Crickett rifles, which come in a variety of colors, including hot pink. She always asked to visit people at home, because photos at the gun range were too expected and Kesteleyn wanted to reveal more details about the child and the parents.

“At home it was a lot more personal,” she says.

She spent time following one young girl who owned a Crickett and tried to develop a traditional documentary story, but that didn’t pan out so she switched, mid-project, to portraits. If the parents were OK with the idea, she’d ask children to pose in their rooms, in whatever way they felt comfortable.

“By photographing them in their bedroom I thought it helped remind us that they’re kids,” she says.

Kesteleyn also had the children write down what they were most scared of and what they might use the gun to defend themselves against (zombies, dinosaurs, bears). She then photographed those letters and turned the portrait and letter into a diptych.

So far the project has been well received in Europe. But Kesteleyn has yet to show it in many places in the United States because she worries about how people might react. Though she came to the story with an open mind and didn’t develop a strong opinion one way or another, she knows some viewers might assume she has an agenda.

Kesteleyn says that the majority of parents give their kids guns to educate them and ensure they know how to properly use a firearm when they get older. At the same time, she never could shake how odd she felt standing next to a child with a gun.

“I don’t want to be like I’m against guns or pro guns, but I do think giving a child a gun is sort of like giving your kids car keys,” she says.

Read the entire article here.

Image: Lily, 6. Courtesy of An-Sofie Kesteleyn / Wired.

The Arrow of Time

Einstein’s “spooky action at a distance” and quantum information theory (QIT) may help explain the so-called arrow of time — specifically, why it seems to flow in only one direction. Astronomer Arthur Eddington first described this asymmetry in 1927, and it has stumped theoreticians ever since.

At the macro-level the classic and simple example is that of an egg breaking when it hits your kitchen floor: repeat this over and over, and the egg will always make a scrambled mess on your clean tiles, but it will never rise up from the floor and spontaneously re-assemble in your slippery hand. Yet at the micro-level, physicists know that the underlying laws apply equally in both directions. Enter two tenets of the quantum world that may help us better understand this perplexing forward flow of time: entanglement and QIT.

From Wired:

Coffee cools, buildings crumble, eggs break and stars fizzle out in a universe that seems destined to degrade into a state of uniform drabness known as thermal equilibrium. The astronomer-philosopher Sir Arthur Eddington in 1927 cited the gradual dispersal of energy as evidence of an irreversible “arrow of time.”

But to the bafflement of generations of physicists, the arrow of time does not seem to follow from the underlying laws of physics, which work the same going forward in time as in reverse. By those laws, it seemed that if someone knew the paths of all the particles in the universe and flipped them around, energy would accumulate rather than disperse: Tepid coffee would spontaneously heat up, buildings would rise from their rubble and sunlight would slink back into the sun.

“In classical physics, we were struggling,” said Sandu Popescu, a professor of physics at the University of Bristol in the United Kingdom. “If I knew more, could I reverse the event, put together all the molecules of the egg that broke? Why am I relevant?”

Surely, he said, time’s arrow is not steered by human ignorance. And yet, since the birth of thermodynamics in the 1850s, the only known approach for calculating the spread of energy was to formulate statistical distributions of the unknown trajectories of particles, and show that, over time, the ignorance smeared things out.

Now, physicists are unmasking a more fundamental source for the arrow of time: Energy disperses and objects equilibrate, they say, because of the way elementary particles become intertwined when they interact — a strange effect called “quantum entanglement.”

“Finally, we can understand why a cup of coffee equilibrates in a room,” said Tony Short, a quantum physicist at Bristol. “Entanglement builds up between the state of the coffee cup and the state of the room.”

Popescu, Short and their colleagues Noah Linden and Andreas Winter reported the discovery in the journal Physical Review E in 2009, arguing that objects reach equilibrium, or a state of uniform energy distribution, within an infinite amount of time by becoming quantum mechanically entangled with their surroundings. Similar results by Peter Reimann of the University of Bielefeld in Germany appeared several months earlier in Physical Review Letters. Short and a collaborator strengthened the argument in 2012 by showing that entanglement causes equilibration within a finite time. And, in work that was posted on the scientific preprint site arXiv.org in February, two separate groups have taken the next step, calculating that most physical systems equilibrate rapidly, on time scales proportional to their size. “To show that it’s relevant to our actual physical world, the processes have to be happening on reasonable time scales,” Short said.

The tendency of coffee — and everything else — to reach equilibrium is “very intuitive,” said Nicolas Brunner, a quantum physicist at the University of Geneva. “But when it comes to explaining why it happens, this is the first time it has been derived on firm grounds by considering a microscopic theory.”

If the new line of research is correct, then the story of time’s arrow begins with the quantum mechanical idea that, deep down, nature is inherently uncertain. An elementary particle lacks definite physical properties and is defined only by probabilities of being in various states. For example, at a particular moment, a particle might have a 50 percent chance of spinning clockwise and a 50 percent chance of spinning counterclockwise. An experimentally tested theorem by the Northern Irish physicist John Bell says there is no “true” state of the particle; the probabilities are the only reality that can be ascribed to it.

Quantum uncertainty then gives rise to entanglement, the putative source of the arrow of time.

When two particles interact, they can no longer even be described by their own, independently evolving probabilities, called “pure states.” Instead, they become entangled components of a more complicated probability distribution that describes both particles together. It might dictate, for example, that the particles spin in opposite directions. The system as a whole is in a pure state, but the state of each individual particle is “mixed” with that of its acquaintance. The two could travel light-years apart, and the spin of each would remain correlated with that of the other, a feature Albert Einstein famously described as “spooky action at a distance.”
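An aside from this editor, not part of the Wired piece: the canonical textbook example is a pair of spins prepared in the singlet state. The pair as a whole is in a pure state, but tracing out either particle leaves a maximally mixed state, so each spin measured on its own looks perfectly random even though the two outcomes are always opposite:

$$
|\psi\rangle = \tfrac{1}{\sqrt{2}}\bigl(|\uparrow\downarrow\rangle - |\downarrow\uparrow\rangle\bigr),
\qquad
\rho_A = \mathrm{Tr}_B\,|\psi\rangle\langle\psi| = \tfrac{1}{2}\begin{pmatrix} 1 & 0 \\ 0 & 1 \end{pmatrix},
\qquad
S(\rho_A) = \ln 2 .
$$

That entanglement entropy of ln 2 is exactly the kind of growing correlation that Lloyd’s argument, described below, tracks as a system slides toward equilibrium.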

“Entanglement is in some sense the essence of quantum mechanics,” or the laws governing interactions on the subatomic scale, Brunner said. The phenomenon underlies quantum computing, quantum cryptography and quantum teleportation.

The idea that entanglement might explain the arrow of time first occurred to Seth Lloyd about 30 years ago, when he was a 23-year-old philosophy graduate student at Cambridge University with a Harvard physics degree. Lloyd realized that quantum uncertainty, and the way it spreads as particles become increasingly entangled, could replace human uncertainty in the old classical proofs as the true source of the arrow of time.

Using an obscure approach to quantum mechanics that treated units of information as its basic building blocks, Lloyd spent several years studying the evolution of particles in terms of shuffling 1s and 0s. He found that as the particles became increasingly entangled with one another, the information that originally described them (a “1” for clockwise spin and a “0” for counterclockwise, for example) would shift to describe the system of entangled particles as a whole. It was as though the particles gradually lost their individual autonomy and became pawns of the collective state. Eventually, the correlations contained all the information, and the individual particles contained none. At that point, Lloyd discovered, particles arrived at a state of equilibrium, and their states stopped changing, like coffee that has cooled to room temperature.

“What’s really going on is things are becoming more correlated with each other,” Lloyd recalls realizing. “The arrow of time is an arrow of increasing correlations.”

The idea, presented in his 1988 doctoral thesis, fell on deaf ears. When he submitted it to a journal, he was told that there was “no physics in this paper.” Quantum information theory “was profoundly unpopular” at the time, Lloyd said, and questions about time’s arrow “were for crackpots and Nobel laureates who have gone soft in the head,” he remembers one physicist telling him.

“I was darn close to driving a taxicab,” Lloyd said.

Advances in quantum computing have since turned quantum information theory into one of the most active branches of physics. Lloyd is now a professor at the Massachusetts Institute of Technology, recognized as one of the founders of the discipline, and his overlooked idea has resurfaced in a stronger form in the hands of the Bristol physicists. The newer proofs are more general, researchers say, and hold for virtually any quantum system.

“When Lloyd proposed the idea in his thesis, the world was not ready,” said Renato Renner, head of the Institute for Theoretical Physics at ETH Zurich. “No one understood it. Sometimes you have to have the idea at the right time.”

Read the entire article here.

Image: English astrophysicist Sir Arthur Stanley Eddington (1882–1944). Courtesy: George Grantham Bain Collection (Library of Congress).

Nuclear Codes and Floppy Disks

Sometimes a good case can be made for remaining a technological Luddite; sometimes eschewing the latest-and-greatest technical gizmo may actually work for you.

Take the case of the United States’ nuclear deterrent. A recent report on CBS 60 Minutes showed how part of the computer system responsible for launch control of US intercontinental ballistic missiles (ICBMs) still uses antiquated 8-inch floppy disks. This part of the national defense is so old and arcane that it’s actually more secure than most contemporary computing systems and communications infrastructure. So, next time your internet-connected, cloud-based tablet or laptop gets hacked, consider reverting to a pre-1980s device.

From ars technica:

In a report that aired on April 27, CBS 60 Minutes correspondent Leslie Stahl expressed surprise that part of the computer system responsible for controlling the launch of the Minuteman III intercontinental ballistic missiles relied on data loaded from 8-inch floppy disks. Most of the young officers stationed at the launch control center had never seen a floppy disk before they became “missileers.”

An Air Force officer showed Stahl one of the disks, marked “Top Secret,” which is used with the computer that handles what was once called the Strategic Air Command Digital Network (SACDIN), a communication system that delivers launch commands to US missile forces. Beyond the floppies, a majority of the systems in the Wyoming US Air Force launch control center (LCC) Stahl visited dated back to the 1960s and 1970s, offering the Air Force’s missile forces an added level of cyber security, ICBM forces commander Major General Jack Weinstein told 60 Minutes.

“A few years ago we did a complete analysis of our entire network,” Weinstein said. “Cyber engineers found out that the system is extremely safe and extremely secure in the way it’s developed.”

However, not all of the Minuteman launch control centers’ aging hardware is an advantage. The analog phone systems, for example, often make it difficult for the missileers to communicate with each other or with their base. The Air Force commissioned studies on updating the ground-based missile force last year, and it’s preparing to spend $19 million this year on updates to the launch control centers. The military has also requested $600 million next year for further improvements.

Read the entire article here.

Image: Various floppy disks. Courtesy: George Chernilevsky, 2009 / Wikipedia.

Zentai Coming to a City Near You

The latest Japanese export may not become as ubiquitous as Pokemon or the Toyota Camry. However, aficionados of zentai seem to be increasing in number, and appearing outside the typical esoteric haunts such as clubs and Halloween parties. Though it may be a while before zentai outfits appear around the office.

From the Washington Post:

They meet on clandestine Internet forums. Or in clubs. Or sometimes at barbecue parties, where as many as 10 adherents gather every month to eat meat and frolic in an outfit that falls somewhere between a Power Ranger’s tunic and Spider-Man’s digs.

It’s called “zentai.” And in Japan, it can mean a lot of things. To 20-year-old Hokkyoku Nigo, it means liberation from the judgment and opinions of others. To a 22-year-old named Hanaka, it represents her lifelong fascination with superheroes. To a 36-year-old teacher named Nezumiko, it elicits something sexual. “I like to touch and stroke others and to be touched and stroked like this,” she told the AFP’s Harumi Ozawa.

But to most outsiders, zentai means exactly what it looks like: spandex body suits.

Where did this phenomenon come from and what does it mean? In a culture of unique displays — from men turning trucks into glowing light shows to women wearing Victorian-era clothing — zentai appears to be yet another oddity in a country well accustomed to them.

The trend can take on elements of prurience, however, and groups with names such as “zentai addict” and “zentai fetish” teem on Facebook. There are zentai ninjas. There are zentai Pokemon. There are zentai British flags and zentai American flags.

An organization called the Zentai Project, based in England, explains it as “a tight, colorful suit that transforms a normal person into amusement for all who see them. … The locals don’t know what to make of us, but the tourists love us and we get onto lots of tourist snaps — sometimes we can hardly walk 3 steps down the street before being stopped to pose for another picture.”

Though the trend is now apparently global, it was once just a group of Japanese climbing into skintight latex for unknown reasons.

“With my face covered, I cannot eat or drink like other customers,” Hokkyoku Nigo says in the AFP story. “I have led my life always worrying about what other people think of me. They say I look cute, gentle, childish or naive. I have always felt suffocated by that. But wearing this, I am just a person in a full body suit.”

Ikuo Daibo, a professor at Tokyo Mirai University, says wearing full body suits may reflect a sense of societal abandonment. People are acting out to define their individuality.

“In Japan,” he said, “many people feel lost; they feel unable to find their role in society. They have too many role models and cannot choose which one to follow.”

Read the entire article here.

Image courtesy of Google Search.

Good Mutations and Breathing

Stem cells — the factories that manufacture all our component body parts — may hold a key to divining why our bodies gradually break down as we age. A new body of research shows how the body’s population of blood stem cells mutates, and gradually dies, over a typical lifespan. Sometimes these mutations turn cancerous, sometimes not. Luckily for us, the research is centered on the blood samples of Hendrikje van Andel-Schipper — she died in 2005 at the age of 115, and donated her body to science. Her body showed a remarkable resilience — no hardening of the arteries and no deterioration of her brain tissue. When quizzed about the secret of her longevity, she once retorted, “breathing”.

From the New Scientist:

Death is the one certainty in life – a pioneering analysis of blood from one of the world’s oldest and healthiest women has given clues to why it happens.

Born in 1890, Hendrikje van Andel-Schipper was at one point the oldest woman in the world. She was also remarkable for her health, with crystal-clear cognition until she was close to death, and a blood circulatory system free of disease. When she died in 2005, she bequeathed her body to science, with the full support of her living relatives that any outcomes of scientific analysis – as well as her name – be made public.

Researchers have now examined her blood and other tissues to see how they were affected by age.

What they found suggests, as we could perhaps expect, that our lifespan might ultimately be limited by the capacity for stem cells to keep replenishing tissues day in day out. Once the stem cells reach a state of exhaustion that imposes a limit on their own lifespan, they themselves gradually die out and steadily diminish the body’s capacity to keep regenerating vital tissues and cells, such as blood.

Two little cells

In van Andel-Schipper’s case, it seemed that in the twilight of her life, about two-thirds of the white blood cells remaining in her body at death originated from just two stem cells, implying that most or all of the blood stem cells she started life with had already burned out and died.

“Is there a limit to the number of stem cell divisions, and does that imply that there’s a limit to human life?” asks Henne Holstege of the VU University Medical Center in Amsterdam, the Netherlands, who headed the research team. “Or can you get round that by replenishment with cells saved from earlier in your life?” she says.

The other evidence for the stem cell fatigue came from observations that van Andel-Schipper’s white blood cells had drastically worn-down telomeres – the protective tips on chromosomes that burn down like wicks each time a cell divides. On average, the telomeres on the white blood cells were 17 times shorter than those on brain cells, which hardly replicate at all throughout life.

The team could establish the number of white blood cell-generating stem cells by studying the pattern of mutations found within the blood cells. The pattern was so similar in all cells that the researchers could conclude that they all came from one of two closely related “mother” stem cells.

Point of exhaustion

“It’s estimated that we’re born with around 20,000 blood stem cells, and at any one time, around 1000 are simultaneously active to replenish blood,” says Holstege. During life, the number of active stem cells shrinks, she says, and their telomeres shorten to the point at which they die – a point called stem-cell exhaustion.

Holstege says the other remarkable finding was that the mutations within the blood cells were harmless – all resulted from mistaken replication of DNA during van Andel-Schipper’s life as the “mother” blood stem cells multiplied to provide clones from which blood was repeatedly replenished.

She says this is the first time patterns of lifetime “somatic” mutations have been studied in such an old and such a healthy person. The absence of mutations posing dangers of disease and cancer suggests that van Andel-Schipper had a superior system for repairing or aborting cells with dangerous mutations.

Read the entire article here.

Image: Hendrikje van Andel-Schipper, aged 113. Courtesy of Wikipedia.

Literally? Literally!

In everyday conversation the word “literally” is now as overused as the word “like” or the pause “um”. But it’s also thoroughly abused (figuratively) and misused (literally). Unfortunately for pedants and linguistic purists, the word has become an intensifier of sorts. So, while you’ll still have to resort to cringing and correcting your conversational partners — if you are that inclined — the next time they exclaim “… he was literally dying from laughter”, you have help online. A new web browser extension scans the page you’re on and replaces the word “literally” with “figuratively”. Now that’s really mind-blowing, literally.
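For the technically curious, the trick behind such an extension is simple enough to sketch. The snippet below is a hypothetical TypeScript content script of my own, illustrating the blind find-and-replace approach the Slate piece below describes; it is not Mike Walker's actual code.

```typescript
// Hypothetical content script: swap "literally" for "figuratively" in every
// text node on the page. Illustrative only; not the real extension's code.

function swapLiterally(root: Node): void {
  // Walk only text nodes so tags and attributes are never mangled.
  const walker = document.createTreeWalker(root, NodeFilter.SHOW_TEXT);
  const pattern = /\bliterally\b/gi;

  let node: Node | null;
  while ((node = walker.nextNode())) {
    const text = node.nodeValue ?? "";
    if (!pattern.test(text)) continue;
    pattern.lastIndex = 0; // reset after test() with the global flag
    // Preserve a leading capital letter ("Literally" becomes "Figuratively").
    node.nodeValue = text.replace(pattern, (match) =>
      match[0] === "L" ? "Figuratively" : "figuratively"
    );
  }
}

// Run once on the current page; a real extension would also watch for
// dynamically inserted content, for example with a MutationObserver.
swapLiterally(document.body);
```

In a packaged Chrome extension, a script like this would typically be registered as a content script in the manifest so that it runs on every page the browser loads, which is presumably how Walker's version hooks in.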

From Slate:

If you’re a cool-headed, fair-minded, forward-thinking descriptivist like my colleague David Haglund, it doesn’t bother you one bit that people often use the word “literally” when describing things figuratively.

If, on the other hand, you’re a cranky language bully like me, it figuratively bugs the crap out of you every time.

We pedants are waging a losing battle, of course. Even major dictionaries now recognize the use of “literally” as an intensifier for statements that are not literally true.

Fortunately, Yahoo Tech’s Alyssa Bereznak has run across a simple remedy for this galling inversion of the term’s original meaning. Built by a programmer named Mike Walker, it’s an extension for Google’s Chrome browser that replaces the word “literally” with “figuratively” on sites and articles across the Web, with deeply gratifying results.

It doesn’t work in every instance—tweets, for example, are immune to the extension’s magic, as are illustrations. But it works widely enough to put you in metaphorical stitches when you see some of the results. For instance, a quick Google News search for “literally” turns up the following headlines, modified by the browser extension to a state of unintentional accuracy:

  • The 2014 MTV Movie Awards Were Figuratively on Fire
  • 10 Things You Figuratively Do Not Have Time For
  • Momentum Is Figuratively the Next Starting Pitcher for LSU

Be warned, though: Walker’s widget does not distinguish between the literal and figurative uses of “literally.” So if you install it, you’ll also start seeing the word “figuratively” to describe things that are literally true, as in, “White Sox Rookie Abreu Figuratively Destroys a Baseball.” (The baseball was in fact destroyed.)

But hey, that’s no worse than the current state of affairs. Come to think of it, by the anti-prescriptivists’ logic, there’s nothing wrong with using “figuratively” to mean “literally,” as long as enough people do it. Anything can mean anything, literally—I mean figuratively!

If you’re signed into the Chrome browser, you can install the extension here. For those who want a browser extension that zaps hyperbole more broadly, try Alison Gianotto’s Downworthy tool, which performs similar operations on phrases like “will blow your mind” and “you won’t believe.”

Read the entire article here.

Google: The Standard Oil of Our Age

Google’s aim to organize the world’s information sounds benign enough. But delve a little deeper into its research and development efforts or witness its boundless encroachment into advertising, software, phones, glasses, cars, home automation, travel, internet services, artificial intelligence, robotics, online shopping (and so on), and you may get a more uneasy and prickly sensation. Is Google out to organize information or you? Perhaps it’s time to begin thinking about Google as a corporate hegemony, not quite a monopoly yet, but so powerful that counter-measures become warranted.

An open letter, excerpted below, from Mathias Döpfner, CEO of Axel Springer AG, does us all a service by raising the alarm.

From the Guardian:

Dear Eric Schmidt,

As you know, I am a great admirer of Google’s entrepreneurial success. Google’s employees are always extremely friendly to us and to other publishing houses, but we are not communicating with each other on equal terms. How could we? Google doesn’t need us. But we need Google. We are afraid of Google. I must state this very clearly and frankly, because few of my colleagues dare do so publicly. And as the biggest among the small, perhaps it is also up to us to be the first to speak out in this debate. You yourself speak of the new power of the creators, owners, and users.

In the long term I’m not so sure about the users. Power is soon followed by powerlessness. And this is precisely the reason why we now need to have this discussion in the interests of the long-term integrity of the digital economy’s ecosystem. This applies to competition – not only economic, but also political. As the situation stands, your company will play a leading role in the various areas of our professional and private lives – in the house, in the car, in healthcare, in robotronics. This is a huge opportunity and a no less serious threat. I am afraid that it is simply not enough to state, as you do, that you want to make the world a “better place”.

Google lists its own products, from e-commerce to pages from its own Google+ network, higher than those of its competitors, even if these are sometimes of less value for consumers and should not be displayed in accordance with the Google algorithm. It is not even clearly pointed out to the user that these search results are the result of self-advertising. Even when a Google service has fewer visitors than that of a competitor, it appears higher up the page until it eventually also receives more visitors.

You know very well that this would result in long-term discrimination against, and weakening of, any competition, meaning that Google would be able to develop its superior market position still further. And that this would further weaken the European digital economy in particular.

This also applies to the large and even more problematic set of issues concerning data security and data utilisation. Ever since Edward Snowden triggered the NSA affair, and ever since the close relations between major American online companies and the American secret services became public, the social climate – at least in Europe – has fundamentally changed. People have become more sensitive about what happens to their user data. Nobody knows as much about its customers as Google. Even private or business emails are read by Gmail and, if necessary, can be evaluated. You yourself said in 2010: “We know where you are. We know where you’ve been. We can more or less know what you’re thinking about.” This is a remarkably honest sentence. The question is: are users happy with the fact that this information is used not only for commercial purposes – which may have many advantages, yet a number of spooky negative aspects as well – but could end up in the hands of the intelligence services, and to a certain extent already has?

Google is sitting on the entire current data trove of humanity, like the giant Fafner in The Ring of the Nibelung: “Here I lie and here I hold.” I hope you are aware of your company’s special responsibility. If fossil fuels were the fuels of the 20th century, then those of the 21st century are surely data and user profiles. We need to ask ourselves whether competition can generally still function in the digital age, if data is so extensively concentrated in the hands of one party.

There is a quote from you in this context that concerns me. In 2009 you said: “If you have something that you don’t want anyone to know, maybe you shouldn’t be doing it in the first place.” The essence of freedom is precisely the fact that I am not obliged to disclose everything that I am doing, that I have a right to confidentiality and, yes, even to secrets; that I am able to determine for myself what I wish to disclose about myself. The individual right to this is what makes a democracy. Only dictatorships want transparent citizens instead of a free press.

Against this background, it greatly concerns me that Google – which has just announced the acquisition of drone manufacturer Titan Aerospace – has been seen for some time as being behind a number of planned enormous ships and floating working environments that can cruise and operate in the open ocean. What is the reason for this development? You don’t have to be a conspiracy theorist to find this alarming.

Historically, monopolies have never survived in the long term. Either they have failed as a result of their complacency, which breeds its own success, or they have been weakened by competition – both unlikely scenarios in Google’s case. Or they have been restricted by political initiatives.

Another way would be voluntary self-restraint on the part of the winner. Is it really smart to wait until the first serious politician demands the breakup of Google? Or even worse – until the people refuse to follow?

Sincerely yours,

Mathias Döpfner

Read the entire article here.

 

Water From Air

WARKAWATER

Ideas and innovations that solve a particular human hardship are worthy of reward and recognition. When the idea is also ingenious and simple it should be celebrated. Take the invention of industrial designers Arturo Vittori and Andreas Vogler. Fashioned from plant stalks and nylon mesh, their 30-foot-tall WarkaWater towers soak up moisture from the air for later collection — often up to 25 gallons of drinking water a day. When almost a quarter of the world’s population has poor access to daily potable water, this remarkable invention serves a genuine need.

From Smithsonian:

In some parts of Ethiopia, finding potable water is a six-hour journey.

People in the region spend 40 billion hours a year trying to find and collect water, says a group called the Water Project. And even when they find it, the water is often not safe, collected from ponds or lakes teeming with infectious bacteria, contaminated with animal waste or other harmful substances.

The water scarcity issue—which affects nearly 1 billion people in Africa alone—has drawn the attention of big-name philanthropists like actor and Water.org co-founder Matt Damon and Microsoft co-founder Bill Gates, who, through their respective nonprofits, have poured millions of dollars into research and solutions, coming up with things like a system that converts toilet water to drinking water and a “Re-invent the Toilet Challenge,” among others.

Critics, however, have their doubts about integrating such complex technologies in remote villages that don’t even have access to a local repairman. Costs and maintenance could render many of these ideas impractical.

“If the many failed development projects of the past 60 years have taught us anything,” wrote one critic, Toilets for People founder Jason Kasshe, in a New York Times editorial, “it’s that complicated, imported solutions do not work.”

Other low-tech inventions, like this life straw, aren’t as complicated, but still rely on users to find a water source.

It was this dilemma—supplying drinking water in a way that’s both practical and convenient—that served as the impetus for a new product called Warka Water, an inexpensive, easily-assembled structure that extracts gallons of fresh water from the air.

The invention from Arturo Vittori, an industrial designer, and his colleague Andreas Vogler doesn’t involve complicated gadgetry or feats of engineering, but instead relies on basic elements like shape and material and the ways in which they work together.

At first glance, the 30-foot-tall, vase-shaped towers, named after a fig tree native to Ethiopia, have the look and feel of a showy art installation. But every detail, from carefully-placed curves to unique materials, has a functional purpose.

The rigid outer housing of each tower is comprised of lightweight and elastic juncus stalks, woven in a pattern that offers stability in the face of strong wind gusts while still allowing air to flow through. A mesh net made of nylon or  polypropylene, which calls to mind a large Chinese lantern, hangs inside, collecting droplets of dew that form along the surface. As cold air condenses, the droplets roll down into a container at the bottom of the tower. The water in the container then passes through a tube that functions as a faucet, carrying the water to those waiting on the ground.

Using mesh to facilitate clean drinking water isn’t an entirely new concept. A few years back, an MIT student designed a fog-harvesting device with the material. But Vittori’s invention yields more water, at a lower cost, than some other concepts that came before it.

“[In Ethiopia], public infrastructures do not exist and building [something like] a well is not easy,” Vittori says of the country. “To find water, you need to drill in the ground very deep, often as much as 1,600 feet.  So it’s technically difficult and expensive. Moreover, pumps need electricity to run as well as access to spare parts in case the pump breaks down.”

So how would Warka Water’s low-tech design hold up in remote sub-Saharan villages? Internal field tests have shown that one Warka Water tower can supply more than 25 gallons of water throughout the course of a day, Vittori claims. He says because the most important factor in collecting condensation is the difference in temperature between nightfall and daybreak, the towers are proving successful even in the desert, where temperatures, in that time, can differ as much as 50 degrees Fahrenheit.

The structures, made from biodegradable materials, are easy to clean and can be erected without mechanical tools in less than a week. Plus, he says, “once locals have the necessary know-how, they will be able to teach other villages and communities to build the Warka.”

In all, it costs about $500 to set up a tower—less than a quarter of the cost of something like the Gates toilet, which costs about $2,200 to install and more to maintain. If the tower is mass produced, the price would be even lower, Vittori says. His team hopes to install two Warka Towers in Ethiopia by next year and is currently searching for investors who may be interested in scaling the water harvesting technology across the region.

Read the entire article here.

Image: WarkaWater Tower. Courtesy of Andreas Vogler and Arturo Vittori, WARKAWATER PROJECT / www.architectureandvision.com.

 

European Extremely Large Telescope

Rendering_of_the_E-ELT

When it is sited in the high mountains of Chile’s coastal desert, the European Extremely Large Telescope (or E-ELT) will be the biggest and the baddest telescope to date. With a mirror nearly 130 feet (39 meters) in diameter, the E-ELT will give observers unprecedented access to the vast panoramas of the cosmos. Astronomers are even confident that when it is fully operational, in about 2030, the telescope will be able to observe exoplanets directly, for the first time.

From the Observer:

Cerro Armazones is a crumbling dome of rock that dominates the parched peaks of the Chilean Coast Range north of Santiago. A couple of old concrete platforms and some rusty pipes, parts of the mountain’s old weather station, are the only hints that humans have ever taken an interest in this forbidding, arid place. Even the views look alien, with the surrounding boulder-strewn desert bearing a remarkable resemblance to the landscape of Mars.

Dramatic change is coming to Cerro Armazones, however – for in a few weeks, the 10,000ft mountain is going to have its top knocked off. “We are going to blast it with dynamite and then carry off the rubble,” says engineer Gird Hudepohl. “We will take about 80ft off the top of the mountain to create a plateau – and when we have done that, we will build the world’s biggest telescope there.”

Given the peak’s remote, inhospitable location that might sound an improbable claim – except for the fact that Hudepohl has done this sort of thing before. He is one of the European Southern Observatory’s most experienced engineers and was involved in the decapitation of another nearby mountain, Cerro Paranal, on which his team then erected one of the planet’s most sophisticated observatories.

The Paranal complex has been in operation for more than a decade and includes four giant instruments with eight-metre-wide mirrors – known as the Very Large Telescopes or VLTs – as well as control rooms and a labyrinth of underground tunnels linking its instruments. More than 100 astronomers, engineers and support staff work and live there. A few dozen metres below the telescopes, they have a sports complex with a squash court, an indoor football pitch, and a luxurious 110-room residence that has a central swimming pool and a restaurant serving meals and drinks around the clock. Built overlooking one of the world’s driest deserts, the place is an amazing oasis.

Now the European Southern Observatory, of which Britain is a key member state, wants Hudepohl and his team to repeat this remarkable trick and take the top off Cerro Armazones, which is 20km distant. Though this time they will construct an instrument so huge it will dwarf all the telescopes on Paranal put together, and any other telescope on the planet. When completed, the European Extremely Large Telescope (E-ELT) and its 39-metre mirror will allow astronomers to peer further into space and look further back into the history of the universe than any other astronomical device in existence. Its construction will push telescope-making to its limit, however. Its primary mirror will be made of almost 800 segments – each 1.4 metres in diameter but only a few centimetres thick – which will have to be aligned with microscopic precision.

It is a remarkable juxtaposition: in the midst of utter desolation, scientists have built giant machines engineered to operate with smooth perfection and are now planning to top this achievement by building an even more vast device. The question is: for what purpose? Why go to a remote wilderness in northern Chile and chop down peaks to make homes for some of the planet’s most complex scientific hardware?

The answer is straightforward, says Cambridge University astronomer Professor Gerry Gilmore. It is all about water. “The atmosphere here is as dry as you can get and that is critically important. Water molecules obscure the view from telescopes on the ground. It is like trying to peer through mist – for mist is essentially a suspension of water molecules in the air, after all, and they obscure your vision. For a telescope based at sea level that is a major drawback.

“However, if you build your telescope where the atmosphere above you is completely dry, you will get the best possible views of the stars – and there is nowhere on Earth that has air drier than this place. For good measure, the high-altitude winds blow in a smooth, laminar manner above Paranal – like slabs of glass – so images of stars remain remarkably steady as well.”

The view of the heavens here is close to perfect, in other words – as an evening stroll around the viewing platform on Paranal demonstrates vividly. During my visit, the Milky Way hung over the observatory like a single white sheet. I could see the four main stars of the Southern Cross; Alpha Centauri, whose unseen companion Proxima Centauri is the closest star to our solar system; the two Magellanic Clouds, satellite galaxies of our own Milky Way; and the Coalsack, an interstellar dust cloud that forms a striking silhouette against the starry Milky Way. None are visible in northern skies and none appear with such brilliance anywhere else on the planet.

Hence the decision to build this extraordinary complex of VLTs. At sunset, each one’s housing is opened and the four great telescopes are brought slowly into operation. Each machine is made to rotate and swivel, like football players stretching muscles before a match. Each housing is the size of a block of flats. Yet they move in complete silence, so precise is their engineering.

Read the entire article here.

Image: Architectural rendering of ESO’s planned European Extremely Large Telescope (E-ELT) shows the telescope at work, with its dome open and its record-setting 42-metre primary mirror pointed to the sky. Courtesy of the European Southern Observatory (ESO) / Wikipedia.

Peak Beard

google-search-beards

Followers of all things hirsute, particularly male facial hair, have recently declared “peak beard”. The declaration means that it’s no longer cool to be bearded (if you’re a man, anyway), since being bearded no longer represents a small, and hence very hip, minority. Does this mean our friends over at Duck Dynasty will have to don a clean-shaven look to maintain their ratings? Time will tell.

From the Guardian:

Hirsute men have been warned their attractiveness to potential partners may fade as facial hair becomes more prevalent, in a scenario researchers have called “peak beard”.

Research conducted by the University of NSW finds that, when people are confronted by a succession of bearded men, clean-shaven men become more attractive to them.

This process also works in reverse, with men with heavy stubble and full, Ned Kelly-style beards judged more attractive when present in a sea of hairless visages.

Researchers picked 1,453 bisexual or heterosexual women and 213 heterosexual men to take part in the study.

Participants were shown 36 images of men’s faces, with the first 24 pictures used to condition the subjects by showing them exclusively bearded or non-bearded men, or a mixture of the two.

The final 12 images then showed clean shaven or bearded men, with the participants ranking their attractiveness on a scale of minus four to four.

Researchers found the ranking of these men strongly depended upon the exposure of participants to bearded men prior to this. The more beards they’d already seen, the less attractive subsequent beards were, and vice versa with clean-shaven men.

This phenomenon is called “negative frequency-dependent sexual selection” and is present in several animal species, according to the UNSW team.

Researcher Robert Brooks told Guardian Australia the aim of the work was to look at the dynamics that drove the fashion of beards.

“There is a lot of faddishness with beards, they are on the way back and it’s interesting to look at that interaction with culture,” he said.

“It appears that beards gain an advantage when rare, but when they are in fashion and common, they are declared trendy and that attractiveness is over.”

Brooks conceded it was hard to tell how the experiment related to the real world, but said the fashion for beards might be reaching its zenith.

“The bigger the trend gets, the weaker the preference for beards and the tide will go out again,” he said. “We may well be at peak beard. Obviously, you will see more beards in Surry Hills than in Bondi, but I think we are near saturation point. This thing can’t get much bigger.

“These trends usually move in 30-year cycles from when they are first noticed but, with the internet, things are moving a lot faster.”

The researchers are now working on a larger, follow-up study that will look at the link between facial hair and masculinity.

“We still don’t really know the primary function of the beard,” Brooks said. “Some women are attracted to it, some are repelled. It is clear it is a sign of manliness, it makes men look older and also more aggressive. How much women like that depends, in a way, on how overtly masculine they like their men.”

Read the entire article hair (pun intended).
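The mechanism the researchers invoke, negative frequency-dependent selection, is easy to caricature in code. The toy simulation below is purely my own illustration with invented numbers, not the UNSW team’s model: beards are made more appealing when rare and less appealing when common, which is enough for a rare style to surge, overshoot, and swing back toward a mixed population, the to-and-fro Brooks describes.

```typescript
// Toy model of negative frequency-dependent preference for beards.
// All parameters are invented for illustration; this is not the UNSW study.

function simulateBeardCycles(generations: number): number[] {
  let beardedFraction = 0.1;        // start with beards rare
  const history: number[] = [];

  for (let t = 0; t < generations; t++) {
    history.push(beardedFraction);
    // Appeal of each look falls as it becomes more common.
    const beardAppeal = 1.0 - beardedFraction;
    const shavenAppeal = beardedFraction;
    // Men drift strongly toward whichever look is currently more appealing.
    const target = beardAppeal / (beardAppeal + shavenAppeal);
    beardedFraction += 0.8 * (target - beardedFraction);
  }
  return history;
}

// Print the trajectory: the bearded fraction climbs from 0.10, overshoots
// the 50/50 mix, and swings back toward it in damped oscillations.
console.log(
  simulateBeardCycles(20)
    .map((f) => f.toFixed(2))
    .join(" -> ")
);
```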

Image: Men with beards. Courtesy of Google Search.

Caveat Asterisk and Corporate Un-Ethics

Froot-Loops-Cereal-Bowl

We have to believe that most companies are in business to help us with their products and services, not hurt us. Yet, more and more enterprises are utilizing novel ways to shield themselves and their executives from the consequences and liabilities of shoddy and dangerous products and questionable business practices.

Witness the latest corporate practice: buried deep within a company’s privacy policy you may be surprised to find a clause stating that the company is not liable to you in any way if you have purchased one of its products, downloaded a coupon, or “liked” the company via a social network!

You have to admire the combined creativity of these corporate legal teams — who needs real product innovation with tangible consumer benefits when you can increase the corporate bottom-line through legal shenanigans that abrogate ethical responsibility.

So if you ever find a dead rodent in your next box of Cheerios, which you purchased with a $1-off coupon, you may be out of luck; and General Mills executives will be as happy as the families in their blue sky cereal commercials.

From the NYT:

Might downloading a 50-cent coupon for Cheerios cost you legal rights?

General Mills, the maker of cereals like Cheerios and Chex as well as brands like Bisquick and Betty Crocker, has quietly added language to its website to alert consumers that they give up their right to sue the company if they download coupons, “join” it in online communities like Facebook, enter a company-sponsored sweepstakes or contest or interact with it in a variety of other ways.

Instead, anyone who has received anything that could be construed as a benefit and who then has a dispute with the company over its products will have to use informal negotiation via email or go through arbitration to seek relief, according to the new terms posted on its site.

In language added on Tuesday after The New York Times contacted it about the changes, General Mills seemed to go even further, suggesting that buying its products would bind consumers to those terms.

“We’ve updated our Privacy Policy,” the company wrote in a thin, gray bar across the top of its home page. “Please note we also have new Legal Terms which require all disputes related to the purchase or use of any General Mills product or service to be resolved through binding arbitration.”

The change in legal terms, which occurred shortly after a judge refused to dismiss a case brought against the company by consumers in California, made General Mills one of the first, if not the first, major food companies to seek to impose what legal experts call “forced arbitration” on consumers.

“Although this is the first case I’ve seen of a food company moving in this direction, others will follow — why wouldn’t you?” said Julia Duncan, director of federal programs and an arbitration expert at the American Association for Justice, a trade group representing plaintiff trial lawyers. “It’s essentially trying to protect the company from all accountability, even when it lies, or say, an employee deliberately adds broken glass to a product.”

General Mills declined to make anyone available for an interview about the changes. “While it rarely happens, arbitration is an efficient way to resolve disputes — and many companies take a similar approach,” the company said in a statement. “We even cover the cost of arbitration in most cases. So this is just a policy update, and we’ve tried to communicate it in a clear and visible way.”

A growing number of companies have adopted similar policies over the years, especially after a 2011 Supreme Court decision, AT&T Mobility v. Concepcion, that paved the way for businesses to bar consumers claiming fraud from joining together in a single arbitration. The decision allowed companies to forbid class-action lawsuits with the use of a standard-form contract requiring that disputes be resolved through the informal mechanism of one-on-one arbitration.

Credit card and mobile phone companies have included such limitations on consumers in their contracts, and in 2008, the magazine Mother Jones published an article about a Whataburger fast-food restaurant that hung a sign on its door warning customers that simply by entering the premises, they agreed to settle disputes through arbitration.

Companies have continued to push for expanded protection against litigation, but legal experts said that a food company trying to limit its customers’ ability to litigate against it raised the stakes in a new way.

What if a child allergic to peanuts ate a product that contained trace amounts of nuts but mistakenly did not include that information on its packaging? Food recalls for mislabeling, including failures to identify nuts in products, are not uncommon.

“When you’re talking about food, you’re also talking about things that can kill people,” said Scott L. Nelson, a lawyer at Public Citizen, a nonprofit advocacy group. “There is a huge difference in the stakes, between the benefit you’re getting from this supposed contract you’re entering into by, say, using the company’s website to download a coupon, and the rights they’re saying you’re giving up. That makes this agreement a lot broader than others out there.”

Big food companies are concerned about the growing number of consumers filing class-action lawsuits against them over labeling, ingredients and claims of health threats. Almost every major gathering of industry executives has at least one session on fighting litigation.

Last year, General Mills paid $8.5 million to settle lawsuits over positive health claims made on the packaging of its Yoplait Yoplus yogurt, saying it did not agree with the plaintiff’s accusations but wanted to end the litigation. In December 2012, it agreed to settle another suit by taking the word “strawberry” off the packaging label for Strawberry Fruit Roll-Ups, which did not contain strawberries.

General Mills amended its legal terms after a judge in California on March 26 ruled against its motion to dismiss a case brought by two mothers who contended that the company deceptively marketed its Nature Valley products as “natural” when they contained processed and genetically engineered ingredients.

“The front of the Nature Valley products’ packaging prominently displays the term ‘100% Natural’ that could lead a reasonable consumer to believe the products contain only natural ingredients,” wrote the district judge, William H. Orrick.

He wrote that the packaging claim “appears to be false” because the products contain processed ingredients like high-fructose corn syrup and maltodextrin.

Read the entire article here.

Image: Bowl of cereal. Courtesy of Wikipedia / Evan-Amos.

It’s Official: The U.S. is an Oligarchy

US_Capitol_west_side

Until recently the term oligarchy was usually only applied to Russia and some ex-Soviet satellites. A new study out of Princeton and Northwestern universities makes a case for the oligarchic label right here in the United States. Jaded voters will yawn at this so-called news — most ordinary citizens have known for decades that the U.S. political system is thoroughly broken, polluted with money (“free speech” as the U.S. Supreme Court would deem it) and serves only special interests (on the right or the left).

From the Telegraph:

The US government does not represent the interests of the majority of the country’s citizens, but is instead ruled by those of the rich and powerful, a new study from Princeton and Northwestern Universities has concluded.

The report, entitled Testing Theories of American Politics: Elites, Interest Groups, and Average Citizens, used extensive policy data collected from between the years of 1981 and 2002 to empirically determine the state of the US political system.

After sifting through nearly 1,800 US policies enacted in that period and comparing them to the expressed preferences of average Americans (50th percentile of income), affluent Americans (90th percentile) and large special interests groups, researchers concluded that the United States is dominated by its economic elite.

The peer-reviewed study, which will be taught at these universities in September, says: “The central point that emerges from our research is that economic elites and organised groups representing business interests have substantial independent impacts on US government policy, while mass-based interest groups and average citizens have little or no independent influence.”

Researchers concluded that US government policies rarely align with the preferences of the majority of Americans, but do favour special interests and lobbying organisations: “When a majority of citizens disagrees with economic elites and/or with organised interests, they generally lose. Moreover, because of the strong status quo bias built into the US political system, even when fairly large majorities of Americans favour policy change, they generally do not get it.”

The positions of powerful interest groups are “not substantially correlated with the preferences of average citizens”, but the politics of average Americans and affluent Americans sometimes do overlap. This is merely a coincidence, the report says, with the interests of the average American being served almost exclusively when they also serve those of the richest 10 per cent.

The theory of “biased pluralism” that the Princeton and Northwestern researchers believe the US system fits holds that policy outcomes “tend to tilt towards the wishes of corporations and business and professional associations.”

Read more here.

Image: U.S. Capitol. Courtesy of Wikipedia.

Fourteen Years in Four Minutes

[tube]UH1x5aRtjSQ[/tube]

Dutch filmmaker Frans Hofmeester has made a beautiful and enduring timelapse portrait. Shot over a period of 14 years, the video shows his daughter growing up before our eyes. To create this momentous documentary work, Hofmeester filmed his daughter, Lotte, for 15 seconds every week from her birth onwards. This is a remarkable feat for both the filmmaker and his subject, and probably makes many of us wish we could have done the same. Hofmeester created a similar timelapse video of Lotte’s younger brother, Vince.

Read more on this story here.

Video courtesy of Frans Hofmeester.

Now Where Did I Put Those Keys?

key_chain

We all lose our car keys and misplace our cell phones. We leave umbrellas on public transport. We forget things at the office. We all do it — some more frequently than others. And it’s not merely a symptom of aging. Many younger people seem increasingly prone to losing their personal items, perhaps a characteristic of ever more fragmented, distracted and limited attention spans.

From the WSJ:

You’ve put your keys somewhere and now they appear to be nowhere, certainly not in the basket by the door they’re supposed to go in and now you’re 20 minutes late for work. Kitchen counter, night stand, book shelf, work bag: Wait, finally, there they are under the mail you brought in last night.

Losing things is irritating and yet we are a forgetful people. The average person misplaces up to nine items a day, and one-third of respondents in a poll said they spend an average of 15 minutes each day searching for items—cellphones, keys and paperwork top the list, according to an online survey of 3,000 people published in 2012 by a British insurance company.

Everyday forgetfulness isn’t a sign of a more serious medical condition like Alzheimer’s or dementia. And while it can worsen with age, minor memory lapses are the norm for all ages, researchers say.

Our genes are at least partially to blame, experts say. Stress, fatigue, and multitasking can exacerbate our propensity to make such errors. Such lapses can also be linked to more serious conditions like depression and attention-deficit hyperactivity disorders.

“It’s the breakdown at the interface of attention and memory,” says Daniel L. Schacter, a psychology professor at Harvard University and author of “The Seven Sins of Memory.”

That breakdown can occur in two spots: when we fail to activate our memory and encode what we’re doing—where we put down our keys or glasses—or when we try to retrieve the memory. When you encode a memory, the hippocampus, a central part of the brain involved in memory function, takes a snapshot which is preserved in a set of neurons, says Kenneth Norman, a psychology professor at Princeton University. Those neurons can be activated later with a reminder or cue.

It is important to pay attention when you put down an item, or during encoding. If your state of mind at retrieval is different than it was during encoding, that could pose a problem. Case in point: You were starving when you walked into the house and deposited your keys. When you then go to look for them later, you’re no longer hungry so the memory may be harder to access.

The act of physically and mentally retracing your steps when looking for lost objects can work. Think back to your state of mind when you walked into the house (Were you hungry?). “The more you can make your brain at retrieval like the way it was when you lay down that original memory trace,” the more successful you will be, Dr. Norman says.

In a recent study, researchers in Germany found that the majority of people surveyed about forgetfulness and distraction had a variation in the so-called dopamine D2 receptor gene (DRD2), leading to a higher incidence of forgetfulness. According to the study, 75% of people carry a variation that makes them more prone to forgetfulness.

“Forgetfulness is quite common,” says Sebastian Markett, a researcher in psychology neuroscience at the University of Bonn in Germany and lead author of the study currently in the online version of the journal Neuroscience Letters, where it is expected to be published soon.

The study was based on a survey filled out by 500 people who were asked questions about memory lapses, perceptual failures (failing to notice a stop sign) and psychomotor failures (bumping into people on the street). The individuals also provided a saliva sample for molecular genetic testing.

About half of the total variation of forgetfulness can be explained by genetic effects, likely involving dozens of gene variations, Dr. Markett says.

The buildup of what psychologists call proactive interference helps explain how we can forget where we parked the car when we park in the same lot but different spaces every day. Memory may be impaired by the buildup of interference from previous experiences so it becomes harder to retrieve the specifics, like which parking space, Dr. Schacter says.

A study conducted by researchers at the Salk Institute for Biological Studies in California found that the brain keeps track of similar but distinct memories (where you parked your car today, for example) in the dentate gyrus, part of the hippocampus. There the brain stores separate recordings of each environment and different groups of neurons are activated when similar but nonidentical memories are encoded and later retrieved. The findings appeared last year in the online journal eLife.

The best way to remember where you put something may be the most obvious: Find a regular spot for it and somewhere that makes sense, experts say. If it’s reading glasses, leave them by the bedside. Charge your phone in the same place. Keep a container near the door for keys or a specific pocket in your purse.

Read the entire article here.

Image: Leather key chain. Courtesy of Wikipedia / The Egyptian.

 

Second Amendment Redux

Retired U.S. Supreme Court Justice John Paul Stevens argues for a five-word change to the Second Amendment to the U.S. Constitution. His cogent argument is set forth in his essay, excerpted below, from his new book, “Six Amendments: How and Why We Should Change the Constitution.”

Stevens’ newly worded paragraph would read as follows:

A well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms when serving in the Militia shall not be infringed.

Sadly, for those of us who advocate gun control, any such change is highly unlikely during our lifetimes, so you can continue to add a further 30,000 bodies each year to the gun lobby’s books. The five words should have been inserted 200 years ago. It’s far too late now — and school massacres just aren’t enough to shake the sensibilities of most apathetic or paranoid Americans.

From the Washington Post:

Following the massacre of grammar-school children in Newtown, Conn., in December 2012, high-powered weapons have been used to kill innocent victims in more senseless public incidents. Those killings, however, are only a fragment of the total harm caused by the misuse of firearms. Each year, more than 30,000 people die in the United States in firearm-related incidents. Many of those deaths involve handguns.

The adoption of rules that will lessen the number of those incidents should be a matter of primary concern to both federal and state legislators. Legislatures are in a far better position than judges to assess the wisdom of such rules and to evaluate the costs and benefits that rule changes can be expected to produce. It is those legislators, rather than federal judges, who should make the decisions that will determine what kinds of firearms should be available to private citizens, and when and how they may be used. Constitutional provisions that curtail the legislative power to govern in this area unquestionably do more harm than good.

The first 10 amendments to the Constitution placed limits on the powers of the new federal government. Concern that a national standing army might pose a threat to the security of the separate states led to the adoption of the Second Amendment, which provides that “a well regulated Militia, being necessary to the security of a free State, the right of the people to keep and bear Arms, shall not be infringed.”

For more than 200 years following the adoption of that amendment, federal judges uniformly understood that the right protected by that text was limited in two ways: First, it applied only to keeping and bearing arms for military purposes, and second, while it limited the power of the federal government, it did not impose any limit whatsoever on the power of states or local governments to regulate the ownership or use of firearms. Thus, in United States v. Miller, decided in 1939, the court unanimously held that Congress could prohibit the possession of a sawed-off shotgun because that sort of weapon had no reasonable relation to the preservation or efficiency of a “well regulated Militia.”

When I joined the court in 1975, that holding was generally understood as limiting the scope of the Second Amendment to uses of arms that were related to military activities. During the years when Warren Burger was chief justice, from 1969 to 1986, no judge or justice expressed any doubt about the limited coverage of the amendment, and I cannot recall any judge suggesting that the amendment might place any limit on state authority to do anything.

Organizations such as the National Rifle Association disagreed with that position and mounted a vigorous campaign claiming that federal regulation of the use of firearms severely curtailed Americans’ Second Amendment rights. Five years after his retirement, during a 1991 appearance on “The MacNeil/Lehrer NewsHour,” Burger himself remarked that the Second Amendment “has been the subject of one of the greatest pieces of fraud, I repeat the word ‘fraud,’ on the American public by special interest groups that I have ever seen in my lifetime.”

In recent years two profoundly important changes in the law have occurred. In 2008, by a vote of 5 to 4, the Supreme Court decided in District of Columbia v. Heller that the Second Amendment protects a civilian’s right to keep a handgun in his home for purposes of self-defense. And in 2010, by another vote of 5 to 4, the court decided in McDonald v. Chicago that the due process clause of the 14th Amendment limits the power of the city of Chicago to outlaw the possession of handguns by private citizens. I dissented in both of those cases and remain convinced that both decisions misinterpreted the law and were profoundly unwise. Public policies concerning gun control should be decided by the voters’ elected representatives, not by federal judges.

In my dissent in the McDonald case, I pointed out that the court’s decision was unique in the extent to which the court had exacted a heavy toll “in terms of state sovereignty. . . . Even apart from the States’ long history of firearms regulation and its location at the core of their police powers, this is a quintessential area in which federalism ought to be allowed to flourish without this Court’s meddling. Whether or not we can assert a plausible constitutional basis for intervening, there are powerful reasons why we should not do so.”

“Across the Nation, States and localities vary significantly in the patterns and problems of gun violence they face, as well as in the traditions and cultures of lawful gun use. . . . The city of Chicago, for example, faces a pressing challenge in combating criminal street gangs. Most rural areas do not.”

In response to the massacre of grammar-school students at Sandy Hook Elementary School, some legislators have advocated stringent controls on the sale of assault weapons and more complete background checks on purchasers of firearms. It is important to note that nothing in either the Heller or the McDonald opinion poses any obstacle to the adoption of such preventive measures.

First, the court did not overrule Miller. Instead, it “read Miller to say only that the Second Amendment does not protect those weapons not typically possessed by law-abiding citizens for lawful purposes, such as short-barreled shotguns.” On the preceding page of its opinion, the court made it clear that even though machine guns were useful in warfare in 1939, they were not among the types of weapons protected by the Second Amendment because that protected class was limited to weapons in common use for lawful purposes such as self-defense. Even though a sawed-off shotgun or a machine gun might well be kept at home and be useful for self-defense, neither machine guns nor sawed-off shotguns satisfy the “common use” requirement.

Read the entire article here.

 

 

Nightmares And Art

Sleep-Nicolas-Bruno

You probably believe that your nightmares are best left locked in a dark closet. On the other hand, artist Nicolas Bruno believes they make good art.

See more of Bruno’s nightmarish images here.

From the Guardian:

Sufferer of sleep paralysis Nicolas Bruno transforms his terrifying dreams into photographic realities. The characters depicted are often stuck within their scenes, unable to escape. The 20-year-old New York native explains: “Sleep paralysis is an experience in which the individual becomes conscious and is left immobile in a state between being awake and asleep.”

Image courtesy of Nicolas Bruno / Hot Spot Media.

 

New Street Artists Meet Old Masters

Some enterprising and talented street artists have re-imagined works by old masters, such as Rembrandt. An example below: Judith with the Head of Holofernes by the 17th-century artist Cristofano Allori, followed by a contemporary rendition courtesy of street artist Dscreet, 2013.

Judith-Allori

Judith with the Head of Holofernes, 17th century, Cristofano Allori. Photograph: Dulwich Picture Gallery.

 

Judith-Discreet

Judith with the Head of Holofernes (2013), Dscreet, formerly on Blackwater Street/153 Lordship Lane, London, SE22.

See more juxtaposed, old and new images here.

Images courtesy of Dulwich Picture Gallery and Dscreet (photograph by Ingrid Beazley), respectively.

It’s Happening Now

Greenland-ice

There is one thing wrong with the dystopian future painted by climate change science — it’s not in our future; it’s happening now.

From the New York Times:

Climate change is already having sweeping effects on every continent and throughout the world’s oceans, scientists reported on Monday, and they warned that the problem was likely to grow substantially worse unless greenhouse emissions are brought under control.

The report by the Intergovernmental Panel on Climate Change, a United Nations group that periodically summarizes climate science, concluded that ice caps are melting, sea ice in the Arctic is collapsing, water supplies are coming under stress, heat waves and heavy rains are intensifying, coral reefs are dying, and fish and many other creatures are migrating toward the poles or in some cases going extinct.

The oceans are rising at a pace that threatens coastal communities and are becoming more acidic as they absorb some of the carbon dioxide given off by cars and power plants, which is killing some creatures or stunting their growth, the report found.

Organic matter frozen in Arctic soils since before civilization began is now melting, allowing it to decay into greenhouse gases that will cause further warming, the scientists said. And the worst is yet to come, the scientists said in the second of three reports that are expected to carry considerable weight next year as nations try to agree on a new global climate treaty.

In particular, the report emphasized that the world’s food supply is at considerable risk — a threat that could have serious consequences for the poorest nations.

“Nobody on this planet is going to be untouched by the impacts of climate change,” Rajendra K. Pachauri, chairman of the intergovernmental panel, said at a news conference here on Monday presenting the report.

The report was among the most sobering yet issued by the scientific panel. The group, along with Al Gore, was awarded the Nobel Peace Prize in 2007 for its efforts to clarify the risks of climate change. The report is the final work of several hundred authors; details from the drafts of this and of the last report in the series, which will be released in Berlin in April, leaked in the last few months.

The report attempts to project how the effects will alter human society in coming decades. While the impact of global warming may actually be moderated by factors like economic or technological change, the report found, the disruptions are nonetheless likely to be profound. That will be especially so if emissions are allowed to continue at a runaway pace, the report said.

It cited the risk of death or injury on a wide scale, probable damage to public health, displacement of people and potential mass migrations.

“Throughout the 21st century, climate-change impacts are projected to slow down economic growth, make poverty reduction more difficult, further erode food security, and prolong existing and create new poverty traps, the latter particularly in urban areas and emerging hot spots of hunger,” the report declared.

The report also cited the possibility of violent conflict over land, water or other resources, to which climate change might contribute indirectly “by exacerbating well-established drivers of these conflicts such as poverty and economic shocks.”

The scientists emphasized that climate change is not just a problem of the distant future, but is happening now.

Studies have found that parts of the Mediterranean region are drying out because of climate change, and some experts believe that droughts there have contributed to political destabilization in the Middle East and North Africa.

In much of the American West, mountain snowpack is declining, threatening water supplies for the region, the scientists said in the report. And the snow that does fall is melting earlier in the year, which means there is less melt water to ease the parched summers. In Alaska, the collapse of sea ice is allowing huge waves to strike the coast, causing erosion so rapid that it is already forcing entire communities to relocate.

“Now we are at the point where there is so much information, so much evidence, that we can no longer plead ignorance,” Michel Jarraud, secretary general of the World Meteorological Organization, said at the news conference.

The report was quickly welcomed in Washington, where President Obama is trying to use his executive power under the Clean Air Act and other laws to impose significant new limits on the country’s greenhouse emissions. He faces determined opposition in Congress.

“There are those who say we can’t afford to act,” Secretary of State John Kerry said in a statement. “But waiting is truly unaffordable. The costs of inaction are catastrophic.”

Amid all the risks the experts cited, they did find a bright spot. Since the intergovernmental panel issued its last big report in 2007, it has found growing evidence that governments and businesses around the world are making extensive plans to adapt to climate disruptions, even as some conservatives in the United States and a small number of scientists continue to deny that a problem exists.

“I think that dealing effectively with climate change is just going to be something that great nations do,” said Christopher B. Field, co-chairman of the working group that wrote the report and an earth scientist at the Carnegie Institution for Science in Stanford, Calif. Talk of adaptation to global warming was once avoided in some quarters, on the ground that it would distract from the need to cut emissions. But the past few years have seen a shift in thinking, including research from scientists and economists who argue that both strategies must be pursued at once.

Read the entire article here.

Image: Greenland ice melt. Courtesy of Christine Zenino / Smithsonian.

No Work Past 6pm. C’est La Vie

les-deux-magots

Many westerners either love or hate the French. But you have to hand it to them: where Americans love to work, the French, well, just love to do other stuff.

Famous for its maximum 35-hour work week enacted in 1999, the country has now added another restriction on employer demands. Under a new, legally enforceable agreement between employers and unions representing more than a million white-collar workers, bosses may no longer require those office employees to check computers, tablets or smartphones after 6pm. So, while the Brits must be whining that their near neighbors have gained yet another enviable perk, Americans must be leaving the country in droves. After all, 6pm is merely a signal that the workday is only half over in the United States. Mind you, the French do seem to live longer. Sacré bleu.

From the Independent:

It’s an international version of the postcode lottery. The dateline lottery, if you like, which means that if you are born in Limoges rather than Lancaster, you’re likely to live longer. The 2013 list of life expectancy compiled by the World Health Organisation has France in 13th position and the United Kingdom way behind in 29th spot.

The average life expectancy for these two countries, separated only by 23 miles of waterway, is 82.3 years and 81 years respectively. While it may not seem much of a difference at this remove, it’s something those Britons who are approaching their 81st birthdays might not be feeling too cheerful about.

We are repeatedly told that it is to do with the French diet, all that olive oil and fresh fruit and a glass of red wine with meals, which wards off heart disease. Anyone who’s been to provincial France, however, and tried to buy something from a shop between noon and 3pm, or, depending on where you are, on a Monday, Tuesday or Wednesday afternoon, might have stumbled upon the real reason for the greater longevity of the French. This is a place that still believes in half-day closing and taking lunch breaks. This is a country that has a very different attitude towards work from some of its close Northern European neighbours. And the chances are that, if your work-life balance is tilted more towards life, you are going to live longer.

France is the only country in the world to have adopted a 35-hour working week and this is strictly enforced. So much so that, yesterday, an agreement was signed between bosses and unions representing more than a million white-collar employees that would strike the average British worker as an edict from Cloud Cuckoo Land. It is a legally enforceable deal that means workers should not be contacted once they have left the office. It is as if the smartphone had never been invented (and yes, I know, many of us might hanker for a return to those days).

It’s rather ironic that French businesses in the technology sector will not be allowed to urge their employees to check emails once they’ve done their day’s work, and the unions will from now on be measuring what they are neatly calling “digital working time”.

How quaint these ideas seem to us. Heaven only knows what the average British working week would be if digital hours were taken into consideration. No matter what time of the day or night, whatever we may be doing in our leisure hours, we are only a ping away from being back at a virtual desk. I rarely have dinner with anyone these days who isn’t attached to their smartphone, waiting for a pause in the conversation so they can check their emails. Not good for digestion, not good for quality of life.

Here’s the thing, too. French productivity levels outstrip those of Britain and Germany, and French satisfaction with their quality of life is above the OECD average. No wonder, we may say. We’d all like to take a couple of hours off for lunch, washed down with a nice glass of Côtes du Rhône, and then switch our phones off as soon as we leave work. It’s just that our bosses won’t let us.

Read the entire article here (before 6pm if you’re in France at a work computer).

I Don’t Know, But I Like What I Like: The New Pluralism

choice

In an insightful opinion piece, excerpted below, a millennial wonders if our fragmented and cluttered, information-rich society has damaged pluralism by turning action into indecision. Even aesthetic preferences come to be so laden with judgmental baggage that expressing a preference for one type of art, or car, or indeed cereal, seems to become an impossible conundrum for many born in the mid-1980s or later. So, a choice becomes a way to alienate those not chosen — when did selecting a cereal become such an onerous exercise in political correctness and moral relativism?

From the New York Times:

Critics of the millennial generation, of which I am a member, consistently use terms like “apathetic,” “lazy” and “narcissistic” to explain our tendency to be less civically and politically engaged. But what these critics seem to be missing is that many millennials are plagued not so much by apathy as by indecision. And it’s not surprising: Pluralism has been a large influence on our upbringing. While we applaud pluralism’s benefits, widespread enthusiasm has overwhelmed desperately needed criticism of its side effects.

By “pluralism,” I mean a cultural recognition of difference: individuals of varying race, gender, religious affiliation, politics and sexual preference, all exalted as equal. In recent decades, pluralism has come to be an ethical injunction, one that calls for people to peacefully accept and embrace, not simply tolerate, differences among individuals. Distinct from the free-for-all of relativism, pluralism encourages us (in concept) to support our own convictions while also upholding an “energetic engagement with diversity,” as Harvard’s Pluralism Project suggested in 1991. Today, paeans to pluralism continue to sound throughout the halls of American universities, private institutions, left-leaning households and influential political circles.

However, pluralism has had unforeseen consequences. The art critic Craig Owens once wrote that pluralism is not a “recognition, but a reduction of difference to absolute indifference, equivalence, interchangeability.” Some millennials who were greeted by pluralism in this battered state are still feeling its effects. Unlike those adults who encountered pluralism with their beliefs close at hand, we entered the world when truth-claims and qualitative judgments were already on trial and seemingly interchangeable. As a result, we continue to struggle when it comes to decisively avowing our most basic convictions.

Those of us born after the mid-1980s whose upbringing included a liberal arts education and the fruits of a fledgling World Wide Web have grown up (and are still growing up) with an endlessly accessible stream of texts, images and sounds from far-reaching times and places, much of which was unavailable to humans for all of history. Our most formative years include not just the birth of the Internet and the ensuing accelerated global exchange of information, but a new orthodoxy of multiculturalist ethics and “political correctness.”

These ideas were reinforced in many humanities departments in Western universities during the 1980s, where facts and claims to objectivity were eagerly jettisoned. Even “the canon” was dislodged from its historically privileged perch, and since then, many liberal-minded professors have avoided opining about “good” literature or “high art” to avoid reinstating an old hegemony. In college today, we continue to learn about the byproducts of absolute truths and intractable forms of ideology, which historically seem inextricably linked to bigotry and prejudice.

For instance, a student in one of my English classes was chastised for his preference for Shakespeare over the Haitian-American writer Edwidge Danticat. The professor challenged the student to apply a more “disinterested” analysis to his reading so as to avoid entangling himself in a misinformed gesture of “postcolonial oppression.” That student stopped raising his hand in class.

I am not trying to tackle the challenge as a whole or indict contemporary pedagogies, but I have to ask: How does the ethos of pluralism inside universities impinge on each student’s ability to make qualitative judgments outside of the classroom, in spaces of work, play, politics or even love?

In 2004, the French sociologist of science Bruno Latour intimated that the skeptical attitude which rebuffs claims to absolute knowledge might have had a deleterious effect on the younger generation: “Good American kids are learning the hard way that facts are made up, that there is no such thing as natural, unmediated, unbiased access to truth, that we are always prisoners of language, that we always speak from a particular standpoint, and so on.” Latour identified a condition that resonates: Our tenuous claims to truth have not simply been learned in university classrooms or in reading theoretical texts but reinforced by the decentralized authority of the Internet. While trying to form our fundamental convictions in this dizzying digital and intellectual global landscape, some of us are finding it increasingly difficult to embrace qualitative judgments.

Matters of taste in music, art and fashion, for example, can become a source of anxiety and hesitation. While clickable ways of “liking” abound on the Internet, personalized avowals of taste often seem treacherous today. Admittedly, many millennials (and nonmillennials) might feel comfortable simply saying, “I like what I like,” but some of us find ourselves reeling in the face of choice. To affirm a preference for rap over classical music, for instance, implicates the well-meaning millennial in a web of judgments far beyond his control. For the millennial generation, as a result, confident expressions of taste have become more challenging, as aesthetic preference is subjected to relentless scrutiny.

Philosophers and social theorists have long weighed in on this issue of taste. Pierre Bourdieu claimed that an “encounter with a work of art is not ‘love at first sight’ as is generally supposed.” Rather, he thought “tastes” function as “markers of ‘class.’ ” Theodor Adorno and Max Horkheimer argued that aesthetic preference could be traced along socioeconomic lines and reinforce class divisions. To dislike cauliflower is one thing. But elevating the work of one writer or artist over another has become contested territory.

This assured expression of “I like what I like,” when strained through pluralist-inspired critical inquiry, deteriorates: “I like what I like” becomes “But why do I like what I like? Should I like what I like? Do I like it because someone else wants me to like it? If so, who profits and who suffers from my liking what I like?” and finally, “I am not sure I like what I like anymore.” For a number of us millennials, commitment to even seemingly simple aesthetic judgments has become shot through with indecision.

Read the entire article here.

A Case for Slow Reading

With 24/7 infotainment available to us through any device, anywhere, it is more than likely that these immense torrents of competing words, images and sounds will have an effect on our reading. This is particularly evident online, where consumers of information are increasingly scanning and skimming — touching only the bare surface of an article — before clicking a link and moving elsewhere (and so on) across the digital ocean. The fragmentation of this experience is actually rewiring our brains and, as some researchers suggest, perhaps not for the best.

From the Washington Post:

Claire Handscombe has a commitment problem online. Like a lot of Web surfers, she clicks on links posted on social networks, reads a few sentences, looks for exciting words, and then grows restless, scampering off to the next page she probably won’t commit to.

“I give it a few seconds — not even minutes — and then I’m moving again,” says Handscombe, a 35-year-old graduate student in creative writing at American University.

But it’s not just online anymore. She finds herself behaving the same way with a novel.

“It’s like your eyes are passing over the words but you’re not taking in what they say,” she confessed. “When I realize what’s happening, I have to go back and read again and again.”

To cognitive neuroscientists, Handscombe’s experience is the subject of great fascination and growing alarm. Humans, they warn, seem to be developing digital brains with new circuits for skimming through the torrent of information online. This alternative way of reading is competing with traditional deep reading circuitry developed over several millennia.

“I worry that the superficial way we read during the day is affecting us when we have to read with more in-depth processing,” said Maryanne Wolf, a Tufts University cognitive neuroscientist and the author of “Proust and the Squid: The Story and Science of the Reading Brain.”

If the rise of nonstop cable TV news gave the world a culture of sound bites, the Internet, Wolf said, is bringing about an eye byte culture. Time spent online — on desktop and mobile devices — was expected to top five hours per day in 2013 for U.S. adults, according to eMarketer, which tracks digital behavior. That’s up from three hours in 2010.

Word lovers and scientists have called for a “slow reading” movement, taking a branding cue from the “slow food” movement. They are battling not just cursory sentence galloping but the constant social network and e-mail temptations that lurk on our gadgets — the bings and dings that interrupt “Call me Ishmael.”

Researchers are working to get a clearer sense of the differences between online and print reading — comprehension, for starters, seems better with paper — and are grappling with what these differences could mean not only for enjoying the latest Pat Conroy novel but for understanding difficult material at work and school. There is concern that young children’s affinity for, and often mastery of, their parents’ devices could stunt the development of deep reading skills.

The brain is the innocent bystander in this new world. It just reflects how we live.

“The brain is plastic its whole life span,” Wolf said. “The brain is constantly adapting.”

Wolf, one of the world’s foremost experts on the study of reading, was startled last year to discover her brain was apparently adapting, too. After a day of scrolling through the Web and hundreds of e-mails, she sat down one evening to read Hermann Hesse’s “The Glass Bead Game.”

“I’m not kidding: I couldn’t do it,” she said. “It was torture getting through the first page. I couldn’t force myself to slow down so that I wasn’t skimming, picking out key words, organizing my eye movements to generate the most information at the highest speed. I was so disgusted with myself.”

Adapting to read

The brain was not designed for reading. There are no genes for reading like there are for language or vision. But spurred by the emergence of Egyptian hieroglyphics, the Phoenician alphabet, Chinese paper and, finally, the Gutenberg press, the brain has adapted to read.

Before the Internet, the brain read mostly in linear ways — one page led to the next page, and so on. Sure, there might be pictures mixed in with the text, but there didn’t tend to be many distractions. Reading in print even gave us a remarkable ability to remember where key information was in a book simply by the layout, researchers said. We’d know a protagonist died on the page with the two long paragraphs after the page with all that dialogue.

The Internet is different. With so much information, hyperlinked text, videos alongside words and interactivity everywhere, our brains form shortcuts to deal with it all — scanning, searching for key words, scrolling up and down quickly. This is nonlinear reading, and it has been documented in academic studies. Some researchers believe that for many people, this style of reading is beginning to invade when dealing with other mediums as well.

“We’re spending so much time touching, pushing, linking, scrolling and jumping through text that when we sit down with a novel, your daily habits of jumping, clicking, linking is just ingrained in you,” said Andrew Dillon, a University of Texas professor who studies reading. “We’re in this new era of information behavior, and we’re beginning to see the consequences of that.”

Brandon Ambrose, a 31-year-old Navy financial analyst who lives in Alexandria, knows of those consequences.

His book club recently read “The Interestings,” a best-seller by Meg Wolitzer. When the club met, he realized he had missed a number of the book’s key plot points. It hit him that he had been scanning for information about one particular aspect of the book, just as he might scan for one particular fact on his computer screen, where he spends much of his day.

“When you try to read a novel,” he said, “it’s almost like we’re not built to read them anymore, as bad as that sounds.”

Ramesh Kurup noticed something even more troubling. Working his way recently through a number of classic authors — George Eliot, Marcel Proust, that crowd — Kurup, 47, discovered that he was having trouble reading long sentences with multiple, winding clauses full of background information. Online sentences tend to be shorter, and the ones containing complicated information tend to link to helpful background material.

“In a book, there are no graphics or links to keep you on track,” Kurup said.

It’s easier to follow links, he thinks, than to keep track of so many clauses in page after page of long paragraphs.


Read the entire article here (but don’t click anywhere else).

Beauty Of and In Numbers

golden-ratio

Mathematics seems to explain and underlie much of our modern world: manufacturing, exploration, transportation, logistics, healthcare, technology — all depend on numbers in one form or another. So it should come as no surprise that there is a mathematical formula that describes our notions of beauty. This seems counter-intuitive, since beauty is a very subjective experience — one person’s colorful, blotted mess is another’s Jackson Pollock masterpiece. Yet mathematicians have known for some time that certain proportions are more frequently characterized as beautiful than others. Known as the golden ratio, this proportion has long been exploited by architects, designers and artists to render their works more appealing.

From Wired:

Mathematical concepts can be difficult to grasp, but given the right context they can help explain some of the world’s biggest mysteries. For instance, what is it about a sunflower that makes it so pleasing to look at? Or why do I find the cereal box-shaped United Nations building in New York City to be so captivating?

Beauty may very well be subjective, but there’s thought to be mathematical reasoning behind why we’re attracted to certain shapes and objects. Called the golden ratio, this theory states there’s a recurring proportion of arrangement that lends certain things their beauty. Represented as an equation: a/b = (a+b)/a, the golden ratio is all around us—conical sea shells, human faces, flower petals, buildings—we just don’t always know we’re looking at it. In Golden Meaning, a new book from London publisher GraphicDesign&, 55 designers aim to demystify the golden ratio using clever illustrations and smart graphic design.

GraphicDesign& founders Lucienne Roberts and Rebecca Wright partnered up with math evangelist Alex Bellos to develop the book, with the main goal of making math accessible through design. “We want this to be a useful tool to demonstrate something that often makes people anxious,” explains Roberts. “We hope it’s as interesting to people who are interested in math as it is to the people who are interested in the visual.”

Each designer came at the problem from a different angle, but in order to appreciate the cleverness found in the book, it’s important to have a little background on the golden mean. Bellos uses this line to illustrate the concept at its most basic.

In Golden Meaning he writes: “The line is separated into two sections in such a way that the ratio of the whole line to the larger section is equal to the ratio of the larger section to the smaller section.” This ratio ends up being 1.618.
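For readers who want to see where that number comes from, here is a short worked derivation (ours, not the book’s): call the larger section a, the smaller b, and their common ratio φ. The defining condition above then fixes φ exactly:

\[
\frac{a+b}{a} = \frac{a}{b} = \varphi
\quad\Longrightarrow\quad
1 + \frac{1}{\varphi} = \varphi
\quad\Longrightarrow\quad
\varphi^{2} - \varphi - 1 = 0
\quad\Longrightarrow\quad
\varphi = \frac{1+\sqrt{5}}{2} \approx 1.618
\]

Only the positive root makes sense for lengths, which is why the same 1.618 turns up regardless of the units used.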

Salvador Dali and Le Corbusier have used the golden mean as a guiding principle in their work, the Taj Mahal was designed with it in mind, and it’s thought that many of the faces of attractive people follow these proportions. The golden ratio then is essentially a formula for beauty.

With this in mind, Roberts and Wright gave designers a simple brief: To explore, explain and communicate the golden ratio however they see fit. There’s a recipe for golden bars that requires bakers to parcel out ingredients based on the ratio instead of exact measurements, and an illustration that shows a bottle of wine being poured into glasses using the ratio. The book itself is actually a golden rectangle. “You get it much more than looking at an equation,” says Roberts.

A particular favorite shows two side-by-side images of British designer Oli Kellett. On the left is his normal face; on the right is the same face after he rearranged his features in accordance with the golden ratio. So is he really more beautiful after his mathematical surgery? “We liked him as he is,” says Roberts. “In a way it disproves the theory.”

Read the entire article here.

Image: Several examples of the golden ratio at work, from the book Golden Meaning by GraphicDesign&. Courtesy of GraphicDesign&.

Tales From the Office: I Hate My Job

cubicles

It is no coincidence that I post this article on a Monday. After all, it’s the most loathsome day of the week according to most people this side of the galaxy. All because of the very human invention known as work.

Some present-day Bartlebys (heirs to Melville’s scrivener) are taking up arms and rising up against the man. A few human gears in the vast corporate machine are no longer content to suck up to the boss or accept every demand from the corner office — take the recent case of a Manhattan court stenographer.

From the Guardian:

If you want a vision of the future, imagine a wage slave typing: “I hate my job. I hate my job. I hate my job,” on a keyboard, for ever. That’s what a Manhattan court typist is accused of doing, having been fired from his post two years ago, after jeopardising upwards of 30 trials, according to the New York Post. Many of the court transcripts were “complete gibberish” as the stenographer was allegedly suffering the effects of alcohol abuse, but the one that has caught public attention contains the phrase “I hate my job” over and over again. Officials are reportedly struggling to mitigate the damage, and the typist now says he’s in recovery, but it’s worth considering how long it took the court officials to realise he hadn’t been taking proper notes at all.

You can’t help but feel a small pang of joy at part of the story, though. Surely everyone, at some point, has longed, but perhaps not dared, to do the same. In a dreary Coventry bedsit in 2007, I read Herman Melville’s Bartleby the Scrivener, the tale of a new employee who calmly refuses to do anything he is paid to do, to the complete bafflement of his boss, and found myself thinking in wonder: “This is the greatest story I have ever read.” No wonder it still resonates. Who hasn’t sat in their office, and felt like saying to their bosses: “I would prefer not to,” when asked to stuff envelopes or run to the post office?

For some bizarre reason, it’s still taboo to admit that most jobs are unspeakably dull. On application forms, it’s anathema to write: “Reason for leaving last job: hated it”, and “Reason for applying for this post: I like money.” The fact that so many people gleefully shared this story shows that many of us, deep down, harbour a suspicion that our jobs aren’t necessarily what we want to be doing for the rest of our lives. A lot of us aren’t always happy and fulfilled at work, and aren’t always completely productive.

Dreaming of turning to our boss and saying: “I would prefer not to,” or spending an afternoon typing “I hate my job. I hate my job. I hate my job” into Microsoft Word seems like a worthy way of spending the time. And, as with the court typist, maybe people wouldn’t even notice. In one of my workplaces, before a round of redundancies, on my last day my manager piled yet more work on to my desk and said yet again that she was far too busy to do her invoices. With nothing to lose, I pointed out that she had a large plate glass window behind her, so for the entire length of my temp job, I’d been able to see that she spent most of the day playing Spider Solitaire.

Howard Beale’s rant in Network, as caricaturish as it is cathartic, strikes a nerve too: there’s something endlessly satisfying in fantasising about pushing your computer over, throwing your chair through the window and telling your most hated colleagues what you’ve always thought about them. But instead we keep it bottled up, go to the pub and grind our teeth. Still, here’s to the modern-day Bartlebys.

Read the entire article here.

Image: Office cubicles. Courtesy of Nomorecubes.


Dump Arial. Garamond is Cheaper and Less Dull

ArialMTsp.svg

Not only is the Arial font dreadfully sleep-inducing — most corporate PowerPoint presentations live and breathe Arial — it’s also expensive. Print a document suffused with Arial and its variants and it will cost you more in ink. So, jettison Arial for sleeker typefaces like Century Gothic or Garamond; besides, they’re prettier too!

A fascinating body of research by an 8th-grader (14 years old) from Pittsburgh shows that the U.S. government could save around $400 million per year by moving away from Arial to a thinner, less thirsty typeface. Interestingly enough, researchers have also found that readers tend to retain more from documents set in more esoteric fonts versus simple typefaces such as Arial and Helvetica.

From the Guardian:

In what can only be described as an impressive piece of research, a Pittsburgh schoolboy has calculated that the US state and federal governments could save getting on for $400m (£240m) a year by changing the typeface they use for printed documents.

Shocked by the number of teachers’ handouts he was getting at his new school, 14-year-old Suvir Mirchandani – having established that ink represents up to 60% of the cost of a printed page and is, ounce for ounce, twice as expensive as Chanel No 5 – embarked on a cost-benefit analysis of a range of different typefaces, CNN reports.

He discovered that by switching to Garamond, whose thin, elegant strokes were designed in the 16th century by the French publisher Claude Garamond, his school district could reduce its ink consumption by 24%, saving as much as $21,000 annually. On that basis, he extrapolated, the federal and state governments could economise $370m (£222m) between them.
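To make the extrapolation concrete, here is a minimal back-of-the-envelope sketch of the kind of calculation involved. The 60% ink share and 24% ink reduction are the article’s figures; the annual printing budgets in the example are hypothetical placeholders, chosen only to show how a district-level estimate scales up to a government-level one.

```python
# Back-of-the-envelope ink-savings estimate (illustrative only).
# Article figures: ink is up to 60% of the cost of a printed page,
# and switching to Garamond cuts ink consumption by roughly 24%.
INK_SHARE_OF_PRINT_COST = 0.60
GARAMOND_INK_REDUCTION = 0.24

def annual_ink_savings(annual_print_spend: float) -> float:
    """Dollars saved per year if ink use falls by the stated fraction."""
    return annual_print_spend * INK_SHARE_OF_PRINT_COST * GARAMOND_INK_REDUCTION

# Hypothetical annual printing budgets (not from the article):
for label, spend in [("school district", 150_000),
                     ("state and federal governments", 2_600_000_000)]:
    print(f"{label}: ~${annual_ink_savings(spend):,.0f} saved per year")
```

With those placeholder budgets the sketch lands near the article’s $21,000 and $370m figures, but the real savings depend entirely on what each body actually spends on printing.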

But should they? For starters, as the government politely pointed out, the real savings these days are in stopping printing altogether. Also, a 2010 study by the University of Wisconsin-Green Bay estimated it could save $10,000 a year by switching from Arial to Century Gothic, which uses 30% less ink – but also found that because the latter is wider, some documents that fitted on a single page in Arial would now run to two, and so use more paper.

Font choice can affect more than just the bottom line. A 2010 Princeton University study found readers consistently retained more information from material displayed in so-called disfluent or ugly fonts (Monotype Corsiva, Haettenschweiler) than in simple, more readable fonts (Helvetica, Arial).

Read the entire article here.

Image: Arial Monotype font example. Courtesy of Wikipedia.