Saucepan Lids No Longer Understand Cockney

You may not “adam and eve it”, but it seems that fewer and fewer Londoners now take to their “jam jars” for a drive down the “frog and toad” to their neighborhood “rub a dub dub”.

From the Daily Telegraph:

The slang is dying out amid London’s diverse, multi-cultural society, new research has revealed.

A study of 2,000 adults, including half from the capital, found that the world-famous East End lingo, which has been mimicked and mocked for decades, is on the wane.

The survey, commissioned by The Museum of London, revealed almost 80 per cent of Londoners do not understand phrases such as ‘donkey’s ears’ – slang for years.

Other examples of rhyming slang which baffled participants included ‘mother hubbard’, which means cupboard, and ‘bacon and eggs’ which means legs.

Significantly, Londoners’ own knowledge of the jargon is now almost as bad as those who live outside of the capital.

Yesterday, Alex Werner, head of history collections at the Museum of London, said: “For many people, Cockney rhyming slang is intrinsic to the identity of London.

“However this research suggests that the Cockney dialect itself may not be enjoying the same level of popularity.

“The origins of Cockney slang reflect the diverse, immigrant community of London’s East End in the 19th century so perhaps it’s no surprise that other forms of slang are taking over as the cultural influences on the city change.”

The term ‘cokenay’ was used in The Reeve’s Tale, the third story in Geoffrey Chaucer’s The Canterbury Tales, to describe a child who was “tenderly brought up” and “effeminate”.

By the early 16th century the reference was commonly used as a derogatory term to describe town-dwellers. Later still, it was used to indicate those born specifically within earshot of the ringing of Bow-bell at St Mary-le-Bow church in east London.

Research by The Museum of London found that just 20 per cent of the 2,000 people questioned knew that ‘rabbit and pork’ meant talk.

It also emerged that very few of those polled understood the meaning of tommy tucker (supper), watch the custard and jelly (telly) or spend time with the teapot lids (kids).

Instead, the report found that most Londoners now have a grasp of just a couple of Cockney phrases such as tea leaf (thief), and apples and pears (stairs).

The most-used Cockney slang was found to be the phrase ‘porky pies’ (lies), with 13 per cent of those questioned still using it. One in 10 used the term ‘cream crackered’ (knackered).

Read the entire article here.

Image courtesy of Tesco UK.

The End of the World for Doomsday Predictions

Apparently the world is due to end again, this time on December 21, 2012. The latest prediction comes from certain scholars of all things ancient Mayan. Now, of course, the world did not end as per Harold Camping’s most recent predictions, so let’s hope, or not, that the Mayans get it right for the sake of humanity.

The infographic below, courtesy of xerxy, brings many of these failed predictions of death, destruction and apocalypse into living color.

Are you a Spammer?

Infographic week continues here at theDiagonal with a visual guide to amateur email spammers. You know you may be one if you’ve ever sent an email titled “Read now: this will make your Friday!” to friends, family and office colleagues. You may be a serial offender if you use the “forward this email” button more than a couple of times a day.

Infographic courtesy of OnlineITDegree.

Our Children: Independently Dependent

Why can’t our kids tie their own shoes?

Are we raising our children to be self-obsessed, attention-seeking, helpless and dependent groupthinkers? And, why may the phenomenon of “family time” in the U.S. be a key culprit?

These are some of the questions raised by anthropologist Elinor Ochs and her colleagues. Over the last decade they have studied family life across the globe, from the Amazon region to Samoa to middle America.

From the Wall Street Journal:

Why do American children depend on their parents to do things for them that they are capable of doing for themselves? How do U.S. working parents’ views of “family time” affect their stress levels? These are just two of the questions that researchers at UCLA’s Center on Everyday Lives of Families, or CELF, are trying to answer in their work.

By studying families at home—or, as the scientists say, “in vivo”—rather than in a lab, they hope to better grasp how families with two working parents balance child care, household duties and career, and how this balance affects their health and well-being.

The center, which also includes sociologists, psychologists and archeologists, wants to understand “what the middle class thought, felt and what they did,” says Dr. Ochs. The researchers plan to publish two books this year on their work, and say they hope the findings may help families become closer and healthier.

Ten years ago, the UCLA team recorded video for a week of nearly every moment at home in the lives of 32 Southern California families. They have been picking apart the footage ever since, scrutinizing behavior, comments and even their refrigerators’ contents for clues.

The families, recruited primarily through ads, owned their own homes and had two or three children, at least one of whom was between 7 and 12 years old. About a third of the families had at least one nonwhite member, and two were headed by same-sex couples. Each family was filmed by two cameras and watched all day by at least three observers.

Among the findings: The families had a very child-centered focus, which may help explain the “dependency dilemma” seen among American middle-class families, says Dr. Ochs. Parents intend to develop their children’s independence, yet raise them to be relatively dependent, even when the kids have the skills to act on their own, she says.

In addition, these parents tended to have a very specific, idealized way of thinking about family time, says Tami Kremer-Sadlik, a former CELF research director who is now the director of programs for the division of social sciences at UCLA. These ideals appeared to generate guilt when work intruded on family life, and left parents feeling pressured to create perfect time together. The researchers noted that the presence of the observers may have altered some of the families’ behavior.

How kids develop moral responsibility is an area of focus for the researchers. Dr. Ochs, who began her career in far-off regions of the world studying the concept of “baby talk,” noticed that American children seemed relatively helpless compared with those in other cultures she and colleagues had observed.

In those cultures, young children were expected to contribute substantially to the community, says Dr. Ochs. Children in Samoa serve food to their elders, waiting patiently in front of them before they eat, as shown in one video snippet. Another video clip shows a girl around 5 years of age in Peru’s Amazon region climbing a tall tree to harvest papaya, and helping haul logs thicker than her leg to stoke a fire.

By contrast, the U.S. videos showed Los Angeles parents focusing more on the children, using simplified talk with them, doing most of the housework and intervening quickly when the kids had trouble completing a task.

In 22 of 30 families, children frequently ignored or resisted appeals to help, according to a study published in the journal Ethos in 2009. In the remaining eight families, the children weren’t asked to do much. In some cases, the children routinely asked the parents to do tasks, like getting them silverware. “How am I supposed to cut my food?” Dr. Ochs recalls one girl asking her parents.

Asking children to do a task led to much negotiation, and when parents asked, it often sounded like they were asking a favor, not making a demand, researchers said. Parents interviewed about their behavior said it was often too much trouble to ask.

For instance, one exchange caught on video shows an 8-year-old named Ben sprawled out on a couch near the front door, lifting his white, high-top sneaker to his father, the shoe laced. “Dad, untie my shoe,” he pleads. His father says Ben needs to say “please.”

“Please untie my shoe,” says the child in an identical tone as before. After his father hands the shoe back to him, Ben says, “Please put my shoe on and tie it,” and his father obliges.

Read the entire article after the jump.

Image courtesy of Kyle T. Webster / Wall Street Journal.

There’s the Big Bang theory and then there’s The Big Bang Theory

Now in its fifth season on U.S. television, The Big Bang Theory has made serious geekiness fun and science cool. In fact, the show has risen in popularity to such an extent that a Google search for “big bang theory” ranks the show first, above all other more learned scientific entries.

Brad Hooker from Symmetry Breaking asks some deep questions of David Saltzberg, science advisor to The Big Bang Theory.

From Symmetry Breaking:

For those who live, breathe and laugh physics, one show entangles them all: The Big Bang Theory. Now in its fifth season on CBS, the show follows a group of geeks, including a NASA engineer, an astrophysicist and two particle physicists.

Every episode has at least one particle physics joke. On faster-than-light neutrinos: “Is this observation another Swiss export full of more holes than their cheese?” On Saul Perlmutter clutching the Nobel Prize: “What’s the matter, Saul? You afraid somebody’s going to steal it, like you stole Einstein’s cosmological constant?”

To make these jokes timely and accurate, while sprinkling the sets with authentic scientific plots and posters, the show’s writers depend on one physicist, David Saltzberg. Since the first episode, Saltzberg’s dose of realism has made science chic again, and has even been credited with increasing admissions to physics programs. Symmetry writer Brad Hooker asked the LHC physicist, former Tevatron researcher and University of California, Los Angeles professor to explain how he walks the tightrope between science and sitcom.

Brad: How many of your suggestions are put into the show?

David: In general, when they ask for something, they use it. But it’s never anything that’s funny or moves the story along. It’s the part that you don’t need to understand. They explained to me in the beginning that you can watch an I Love Lucy rerun and not understand Spanish, but understand that Ricky Ricardo is angry. That’s all the level of science understanding needed for the show.

B: These references are current. Astrophysicist Saul Perlmutter of Lawrence Berkeley National Laboratory was mentioned on the show just weeks after winning the Nobel Prize for discovering the accelerating expansion of the universe.

D: Right. And you may wonder why they chose Saul Perlmutter, as opposed to the other two winners. It just comes down to that they liked the sound of his name better. Things like that matter. The writers think of the script in terms of music and the rhythm of the lines. I usually give them multiple choices because I don’t know if they want something short or long or something with odd sounds in it. They really think about that kind of thing.

B: Do the writers ever ask you to explain the science and it goes completely over their heads?

D: We respond by email so I don’t really know. But I don’t think it goes over their heads because you can Wikipedia anything.

One thing was a little difficult for me: they asked for a spoof of the Born-Oppenheimer approximation, which is harder than it sounds. But for the most part it’s just a matter of narrowing it down to a few choices. There are so many ways to go through it and I deliberately chose things that are current.

First of all, these guys live in our universe—they’re talking about the things we physicists are talking about. And also, there isn’t a whole lot of science journalism out there. It’s been cut back a lot. In getting the words out there, whether it’s “dark matter” or “topological insulators,” hopefully some fraction of the audience will Google it.

B: Are you working with any other science advisors? I know one character is a neurobiologist.

D: Luckily the actress who portrays her, Mayim Bialik, is also a neuroscientist. She has a PhD in neuroscience from UCLA. So that worked out really well because I don’t know all of physics, let alone all of science. What I’m able to do with the physics is say, “Well, we don’t really talk like that even though it’s technically correct.” And I can’t do that for biology, but she can.

Read the entire article after the jump.

Image courtesy of The Big Bang Theory, Warner Bros.

First, There Was Bell Labs

The results of innovation surround us. Innovation nourishes our food supply and helps us heal when we are sick; innovation lubricates our businesses, underlies our products, and facilitates our interactions. Innovation stokes our forward momentum.

But, before many of our recent technological marvels could come into being, some fundamental innovations were necessary. These were the technical precursors and catalysts that paved the way for the iPad and the smartphone, GPS, search engines and microwave ovens. The building blocks that made much of this possible included the transistor, the laser, the Unix operating system and the communications satellite. And, all of these came from one place, Bell Labs, during a short but highly productive period from the 1920s to the 1980s.

In his new book, “The Idea Factory”, Jon Gertner explores how and why so much innovation sprang from the visionary leaders, engineers and scientists of Bell Labs.

From the New York Times:

In today’s world of Apple, Google and Facebook, the name may not ring any bells for most readers, but for decades — from the 1920s through the 1980s — Bell Labs, the research and development wing of AT&T, was the most innovative scientific organization in the world. As Jon Gertner argues in his riveting new book, “The Idea Factory,” it was where the future was invented.

Indeed, Bell Labs was behind many of the innovations that have come to define modern life, including the transistor (the building block of all digital products), the laser, the silicon solar cell and the computer operating system called Unix (which would serve as the basis for a host of other computer languages). Bell Labs developed the first communications satellites, the first cellular telephone systems and the first fiber-optic cable systems.

The Bell Labs scientist Claude Elwood Shannon effectively founded the field of information theory, which would revolutionize thinking about communications; other Bell Labs researchers helped push the boundaries of physics, chemistry and mathematics, while defining new industrial processes like quality control.

In “The Idea Factory,” Mr. Gertner — an editor at Fast Company magazine and a writer for The New York Times Magazine — not only gives us spirited portraits of the scientists behind Bell Labs’ phenomenal success, but he also looks at the reasons that research organization became such a fount of innovation, laying the groundwork for the networked world we now live in.

It’s clear from this volume that the visionary leadership of the researcher turned executive Mervin Kelly played a large role in Bell Labs’ sense of mission and its ability to institutionalize the process of innovation so effectively. Kelly believed that an “institute of creative technology” needed a critical mass of talented scientists — whom he housed in a single building, where physicists, chemists, mathematicians and engineers were encouraged to exchange ideas — and he gave his researchers the time to pursue their own investigations “sometimes without concrete goals, for years on end.”

That freedom, of course, was predicated on the steady stream of revenue provided (in the years before the AT&T monopoly was broken up in the early 1980s) by the monthly bills paid by telephone subscribers, which allowed Bell Labs to function “much like a national laboratory.” Unlike, say, many Silicon Valley companies today, which need to keep an eye on quarterly reports, Bell Labs in its heyday could patiently search out what Mr. Gertner calls “new and fundamental ideas,” while using its immense engineering staff to “develop and perfect those ideas” — creating new products, then making them cheaper, more efficient and more durable.

Given the evolution of the digital world we inhabit today, Kelly’s prescience is stunning in retrospect. “He had predicted grand vistas for the postwar electronics industry even before the transistor,” Mr. Gertner writes. “He had also insisted that basic scientific research could translate into astounding computer and military applications, as well as miracles within the communications systems — ‘a telephone system of the future,’ as he had said in 1951, ‘much more like the biological systems of man’s brain and nervous system.’ ”

Read the entire article after the jump.

Image: Jack A. Morton (left) and J. R. Wilson at Bell Laboratories, circa 1948. Courtesy of Computer History Museum.

GE and EE: The Dark Side of Facebook

That’s G.E. and E.E., not “Glee”. In social psychology circles, GE means grandiose exhibitionism, while EE stands for entitlement/exploitativeness. Researchers find that having a large number of “ifriends” on social networks, such as Facebook, correlates with high levels of GE and EE. The greater the number of friends you have online, the greater the odds that you are a chronic attention seeker with shallow relationships or a “socially disruptive narcissist”.

From the Guardian:

People who score highly on the Narcissistic Personality Inventory questionnaire had more friends on Facebook, tagged themselves more often and updated their newsfeeds more regularly.

The research comes amid increasing evidence that young people are becoming increasingly narcissistic, and obsessed with self-image and shallow friendships.

The latest study, published in the journal Personality and Individual Differences, also found that narcissists responded more aggressively to derogatory comments made about them on the social networking site’s public walls and changed their profile pictures more often.

A number of previous studies have linked narcissism with Facebook use, but this is some of the first evidence of a direct relationship between Facebook friends and the most “toxic” elements of narcissistic personality disorder.

Researchers at Western Illinois University studied the Facebook habits of 294 students, aged between 18 and 65, and measured two “socially disruptive” elements of narcissism – grandiose exhibitionism (GE) and entitlement/exploitativeness (EE).

GE includes “self-absorption, vanity, superiority, and exhibitionistic tendencies” and people who score high on this aspect of narcissism need to be constantly at the centre of attention. They often say shocking things and inappropriately self-disclose because they cannot stand to be ignored or waste a chance of self-promotion.

The EE aspect includes “a sense of deserving respect and a willingness to manipulate and take advantage of others”.

The research revealed that the higher someone scored on aspects of GE, the greater the number of friends they had on Facebook, with some amassing more than 800.

Those scoring highly on EE and GE were also more likely to accept friend requests from strangers and seek social support, but less likely to provide it, according to the research.

Carol Craig, a social scientist and chief executive of the Centre for Confidence and Well-being, said young people in Britain were becoming increasingly narcissistic and Facebook provided a platform for the disorder.

“The way that children are being educated is focussing more and more on the importance of self esteem – on how you are seen in the eyes of others. This method of teaching has been imported from the US and is ‘all about me’.

“Facebook provides a platform for people to self-promote by changing profile pictures and showing how many hundreds of friends you have. I know of some who have more than 1,000.”

Dr Viv Vignoles, senior lecturer in social psychology at Sussex University, said there was “clear evidence” from studies in America that college students were becoming increasingly narcissistic.

Read the entire article after the jump.

Image: “Looking at You, and You and You”. Jennifer Daniel, an illustrator, created a fan page on Facebook and asked friends to submit their images for this mosaic; 238 of them did so. Courtesy of the New York Times.

Spectres in the Urban Jungle

Following on from our recent article on contemporary artist Rob Mulholland, whose mirrored sculptures wander in a woodland in Scotland, comes Chinese artist Liu Bolin, with his series of “invisible” self-portraits.

Bolin paints himself into the background, and then disappears. Following many hours of meticulous preparation Bolin merges with his surroundings in a performance that makes U.S. military camouflage systems look almost amateurish.

Liu Bolin’s 4th solo exhibit is currently showing at the Eli Klein gallery.

Spectres in the Forest

The best art is simple and evocative.

Like eerie imagined alien life forms, mirrored sculptures meander through a woodland in Scotland. The life-size camouflaged figures are on display at the David Marshall Lodge near Aberfoyle, Scotland.

Contemporary artist Rob Mulholland designed the series of six mirrored sculptures, named Vestige, which are shaped from silhouettes of people he knows.

In Rob Mulholland’s own words:

The essence of who we are as individuals in relationship to others and our given environment forms a strong aspect of my artistic practise.

In Vestige I wanted to explore this relationship further by creating a group, a community within the protective elements of the woods, reflecting the past inhabitants of the space.

The six male and female figures not only absorb their environment, they create a notion of non-space, a link with the past that forces us, both as individuals and as a society, to consider our relationship with our natural environment.

See more of Rob Mulholland’s art after the jump.

Culturomics

From the Wall Street Journal:

Can physicists produce insights about language that have eluded linguists and English professors? That possibility was put to the test this week when a team of physicists published a paper drawing on Google’s massive collection of scanned books. They claim to have identified universal laws governing the birth, life course and death of words.

The paper marks an advance in a new field dubbed “Culturomics”: the application of data-crunching to subjects typically considered part of the humanities. Last year a group of social scientists and evolutionary theorists, plus the Google Books team, showed off the kinds of things that could be done with Google’s data, which include the contents of five-million-plus books, dating back to 1800.

Published in Science, that paper gave the best-yet estimate of the true number of words in English—a million, far more than any dictionary has recorded (the 2002 Webster’s Third New International Dictionary has 348,000). More than half of the language, the authors wrote, is “dark matter” that has evaded standard dictionaries.

The paper also tracked word usage through time (each year, for instance, 1% of the world’s English-speaking population switches from “sneaked” to “snuck”). It also showed that we seem to be putting history behind us more quickly, judging by the speed with which terms fall out of use. References to the year “1880” dropped by half in the 32 years after that date, while the half-life of “1973” was a mere decade.
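The half-life figures above can be read as simple exponential decay. As an illustrative sketch (not the paper's actual model), the following treats a term's usage as N(t) = N0·exp(-λt) and recovers λ from the reported half-lives:

```python
import math

# Illustrative sketch, not the paper's actual model: treat a term's
# usage as exponential decay and recover the decay constant from the
# half-lives reported in the article.

def decay_constant(half_life_years):
    """Decay constant lambda such that usage halves every `half_life_years`."""
    return math.log(2) / half_life_years

def remaining_fraction(lam, years):
    """Fraction of the original usage left after `years`."""
    return math.exp(-lam * years)

# References to "1880" reportedly fell by half in 32 years;
# "1973" had a half-life of only about a decade.
lam_1880 = decay_constant(32)
lam_1973 = decay_constant(10)

print(remaining_fraction(lam_1880, 32))  # 0.5 by construction
print(remaining_fraction(lam_1880, 64))  # 0.25 after two half-lives
print(remaining_fraction(lam_1973, 32))  # ~0.11: "1973" fades much faster
```

On these assumed numbers, after the same 32 years only about a tenth of the original references to "1973" would remain, versus half for "1880", which is the "putting history behind us more quickly" effect the authors describe.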

In the new paper, Alexander Petersen, Joel Tenenbaum and their co-authors looked at the ebb and flow of word usage across various fields. “All these different words are battling it out against synonyms, variant spellings and related words,” says Mr. Tenenbaum. “It’s an inherently competitive, evolutionary environment.”

When the scientists analyzed the data, they found striking patterns not just in English but also in Spanish and Hebrew. There has been, the authors say, a “dramatic shift in the birth rate and death rates of words”: Deaths have increased and births have slowed.

English continues to grow—the 2011 Culturomics paper suggested a rate of 8,500 new words a year. The new paper, however, says that the growth rate is slowing. Partly because the language is already so rich, the “marginal utility” of new words is declining: existing things are already well described. This led them to a related finding: the words that do manage to be born now become more popular than new words used to, possibly because they describe something genuinely new (think “iPod,” “Internet,” “Twitter”).

Higher death rates for words, the authors say, are largely a matter of homogenization. The explorer William Clark (of Lewis & Clark) spelled “Sioux” 27 different ways in his journals (“Sieoux,” “Seaux,” “Souixx,” etc.), and several of those variants would have made it into 19th-century books. Today spell-checking programs and vigilant copy editors choke off such chaotic variety much more quickly, in effect speeding up the natural selection of words. (The database does not include the world of text- and Twitter-speak, so some of the verbal chaos may just have shifted online.)

Read the entire article here.

Everything Comes in Threes

From the Guardian:

Last week’s results from the Daya Bay neutrino experiment were the first real measurement of the third neutrino mixing angle, θ13 (theta one-three). There have been previous experiments which set limits on the angle, but this is the first time it has been shown to be significantly different from zero.

Since θ13 is a fundamental parameter in the Standard Model of particle physics, this would be an important measurement anyway. But there’s a bit more to it than that.

Neutrinos – whatever else they might be doing – mix up amongst themselves as they travel through space. This is a quantum mechanical effect, and comes from the fact that there are two ways of defining the three types of neutrino.

You can define them by the way they are produced. So a neutrino which is produced (or destroyed) in conjunction with an electron is an “electron neutrino”. If a muon is involved, it’s a “muon neutrino”. The third one is a “tau neutrino”. We call this the “flavour”.

Or you can define them by their masses. Usually we just call this definition neutrinos 1, 2 and 3.

The two definitions don’t line up, and there is a matrix which tells you how much of each “flavour” neutrino overlaps with each “mass” one. This is the neutrino mixing matrix. Inside this matrix in the standard model there are potentially four parameters describing how the neutrinos mix.

You could just have two-way mixing. For example, the flavour states might just mix up neutrino 1 and 2, and neutrino 2 and 3. This would be the case if the angle θ13 were zero. If it is bigger than zero (as Daya Bay have now shown) then neutrino 1 also mixes with neutrino 3. In this case, and only in this case, a fourth parameter is also allowed in the matrix. This fourth parameter (δ) is one we haven’t measured yet, but now we know it is there. And the really important thing is, if it is there, and also not zero, then it introduces an asymmetry between matter and antimatter.
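To make the mixing concrete: Daya Bay measured θ13 by watching reactor electron antineutrinos “disappear” on the way to a far detector. In the standard two-flavour approximation the survival probability depends on sin²(2θ13). The sketch below uses ballpark, assumed inputs (the specific values are illustrative, not the experiment's published fit):

```python
import math

# Rough two-flavour sketch of reactor antineutrino survival, the channel
# Daya Bay used to measure theta_13. All numbers are assumed ballpark
# values for illustration, not the experiment's published results.

def survival_probability(sin2_2theta13, dm2_eV2, L_m, E_MeV):
    """P(nu_e-bar survives) in the two-flavour approximation.

    1.267 is the standard unit-conversion factor for a mass-squared
    splitting in eV^2, a baseline in metres and an energy in MeV.
    """
    phase = 1.267 * dm2_eV2 * L_m / E_MeV
    return 1.0 - sin2_2theta13 * math.sin(phase) ** 2

# Assumed inputs: sin^2(2*theta13) ~ 0.09, |dm^2| ~ 2.4e-3 eV^2,
# a ~1.6 km far detector, and ~4 MeV reactor antineutrinos.
p = survival_probability(0.09, 2.4e-3, 1600, 4.0)
print(p)  # roughly 0.92 with these assumed inputs
```

With these inputs several percent of the antineutrinos appear to vanish at the far detector; if θ13 were zero the survival probability would be exactly 1 and there would be nothing to measure.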

This is important because currently we don’t know why there is more matter than antimatter around. We also don’t know why there are three copies of neutrinos (and indeed of each class of fundamental particle). But we know that three copies is the minimum number which allows some difference in the way matter and antimatter experience the weak nuclear force. This is the kind of clue which sets off big klaxons in the minds of physicists: New physics hiding somewhere here! It strongly suggests that these two not-understood facts are connected by some bigger, better theory than the one we have.

We’ve already measured a matter-antimatter difference for quarks; a non-zero θ13 means there can be a difference for neutrinos too. More clues.

Read the entire article here.

Image: The first use of a hydrogen bubble chamber to detect neutrinos, on November 13, 1970. A neutrino hit a proton in a hydrogen atom. The collision occurred at the point where three tracks emanate on the right of the photograph. Courtesy of Wikipedia.

Language Translation With a Cool Twist

The last couple of decades have seen a remarkable improvement in the ability of software to translate the written word from one language to another. Yahoo Babel Fish and Google Translate are good examples. Also, voice recognition systems, such as those you encounter every day when trying desperately to connect with a real customer service rep, have taken great leaps forward. Apple’s Siri now leads the pack.

But, what do you get if you combine translation and voice recognition technology? Well, you get a new service that translates the spoken word from your native language into a second. And, here’s the neat twist: the system translates into the second language while keeping a voice like yours. The technology springs from Microsoft’s Research division in Redmond, WA.

From Technology Review:

Researchers at Microsoft have made software that can learn the sound of your voice, and then use it to speak a language that you don’t. The system could be used to make language tutoring software more personal, or to make tools for travelers.

In a demonstration at Microsoft’s Redmond, Washington, campus on Tuesday, Microsoft research scientist Frank Soong showed how his software could read out text in Spanish using the voice of his boss, Rick Rashid, who leads Microsoft’s research efforts. In a second demonstration, Soong used his software to grant Craig Mundie, Microsoft’s chief research and strategy officer, the ability to speak Mandarin.

Hear Rick Rashid’s voice in his native English and then translated into Italian and Mandarin in the audio clips that accompany the original article.

In English, a synthetic version of Mundie’s voice welcomed the audience to an open day held by Microsoft Research, concluding, “With the help of this system, now I can speak Mandarin.” The phrase was repeated in Mandarin Chinese, in what was still recognizably Mundie’s voice.

“We will be able to do quite a few scenario applications,” said Soong, who created the system with colleagues at Microsoft Research Asia, the company’s second-largest research lab, in Beijing, China.

Read the entire article here.

Skyscrapers A La Mode

Since 2006, Evolo architecture magazine has run a competition inviting architects to bring life to their most fantastic skyscraper designs. All the finalists of the 2012 competition presented stunning ideas, topped by the winner, Himalaya Water Tower, from Zhi Zheng, Hongchuan Zhao and Dongbai Song of China.

From Evolo:

Housed within 55,000 glaciers in the Himalaya Mountains sits 40 percent of the world’s fresh water. The massive ice sheets are melting at a faster-than-ever pace due to climate change, posing possibly dire consequences for the continent of Asia and the entire world, and especially for the villages and cities that sit on the seven rivers that are fed by the Himalayas’ runoff, as they respond with erratic flooding or drought.

The “Himalaya Water Tower” is a skyscraper located high in the mountain range that serves to store water and helps regulate its dispersal to the land below as the mountains’ natural supplies dry up. The skyscraper, which can be replicated en masse, will collect water in the rainy season, purify it, freeze it into ice and store it for future use. The water distribution schedule will evolve with the needs of residents below; while it can be used to help in times of current drought, it’s also meant to store plentiful water for future generations.

Follow the other notable finalists at Evolo magazine after the jump.

Have Wormhole, Will Travel

Intergalactic travel just became a lot easier, if only theoretically for the moment.

From New Scientist:

IT IS not every day that a piece of science fiction takes a step closer to nuts-and-bolts reality. But that is what seems to be happening to wormholes. Enter one of these tunnels through space-time, and a few short steps later you may emerge near Pluto or even in the Andromeda galaxy millions of light years away.

You probably won’t be surprised to learn that no one has yet come close to constructing such a wormhole. One reason is that they are notoriously unstable. Even on paper, they have a tendency to snap shut in the blink of an eye unless they are propped open by an exotic form of matter with negative energy, whose existence is itself in doubt.

Now, all that has changed. A team of physicists from Germany and Greece has shown that building wormholes may be possible without any input from negative energy at all. “You don’t even need normal matter with positive energy,” says Burkhard Kleihaus of the University of Oldenburg in Germany. “Wormholes can be propped open with nothing.”

The findings raise the tantalising possibility that we might finally be able to detect a wormhole in space. Civilisations far more advanced than ours may already be shuttling back and forth through a galactic-wide subway system constructed from wormholes. And eventually we might even be able to use them ourselves as portals to other universes.

Wormholes first emerged in Einstein’s general theory of relativity, which famously shows that gravity is nothing more than the hidden warping of space-time by energy, usually the mass-energy of stars and galaxies. Soon after Einstein published his equations in 1916, Austrian physicist Ludwig Flamm discovered that they also predicted conduits through space and time.

But it was Einstein himself who made detailed investigations of wormholes with Nathan Rosen. In 1935, they concocted one consisting of two black holes, connected by a tunnel through space-time. Travelling through their wormhole was only possible if the black holes at either end were of a special kind. A conventional black hole has such a powerful gravitational field that material sucked in can never escape once it has crossed what is called the event horizon. The black holes at the end of an Einstein-Rosen wormhole would be unencumbered by such points of no return.

Einstein and Rosen’s wormholes seemed a mere curiosity for another reason: their destination was inconceivable. The only connection the wormholes offered from our universe was to a region of space in a parallel universe, perhaps with its own stars, galaxies and planets. While today’s theorists are comfortable with the idea of our universe being just one of many, in Einstein and Rosen’s day such a multiverse was unthinkable.

Fortunately, it turned out that general relativity permitted the existence of another type of wormhole. In 1955, American physicist John Wheeler showed that it was possible to connect two regions of space in our universe, which would be far more useful for fast intergalactic travel. He coined the catchy name wormhole to add to black holes, which he can also take credit for.

The trouble is the wormholes of Wheeler and Einstein and Rosen all have the same flaw. They are unstable. Send even a single photon of light zooming through and it instantly triggers the formation of an event horizon, which effectively snaps shut the wormhole.

Bizarrely, it is the American planetary astronomer Carl Sagan who is credited with moving the field on. In his science fiction novel, Contact, he needed a quick and scientifically sound method of galactic transport for his heroine – played by Jodie Foster in the movie. Sagan asked theorist Kip Thorne at the California Institute of Technology in Pasadena for help, and Thorne realised a wormhole would do the trick. In 1987, he and his graduate students Michael Morris and Ulvi Yurtsever worked out the recipe to create a traversable wormhole. It turned out that the mouths could be kept open by hypothetical material possessing a negative energy. Given enough negative energy, such a material has a repulsive form of gravity, which physically pushes open the wormhole mouth.

Negative energy is not such a ridiculous idea. Imagine two parallel metal plates sitting in a vacuum. If you place them close together the vacuum between them has negative energy – that is, less energy than the vacuum outside. This is because a normal vacuum is like a roiling sea of waves, and the waves that are too big to fit between the plates are naturally excluded. This leaves less energy inside the plates than outside.

Unfortunately, this kind of negative energy exists in quantities far too feeble to prop open a wormhole mouth. Not only that but a Thorne-Morris-Yurtsever wormhole that is big enough for someone to crawl through requires a tremendous amount of energy – equivalent to the energy pumped out in a year by an appreciable fraction of the stars in the galaxy.

Back to the drawing board then? Not quite. There may be a way to bypass those difficulties. All the wormholes envisioned until recently assume that Einstein’s theory of gravity is correct. In fact, this is unlikely to be the case. For a start, the theory breaks down at the heart of a black hole, as well as at the beginning of time in the big bang. Also, quantum theory, which describes the microscopic world of atoms, is incompatible with general relativity. Since quantum theory is supremely successful – explaining everything from why the ground is solid to how the sun shines – many researchers believe that Einstein’s theory of gravity must be an approximation of a deeper theory.

Read the entire article here.

Image of a traversable wormhole which connects the place in front of the physical institutes of Tübingen University with the sand dunes near Boulogne sur Mer in the north of France. Courtesy of Wikipedia.

Creativity: Insight, Shower, Wine, Perspiration? Yes

Some believe creativity stems from a sudden insightful realization, a bolt from the blue that awakens the imagination. Others believe creativity comes from years of discipline and hard work. Well, both groups are correct, but the answer is a little more complex.

From the Wall Street Journal:

Creativity can seem like magic. We look at people like Steve Jobs and Bob Dylan, and we conclude that they must possess supernatural powers denied to mere mortals like us, gifts that allow them to imagine what has never existed before. They’re “creative types.” We’re not.

But creativity is not magic, and there’s no such thing as a creative type. Creativity is not a trait that we inherit in our genes or a blessing bestowed by the angels. It’s a skill. Anyone can learn to be creative and to get better at it. New research is shedding light on what allows people to develop world-changing products and to solve the toughest problems. A surprisingly concrete set of lessons has emerged about what creativity is and how to spark it in ourselves and our work.

The science of creativity is relatively new. Until the Enlightenment, acts of imagination were always equated with higher powers. Being creative meant channeling the muses, giving voice to the gods. (“Inspiration” literally means “breathed upon.”) Even in modern times, scientists have paid little attention to the sources of creativity.

But over the past decade, that has begun to change. Imagination was once thought to be a single thing, separate from other kinds of cognition. The latest research suggests that this assumption is false. It turns out that we use “creativity” as a catchall term for a variety of cognitive tools, each of which applies to particular sorts of problems and is coaxed to action in a particular way.

Does the challenge that we’re facing require a moment of insight, a sudden leap in consciousness? Or can it be solved gradually, one piece at a time? The answer often determines whether we should drink a beer to relax or hop ourselves up on Red Bull, whether we take a long shower or stay late at the office.

The new research also suggests how best to approach the thorniest problems. We tend to assume that experts are the creative geniuses in their own fields. But big breakthroughs often depend on the naive daring of outsiders. For prompting creativity, few things are as important as time devoted to cross-pollination with fields outside our areas of expertise.

Let’s start with the hardest problems, those challenges that at first blush seem impossible. Such problems are typically solved (if they are solved at all) in a moment of insight.

Consider the case of Arthur Fry, an engineer at 3M in the paper products division. In the winter of 1974, Mr. Fry attended a presentation by Spencer Silver, an engineer working on adhesives. Mr. Silver had developed an extremely weak glue, a paste so feeble it could barely hold two pieces of paper together. Like everyone else in the room, Mr. Fry patiently listened to the presentation and then failed to come up with any practical applications for the compound. What good, after all, is a glue that doesn’t stick?

On a frigid Sunday morning, however, the paste would re-enter Mr. Fry’s thoughts, albeit in a rather unlikely context. He sang in the church choir and liked to put little pieces of paper in the hymnal to mark the songs he was supposed to sing. Unfortunately, the little pieces of paper often fell out, forcing Mr. Fry to spend the service frantically thumbing through the book, looking for the right page. It seemed like an unfixable problem, one of those ordinary hassles that we’re forced to live with.

But then, during a particularly tedious sermon, Mr. Fry had an epiphany. He suddenly realized how he might make use of that weak glue: It could be applied to paper to create a reusable bookmark! Because the adhesive was barely sticky, it would adhere to the page but wouldn’t tear it when removed. That revelation in the church would eventually result in one of the most widely used office products in the world: the Post-it Note.

Mr. Fry’s invention was a classic moment of insight. Though such events seem to spring from nowhere, as if the cortex is surprising us with a breakthrough, scientists have begun studying how they occur. They do this by giving people “insight” puzzles, like the one that follows, and watching what happens in the brain:

A man has married 20 women in a small town. All of the women are still alive, and none of them is divorced. The man has broken no laws. Who is the man?

Read the entire article here.

Image courtesy of Google search.

Your Favorite Sitcoms in Italian, Russian or Mandarin

Secretly, you may be wishing you had a 12-foot satellite antenna in your backyard to soak up these esoteric, alien signals directly. How about the 1990s sitcom “The Nanny” dubbed into Italian, a Russian remake of “Everybody Loves Raymond”, a French “Law and Order”, or our favorite, a British remake of “Jersey Shore” set in Newcastle, named “Geordie Shore”? This would make an interesting anthropological study, and it partly highlights why other countries may take a dim view of U.S. culture.

The top ten U.S. cultural exports courtesy of FlavorWire after the jump.

How to Be a Great Boss… From Hell, For Dummies

Some very basic lessons on how to be a truly bad boss. Lesson number one: keep your employees from making any contribution to or progress on meaningful work.

From the Washington Post:

Recall your worst day at work, when events of the day left you frustrated, unmotivated by the job, and brimming with disdain for your boss and your organization. That day is probably unforgettable. But do you know exactly how your boss was able to make it so horrible for you? Our research provides insight into the precise levers you can use to re-create that sort of memorable experience for your own underlings.

Over the past 15 years, we have studied what makes people happy and engaged at work. In discovering the answer, we also learned a lot about misery at work. Our research method was pretty straightforward. We collected confidential electronic diaries from 238 professionals in seven companies, each day for several months. All told, those diaries described nearly 12,000 days – how people felt, and the events that stood out in their minds. Systematically analyzing those diaries, we compared the events occurring on the best days with those on the worst.

What we discovered is that the key factor you can use to make employees miserable on the job is to simply keep them from making progress in meaningful work.

People want to make a valuable contribution, and feel great when they make progress toward doing so. Knowing this progress principle is the first step to knowing how to destroy an employee’s work life. Many leaders, from team managers to CEOs, are already surprisingly expert at smothering employee engagement. In fact, on one-third of those 12,000 days, the person writing the diary was either unhappy at work, demotivated by the work, or both.

That’s pretty efficient work-life demolition, but it leaves room for improvement.

Step 1: Never allow pride of accomplishment. When we analyzed the events occurring on people’s very worst days at the office, one thing stood out: setbacks. Setbacks are any instances where employees feel stalled in their most important work or unable to make any meaningful contribution. So, at every turn, stymie employees’ desire to make a difference. One of the most effective examples we saw was a head of product development, who routinely moved people on and off projects like chess pieces in a game for which only he had the rules.

The next step follows organically from the first.

Step 2: Miss no opportunity to block progress on employees’ projects. Every day, you’ll see dozens of ways to inhibit substantial forward movement on your subordinates’ most important efforts. Goal-setting is a great place to start. Give conflicting goals, change them as frequently as possible, and allow people no autonomy in meeting them. If you get this formula just right, the destructive effects on motivation and performance can be truly dramatic.

Read the entire article here.

Image courtesy of Google search.

Turing Test 2.0 – Intelligent Behavior Free of Bigotry

One wonders what the world would look like today had Alan Turing been criminally prosecuted and jailed by the British government for his homosexuality before the Second World War, rather than in 1952. Would the British have been able to break German Naval ciphers encoded by their Enigma machine? Would the German Navy have prevailed, and would the Nazis have gone on to conquer the British Isles?

Actually, Turing was not imprisoned in 1952 — rather, he “accepted” chemical castration at the hands of the British government rather than face jail. He died two years later of self-inflicted cyanide poisoning, just short of his 42nd birthday.

Now, a hundred years on from his birth, historians are reflecting on his short life and lasting legacy. Turing is widely regarded as having founded the discipline of artificial intelligence, and he made significant contributions to computing. Yet most of his achievements went unrecognized for decades or were given short shrift, perhaps due to his confidential work for the government, or more likely because of his persona non grata status.

In 2009 the British government offered Turing an apology. And, of course, we now have the Turing Test, a test of a machine’s ability to exhibit intelligent behavior. So, one hundred years after Turing’s birth, to honor his life we should launch a new and improved Turing Test. Let’s call it the Turing Test 2.0.

This test would measure a human’s ability to exhibit intelligent behavior free of bigotry.

From Nature:

Alan Turing is always in the news — for his place in science, but also for his 1952 conviction for having gay sex (illegal in Britain until 1967) and his suicide two years later. Former Prime Minister Gordon Brown issued an apology to Turing in 2009, and a campaign for a ‘pardon’ was rebuffed earlier this month.

Must you be a great figure to merit a ‘pardon’ for being gay? If so, how great? Is it enough to break the Enigma ciphers used by Nazi Germany in the Second World War? Or do you need to invent the computer as well, with artificial intelligence as a bonus? Is that great enough?

Turing’s reputation has gone from zero to hero, but defining what he achieved is not simple. Is it correct to credit Turing with the computer? To historians who focus on the engineering of early machines, Turing is an also-ran. Today’s scientists know the maxim ‘publish or perish’, and Turing just did not publish enough about computers. He quickly became perishable goods. His major published papers on computability (in 1936) and artificial intelligence (in 1950) are some of the most cited in the scientific literature, but they leave a yawning gap. His extensive computer plans of 1946, 1947 and 1948 were left as unpublished reports. He never put into scientific journals the simple claim that he had worked out how to turn his 1936 “universal machine” into the practical electronic computer of 1945. Turing missed those first opportunities to explain the theory and strategy of programming, and instead got trapped in the technicalities of primitive storage mechanisms.

He could have caught up after 1949, had he used his time at the University of Manchester, UK, to write a definitive account of the theory and practice of computing. Instead, he founded a new field in mathematical biology and left other people to record the landscape of computers. They painted him out of it. The first book on computers to be published in Britain, Faster than Thought (Pitman, 1953), offered this derisive definition of Turing’s theoretical contribution:

“Türing machine. In 1936 Dr. Turing wrote a paper on the design and limitations of computing machines. For this reason they are sometimes known by his name. The umlaut is an unearned and undesirable addition, due, presumably, to an impression that anything so incomprehensible must be Teutonic.”

That a book on computers should describe the theory of computing as incomprehensible neatly illustrates the climate Turing had to endure. He did make a brief contribution to the book, buried in chapter 26, in which he summarized computability and the universal machine. However, his low-key account never conveyed that these central concepts were his own, or that he had planned the computer revolution.

Read the entire article here.

Image: Alan Mathison Turing at the time of his election to a Fellowship of the Royal Society. Photograph was taken at the Elliott & Fry studio on 29 March 1951.

The New Middle Age

We have all heard it — 50 is the “new 30”, 60 is the “new 40”. Adolescence now seems to stretch into the mid- to late-20s. And what on Earth is “middle age” anyway? As these previously well-defined life stages become more fluid, perhaps it’s time for yet another recalibration.

From the Independent:

One thing that can be said of “Middle Age” is that it’s moving further from the middle. The annual British Social Attitudes Survey suggests just a third of people in their 40s regard themselves as middle-aged, while almost a third of those in their 70s are still clinging to the label, arthritic fingers notwithstanding. In A Shed of One’s Own, his very funny new memoir of male midlife crisis and its avoidance, Marcus Berkmann reaches for a number of definitions for his time of life: “Middle age is comedy, and also tragedy,” he says. “Other people’s middle age is self-evidently ridiculous, while our own represents the collapse of all our hopes and dreams.”

He cites Denis Norden, who said: “Middle age is when, wherever you go on holiday, you pack a sweater.” And the fictional Frasier Crane, who maintains that the middle-aged “go ‘oof’ when [they] sit down on a sofa”. Shakespeare’s famous Seven Ages of Man speech, delivered by the melancholy Jacques in As You Like It, delineated the phases of human development by occupation: the schoolboy, the adolescent lover, the soldier, and the – presumably, middle-aged – legal professional. We have long defined ourselves compulsively by our stages in life; we yearn for maturity, then mourn the passing of youth. But to what extent are these stages socio-cultural (holidays/sweaters) and to what extent are they biological (sofas/”oof”)?

Patricia Cohen, New York Times reporter and author of another new study of ageing, In Our Prime: The Invention of Middle Age, might not be overly sympathetic to Berkmann’s plight. The mid-life crisis, she suggests, is a marketing trick designed to sell cosmetics, cars and expensive foreign holidays; people in their 20s and 30s are far more vulnerable to such a crisis than their parents. Cohen finds little evidence for so-called “empty nest syndrome”, or for the widespread stereotype of the rich man with the young “trophy wife”.

She even claims that middle age itself is a “cultural fiction”, and that Americans only became neurotic about entering their 40s at the turn of the 20th century, when they started lying to census-takers about their age. Before then, “age was not an essential ingredient of one’s identity”. Rather, people were classified according to “marker events”: marriage, parenthood and so on. In 1800 the average American woman had seven children; by 1900 she had three. They were out of her hair by her early 40s and, thanks to modern medicine, she could look forward to a further 20 years or more of active life.

As Berkmann laments, “one of the most tangible symptoms of middle age is the sensation that you’re being cast adrift from mainstream culture.” Then again, the baby boomers, and the more mature members of “Generation X”, are the most powerful of economic blocs. The over-50s spend far more on consumer goods than their younger counterparts, making them particularly valuable to advertisers – and perpetuating the idea of the middle-aged as a discernible demographic.

David Bainbridge, a vet and evolutionary zoologist, also weighs in on the topic in his latest book, Middle Age: A Natural History. Middle age is an exclusively human phenomenon, Bainbridge explains, and doesn’t exist elsewhere in the animal kingdom, where infirmity often follows hot on the heels of parenthood. It is, he argues, “largely the product of millions of years of human evolution… not a 20th-century cultural invention.” He urges readers to embrace middle age as “flux, not crisis” – which is probably what he said to his wife, when he bought himself a blue vintage Lotus soon after turning 40.

Read the entire article here.

Image courtesy of Practical Financial.

Doctors Die Too, But Differently

From the Wall Street Journal:

Years ago, Charlie, a highly respected orthopedist and a mentor of mine, found a lump in his stomach. It was diagnosed as pancreatic cancer by one of the best surgeons in the country, who had developed a procedure that could triple a patient’s five-year-survival odds—from 5% to 15%—albeit with a poor quality of life.

Charlie, 68 years old, was uninterested. He went home the next day, closed his practice and never set foot in a hospital again. He focused on spending time with his family. Several months later, he died at home. He got no chemotherapy, radiation or surgical treatment. Medicare didn’t spend much on him.

It’s not something that we like to talk about, but doctors die, too. What’s unusual about them is not how much treatment they get compared with most Americans, but how little. They know exactly what is going to happen, they know the choices, and they generally have access to any sort of medical care that they could want. But they tend to go serenely and gently.

Doctors don’t want to die any more than anyone else does. But they usually have talked about the limits of modern medicine with their families. They want to make sure that, when the time comes, no heroic measures are taken. During their last moments, they know, for instance, that they don’t want someone breaking their ribs by performing cardiopulmonary resuscitation (which is what happens when CPR is done right).

In a 2003 article, Joseph J. Gallo and others looked at what physicians want when it comes to end-of-life decisions. In a survey of 765 doctors, they found that 64% had created an advance directive—specifying what steps should and should not be taken to save their lives should they become incapacitated. That compares to only about 20% for the general public. (As one might expect, older doctors are more likely than younger doctors to have made “arrangements,” as shown in a study by Paula Lester and others.)

Why such a large gap between the decisions of doctors and patients? The case of CPR is instructive. A study by Susan Diem and others of how CPR is portrayed on TV found that it was successful in 75% of the cases and that 67% of the TV patients went home. In reality, a 2010 study of more than 95,000 cases of CPR found that only 8% of patients survived for more than one month. Of these, only about 3% could lead a mostly normal life.

Read the entire article here.

Image: The Triumph of Death, Pieter Bruegel the Elder, 1562. Museo del Prado in Madrid.

Culture, Language and Genes

In the early 19th century Noah Webster set about re-defining written English. His aim was to standardize the spoken word in the fledgling nation and to distinguish American from British usage. In his own words, “as an independent nation, our honor requires us to have a system of our own, in language as well as government.”

He used his dictionary, which still bears his name today, as a tool to cleanse English of its stubborn reliance on aristocratic pedantry and over-reliance on Latin and Greek. He “simplified” the spelling of numerous words that he believed were constructed with rules that were all too complicated. Thus, “colour” became “color” and “honour” switched to “honor”; “centre” became “center”, “behaviour” to “behavior”, “traveller” to “traveler”.

Webster offers a perfect example of why humanity seems so adept at fragmenting into diverse cultural groups that thrive through mutual incomprehension. In “Wired for Culture”, evolutionary biologist Mark Pagel offers a compelling explanation based on that small, yet very selfish, biological building block: the gene.

From the Wall Street Journal:

The island of Gaua, part of Vanuatu in the Pacific, is just 13 miles across, yet it has five distinct native languages. Papua New Guinea, an area only slightly bigger than Texas, has 800 languages, some spoken by just a few thousand people.

Evolutionary biologists have long gotten used to the idea that bodies are just genes’ ways of making more genes, survival machines that carry genes to the next generation. Think of a salmon struggling upstream just to expend its body (now expendable) in spawning. Dr. Pagel’s idea is that cultures are an extension of this: that the way we use culture is to promote the long-term interests of our genes.

It need not be this way. When human beings’ lives became dominated by culture, they could have adopted habits that did not lead to having more descendants. But on the whole we did not; we set about using culture to favor survival of those like us at the expense of other groups, using religion, warfare, cooperation and social allegiance. As Dr. Pagel comments: “Our genes’ gamble at handing over control to…ideas paid off handsomely” in the conquest of the world.

What this means, he argues, is that if our “cultures have promoted our genetic interests throughout our history,” then our “particular culture is not for us, but for our genes.”

We’re expendable. The allegiance we feel to one tribe—religious, sporting, political, linguistic, even racial—is a peculiar mixture of altruism toward the group and hostility to other groups. Throughout history, united groups have stood, while divided ones fell.

Language is the most striking exemplar of Dr. Pagel’s thesis. He calls language “one of the most powerful, dangerous and subversive traits that natural selection has ever devised.” He draws attention to the curious parallels between genetics and linguistics. Both are digital systems, in which words or base pairs are recombined to make an infinite possibility of messages. (Elsewhere I once noted the numerical similarity between Shakespeare’s vocabulary of about 20,000 distinct words and his genome of about 21,000 genes).

Dr. Pagel points out that language is a “technology for rewiring other people’s minds…without either of you having to perform surgery.” But natural selection was unlikely to favor such a technology if it helped just the speaker, or just the listener, at the expense of the other. Rather, he says that, just as the language of the genes promotes its own survival via a larger cooperative entity called the body, so language itself endures via the survival of the individual and the tribe.

Read the entire article here.

Image courtesy of PA / Daily Mail.

A Philosopher On Avoiding Death

Below we excerpt a brilliant essay by Alex Byrne summarizing his argument that our personal survival is grossly overvalued. But this should not give future teleportation engineers pause. Alex Byrne is a professor of philosophy at MIT.

From the Boston Review:

Star Trek–style teleportation may one day become a reality. You step into the transporter, which instantly scans your body and brain, vaporizing them in the process. The information is transmitted to Mars, where it is used by the receiving station to reconstitute your body and brain exactly as they were on Earth. You then step out of the receiving station, slightly dizzy, but pleased to arrive on Mars in a few minutes, as opposed to the year it takes by old-fashioned spacecraft.

But wait. Do you really step out of the receiving station on Mars? Someone just like you steps out, someone who apparently remembers stepping into the transporter on Earth a few minutes before. But perhaps this person is merely your replica—a kind of clone or copy. That would not make this person you: in Las Vegas there is a replica of the Eiffel Tower, but the Eiffel Tower is in Paris, not in Las Vegas. If the Eiffel Tower were vaporized and a replica instantly erected in Las Vegas, the Eiffel Tower would not have been transported to Las Vegas. It would have ceased to exist. And if teleportation were like that, stepping into the transporter would essentially be a covert way of committing suicide. Troubled by these thoughts, you now realize that “you” have been commuting back and forth to Mars for years . . .

So which is it? You are preoccupied with a question about your survival: Do you survive teleportation to Mars? A lot hangs on the question, and it is not obvious how to answer it. Teleportation is just science fiction, of course; does the urgent fictional question have a counterpart in reality? Indeed it does: Do you, or could you, survive death?

Teeming hordes of humanity adhere to religious doctrines that promise survival after death: perhaps bodily resurrection at the Day of Judgment, reincarnation, or immaterial immortality. For these people, death is not the end.

Some of a more secular persuasion do not disagree. The body of the baseball great Ted Williams lies in a container cooled by liquid nitrogen to -321 degrees Fahrenheit, awaiting the Great Thawing, when he will rise to sign sports memorabilia again. (Williams’s prospects are somewhat compromised because his head has apparently been preserved separately.) For the futurist Ray Kurzweil, hope lies in the possibility that he will be uploaded to new and shiny hardware—as pictures are transferred to Facebook’s servers—leaving his outmoded biological container behind.

Isn’t all this a pipe dream? Why isn’t “uploading” merely a way of producing a perfect Kurzweil-impersonator, rather than the real thing? Cryogenic storage might help if I am still alive when frozen, but what good is it after I am dead? And is the religious line any more plausible? “Earth to earth, ashes to ashes, dust to dust” hardly sounds like the dawn of a new day. Where is—as the Book of Common Prayer has it—the “sure and certain hope of the Resurrection to eternal life”? If a forest fire consumes a house and the luckless family hamster, that’s the end of them, presumably. Why are we any different?

Philosophers have had a good deal of interest to say about these issues, under the unexciting rubric of “personal identity.” Let us begin our tour of some highlights with a more general topic: the survival, or “persistence,” of objects over time.

Physical objects (including plants and animals) typically come into existence at some time, and cease to exist at a later time, or so we normally think. For example, a cottage might come into existence when enough beams and bricks are assembled, and cease to exist a century later, when it is demolished to make room for a McMansion. A mighty oak tree began life as a tiny green shoot, or perhaps an acorn, and will end its existence when it is sawn into planks.

The cottage and the oak survive a variety of vicissitudes throughout their careers. The house survived Hurricane Irene, say. That is, the house existed before Irene and also existed after Irene. We can put this in terms of “identity”: the house existed before Irene and something existed after Irene that was identical to the house.

Read the entire essay here.

A Very, Like, Interestaaaaaaang Linguistic Study?

Uptalk? Vocal fry? Linguistic curiosities enter the mainstream courtesy of trendsetting young women aged 18-25 and Australians.

From the New York Times:

From Valley Girls to the Kardashians, young women have long been mocked for the way they talk.

Whether it be uptalk (pronouncing statements as if they were questions? Like this?), creating slang words like “bitchin’” and “ridic,” or the incessant use of “like” as a conversation filler, vocal trends associated with young women are often seen as markers of immaturity or even stupidity.

Right?

But linguists — many of whom once promoted theories consistent with that attitude — now say such thinking is outmoded. Girls and women in their teens and 20s deserve credit for pioneering vocal trends and popular slang, they say, adding that young women use these embellishments in much more sophisticated ways than people tend to realize.

“A lot of these really flamboyant things you hear are cute, and girls are supposed to be cute,” said Penny Eckert, a professor of linguistics at Stanford University. “But they’re not just using them because they’re girls. They’re using them to achieve some kind of interactional and stylistic end.”

The latest linguistic curiosity to emerge from the petri dish of girl culture gained a burst of public recognition in December, when researchers from Long Island University published a paper about it in The Journal of Voice. Working with what they acknowledged was a very small sample — recorded speech from 34 women ages 18 to 25 — the professors said they had found evidence of a new trend among female college students: a guttural fluttering of the vocal cords they called “vocal fry.”

A classic example of vocal fry, best described as a raspy or croaking sound injected (usually) at the end of a sentence, can be heard when Mae West says, “Why don’t you come up sometime and see me,” or, more recently on television, when Maya Rudolph mimics Maya Angelou on “Saturday Night Live.”

Not surprisingly, gadflies in cyberspace were quick to pounce on the study — or, more specifically, on the girls and women who are frying their words. “Are they trying to sound like Kesha or Britney Spears?” teased The Huffington Post, naming two pop stars who employ vocal fry while singing, although the study made no mention of them. “Very interesteeeaaaaaaaaang,” said Gawker.com, mocking the lazy, drawn-out affect.

Do not scoff, says Nassima Abdelli-Beruh, a speech scientist at Long Island University and an author of the study. “They use this as a tool to convey something,” she said. “You quickly realize that for them, it is a cue.”

Other linguists not involved in the research also cautioned against forming negative judgments.

“If women do something like uptalk or vocal fry, it’s immediately interpreted as insecure, emotional or even stupid,” said Carmen Fought, a professor of linguistics at Pitzer College in Claremont, Calif. “The truth is this: Young women take linguistic features and use them as power tools for building relationships.”

The idea that young women serve as incubators of vocal trends for the culture at large has longstanding roots in linguistics. As Paris is to fashion, the thinking goes, so are young women to linguistic innovation.

Read the entire article here.

Image courtesy of Paul Hoppe, New York Times.
