Night Owl? You Are Evil

New research — probably conducted by a group of early-risers — shows that people who prefer to stay up late, and rise late, are more likely to be narcissistic, insensitive, manipulative and psychopathic.

That said, previous research has suggested that night owls are generally more intelligent and wealthier than their early-rising, but nicer, cousins.

From the Telegraph:

Psychologists have found that people who are often described as “night owls” display more signs of narcissism, Machiavellianism and psychopathic tendencies than those who are “morning larks”.

The scientists suggest the reason for these traits, known as the Dark Triad, being more prevalent in those who do better at night may be linked to our evolutionary past.

They claim that the hours of darkness may have helped to conceal those who adopted a “cheaters strategy” while living in groups.

Some social animals will use the cover of darkness to steal females away from more dominant males. This behaviour was also recently spotted in rhinos in Africa.

Dr Peter Jonason, a psychologist at the University of Western Sydney, said: “It could be adaptively effective for anyone pursuing a fast life strategy like that embodied in the Dark Triad to occupy and exploit a lowlight environment where others are sleeping and have diminished cognitive functioning.

“Such features of the night may facilitate the casual sex, mate-poaching, and risk-taking the Dark Triad traits are linked to.

“In short, those high on the Dark Triad traits, like many other predators such as lions, African hunting dogs and scorpions, are creatures of the night.”

Dr Jonason and his colleagues, whose research is published in the journal Personality and Individual Differences, surveyed 263 students, asking them to complete a series of standard personality tests designed to measure their scores on the Dark Triad traits.

They were rated on scales for narcissism, the tendency to seek admiration and special treatment; Machiavellianism, a desire to manipulate others; and psychopathy, an inclination towards callousness and insensitivity.

To test each, they were asked to rate their agreement with statements like: “I have a natural talent for influencing people”, “I could beat a lie detector” and “people suffering from incurable diseases should have the choice of being put painlessly to death”.

The volunteers were also asked to complete a questionnaire about how alert they felt at different times of the day and how late they stayed up at night.

The study revealed that those with a darker personality score tended to say they functioned more effectively in the evening.

They also found that those who stayed up later tended to have a higher sense of entitlement and seemed to be more exploitative.

They could find no evidence, however, that the traits were linked to the participants’ gender, ruling out the possibility that the tendency to plot and act in the night time had its roots in sexual evolution.

Previous research has suggested that people who thrive at night tend also to be more intelligent.

Combined with the other darker personality traits, this could be a dangerous mix.

Read the entire article here.

Image: Portrait of Niccolò Machiavelli, by Santi di Tito. Courtesy of Wikipedia.

Carlos Danger and Other Pseudonyms

Your friendly editor at theDiagonal, also known as Salvador Gamble, is always game for some sardonic wit. So, we are very proud to point you to Slate’s online pseudonym generator. If, like New York mayoral candidate and ex-U.S. Congressman Anthony Weiner, you need a mysterious persona to protect your (lewd) stream of consciousness online, then this is the tool for you!

We used the generator to come up with online alter egos for a few of our favorite trending personalities:

– Chris Froome: Ronaldo Stealth

– Lance Armstrong: Ignacio Death

– Vladimir Putin: Ronaldo Kill

– Mitch McConnell: Inigo Peril

– Ben Bernanke: Pascual Menace

MondayMap: Feeding the Mississippi

The system of streams and tributaries that feeds the great Mississippi River is a complex interconnected web covering around half of the United States. A new mapping tool puts it all in one intricate chart.

From Slate:

A new online tool released by the Department of the Interior this week allows users to select any major stream and trace it up to its sources or down to its watershed. The above map, exported from the tool, highlights all the major tributaries that feed into the Mississippi River, illustrating the river’s huge catchment area of approximately 1.15 million square miles, or 37 percent of the land area of the continental U.S. Use the tool to see where the streams around you are getting their water (and pollution).

See a larger version of the map here.

Image: Map of the Mississippi river system. Courtesy of Nationalatlas.gov.

Warp Factor

To date, the fastest speed ever traveled by humans is just under 25,000 miles per hour. This milestone was reached by the reentry capsule from the Apollo 10 moon mission — reaching 24,961 mph as it hurtled through Earth’s upper atmosphere. Yet this pales in comparison to the speed of light, which clocks in at 186,282 miles per second in a vacuum. A quick visit to the calculator puts Apollo 10 at 6.93 miles per second, or 0.0037 percent of the speed of light!
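
For those who like to check the back-of-the-envelope math, here is a minimal Python sketch using only the figures quoted above; the constant names are our own.

# Apollo 10 versus the speed of light, using the numbers quoted above.
APOLLO_10_MPH = 24_961             # peak reentry speed, in miles per hour
LIGHT_MILES_PER_SECOND = 186_282   # speed of light in a vacuum, in miles per second

apollo_miles_per_second = APOLLO_10_MPH / 3600   # 3,600 seconds per hour
fraction_of_c = apollo_miles_per_second / LIGHT_MILES_PER_SECOND

print(f"Apollo 10: {apollo_miles_per_second:.2f} miles per second")
print(f"Which is {fraction_of_c:.4%} of the speed of light")
# Prints: Apollo 10: 6.93 miles per second
# Prints: Which is 0.0037% of the speed of light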

Despite our very pedestrian speeds, many dream of a future where humans might reach the stars, powered by some kind of “warp drive” (yes, Star Trek comes to mind). A handful of researchers at NASA are actively pondering this today, though our poor level of technology, combined with our limited understanding of the workings of the universe, suggests that an Alcubierre-like approach is still centuries away from our grasp.

From the New York Times:

Beyond the security gate at the Johnson Space Center’s 1960s-era campus here, inside a two-story glass and concrete building with winding corridors, there is a floating laboratory.

Harold G. White, a physicist and advanced propulsion engineer at NASA, beckoned toward a table full of equipment there on a recent afternoon: a laser, a camera, some small mirrors, a ring made of ceramic capacitors and a few other objects.

He and other NASA engineers have been designing and redesigning these instruments, with the goal of using them to slightly warp the trajectory of a photon, changing the distance it travels in a certain area, and then observing the change with a device called an interferometer. So sensitive is their measuring equipment that it was picking up myriad earthly vibrations, including people walking nearby. So they recently moved into this lab, which floats atop a system of underground pneumatic piers, freeing it from seismic disturbances.

The team is trying to determine whether faster-than-light travel — warp drive — might someday be possible.

Warp drive. Like on “Star Trek.”

“Space has been expanding since the Big Bang 13.7 billion years ago,” said Dr. White, 43, who runs the research project. “And we know that when you look at some of the cosmology models, there were early periods of the universe where there was explosive inflation, where two points would’ve went receding away from each other at very rapid speeds.”

“Nature can do it,” he said. “So the question is, can we do it?”

Einstein famously postulated that, as Dr. White put it, “thou shalt not exceed the speed of light,” essentially setting a galactic speed limit. But in 1994, a Mexican physicist, Miguel Alcubierre, theorized that faster-than-light speeds were possible in a way that did not contradict Einstein, though Dr. Alcubierre did not suggest anyone could actually construct the engine that could accomplish that.

His theory involved harnessing the expansion and contraction of space itself. Under Dr. Alcubierre’s hypothesis, a ship still couldn’t exceed light speed in a local region of space. But a theoretical propulsion system he sketched out manipulated space-time by generating a so-called “warp bubble” that would expand space on one side of a spacecraft and contract it on another.

“In this way, the spaceship will be pushed away from the Earth and pulled towards a distant star by space-time itself,” Dr. Alcubierre wrote. Dr. White has likened it to stepping onto a moving walkway at an airport.

But Dr. Alcubierre’s paper was purely theoretical, and suggested insurmountable hurdles. Among other things, it depended on large amounts of a little understood or observed type of “exotic matter” that violates typical physical laws.

Dr. White believes that advances he and others have made render warp speed less implausible. Among other things, he has redesigned the theoretical warp-traveling spacecraft — and in particular a ring around it that is key to its propulsion system — in a way that he believes will greatly reduce the energy requirements.

Read the entire article here.

Sounds of Extinction

Camera aficionados will find themselves lamenting the demise of the film advance. Now that the world has moved on from film to digital, you will no longer hear that distinctive mechanical sound as you wind on the film and hope the teeth on the spool engage the plastic of the film.

Hardcore computer buffs will no doubt miss the beep-beep-hiss sound of the 56K modem — that now seemingly ancient box that once connected us to… well, who knows what it actually connected us to at that speed.

Our favorite arcane sounds, soon to be relegated to the audio graveyard: the telephone handset slam, the click and carriage return of the typewriter, the whir of reel-to-reel tape, the crackle of the diamond stylus as it first hits an empty groove on a 33.

More sounds you may (or may not) miss below.

From Wired:

The forward march of technology has a drum beat. These days, it’s custom text-message alerts, or your friend saying “OK, Glass” every five minutes like a tech-drunk parrot. And meanwhile, some of the most beloved sounds are falling out of the marching band.

The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.

Let’s celebrate the dying cries of technology’s past. The following sounds are either gone forever, or definitely on their way out. Bow your heads in silence and bid them a fond farewell.

The Telephone Slam

Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone’s internal bell. That’s how you knew you were really pissed — when you slammed the phone so hard, it rang.

There are other sounds we’ll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he’s still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.

Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.

The CRT Television

The only TVs left that still use cathode-ray tubes are stashed in the most depressing places — the waiting rooms of hospitals, used car dealerships, and the dusty guest bedroom at your grandparents’ house. But before we all fell prey to the magical resolution of zeros and ones, boxy CRT televisions warmed (literally) the living rooms of every home in America. The sounds they made when you turned them on warmed our hearts, too — the gentle whoosh of the degaussing coil as the set was brought to life with the heavy tug of a pull-switch, or the satisfying mechanical clunk of a power button. As the tube warmed up, you’d see the visuals slowly brighten on the screen, giving you ample time to settle into the couch to enjoy the latest episode of Seinfeld.

Read the entire article here.

Image courtesy of Wired.

Dolphins Use Names

From Wired:

For decades, scientists have been fascinated by dolphins’ so-called signature whistles: distinctive vocal patterns learned early and used throughout life. The purpose of these whistles is a matter of debate, but new research shows that dolphins respond selectively to recorded versions of their personal signatures, much as a person might react to someone calling their name.

Combined with earlier findings, the results “present the first case of naming in mammals, providing a clear parallel between dolphin and human communication,” said biologist Stephanie King of Scotland’s University of St. Andrews, an author of the new study.

Earlier research by Janik and King showed that bottlenose dolphins call each other’s signature whistles while temporarily restrained in nets, but questions had remained over how dolphins used them at sea, in their everyday lives. King’s new experiment, conducted with fellow St. Andrews biologist Vincent Janik and described July 22 in Proceedings of the National Academy of Sciences, involved wild bottlenose groups off Scotland’s eastern coast.

Janik and King recorded their signature whistles, then broadcast computer-synthesized versions through a hydrophone. They also played back recordings of unfamiliar signature whistles. The dolphins ignored signatures belonging to other individuals in their groups, as well as unfamiliar whistles.

To their own signatures, however, they usually whistled back, suggesting that dolphins may use the signatures to address one another.

The new findings are “clearly a landmark,” said biologist Shane Gero of Dalhousie University, whose own research suggests that sperm whales have names. “I think this study puts to bed the argument of whether signature whistles are truly signatures.”

Gero is especially interested in the different ways that dolphins responded to hearing their signature called. Sometimes they simply repeated their signature — a bit, perhaps, like hearing your name called and shouting back, “Yes, I’m here!” Some dolphins, however, followed their signatures with a long string of other whistles.

“It opens the door to syntax, to how and when it’s ‘appropriate’ to address one another,” said Gero, who wonders if the different response types might be related to social roles or status. Referring to each other by name suggests that dolphins may recall past experiences with other individual dolphins, Gero said.

“The concept of ‘relationship’ as we know it may be more relevant than just a sequence of independent selfish interactions,” said Gero. “We likely underestimate the complexity of their communication system, cognitive abilities, and the depth of meaning in their actions.”

King and Janik have also observed that dolphins often make their signature whistles when groups encounter one another, as if to announce exactly who is present.

To Peter Tyack, a Woods Hole Oceanographic Institution biologist who has previously studied dolphin signature whistle-copying, the new findings support the possibility of dolphin names, but more experiments would help illuminate the meanings they attach to their signatures.

Read the entire article here.

Image: Bottlenose dolphin with young. Courtesy of Wikipedia.

Portrait of a Royal Baby

Royal-watchers from all corners of the globe, especially the British one, have been agog over the arrival of the latest royal earlier this week. The overblown media circus got us thinking about baby pictures. Will the Prince of Cambridge be the first heir to the throne to have his portrait enshrined via Instagram? Or, as is more likely, will his royal essence be captured in oil on canvas, as with the 35 or more generations that preceded him?

From Jonathan Jones over at the Guardian:

Royal children have been portrayed by some of the greatest artists down the ages, preserving images of childhood that are still touching today. Will this royal baby fare better than its mother in the portraits that are sure to come? Are there any artists out there who can go head to head with the greats of royal child portraiture?

Agnolo Bronzino has to be first among those greats, because he painted small children in a way that set the tone for many royal images to come. Some might say the Medici rulers of Florence, for whom he worked, were not properly royal – but they definitely acted like a royal family, and the artists who worked for them set the tone of court art all over Europe. In Giovanni de’ Medici As a Child, Bronzino expresses the joy of children and the pleasure of parents in a way that was revolutionary in the 16th century. Chubby-cheeked and jolly, Giovanni clutches a pet goldfinch. In paintings of the Holy Family you know that if Jesus has a pet bird it probably has some dire symbolic meaning. But this pet is just a pet. Giovanni is just a happy kid. Actually, a happy baby: he was about 18 months old.

Hans Holbein took more care to clarify the regal uniqueness of his subject when he portrayed Edward, only son of King Henry VIII of England, in about 1538. Holbein, too, captures the face of early childhood brilliantly. But how old is Edward meant to be? In fact, he was two. Holbein expresses his infancy – his baby face, his baby hands – while having him stand holding out a majestic hand, dressed like his father, next to an inscription that praises the paternal glory of Henry. Who knows, perhaps he really stood like that for a second or two, long enough for Holbein to take a mental photograph.

Diego Velázquez recorded a more nuanced, even anxious, view of royal childhood in his paintings of the royal princesses of 17th-century Spain. In the greatest of them, Las Meninas, the five-year-old Infanta Margarita Teresa stands looking at us, accompanied by her ladies in waiting (meninas) and two dwarves, while Velázquez works on a portrait of her parents, the king and queen. The infanta is beautiful and confident, attended by her own micro-court – but as she looks out of the painting at her parents (who are standing where the spectator of the painting stands) she is performing. And she is under pressure to look and act like a little princess.

The 19th-century painter Stephen Poyntz Denning may not be in the league of these masters. In fact, let’s be blunt: he definitely isn’t. But his painting Queen Victoria, Aged 4 is a fascinating curiosity. Like the Infanta, this royal princess is not allowed to be childlike. She is dressed in an oppressively formal way, in dark clothes that anticipate her mature image – a childhood lost to royal destiny.

Read the entire article here.

Image: Princess Victoria, Aged Four, by Stephen Poyntz Denning (c. 1787–1864). Courtesy of Wikimedia.

Dopamine on the Mind

Dopamine is one of the brain’s key signaling chemicals. And, because of its central role in the risk-reward circuitry of the brain, it often gets much attention — both in neuroscience research and in the public consciousness.

From Slate:

In a brain that people love to describe as “awash with chemicals,” one chemical always seems to stand out. Dopamine: the molecule behind all our most sinful behaviors and secret cravings. Dopamine is love. Dopamine is lust. Dopamine is adultery. Dopamine is motivation. Dopamine is attention. Dopamine is feminism. Dopamine is addiction.

My, dopamine’s been busy.

Dopamine is the one neurotransmitter that everyone seems to know about. Vaughan Bell once called it the Kim Kardashian of molecules, but I don’t think that’s fair to dopamine. Suffice it to say, dopamine’s big. And every week or so, you’ll see a new article come out all about dopamine.

So is dopamine your cupcake addiction? Your gambling? Your alcoholism? Your sex life? The reality is dopamine has something to do with all of these. But it is none of them. Dopamine is a chemical in your body. That’s all. But that doesn’t make it simple.

What is dopamine? Dopamine is one of the chemical signals that pass information from one neuron to the next in the tiny spaces between them. When it is released from the first neuron, it floats into the space (the synapse) between the two neurons, and it bumps against receptors for it on the other side that then send a signal down the receiving neuron. That sounds very simple, but when you scale it up from a single pair of neurons to the vast networks in your brain, it quickly becomes complex. The effects of dopamine release depend on where it’s coming from, where the receiving neurons are going and what type of neurons they are, what receptors are binding the dopamine (there are five known types), and what role both the releasing and receiving neurons are playing.

And dopamine is busy! It’s involved in many different important pathways. But when most people talk about dopamine, particularly when they talk about motivation, addiction, attention, or lust, they are talking about the dopamine pathway known as the mesolimbic pathway, which starts with cells in the ventral tegmental area, buried deep in the middle of the brain, which send their projections out to places like the nucleus accumbens and the cortex. Increases in dopamine release in the nucleus accumbens occur in response to sex, drugs, and rock and roll. And dopamine signaling in this area is changed during the course of drug addiction.

All abused drugs, from alcohol to cocaine to heroin, increase dopamine in this area in one way or another, and many people like to describe a spike in dopamine as “motivation” or “pleasure.” But that’s not quite it. Really, dopamine is signaling feedback for predicted rewards. If you, say, have learned to associate a cue (like a crack pipe) with a hit of crack, you will start getting increases in dopamine in the nucleus accumbens in response to the sight of the pipe, as your brain predicts the reward. But if you then don’t get your hit, well, then dopamine can decrease, and that’s not a good feeling.

So you’d think that maybe dopamine predicts reward. But again, it gets more complex. For example, dopamine can increase in the nucleus accumbens in people with post-traumatic stress disorder when they are experiencing heightened vigilance and paranoia. So you might say, in this brain area at least, dopamine isn’t addiction or reward or fear. Instead, it’s what we call salience. Salience is more than attention: It’s a sign of something that needs to be paid attention to, something that stands out. This may be part of the mesolimbic role in attention deficit hyperactivity disorder and also a part of its role in addiction.

But dopamine itself? It’s not salience. It has far more roles in the brain to play. For example, dopamine plays a big role in starting movement, and the destruction of dopamine neurons in an area of the brain called the substantia nigra is what produces the symptoms of Parkinson’s disease. Dopamine also plays an important role as a hormone, inhibiting prolactin to stop the release of breast milk. Back in the mesolimbic pathway, dopamine can play a role in psychosis, and many antipsychotics for treatment of schizophrenia target dopamine. Dopamine is involved in the frontal cortex in executive functions like attention. In the rest of the body, dopamine is involved in nausea, in kidney function, and in heart function.

With all of these wonderful, interesting things that dopamine does, it gets my goat to see dopamine simplified to things like “attention” or “addiction.” After all, it’s so easy to say “dopamine is X” and call it a day. It’s comforting. You feel like you know the truth at some fundamental biological level, and that’s that. And there are always enough studies out there showing the role of dopamine in X to leave you convinced. But simplifying dopamine, or any chemical in the brain, down to a single action or result gives people a false picture of what it is and what it does. If you think that dopamine is motivation, then more must be better, right? Not necessarily! Because if dopamine is also “pleasure” or “high,” then too much is far too much of a good thing. If you think of dopamine as only being about pleasure or only being about attention, you’ll end up with a false idea of some of the problems involving dopamine, like drug addiction or attention deficit hyperactivity disorder, and you’ll end up with false ideas of how to fix them.

Read the entire article here.

Image: 3D model of dopamine. Courtesy of Wikipedia.

Gnarly Names

By most accounts, the internet is home to around 650 million websites, of which around 200 million are active. About 8,000 new websites go live every hour of every day.

These are big numbers, and the continued phenomenal growth means that it’s increasingly difficult to find a unique and unused domain name (think website). So web entrepreneurs are getting creative with website and company names, with varying degrees of success.

From the Wall Street Journal:

The New York cousins who started a digital sing-along storybook business have settled on the name Mibblio.

The Australian founder of a startup connecting big companies to big-data scientists has dubbed his service Kaggle.

The former toy executive behind a two-year-old mobile screen-sharing platform is going with the name Shodogg.

And the Missourian who founded a website giving customers access to local merchants and service providers? He thinks it should be called Zaarly.

Quirky names for startups first surfaced about 20 years ago in Silicon Valley, with the birth of search engines such as Yahoo, which stands for “Yet Another Hierarchical Officious Oracle,” and Google, a misspelling of googol, the almost unfathomably high number represented by a 1 followed by 100 zeroes.

By the early 2000s, the trend had spread to startups outside the Valley, including the Vancouver-based photo-sharing site Flickr and New York-based blogging platform Tumblr, to name just two.

The current crop of startups boasts even wackier spellings. The reason, they say, is that practically every new business—be it a popsicle maker or a furniture retailer—needs its own website. With about 252 million domain names currently registered across the Internet, the short, recognizable dot-com Web addresses, or URLs, have long been taken.

The only practical solution, some entrepreneurs say, is to invent words, like Mibblio, Kaggle, Shodogg and Zaarly, to avoid paying as much as $2 million for a concise, no-nonsense dot-com URL.

The rights to Investing.com, for example, sold for about $2.5 million last year.

Choosing a name that’s a made-up word also helps entrepreneurs steer clear of trademark entanglements.

The challenge is to come up with something that conveys meaning, is memorable, and isn’t just alphabet soup. Most founders don’t have the budget to hire naming advisers.

Founders tend to favor short names of five to seven letters, because they worry that potential customers might forget longer ones, according to Steve Manning, founder of Igor, a name-consulting company.

Linguistically speaking, there are only a few methods of forming new words. They include misspelling, compounding, blending and scrambling.

At Mibblio, the naming process was “the length of a human gestation period,” says the company’s 28-year-old co-founder David Leiberman, “but only more painful,” adds fellow co-founder Sammy Rubin, 35.

The two men made several trips back to the drawing board; early contenders included Babethoven, Yipsqueak and Canarytales, but none was a perfect fit. One they both loved, Squeakbox, was taken.

Read the entire article here.
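
As an aside, the four word-formation methods the article mentions (misspelling, compounding, blending and scrambling) are easy to play with. Below is a purely illustrative Python sketch of the last two; the seed words and helper functions are our own invention, not anything the WSJ or the naming firms describe.

import random

random.seed(7)  # fixed seed, so the whimsy is reproducible

def blend(a, b):
    # Glue the front half of one word onto the back half of another.
    return a[: len(a) // 2 + 1] + b[len(b) // 2 :]

def scramble(word):
    # Shuffle the interior letters, keeping the first and last in place.
    middle = list(word[1:-1])
    random.shuffle(middle)
    return word[0] + "".join(middle) + word[-1]

seeds = ["mumble", "biblio", "gaggle", "oracle", "shore", "doggy"]
candidates = [blend(a, b) for a in seeds for b in seeds if a != b]
candidates += [scramble(w) for w in seeds]

# Print five made-up candidate names.
for name in random.sample(candidates, 5):
    print(name.capitalize())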

Rewriting Memories

Important new research suggests that traumatic memories can be rewritten. Timing is critical.

From Technology Review:

It was a Saturday night at the New York Psychoanalytic Institute, and the second-floor auditorium held an odd mix of gray-haired, cerebral Upper East Side types and young, scruffy downtown grad students in black denim. Up on the stage, neuroscientist Daniela Schiller, a riveting figure with her long, straight hair and impossibly erect posture, paused briefly from what she was doing to deliver a mini-lecture about memory.

She explained how recent research, including her own, has shown that memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.

And then Schiller went back to what she had been doing, which was providing a slamming, rhythmic beat on drums and backup vocals for the Amygdaloids, a rock band composed of New York City neuroscientists. During their performance at the institute’s second annual “Heavy Mental Variety Show,” the band blasted out a selection of its greatest hits, including songs about cognition (“Theory of My Mind”), memory (“A Trace”), and psychopathology (“Brainstorm”).

“Just give me a pill,” Schiller crooned at one point, during the chorus of a song called “Memory Pill.” “Wash away my memories …”

The irony is that if research by Schiller and others holds up, you may not even need a pill to strip a memory of its power to frighten or oppress you.

Schiller, 40, has been in the vanguard of a dramatic reassessment of how human memory works at the most fundamental level. Her current lab group at Mount Sinai School of Medicine, her former colleagues at New York University, and a growing army of like-minded researchers have marshaled a pile of data to argue that we can alter the emotional impact of a memory by adding new information to it or recalling it in a different context. This hypothesis challenges 100 years of neuroscience and overturns cultural touchstones from Marcel Proust to best-selling memoirs. It changes how we think about the permanence of memory and identity, and it suggests radical nonpharmacological approaches to treating pathologies like post-traumatic stress disorder, other fear-based anxiety disorders, and even addictive behaviors.

In a landmark 2010 paper in Nature, Schiller (then a postdoc at New York University) and her NYU colleagues, including Joseph E. LeDoux and Elizabeth A. Phelps, published the results of human experiments indicating that memories are reshaped and rewritten every time we recall an event. And, the research suggested, if mitigating information about a traumatic or unhappy event is introduced within a narrow window of opportunity after its recall—during the few hours it takes for the brain to rebuild the memory in the biological brick and mortar of molecules—the emotional experience of the memory can essentially be rewritten.

“When you affect emotional memory, you don’t affect the content,” Schiller explains. “You still remember perfectly. You just don’t have the emotional memory.”

Fear training

The idea that memories are constantly being rewritten is not entirely new. Experimental evidence to this effect dates back at least to the 1960s. But mainstream researchers tended to ignore the findings for decades because they contradicted the prevailing scientific theory about how memory works.

That view began to dominate the science of memory at the beginning of the 20th century. In 1900, two German scientists, Georg Elias Müller and Alfons Pilzecker, conducted a series of human experiments at the University of Göttingen. Their results suggested that memories were fragile at the moment of formation but were strengthened, or consolidated, over time; once consolidated, these memories remained essentially static, permanently stored in the brain like a file in a cabinet from which they could be retrieved when the urge arose.

It took decades of painstaking research for neuroscientists to tease apart a basic mechanism of memory to explain how consolidation occurred at the level of neurons and proteins: an experience entered the neural landscape of the brain through the senses, was initially “encoded” in a central brain apparatus known as the hippocampus, and then migrated—by means of biochemical and electrical signals—to other precincts of the brain for storage. A famous chapter in this story was the case of “H.M.,” a young man whose hippocampus was removed during surgery in 1953 to treat debilitating epileptic seizures; although physiologically healthy for the remainder of his life (he died in 2008), H.M. was never again able to create new long-term memories, other than to learn new motor skills.

Subsequent research also made clear that there is no single thing called memory but, rather, different types of memory that achieve different biological purposes using different neural pathways. “Episodic” memory refers to the recollection of specific past events; “procedural” memory refers to the ability to remember specific motor skills like riding a bicycle or throwing a ball; fear memory, a particularly powerful form of emotional memory, refers to the immediate sense of distress that comes from recalling a physically or emotionally dangerous experience. Whatever the memory, however, the theory of consolidation argued that it was an unchanging neural trace of an earlier event, fixed in long-term storage. Whenever you retrieved the memory, whether it was triggered by an unpleasant emotional association or by the seductive taste of a madeleine, you essentially fetched a timeless narrative of an earlier event. Humans, in this view, were the sum total of their fixed memories. As recently as 2000 in Science, in a review article titled “Memory—A Century of Consolidation,” James L. McGaugh, a leading neuroscientist at the University of California, Irvine, celebrated the consolidation hypothesis for the way that it “still guides” fundamental research into the biological process of long-term memory.

As it turns out, Proust wasn’t much of a neuroscientist, and consolidation theory couldn’t explain everything about memory. This became apparent during decades of research into what is known as fear training.

Schiller gave me a crash course in fear training one afternoon in her Mount Sinai lab. One of her postdocs, Dorothee Bentz, strapped an electrode onto my right wrist in order to deliver a mild but annoying shock. She also attached sensors to several fingers on my left hand to record my galvanic skin response, a measure of physiological arousal and fear. Then I watched a series of images—blue and purple cylinders—flash by on a computer screen. It quickly became apparent that the blue cylinders often (but not always) preceded a shock, and my skin conductivity readings reflected what I’d learned. Every time I saw a blue cylinder, I became anxious in anticipation of a shock. The “learning” took no more than a couple of minutes, and Schiller pronounced my little bumps of anticipatory anxiety, charted in real time on a nearby monitor, a classic response of fear training. “It’s exactly the same as in the rats,” she said.

In the 1960s and 1970s, several research groups used this kind of fear memory in rats to detect cracks in the theory of memory consolidation. In 1968, for example, Donald J. Lewis of Rutgers University led a study showing that you could make the rats lose the fear associated with a memory if you gave them a strong electroconvulsive shock right after they were induced to retrieve that memory; the shock produced an amnesia about the previously learned fear. Giving a shock to animals that had not retrieved the memory, in contrast, did not cause amnesia. In other words, a strong shock timed to occur immediately after a memory was retrieved seemed to have a unique capacity to disrupt the memory itself and allow it to be reconsolidated in a new way. Follow-up work in the 1980s confirmed some of these observations, but they lay so far outside mainstream thinking that they barely received notice.

Moment of silence

At the time, Schiller was oblivious to these developments. A self-described skateboarding “science geek,” she grew up in Rishon LeZion, Israel’s fourth-largest city, on the coastal plain a few miles southeast of Tel Aviv. She was the youngest of four children of a mother from Morocco and a “culturally Polish” father from Ukraine—“a typical Israeli melting pot,” she says. As a tall, fair-skinned teenager with European features, she recalls feeling estranged from other neighborhood kids because she looked so German.

Schiller remembers exactly when her curiosity about the nature of human memory began. She was in the sixth grade, and it was the annual Holocaust Memorial Day in Israel. For a school project, she asked her father about his memories as a Holocaust survivor, and he shrugged off her questions. She was especially puzzled by her father’s behavior at 11 a.m., when a simultaneous eruption of sirens throughout Israel signals the start of a national moment of silence. While everyone else in the country stood up to honor the victims of genocide, he stubbornly remained seated at the kitchen table as the sirens blared, drinking his coffee and reading the newspaper.

“The Germans did something to my dad, but I don’t know what because he never talks about it,” Schiller told a packed audience in 2010 at The Moth, a storytelling event.

During her compulsory service in the Israeli army, she organized scientific and educational conferences, which led to studies in psychology and philosophy at Tel Aviv University; during that same period, she procured a set of drums and formed her own Hebrew rock band, the Rebellion Movement. Schiller went on to receive a PhD in psychobiology from Tel Aviv University in 2004. That same year, she recalls, she saw the movie Eternal Sunshine of the Spotless Mind, in which a young man undergoes treatment with a drug that erases all memories of a former girlfriend and their painful breakup. Schiller heard (mistakenly, it turns out) that the premise of the movie had been based on research conducted by Joe LeDoux, and she eventually applied to NYU for a postdoctoral fellowship.

In science as in memory, timing is everything. Schiller arrived in New York just in time for the second coming of memory reconsolidation in neuroscience.

Altering the story

The table had been set for Schiller’s work on memory modification in 2000, when Karim Nader, a postdoc in LeDoux’s lab, suggested an experiment testing the effect of a drug on the formation of fear memories in rats. LeDoux told Nader in no uncertain terms that he thought the idea was a waste of time and money. Nader did the experiment anyway. It ended up getting published in Nature and sparked a burst of renewed scientific interest in memory reconsolidation (see “Manipulating Memory,” May/June 2009).

The rats had undergone classic fear training—in an unpleasant twist on Pavlovian conditioning, they had learned to associate an auditory tone with an electric shock. But right after the animals retrieved the fearsome memory (the researchers knew they had done so because they froze when they heard the tone), Nader injected a drug that blocked protein synthesis directly into their amygdala, the part of the brain where fear memories are believed to be stored. Surprisingly, that appeared to pave over the fearful association. The rats no longer froze in fear of the shock when they heard the sound cue.

Decades of research had established that long-term memory consolidation requires the synthesis of proteins in the brain’s memory pathways, but no one knew that protein synthesis was required after the retrieval of a memory as well—which implied that the memory was being consolidated then, too. Nader’s experiments also showed that blocking protein synthesis prevented the animals from recalling the fearsome memory only if they received the drug at the right time, shortly after they were reminded of the fearsome event. If Nader waited six hours before giving the drug, it had no effect and the original memory remained intact. This was a big biochemical clue that at least some forms of memories essentially had to be neurally rewritten every time they were recalled.

When Schiller arrived at NYU in 2005, she was asked by Elizabeth Phelps, who was spearheading memory research in humans, to extend Nader’s findings and test the potential of a drug to block fear memories. The drug used in the rodent experiment was much too toxic for human use, but a class of antianxiety drugs known as beta-adrenergic antagonists (or, in common parlance, “beta blockers”) had potential; among these drugs was propranolol, which had previously been approved by the FDA for the treatment of panic attacks and stage fright. Schiller immediately set out to test the effect of propranolol on memory in humans, but she never actually performed the experiment because of prolonged delays in getting institutional approval for what was then a pioneering form of human experimentation. “It took four years to get approval,” she recalls, “and then two months later, they took away the approval again. My entire postdoc was spent waiting for this experiment to be approved.” (“It still hasn’t been approved!” she adds.)

While waiting for the approval that never came, Schiller began to work on a side project that turned out to be even more interesting. It grew out of an offhand conversation with a colleague about some anomalous data described at a meeting of LeDoux’s lab: a group of rats “didn’t behave as they were supposed to” in a fear experiment, Schiller says.

The data suggested that a fear memory could be disrupted in animals even without the use of a drug that blocked protein synthesis. Schiller used the kernel of this idea to design a set of fear experiments in humans, while Marie-H. Monfils, a member of the LeDoux lab, simultaneously pursued a parallel line of experimentation in rats. In the human experiments, volunteers were shown a blue square on a computer screen and then given a shock. Once the blue square was associated with an impending shock, the fear memory was in place. Schiller went on to show that if she repeated the sequence that produced the fear memory the following day but broke the association within a narrow window of time—that is, showed the blue square without delivering the shock—this new information was incorporated into the memory.

Here, too, the timing was crucial. If the blue square that wasn’t followed by a shock was shown within 10 minutes of the initial memory recall, the human subjects reconsolidated the memory without fear. If it happened six hours later, the initial fear memory persisted. Put another way, intervening during the brief window when the brain was rewriting its memory offered a chance to revise the initial memory itself while diminishing the emotion (fear) that came with it. By mastering the timing, the NYU group had essentially created a scenario in which humans could rewrite a fearsome memory and give it an unfrightening ending. And this new ending was robust: when Schiller and her colleagues called their subjects back into the lab a year later, they were able to show that the fear associated with the memory was still blocked.

The study, published in Nature in 2010, made clear that reconsolidation of memory didn’t occur only in rats.

Read the entire article here.

Hyperloop: Not Your Father’s High-Speed Rail

Europe and Japan have been leading the way with their 200-300 mph bullet trains for several decades. While the United States still tries to play catch-up, one serial entrepreneur has other ideas. For Elon Musk, the bullet train is so, well, yesterday. He has in mind a ground-based system that would hurtle people around at speeds of 4,000 mph. Welcome to Hyperloop.

From Slate:

High-speed rail is so 20th century. Well, perhaps not in the United States, where we still haven’t gotten around to building any true bullet trains. After 30 years of dithering, California is finally working on one that would get people from Los Angeles to San Francisco in a little under 2 1/2 hours, but it could cost on the order of $100 billion and won’t be ready until at least 2028.

Enter Tesla and SpaceX visionary Elon Musk with one of the craziest-sounding ideas in transportation history. For a while now, Musk has been hinting at an idea he calls the Hyperloop—a ground-based transportation technology that would get people from Los Angeles to San Francisco in under half an hour, for less than 1/10 the cost of building the high-speed rail line. Oh, and this 800-mph system would be self-powered, immune to weather, and would never crash.

What is the Hyperloop? So far Musk hasn’t gotten very specific, though he once called it “a cross between a Concorde and a railgun and an air hockey table.” But we’ll soon find out more. On Monday, Musk tweeted that he will publish an “alpha design” for the Hyperloop by Aug. 12. Responding to questions on Twitter, he indicated that the plans would be open-source, and that he would consider a partnership with someone who shared his vision. Perhaps the best clue came when he responded to an engineer named John Gardi, who published a diagram of his best guess as to how the Hyperloop might work:

It sounds fanciful, and maybe it is. But Musk is not the only one working on ultra-fast land-based transportation systems. And if anyone can turn an idea like this into reality, it might just be the man who has spent the past decade revolutionizing electric cars and space transport. Don’t be surprised if the biggest obstacles to the Hyperloop turn out to be bureaucratic rather than technological. After all, we’ve known how to build bullet trains for half a century, and look how far that has gotten us. Still, a nation can dream—and as long as we’re dreaming, why not dream about something way cooler than what Japan and China are already working on?

Read the entire article here.

Highbrow or Lowbrow?

Do you prefer the Beatles to Beethoven? Do you prefer Rembrandt over the Sunday comics or the latest Marvel? Do you read Patterson or Proust? Gary Gutting, a professor of philosophy, argues that the distinguishing value of aesthetics must drive us to appreciate fine art over popular work. So, you had better dust off those volumes of Shakespeare.

From the New York Times:

Our democratic society is uneasy with the idea that traditional “high culture” (symphonies, Shakespeare, Picasso) is superior to popular culture (rap music, TV dramas, Norman Rockwell). Our media often make a point of blurring the distinction: newspapers and magazines review rock concerts alongside the Met’s operas and “Batman” sequels next to Chekhov plays. Sophisticated academic critics apply the same methods of analysis and appreciation to Proust and to comic books. And at all levels, claims of objective artistic superiority are likely to be met with smug assertions that all such claims are merely relative to subjective individual preferences.

Our democratic unease is understandable, since the alleged superiority of high culture has often supported the pretensions of an aristocratic class claiming to have privileged access to it. For example, Virginia Woolf’s classic essay — arch, snobbish, and very funny — reserved the appreciation of great art to “highbrows”: those “thoroughbreds of the mind” who combine innate taste with sufficient inherited wealth to sustain a life entirely dedicated to art. Lowbrows were working-class people who had neither the taste nor the time for the artistic life. Woolf claimed to admire lowbrows, who did the work highbrows like herself could not and accepted their cultural inferiority. But she expresses only disdain for a third class — the “middlebrows”— who have earned (probably through trade) enough money to purchase the marks of a high culture that they could never properly appreciate. Middlebrows pursue “no single object, neither art itself nor life itself, but both mixed indistinguishably, and rather nastily, with money, fame, power, or prestige.”

There is, however, no need to tie a defense of high art to Woolf’s “snobocracy.” We can define the high/popular distinction directly in terms of aesthetic quality, without tendentious connections to social status or wealth. Moreover, we can appropriate Woolf’s term “middlebrow,” using it to refer to those, not “to the manner born,” who, admirably, employ the opportunities of a democratic society to reach a level of culture they were not born into.

At this point, however, we can no longer avoid the hovering relativist objection: How do we know that there are any objective criteria that authorize claims that one kind of art is better than another?

Centuries of unresolved philosophical debate show that there is, in fact, little hope of refuting someone who insists on a thoroughly relativist view of art. We should not expect, for example, to provide a definition of beauty (or some other criterion of artistic excellence) that we can use to prove to all doubters that, say, Mozart’s 40th Symphony is objectively superior as art to “I Want to Hold Your Hand.” But in practice there is no need for such a proof, since hardly anyone really holds the relativist view. We may say, “You can’t argue about taste,” but when it comes to art we care about, we almost always do.

For example, fans of popular music may respond to the elitist claims of classical music with a facile relativism. But they abandon this relativism when arguing, say, the comparative merits of the early Beatles and the Rolling Stones. You may, for example, maintain that the Stones were superior to the Beatles (or vice versa) because their music is more complex, less derivative, and has greater emotional range and deeper intellectual content. Here you are putting forward objective standards from which you argue for a band’s superiority. Arguing from such criteria implicitly rejects the view that artistic evaluations are simply matters of personal taste. You are giving reasons for your view that you think others ought to accept.

Further, given the standards fans use to show that their favorites are superior, we can typically show by those same standards that works of high art are overall superior to works of popular art. If the Beatles are better than the Stones in complexity, originality, emotional impact, and intellectual content, then Mozart’s operas are, by those standards, superior to the Beatles’ songs. Similarly, a case for the superiority of one blockbuster movie over another would most likely invoke standards of dramatic power, penetration into character, and quality of dialogue by which almost all blockbuster movies would pale in comparison to Sophocles or Shakespeare.

On reflection, it’s not hard to see why — keeping to the example of music — classical works are in general capable of much higher levels of aesthetic value than popular ones. Compared to a classical composer, someone writing a popular song can utilize only a very small range of musical possibilities: a shorter time span, fewer kinds of instruments, a lower level of virtuosity and a greatly restricted range of compositional techniques. Correspondingly, classical performers are able to supply whatever the composers need for a given piece; popular performers seriously restrict what composers can ask for. Of course, there are sublime works that make minimal performance demands. But constant restriction of resources reduces the opportunities for greater achievement.

Read the entire article here.

Image: Detail of the face of Wolfgang Amadeus Mozart, cropped from the painting in which Mozart is seen with his sister, Maria Anna, and his father, Leopold; on the wall hangs a portrait of his deceased mother, Anna Maria. By Johann Nepomuk della Croce (1736-1819). Courtesy of Wikipedia.

Atlas Shrugs

He, or she, stands 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.


Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person, not yet anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (the Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.

From the New York Times:

Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.

C3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.

Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.

Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.

“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.

The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.

Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.

Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.

“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”

Dr. Pratt equated the current version of Atlas to a 1-year-old.

“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”

But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have the talents that are closer to those of a 2-year-old.

The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back revealing a machine that looked like a metallic bodybuilder, with an oversized chest and powerful long arms.

Read the entire article here.

Helping the Honeybees

Agricultural biotechnology giant Monsanto is joining efforts to help the honeybee. Honeybees the world over have been suffering from a widespread and catastrophic condition often referred to as colony collapse disorder.

From Technology Review:

Beekeepers are desperately battling colony collapse disorder, a complex condition that has been killing bees in large swaths and could ultimately have a massive effect on people, since honeybees pollinate a significant portion of the food that humans consume.

A new weapon in that fight could be RNA molecules that kill a troublesome parasite by disrupting the way its genes are expressed. Monsanto and others are developing the molecules as a means to kill the parasite, a mite that feeds on honeybees.

The killer molecule, if it proves to be efficient and passes regulatory hurdles, would offer welcome respite. Bee colonies have been dying in alarming numbers for several years, and many factors are contributing to this decline. But while beekeepers struggle with malnutrition, pesticides, viruses, and other issues in their bee stocks, one problem that seems to be universal is the Varroa mite, an arachnid that feeds on the blood of developing bee larvae.

“Hives can survive the onslaught of a lot of these insults, but with Varroa, they can’t last,” says Alan Bowman, a University of Aberdeen molecular biologist in Scotland, who is studying gene silencing as a means to control the pest.

The Varroa mite debilitates colonies by hampering the growth of young bees and increasing the lethality of the viruses that it spreads. “Bees can quite happily survive with these viruses, but now, in the presence of Varroa, these viruses become lethal,” says Bowman. Once a hive is infested with Varroa, it will die within two to four years unless a beekeeper takes active steps to control it, he says.

One of the weapons beekeepers can use is a pesticide that kills mites, but “there’s always the concern that mites will become resistant to the very few mitocides that are available,” says Tom Rinderer, who leads research on honeybee genetics at the U.S. Department of Agriculture Research Service in Baton Rouge, Louisiana. And new pesticides to kill mites are not easy to come by, in part because mites and bees are found in neighboring branches of the animal tree. “Pesticides are really difficult for chemical companies to develop because of the relatively close relationship between the Varroa and the bee,” says Bowman.

RNA interference could be a more targeted and effective way to combat the mites. It is a natural process in plants and animals that normally defends against viruses and potentially dangerous bits of DNA that move within genomes. Based upon their nucleotide sequence, interfering RNAs signal the destruction of specific gene products, thus providing a species-specific self-destruct signal. In recent years, biologists have begun to explore this process as a possible means to turn off unwanted genes in humans (see “Gene-Silencing Technique Targets Scarring”) and to control pests in agricultural plants (see “Crops that Shut Down Pests’ Genes”). Using the technology to control pests in agricultural animals would be a new application.

In 2011 Monsanto, the maker of herbicides and genetically engineered seeds, bought an Israeli company called Beeologics, which had developed an RNA interference technology that can be fed to bees through sugar water. The idea is that when a nurse bee spits this sugar water into each cell of a honeycomb where a queen bee has laid an egg, the resulting larvae will consume the RNA interference treatment. With the right sequence in the interfering RNA, the treatment will be harmless to the larvae, but when a mite feeds on it, the pest will ingest its own self-destruct signal.

The RNA interference technology would not be carried from generation to generation. “It’s a transient effect; it’s not a genetically modified organism,” says Bowman.

Monsanto says it has identified a few self-destruct triggers to explore by looking at genes that are fundamental to the biology of the mite. “Something in reproduction or egg laying or even just basic housekeeping genes can be a good target provided they have enough difference from the honeybee sequence,” says Greg Heck, a researcher at Monsanto.

Read the entire article here.

Image: Honeybee, Apis mellifera. Courtesy of Wikipedia.

Of Mice and Men

Biomolecular and genetic engineering continue apace. This time researchers have inserted artificially constructed human chromosomes into the cells of living mice.

From the Independent:

Scientists have created genetically-engineered mice with artificial human chromosomes in every cell of their bodies, as part of a series of studies showing that it may be possible to treat genetic diseases with a radically new form of gene therapy.

In one of the unpublished studies, researchers made a human artificial chromosome in the laboratory from chemical building blocks rather than chipping away at an existing human chromosome, indicating the increasingly powerful technology behind the new field of synthetic biology.

The development comes as the Government announces today that it will invest tens of millions of pounds in synthetic biology research in Britain, including an international project to construct all the 16 individual chromosomes of the yeast fungus in order to produce the first synthetic organism with a complex genome.

A synthetic yeast with man-made chromosomes could eventually be used as a platform for making new kinds of biological materials, such as antibiotics or vaccines, while human artificial chromosomes could be used to introduce healthy copies of genes into the diseased organs or tissues of people with genetic illnesses, scientists said.

Researchers involved in the synthetic yeast project emphasised at a briefing in London earlier this week that there are no plans to build human chromosomes and create synthetic human cells in the same way as the artificial yeast project. A project to build human artificial chromosomes is unlikely to win ethical approval in the UK, they said.

However, researchers in the US and Japan are already well advanced in making “mini” human chromosomes called HACs (human artificial chromosomes), by either paring down an existing human chromosome or making them “de novo” in the lab from smaller chemical building blocks.

Natalay Kouprina of the US National Cancer Institute in Bethesda, Maryland, is part of the team that has successfully produced genetically engineered mice with an extra human artificial chromosome in their cells. It is the first time such an advanced form of a synthetic human chromosome made “from scratch” has been shown to work in an animal model, Dr Kouprina said.

“The purpose of developing the human artificial chromosome project is to create a shuttle vector for gene delivery into human cells to study gene function in human cells,” she told The Independent. “Potentially it has applications for gene therapy, for correction of gene deficiency in humans. It is known that there are lots of hereditary diseases due to the mutation of certain genes.”

Read the entire article here.

Image courtesy of Science Daily.

Cosmic Portrait

Make a note in your calendar if you are so inclined: you’ll be photographed from space on July 19, 2013, sometime between 9.27 and 9.42 pm (GMT).

No, this is not another wacky mapping stunt courtesy of Google. Rather, NASA’s Cassini spacecraft, which will be somewhere in the vicinity of Saturn, will train its cameras on us for a global family portrait.

From NASA:

NASA’s Cassini spacecraft, now exploring Saturn, will take a picture of our home planet from a distance of hundreds of millions of miles on July 19. NASA is inviting the public to help acknowledge the historic interplanetary portrait as it is being taken.

Earth will appear as a small, pale blue dot between the rings of Saturn in the image, which will be part of a mosaic, or multi-image portrait, of the Saturn system Cassini is composing.

“While Earth will be only about a pixel in size from Cassini’s vantage point 898 million miles [1.44 billion kilometers] away, the team is looking forward to giving the world a chance to see what their home looks like from Saturn,” said Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “We hope you’ll join us in waving at Saturn from Earth, so we can commemorate this special opportunity.”

Cassini will start obtaining the Earth part of the mosaic at 2:27 p.m. PDT (5:27 p.m. EDT or 21:27 UTC) and end about 15 minutes later, all while Saturn is eclipsing the sun from Cassini’s point of view. The spacecraft’s unique vantage point in Saturn’s shadow will provide a special scientific opportunity to look at the planet’s rings. At the time of the photo, North America and part of the Atlantic Ocean will be in sunlight.

Unlike two previous Cassini eclipse mosaics of the Saturn system in 2006, which captured Earth, and another in 2012, the July 19 image will be the first to capture the Saturn system with Earth in natural color, as human eyes would see it. It also will be the first to capture Earth and its moon with Cassini’s highest-resolution camera. The probe’s position will allow it to turn its cameras in the direction of the sun, where Earth will be, without damaging the spacecraft’s sensitive detectors.

“Ever since we caught sight of the Earth among the rings of Saturn in September 2006 in a mosaic that has become one of Cassini’s most beloved images, I have wanted to do it all over again, only better,” said Carolyn Porco, Cassini imaging team lead at the Space Science Institute in Boulder, Colo. “This time, I wanted to turn the entire event into an opportunity for everyone around the globe to savor the uniqueness of our planet and the preciousness of the life on it.”

Porco and her imaging team associates examined Cassini’s planned flight path for the remainder of its Saturn mission in search of a time when Earth would not be obstructed by Saturn or its rings. Working with other Cassini team members, they found the July 19 opportunity would permit the spacecraft to spend time in Saturn’s shadow to duplicate the views from earlier in the mission to collect both visible and infrared imagery of the planet and its ring system.

“Looking back towards the sun through the rings highlights the tiniest of ring particles, whose width is comparable to the thickness of hair and which are difficult to see from ground-based telescopes,” said Matt Hedman, a Cassini science team member based at Cornell University in Ithaca, N.Y., and a member of the rings working group. “We’re particularly interested in seeing the structures within Saturn’s dusty E ring, which is sculpted by the activity of the geysers on the moon Enceladus, Saturn’s magnetic field and even solar radiation pressure.”

This latest image will continue a NASA legacy of space-based images of our fragile home, including the 1968 “Earthrise” image taken by the Apollo 8 moon mission from about 240,000 miles (380,000 kilometers) away and the 1990 “Pale Blue Dot” image taken by Voyager 1 from about 4 billion miles (6 billion kilometers) away.

Read the entire article here.

Image: This simulated view from NASA’s Cassini spacecraft shows the expected positions of Saturn and Earth on July 19, 2013, around the time Cassini will take Earth’s picture. Cassini will be about 898 million miles (1.44 billion kilometers) away from Earth at the time. That distance is nearly 10 times the distance from the sun to Earth. Courtesy: NASA/JPL-Caltech

The Past is Good For You

From time to time there is no doubt that you will feel nostalgic over some past event, a special place or a treasured object. Of course, our sentimental feelings vary tremendously from person to person. But why do we feel this way, and why is nostalgia important? Not too long ago nostalgia was commonly believed to be a neurological disorder (no doubt treatable with prescription medication). However, new research shows that feelings of sentimentality are indeed good for us, individually and as a group.

From the New York Times:

Not long after moving to the University of Southampton, Constantine Sedikides had lunch with a colleague in the psychology department and described some unusual symptoms he’d been feeling. A few times a week, he was suddenly hit with nostalgia for his previous home at the University of North Carolina: memories of old friends, Tar Heel basketball games, fried okra, the sweet smells of autumn in Chapel Hill.

His colleague, a clinical psychologist, made an immediate diagnosis. He must be depressed. Why else live in the past? Nostalgia had been considered a disorder ever since the term was coined by a 17th-century Swiss physician who attributed soldiers’ mental and physical maladies to their longing to return home — nostos in Greek, and the accompanying pain, algos.

But Dr. Sedikides didn’t want to return to any home — not to Chapel Hill, not to his native Greece — and he insisted to his lunch companion that he wasn’t in pain.

“I told him I did live my life forward, but sometimes I couldn’t help thinking about the past, and it was rewarding,” he says. “Nostalgia made me feel that my life had roots and continuity. It made me feel good about myself and my relationships. It provided a texture to my life and gave me strength to move forward.”

The colleague remained skeptical, but ultimately Dr. Sedikides prevailed. That lunch in 1999 inspired him to pioneer a field that today includes dozens of researchers around the world using tools developed at his social-psychology laboratory, including a questionnaire called the Southampton Nostalgia Scale. After a decade of study, nostalgia isn’t what it used to be — it’s looking a lot better.

Nostalgia has been shown to counteract loneliness, boredom and anxiety. It makes people more generous to strangers and more tolerant of outsiders. Couples feel closer and look happier when they’re sharing nostalgic memories. On cold days, or in cold rooms, people use nostalgia to literally feel warmer.

Nostalgia does have its painful side — it’s a bittersweet emotion — but the net effect is to make life seem more meaningful and death less frightening. When people speak wistfully of the past, they typically become more optimistic and inspired about the future.

“Nostalgia makes us a bit more human,” Dr. Sedikides says. He considers the first great nostalgist to be Odysseus, an itinerant who used memories of his family and home to get through hard times, but Dr. Sedikides emphasizes that nostalgia is not the same as homesickness. It’s not just for those away from home, and it’s not a sickness, despite its historical reputation.

Nostalgia was originally described as a “neurological disease of essentially demonic cause” by Johannes Hofer, the Swiss doctor who coined the term in 1688. Military physicians speculated that its prevalence among Swiss mercenaries abroad was due to earlier damage to the soldiers’ ear drums and brain cells by the unremitting clanging of cowbells in the Alps.

A Universal Feeling

In the 19th and 20th centuries nostalgia was variously classified as an “immigrant psychosis,” a form of “melancholia” and a “mentally repressive compulsive disorder” among other pathologies. But when Dr. Sedikides, Tim Wildschut and other psychologists at Southampton began studying nostalgia, they found it to be common around the world, including in children as young as 7 (who look back fondly on birthdays and vacations).

“The defining features of nostalgia in England are also the defining features in Africa and South America,” Dr. Wildschut says. The topics are universal — reminiscences about friends and family members, holidays, weddings, songs, sunsets, lakes. The stories tend to feature the self as the protagonist surrounded by close friends.

Most people report experiencing nostalgia at least once a week, and nearly half experience it three or four times a week. These reported bouts are often touched off by negative events and feelings of loneliness, but people say the “nostalgizing” — researchers distinguish it from reminiscing — helps them feel better.

To test these effects in the laboratory, researchers at Southampton induced negative moods by having people read about a deadly disaster and take a personality test that supposedly revealed them to be exceptionally lonely. Sure enough, the people depressed about the disaster victims or worried about being lonely became more likely to wax nostalgic. And the strategy worked: They subsequently felt less depressed and less lonely.

Read the entire article here.

Image: Still from “I Love Lucy” U.S. television show. 1955. Courtesy of Wikipedia.

Asteroid 5099

Iain (M.) Banks is now where he rightfully belongs — hurtling through space. Though, we fear that he may well not be traveling as fast as he would have wished.

From the Minor Planet Center:

In early April of this year we learnt from Iain Banks himself that he was sick, very sick. Cancer that started in the gall bladder spread quickly and precluded any cure, though he still hoped to be around for a while and see his upcoming novel, The Quarry, hit store shelves in late June. He never did—Iain Banks died on June 9th.

I was introduced to Iain M. Banks’s Sci-Fi novels in graduate school by a good friend who also enjoyed Sci-Fi; he couldn’t believe I’d never even heard of him and remedied what he saw as a huge lapse in my Sci-Fi culture by lending me a couple of his novels. After that I read a few more novels of my own volition because Mr Banks truly was a gifted story teller.

When I heard of his sickness I immediately asked myself what I could do for Mr Banks, and the answer was obvious: Give him an asteroid!

The Minor Planet Center only has the authority to designate new asteroid discoveries (e.g., “1971 TD1”) and assign numbers to those whose orbits are of a high enough accuracy (e.g., “(5099)”), but names for numbered asteroids must be submitted to, and approved by, the Committee for Small Body Nomenclature (CSBN) of the IAU (International Astronomical Union). With the help of Dr Gareth Williams, the MPC’s representative on the CSBN, we submitted a request to name an asteroid after Iain Banks with the hope that it would be approved soon enough for Mr Banks to enjoy it. Sadly, that has not been possible. Nevertheless, I am here to announce that on June 23rd, 2013, asteroid (5099) was officially named Iainbanks by the IAU, and will be referred to as such for as long as Earth Culture may endure.

The official citation for the asteroid reads:

Iain M. Banks (1954-2013) was a Scottish writer best known for the Culture series of science fiction novels; he also wrote fiction as Iain Banks. An evangelical atheist and lover of whisky, he scorned social media and enjoyed writing music. He was an extra in Monty Python & The Holy Grail.

Asteroid Iainbanks resides in the Main Asteroid Belt of the Sol system; with a size of 6.1 km (3.8 miles), it takes 3.94 years to complete a revolution around the Sun. It is most likely of a stony composition. Here is an interactive 3D orbit diagram.

The Culture is an advanced society in whose midst most of Mr Banks’s Sci-Fi novels take place. Thanks to their technology they are able to hollow out asteroids and use them as ships capable of faster-than-light travel while providing a living habitat with centrifugally-generated gravity for their thousands of denizens. I’d like to think Mr Banks would have been amused to have his own rock.
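
For the curious, the quoted 3.94-year period is enough to place the asteroid on its own: Kepler’s third law ties a body’s orbital period to its average distance from the Sun. Below is a minimal sketch of that check in Python, assuming nothing beyond the period given in the citation (the variable names and rounding are ours):

# Kepler's third law for a body orbiting the Sun: a^3 = T^2, with a in AU and T in years.
T = 3.94                    # orbital period in years, from the citation above
a = T ** (2.0 / 3.0)        # semi-major axis in astronomical units
print(f"semi-major axis ~ {a:.2f} AU")   # roughly 2.5 AU, squarely in the main asteroid belt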

Read the entire article here.

Image: Orbit Diagram of asteroid (5099) Iainbanks. Cyan ellipses represent the orbits of the planets (from closest to furthest from the Sun): Mercury, Venus, Earth, Mars and Jupiter. The black ellipse represents the orbit of asteroid Iainbanks. The shaded region lies below the ecliptic plane, the non-shaded, above. Courtesy of Minor Planet Center.

Impossible Chemistry in Space

Combine the vastness of the universe with the probabilistic behavior of quantum mechanics and you get some rather odd chemical results. This includes the spontaneous creation of some complex organic molecules in interstellar space — previously believed to be far too inhospitable for all but the lowliest forms of matter.

From the New Scientist:

Quantum weirdness can generate a molecule in space that shouldn’t exist by the classic rules of chemistry. If interstellar space is really a kind of quantum chemistry lab, that might also account for a host of other organic molecules glimpsed in space.

Interstellar space should be too cold for most chemical reactions to occur, as the low temperature makes it tough for molecules drifting through space to acquire the energy needed to break their bonds. “There is a standard law that says as you lower the temperature, the rates of reactions should slow down,” says Dwayne Heard of the University of Leeds, UK.

Yet we know there are a host of complex organic molecules in space. Some reactions could occur when different molecules stick to the surface of a cosmic dust grain. This might give them enough time together to acquire the energy needed to react, which doesn’t happen when molecules drift past each other in space.

Not all reactions can be explained in this way, though. Last year astronomers discovered methoxy molecules – containing carbon, hydrogen and oxygen – in the Perseus molecular cloud, around 600 light years from Earth. But researchers couldn’t produce this molecule in the lab by allowing reactants to condense on dust grains, leaving a puzzle as to how it could have formed.

Molecular hang-out

Another route to methoxy is to combine a hydroxyl radical and methanol gas, both present in space. But this reaction requires hurdling a significant energy barrier – and the energy to do that simply isn’t available in the cold expanse of space.

Heard and his colleagues wondered if the answer lay in quantum mechanics: a process called quantum tunnelling might give the hydroxyl radical a small chance to cheat by digging through the barrier instead of going over it, they reasoned.

So, in another attempt to replicate the production of methoxy in space, the team chilled gaseous hydroxyl and methanol to 63 kelvin – and were able to produce methoxy.

The idea is that at low temperatures, the molecules slow down, increasing the likelihood of tunnelling. “At normal temperatures they just collide off each other, but when you go down in temperature they hang out together long enough,” says Heard.

Impossible chemistry

The team also found that the reaction occurred 50 times faster via quantum tunnelling than if it occurred normally at room temperature by hurdling the energy barrier. Empty space is much colder than 63 kelvin, but dust clouds near stars can reach this temperature, adds Heard.

“We’re showing there is organic chemistry in space of the type of reactions where it was assumed these just wouldn’t happen,” says Heard.

That means the chemistry of space may be richer than we had imagined. “There is maybe a suite of chemical reactions we hadn’t yet considered occurring in interstellar space,” agrees Helen Fraser of the University of Strathclyde, UK, who was not part of the team.
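
To get a feel for why the classical route is closed, recall the standard Arrhenius picture Heard alludes to, in which the reaction rate scales with exp(-Ea/RT). The sketch below is only illustrative: the 10 kJ/mol barrier and the room-temperature reference point are assumed numbers, not values reported by the Leeds team. It simply shows how steeply an over-the-barrier reaction should slow on cooling to 63 kelvin:

import math

R = 8.314      # gas constant, J / (mol K)
Ea = 10_000    # assumed activation barrier in J/mol (illustrative, not from the study)

def boltzmann_factor(temperature_k):
    """Arrhenius factor exp(-Ea/RT): fraction of encounters energetic enough to clear the barrier."""
    return math.exp(-Ea / (R * temperature_k))

room = boltzmann_factor(295)   # roughly room temperature
cold = boltzmann_factor(63)    # the temperature used in the experiment

print(f"295 K: {room:.2e}")                                   # ~1.7e-02
print(f" 63 K: {cold:.2e}")                                   # ~5e-09
print(f"classical slowdown on cooling: ~{room / cold:.0e}x")  # a factor of millions

On this crude classical estimate the cold reaction should run millions of times slower, yet the team measured it running 50 times faster. That gap is what quantum tunnelling is invoked to explain.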

Read the entire article here.

Image: Amino-1-methoxy-4-methylbenzol, featuring methoxy molecular structure, recently found in interstellar space. Courtesy of Wikipedia.

The Good and the Bad; The Black and the White

We humans are a most peculiar species — we are kind and we are dangerous. We can create the most sublime inventions with our minds, voices and hands, yet we are capable of the most heinous and destructive acts. We show empathy and compassion and grace, and yet, often just as easily, we wound and maim and murder. In the face of a common threat or danger we reach out to help all others, yet under normal circumstances we are capable of the most despicable racism, discrimination and hatred for our fellows.

Two recent polarizing events show our enormous failings and our inherent goodness. These are two stories of quiet and heroic action in the face of harm, danger and injustice.

First, in Mississippi, Willie Manning, a black man and convicted murderer, had his execution stayed 4 hours prior to lethal injection. His team of attorneys fought, quite rightly, to have false evidence discarded and dubious evidence revisited. As one of his attorneys, Robert Mink, a white man, stated to the State Supreme Court, “To pass on this issue and sanction the execution of Willie Manning, even in light of these revelations, would be counter to fundamental due process, the Eighth and Fourteenth Amendments to the Constitution…”. Morality of the death penalty aside, we have a moral duty to fight injustice wherever it appears, including within our seemingly just judicial process. To date, the Innocence Project has recorded 306 post-conviction exonerations: innocent people convicted by our judicial system, some of them awaiting execution, who spent an average of 13 years in prison. So, thank you to attorney Mr. Mink and his colleagues for keeping goodness alive in the face of an institutionalized rush to judgement and a corrupt process.

Read more on this story after the jump.

In the second case, from Cleveland, Ohio, the dichotomy of human behavior was on full display following the release of three women kidnapped, raped and imprisoned for close to 10 years. We’ll not discuss the actions of the accused, which should become clearer in due course. Rather, we focus on the actions of a neighbor, Charles Ramsey, a black man who lived across the street from the crime scene and helped the three white women escape their hellish ordeal. Like the attorneys above, Mr. Ramsey took action and is rightly hailed as a hero. When pressed by the media to explain his actions in rescuing the women, he said something quite poignant: “When a little, pretty white woman runs into the arms of a black man, you know something wrong.” Indeed.

Excerpts from an open letter to Charles Ramsey put it in perspective:

From the Guardian:

Dear Mr Charles Ramsey,

First and foremost thank you. Thank you for being an up-stander versus a bystander. All too often we are quick to flee from the things that could land us in imminent danger, but you in your hearts of hearts knew that the right thing to do was to come to the aid of someone who was crying out. We as the members of this great city of Cleveland are forever beholden to you for finding three of our daughters who we thought we’d never see again. But through the grace of the Most High … they are now safe.

In plain speak, you said something so prolific. And I want to unpack the statement that you made: “When a little, pretty white woman runs into the arms of a black man, you know something wrong.”

What does this statement mean in 2013? For me, it spoke volumes. It says: in America, we are taught to fear black men. They are assumed to be violent, angry, and completely and utterly untrustworthy. This statement also says what we have always known to be true for this country: white women, specifically pretty white women, have no business in the same space as black men. For as long as we can remember American society has been the sustainer of white women and the slayer of black men.

We have seen it with the all too familiar story of Emmett Till. We have seen it with the less familiar story of George Stinney, the youngest person in the United States ever executed. At 14-years-old he was charged with the murder of two white girls in Alcolu, South Carolina. He was charged with this murder after being the last to see these two girls alive and even helping to search for them. With no evidence and no concrete witnesses he was sent to the electric chair, with a booster seat for his 90 pound body, his case never reopened despite a rumored culprit and so little evidence.

I write this letter with extreme gratefulness, because I know how this country has historically made a mockery of and torn down men like you. Black men who have been the fall guy, black men who are assumed guilty for wearing hoodies and having wallets that somehow get mistaken for guns. So we all know that you could have easily said that you would not put yourself in harm’s way.

And for your act of heroism, you are met with extreme scrutiny dredged in jest. Joke after joke for telling your truth, as plain as you knew how. You, Mr Ramsey, were made fun of for flinching when the sounds of police sirens struck an innate reaction of terror in you. We all know that the police weren’t made for the protection of black men. The 911 operator who engaged you with disdain, disbelief, and sheer aggravation reaffirmed that “you don’t have to be white to support white supremacy”. So if you don’t “look” like a hero, “speak” like a hero, “dress” like a hero, wear your “hair” like a hero … then you’re just another person used to build the comedic chops of aspiring YouTube/Twitter/Facebook/Instagram sensations.

Read the entire letter after the jump.

In what continues to be a sad repetition of our human history, we see on the one hand those who perform unimaginable acts of cruelty or violence, and on the other those who counteract the bad with good.

On the one hand are those who blindly or hastily follow orders or rules without questioning their morality, and on the other are those who seek to inject morality and to improve our lot. But between the two poles many of us are mere bystanders; we go about our hectic, daily lives, but we take no action. Some of us raise our arms and voices in righteous indignation, but take no action beyond words. Many of us turn a blind eye to intolerance and racism, preferring the cocoons of our couches and social distance of our Facebook accounts.

The majority of us are just too tired, too frazzled, too busy. This group requires the most work; we all need to become better at doing and at being involved, to improve our very human race.

Building a Liver

In yet another breakthrough for medical science, researchers have succeeded in growing a prototypical human liver in the lab.

From the New York Times:

Researchers in Japan have used human stem cells to create tiny human livers like those that arise early in fetal life. When the scientists transplanted the rudimentary livers into mice, the little organs grew, made human liver proteins, and metabolized drugs as human livers do.

They and others caution that these are early days and this is still very much basic research. The liver buds, as they are called, did not turn into complete livers, and the method would have to be scaled up enormously to make enough replacement liver buds to treat a patient. Even then, the investigators say, they expect to replace only 30 percent of a patient’s liver. What they are making is more like a patch than a full liver.

But the promise, in a field that has seen a great deal of dashed hopes, is immense, medical experts said.

“This is a major breakthrough of monumental significance,” said Dr. Hillel Tobias, director of transplantation at the New York University School of Medicine. Dr. Tobias is chairman of the American Liver Foundation’s national medical advisory committee.

“Very impressive,” said Eric Lagasse of the University of Pittsburgh, who studies cell transplantation and liver disease. “It’s novel and very exciting.”

The study was published on Wednesday in the journal Nature.

Although human studies are years away, said Dr. Leonard Zon, director of the stem cell research program at Boston Children’s Hospital, this, to his knowledge, is the first time anyone has used human stem cells, created from human skin cells, to make a functioning solid organ, like a liver, as opposed to bone marrow, a jellylike organ.

Ever since they discovered how to get human stem cells — first from embryos and now, more often, from skin cells — researchers have dreamed of using the cells for replacement tissues and organs. The stem cells can turn into any type of human cell, and so it seemed logical to simply turn them into liver cells, for example, and add them to livers to fill in dead or damaged areas.

But those studies did not succeed. Liver cells did not take up residence in the liver; they did not develop blood supplies or signaling systems. They were not a cure for disease.

Other researchers tried making livers or other organs by growing cells on scaffolds. But that did not work well either. Cells would fall off the scaffolds and die, and the result was never a functioning solid organ.

Researchers have made specialized human cells in petri dishes, but not three-dimensional structures, like a liver.

The investigators, led by Dr. Takanori Takebe of the Yokohama City University Graduate School of Medicine, began with human skin cells, turning them into stem cells. By adding various stimulators and drivers of cell growth, they then turned the stem cells into human liver cells and began trying to make replacement livers.

They say they stumbled upon their solution. When they grew the human liver cells in petri dishes along with blood vessel cells from human umbilical cords and human connective tissue, that mix of cells, to their surprise, spontaneously assembled itself into three-dimensional liver buds, resembling the liver at about five or six weeks of gestation in humans.

Then the researchers transplanted the liver buds into mice, putting them in two places: on the brain and into the abdomen. The brain site allowed them to watch the buds grow. The investigators covered the hole in each animal’s skull with transparent plastic, giving them a direct view of the developing liver buds. The buds grew and developed blood supplies, attaching themselves to the blood vessels of the mice.

The abdominal site allowed them to put more buds in — 12 buds in each of two places in the abdomen, compared with one bud in the brain — which let the investigators ask if the liver buds were functioning like human livers.

They were. They made human liver proteins and also metabolized drugs that human livers — but not mouse livers — metabolize.

The approach makes sense, said Kenneth Zaret, a professor of cellular and developmental biology at the University of Pennsylvania. His research helped establish that blood and connective tissue cells promote dramatic liver growth early in development and help livers establish their own blood supply. On their own, without those other types of cells, liver cells do not develop or form organs.

Read the entire article here.

Image: Diagram of the human liver. Courtesy of Encyclopedia Britannica.

The Myth of Martyrdom

Unfortunately our world is still populated by a few people who will willingly shed the blood of others while destroying themselves. Understanding the personalities and motivations of these people may one day help eliminate this scourge. In the meantime, psychologists ponder whether they are psychologically normal but politically crazed fanatics, or deeply troubled individuals.

Adam Lankford, a criminal justice professor, asserts that suicide terrorists are merely unhappy, damaged individuals who want to die. In his book, The Myth of Martyrdom, Lankford rejects the popular view of suicide terrorists as calculating, radicalized individuals who will do anything for a cause.

From the New Scientist:

In the aftermath of 9/11, terrorism experts in the US made a bold and counter-intuitive claim: the suicide terrorists were psychologically normal. When it came to their state of mind, they were not so different from US Special Forces agents. Just because they deliberately crashed planes into buildings, that didn’t make them suicidal – it simply meant they were willing to die for a cause they believed in.

This argument was stated over and over and became the orthodoxy. “We’d like to believe these are crazed fanatics,” said CIA terror expert Jerrold Post in 2006. “Not true… as individuals, this is normal behaviour.”

I disagree. Far from being psychologically normal, suicide terrorists are suicidal. They kill themselves to escape crises or unbearable pain. Until we recognise this, attempts to stop the attacks are doomed to fail.

When I began studying suicide terrorists, I had no agenda, just curiosity. My hunch was that the official version was true, but I kept an open mind.

Then I began watching martyrdom videos and reading case studies, letters and diary entries. What I discovered was a litany of fear, failure, guilt, shame and rage. In my book The Myth of Martyrdom, I present evidence that far from being normal, these self-destructive killers have often suffered from serious mental trauma and always demonstrate at least a few behaviours on the continuum of suicidality, such as suicide ideation, a suicide plan or previous suicide attempts.

Why did so many scholars come to the wrong conclusions? One key reason is that they believe what the bombers, their relatives and friends, and their terrorist recruiters say, especially when their accounts are consistent.

In 2007, for example, Ellen Townsend of the University of Nottingham, UK, published an influential article called Suicide Terrorists: Are they suicidal? Her answer was a resounding no (Suicide and Life-Threatening Behavior, vol 37, p 35).

How did she come to this conclusion? By reviewing five empirical reports: three that depended largely upon interviews with deceased suicide terrorists’ friends and family, and two based on interviews of non-suicide terrorists. She took what they said at face value.

I think this was a serious mistake. All of these people have strong incentives to lie.

Take the failed Palestinian suicide bomber Wafa al-Biss, who attempted to blow herself up at an Israeli checkpoint in 2005. Her own account and those of her parents and recruiters tell the same story: that she acted for political and religious reasons.

These accounts are highly suspect. Terrorist leaders have strategic reasons for insisting that attackers are not suicidal, but instead are carrying out glorious martyrdom operations. Traumatised parents want to believe that their children were motivated by heroic impulses. And suicidal people commonly deny that they are suicidal and are often able to hide their true feelings from the world.

This is especially true of fundamentalist Muslims. Suicide is explicitly condemned in Islam and guarantees an eternity in hell. Martyrs, on the other hand, can go to heaven.

Most telling of all, it later emerged that al-Biss had suffered from mental health problems most of her life and had made two previous suicide attempts.

Her case is far from unique. Consider Qari Sami, who blew himself up in a café in Kabul, Afghanistan, in 2005. He walked in – and kept on walking, past crowded tables and into the bathroom at the back where he closed the door and detonated his belt. He killed himself and two others, but could easily have killed more. It later emerged that he was on antidepressants.

Read the entire article here.

MondayMap: U.S. Interstate Highway System

It’s summer, which means lots of people driving every-which-way for family vacations.

So, this is a good time to refresh your memory with the map of the arteries that distribute lifeblood across the United States — the U.S. Interstate Highway System. The network of highways, stretching around 46,800 miles from coast to coast, is sometimes referred to as the Eisenhower Interstate System. President Eisenhower signed the Federal-Aid Highway Act on June 29, 1956, making the current system possible.

Thus the father of the Interstate System is also responsible for the never-ending choruses of: “are we there yet?”, “how much further?”, “I need to go to the bathroom”, and “can we stop at the next Starbucks (from the adults) / McDonalds (from the kids)?”.

Get a full-size map here.

Map courtesy of WikiCommons.

Surveillance, British Style

While the revelations about the National Security Agency (NSA) snooping on private communications of U.S. citizens are extremely troubling, the situation could be much worse. Cast a sympathetic thought to Her Majesty’s subjects in the United Kingdom of Great Britain and Northern Ireland, where almost everyone eavesdrops on everyone else. While the island nation of 60 million covers roughly the same area as Michigan, it is swathed in over 4 million CCTV (closed circuit television) surveillance cameras.

From Slate:

We adore the English here in the States. They’re just so precious! They call traffic circles “roundabouts,” prostitutes “prozzies,” and they have a queen. They’re ever so polite and carry themselves with such admirable poise. We love their accents so much, we use them in historical films to give them a bit more gravitas. (Just watch The Last Temptation of Christ to see what happens when we don’t: Judas doesn’t sound very intimidating with a Brooklyn accent.)

What’s not so cute is the surveillance society they’ve built—but the U.S. government seems pretty enamored with it.

The United Kingdom is home to an intense surveillance system. Most of the legal framework for this comes from the Regulation of Investigatory Powers Act, which dates all the way back to the year 2000. RIPA is meant to support criminal investigation, preventing disorder, public safety, public health, and, of course, “national security.” If this extremely broad application of law seems familiar, it should: The United States’ own PATRIOT Act is remarkably similar in scope and application. Why should the United Kingdom have the best toys, after all?

This is one of the problems with being the United Kingdom’s younger sibling. We always want what Big Brother has. Unless it’s soccer. Wiretaps, though? We just can’t get enough!

The PATRIOT Act, broad as it is, doesn’t match RIPA’s incredible wiretap allowances. In 1994, the United States passed the Communications Assistance for Law Enforcement Act, which mandated that service providers give the government “technical assistance” in the use of wiretaps. RIPA goes a step further and insists that wiretap capability be implemented right into the system. If you’re a service provider and can’t set up plug-and-play wiretap capability within a short time, Johnny English comes knocking at your door to say, ” ‘Allo, guvna! I ‘ear tell you ‘aven’t put in me wiretaps yet. Blimey! We’ll jus’ ‘ave to give you a hefty fine! Ods bodkins!” Wouldn’t that be awful (the law, not the accent)? It would, and it’s just what the FBI is hoping for. CALEA is getting a rewrite that, if it passes, would give the FBI that very capability.

I understand. Older siblings always get the new toys, and it’s only natural that we want to have them as well. But why does it have to be legal toys for surveillance? Why can’t it be chocolate? The United Kingdom enjoys chocolate that’s almost twice as good as American chocolate. Literally, they get 20 percent solid cocoa in their chocolate bars, while we suffer with a measly 11 percent. Instead, we’re learning to shut off the Internet for entire families.

That’s right. In the United Kingdom, if you are just suspected of having downloaded illegally obtained material three times (it’s known as the “three strikes” law), your Internet is cut off. Not just for you, but for your entire household. Life without the Internet, let’s face it, sucks. You’re not just missing out on videos of cats falling into bathtubs. You’re missing out of communication, jobs, and being a 21st-century citizen. Maybe this is OK in the United Kingdom because you can move up north, become a farmer, and enjoy a few pints down at the pub every night. Or you can just get a new ISP, because the United Kingdom actually has a competitive market for ISPs. The United States, as an homage, has developed the so-called “copyright alert system.” It works much the same way as the U.K. law, but it provides for six “strikes” instead of three and has a limited appeals system, in which the burden of proof lies on the suspected customer. In the United States, though, the rights-holders monitor users for suspected copyright infringement on their own, without the aid of ISPs. So far, we haven’t adopted the U.K. system in which ISPs are expected to monitor traffic and dole out their three strikes at their discretion.

These are examples of more targeted surveillance of criminal activities, though. What about untargeted mass surveillance? On June 21, one of Edward Snowden’s leaks revealed that the Government Communications Headquarters, the United Kingdom’s NSA equivalent, has been engaging in a staggering amount of data collection from civilians. This development generated far less fanfare than the NSA news, perhaps because the legal framework for this data collection has existed for a very long time under RIPA, and we expect surveillance in the United Kingdom. (Or maybe Americans were just living down to the stereotype of not caring about other countries.) The NSA models follow the GCHQ’s very closely, though, right down to the oversight, or lack thereof.

Media have labeled the FISA court that regulates the NSA’s surveillance as a “rubber-stamp” court, but it’s no match for the omnipotence of the Investigatory Powers Tribunal, which manages oversight for MI5, MI6, and the GCHQ. The Investigatory Powers Tribunal is exempt from the United Kingdom’s Freedom of Information Act, so it doesn’t have to share a thing about its activities (FISA apparently does not have this luxury—yet). On top of that, members of the tribunal are appointed by the queen. The queen. The one with the crown who has jubilees and a castle and probably a court wizard. Out of 956 complaints to the Investigatory Powers Tribunal, five have been upheld. Now that’s a rubber-stamp court we can aspire to!

Or perhaps not. The future of U.S. surveillance looks very grim if we’re set on following the U.K.’s lead. Across the United Kingdom, an estimated 4.2 million CCTV cameras, some with facial-recognition capability, keep watch on nearly the entire nation. (This can lead to some Monty Python-esque high jinks.) Washington, D.C., took its first step toward strong camera surveillance in 2008, when several thousand were installed ahead of President Obama’s inauguration.

Read the entire article here.

Image: Royal coat of arms of Queen Elizabeth II of the United Kingdom, as used in England and Wales, and Scotland. Courtesy of Wikipedia.

Bella Italia: It’s All in the Hands

[tube]DW91Ec4DYkU[/tube]

Italians are famous and infamous for their eloquent and vigorous hand gestures. Psychology professor Isabella Poggi, of Roma Tre University, has cataloged about 250 hand gestures used by Italians in everyday conversation. The gestures are used to reinforce a simple statement or emotion, or to convey quite complex meaning. Italy would not be the same without them.

Our favorite hand gesture is fingers and thumb pinched in the form of a spire, often used to mean “what on earth are you talking about?”; moving the hand slightly up and down while doing this adds emphasis and demands explanation.

For a visual lexicon of the most popular gestures jump here.

From the New York Times:

In the great open-air theater that is Rome, the characters talk with their hands as much as their mouths. While talking animatedly on their cellphones or smoking cigarettes or even while downshifting their tiny cars through rush-hour traffic, they gesticulate with enviably elegant coordination.

From the classic fingers pinched against the thumb that can mean “Whaddya want from me?” or “I wasn’t born yesterday” to a hand circled slowly, indicating “Whatever” or “That’ll be the day,” there is an eloquence to the Italian hand gesture. In a culture that prizes oratory, nothing deflates airy rhetoric more swiftly.

Some gestures are simple: the side of the hand against the belly means hungry; the index finger twisted into the cheek means something tastes good; and tapping one’s wrist is a universal sign for “hurry up.” But others are far more complex. They add an inflection — of fatalism, resignation, world-weariness — that is as much a part of the Italian experience as breathing.

Two open hands can ask a real question, “What’s happening?” But hands placed in prayer become a sort of supplication, a rhetorical question: “What do you expect me to do about it?” Ask when a Roman bus might arrive, and the universal answer is shrugged shoulders, an “ehh” that sounds like an engine turning over and two raised hands that say, “Only when Providence allows.”

To Italians, gesturing comes naturally. “You mean Americans don’t gesture? They talk like this?” asked Pasquale Guarrancino, a Roman taxi driver, freezing up and placing his arms flat against his sides. He had been sitting in his cab talking with a friend outside, each moving his hands in elaborate choreography. Asked to describe his favorite gesture, he said it was not fit for print.

In Italy, children and adolescents gesture. The elderly gesture. Some Italians joke that gesturing may even begin before birth. “In the ultrasound, I think the baby is saying, ‘Doctor, what do you want from me?’ ” said Laura Offeddu, a Roman and an elaborate gesticulator, as she pinched her fingers together and moved her hand up and down.

On a recent afternoon, two middle-aged men in elegant dark suits were deep in conversation outside the Giolitti ice cream parlor in downtown Rome, gesturing even as they held gelato in cones. One, who gave his name only as Alessandro, noted that younger people used a gesture that his generation did not: quotation marks to signify irony.

Sometimes gesturing can get out of hand. Last year, Italy’s highest court ruled that a man who inadvertently struck an 80-year-old woman while gesticulating in a piazza in the southern region Puglia was liable for civil damages. “The public street isn’t a living room,” the judges ruled, saying, “The habit of accompanying a conversation with gestures, while certainly licit, becomes illicit” in some contexts.

In 2008, Umberto Bossi, the colorful founder of the conservative Northern League, raised his middle finger during the singing of Italy’s national anthem. But prosecutors in Venice determined that the gesture, while obscene and the cause of widespread outrage, was not a crime.

Gestures have long been a part of Italy’s political spectacle. Former Prime Minister Silvio Berlusconi is a noted gesticulator. When he greeted President Obama and his wife, Michelle, at a meeting of the Group of 20 leaders in September 2009, he extended both hands, palms facing toward himself, and then pinched his fingers as he looked Mrs. Obama up and down — a gesture that might be interpreted as “va-va-voom.”

In contrast, Giulio Andreotti — Christian Democrat, seven-time prime minister and by far the most powerful politician of the Italian postwar era — was famous for keeping both hands clasped in front of him. The subtle, patient gesture functioned as a kind of deterrent, indicating the tremendous power he could deploy if he chose to.

Isabella Poggi, a professor of psychology at Roma Tre University and an expert on gestures, has identified around 250 gestures that Italians use in everyday conversation. “There are gestures expressing a threat or a wish or desperation or shame or pride,” she said. The only thing differentiating them from sign language is that they are used individually and lack a full syntax, Ms. Poggi added.

Far more than quaint folklore, gestures have a rich history. One theory holds that Italians developed them as an alternative form of communication during the centuries when they lived under foreign occupation — by Austria, France and Spain in the 14th through 19th centuries — as a way of communicating without their overlords understanding.

Another theory, advanced by Adam Kendon, the editor in chief of the journal Gesture, is that in overpopulated cities like Naples, gesturing became a way of competing, of marking one’s territory in a crowded arena. “To get attention, people gestured and used their whole bodies,” Ms. Poggi said, explaining the theory.

Read the entire article here.

Video courtesy of New York Times.

United States of Strange

With the United States turning another year older, it is a good time to ponder some of the lesser-known quirks of this beautiful yet paradoxical place. All nations have their esoteric cultural wonders and benign local oddities: the British (actually the Scots) have kilts, along with bowler hats and the Royal Family; Italians have Vespas and governments that last on average 8 months; the French, well, they’re just French; the Germans love fast cars and lederhosen. But for sheer variety and volume of absurdity, the United States probably surpasses them all.

From the Telegraph:

Run by the improbably named Genghis Cohen, Machine Gun Vegas bills itself as the ‘world’s first luxury gun lounge’. It opened last year, and claims to combine “the look and feel of an ultra-lounge with the functionality of a state of the art indoor gun range”. The team of NRA-certified on-site instructors, however, may be its most unique appeal. All are female, and all are ex-US military personnel.

See other images and read the entire article here.

Image courtesy of the Telegraph.

Everywhere And Nowhere

Most physicists believe that dark matter exists, but they have never seen it, only deduced its existence. This is a rather unsettling state of affairs, since by most estimates dark matter and dark energy together account for 95 percent of the universe. The stuff we are made from, interact with and see on a daily basis — atoms, their constituents and their forces — is a mere 5 percent.

From the Atlantic:

Here’s a little experiment.

Hold up your hand.

Now put it back down.

In that window of time, your hand somehow interacted with dark matter — the mysterious stuff that comprises the vast majority of the universe. “Our best guess,” according to Dan Hooper, an astronomy professor at the University of Chicago and a theoretical astrophysicist at the Fermi National Accelerator Laboratory, “is that a million particles of dark matter passed through your hand just now.”

Dark matter, in other words, is not merely the stuff of black holes and deep space. It is all around us. Somehow. We’re pretty sure.

But if you did the experiment — as the audience at Hooper’s talk on dark matter and other cosmic mysteries did at the Aspen Ideas Festival today — you didn’t feel those million particles. We humans have no sense of their existence, Hooper said, in part because they don’t hew to the forces that regulate our movement in the world — gravity, electromagnetism, the forces we can, in some way, feel. Dark matter, instead, is “this ghostly, elusive stuff that dominates our universe,” Hooper said.

It’s everywhere. And it’s also, as far as human knowledge is concerned, nowhere.

And yet, despite its mysteries, we know it’s out there. “All astronomers are in complete conviction that there is dark matter,” said Richard Massey, the lead author of a recent study mapping the dark matter of the universe, and Hooper’s co-panelist. The evidence for its existence, Hooper agreed, is “overwhelming.” And yet it’s evidence based on deduction: through our examinations of the observable universe, we make assumptions about the unobservable version.

Dark matter, in other words, is aptly named. A full 95 percent of the universe — the dark matter, the stuff that both is and is not — is effectively unknown to us. “All the science that we’ve ever done only ever examines five percent of the universe,” Massey said. Which means that there are still mysteries to be unraveled, and dark truths to be brought to light.

And it also means, Massey pointed out, that for scientists, “the job security is great.”

You might be wondering, though: given how little we know about dark matter, how is it that Hooper knew that a million particles of the stuff passed through your hand as you raised and lowered it?

“I cheated a little,” Hooper admitted. He assumed a particular mass for the individual particles. “We know what the density of dark matter is on Earth from watching how the Milky Way rotates. And we know roughly how fast they’re going. So you take those two bits of information, and all you need to know is how much mass each individual particle has, and then I can get the million number. And I assumed a kind of traditional guess. But it could be 10,000 higher; it could be 10,000 lower.”
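
Hooper’s million-particle figure is just a flux estimate: the number of particles per unit volume, times their speed, times the area of a hand, times how long you hold it up. Here is a minimal sketch of that arithmetic; every number in it (the local density, the speed, the 100 GeV particle mass, the hand area and the one-second window) is an illustrative assumption based on commonly quoted values, not a figure from the article:

GEV_TO_GRAMS = 1.783e-24            # 1 GeV/c^2 expressed in grams

density = 0.3 * GEV_TO_GRAMS        # assumed local dark matter density, ~0.3 GeV per cm^3, in g/cm^3
particle_mass = 100 * GEV_TO_GRAMS  # assumed mass per particle (the "traditional guess"), in grams
speed = 2.2e7                       # assumed galactic speed, ~220 km/s, in cm/s
hand_area = 100.0                   # rough cross-section of a raised hand, in cm^2
time_window = 1.0                   # seconds the hand stays up

number_density = density / particle_mass                      # particles per cm^3
particles = number_density * speed * hand_area * time_window
print(f"~{particles:.1e} dark matter particles")              # a few million

Halve the assumed particle mass and the count doubles, which is why Hooper stresses that the true number could be thousands of times higher or lower.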

Read the entire article here.

Fifty Years After Gettysburg

In 1913 some 50,000 veterans from both sides of the U.S. Civil War gathered at Gettysburg in Pennsylvania to commemorate the 50th anniversary of the battle. Photographers of the time were on hand to capture some fascinating and moving images, which are now preserved in the U.S. Library of Congress.

See more images here.

Image: The Blue and the Gray at Gettysburg: a Union veteran and a Confederate veteran shake hands at the Assembly Tent. Courtesy of U.S. Library of Congress.

Pretending to be Smart

Have you ever taken a date to a cerebral movie or the opera? Have you ever taken a classic work of literature to read at the beach? If so, you are not alone. But why are you doing it?

From the Telegraph:

Men try to impress their friends almost twice as much as women do by quoting Shakespeare and pretending to like jazz to seem more clever.

A fifth of all adults admitted they have tried to impress others by making out they are more cultured than they really are, but this rises to 41 per cent in London.

Scotland is the least pretentious country as only 14 per cent of the 1,000 UK adults surveyed had faked their intelligence there, according to Ask Jeeves research.

Typical methods of trying to seem cleverer ranged from deliberately reading a ‘serious’ novel on the beach, passing off other people’s witty remarks as one’s own and talking loudly about politics in front of others.

Two thirds put on the pretensions for friends, while 36 per cent did it to seem smarter in their workplace and 32 per cent tried to impress a potential partner.

One in five swapped their usual holiday read for something more serious on the beach and one in four went to an art gallery to look more cultured.

When it came to music tastes, 20 per cent have pretended to prefer Beethoven to Beyonce and many have referenced operas they have never seen.

A spokesman for Ask Jeeves said: “We were surprised by just how many people think they should go to such lengths in order to impress someone else.

“They obviously think they will make a better impression if they pretend to like Beethoven rather than admit they listen to Beyonce or read The Spectator rather than Loaded.

“Social media and the internet means it is increasingly easy to present this kind of false image about themselves.

“But in the end, if they are really going to be liked then it is going to be for the person they really are rather than the person they are pretending to be.”

Social media also plays a large part with people sharing Facebook posts on politics or re-tweeting clever tweets to raise their intellectual profile.

Men were the biggest offenders, with 26 per cent of men admitting to the acts of pretence compared to 14 per cent of women.

Top things people have done to seem smarter:

Repeated someone else’s joke as your own

Gone to an art gallery

Listened to classical music in front of others

Read a ‘serious’ book on the beach

Re-tweeted a clever tweet

Talked loudly about politics in front of others

Read a ‘serious’ magazine on public transport

Shared an intellectual article on Facebook

Quoted Shakespeare

Pretended to know about wine

Worn glasses with clear lenses

Mentioned an opera you’d ‘seen’

Pretended to like jazz

Read the entire article here.

Image: Opera. Courtesy of the New York Times.