All posts by Mike

Fields from Dreams

It’s time to abandon the notion that you, and everything around you, are made up of tiny particles and their subatomic constituents. You are nothing more than perturbations in a field, or fields. Nothing more. Theoretical physicist Sean Carroll explains all.

From Symmetry:

When scientists talk to non-scientists about particle physics, they talk about the smallest building blocks of matter: what you get when you divide cells and molecules into tinier and tinier bits until you can’t divide them any more.

That’s one way of looking at things. But it’s not really the way things are, said Caltech theoretical physicist Sean Carroll in a lecture at Fermilab. And if physicists really want other people to appreciate the discovery of the Higgs boson, he said, it’s time to tell them the rest of the story.

“To understand what is going on, you actually need to give up a little bit on the notion of particles,” Carroll said in the June lecture.

Instead, think in terms of fields.

You’re already familiar with some fields. When you hold two magnets close together, you can feel their attraction or repulsion before they even touch—an interaction between two magnetic fields. Likewise, you know that when you jump in the air, you’re going to come back down. That’s because you live in Earth’s gravitational field.

Carroll’s stunner, at least to many non-scientists, is this: Every particle is actually a field. The universe is full of fields, and what we think of as particles are just excitations of those fields, like waves in an ocean. An electron, for example, is just an excitation of an electron field.

This may seem counterintuitive, but seeing the world in terms of fields actually helps make sense of some otherwise confusing facts of particle physics.

When a radioactive material decays, for example, we think of it as spitting out different kinds of particles. Neutrons decay into protons, electrons and neutrinos. Those protons, electrons and neutrinos aren’t hiding inside neutrons, waiting to get out. Yet they appear when neutrons decay.

If we think in terms of fields, this sudden appearance of new kinds of particles starts to make more sense. The energy and excitation of one field transfers to others as they vibrate against each other, making it seem like new types of particles are appearing.

Thinking in fields provides a clearer picture of how scientists are able to make massive particles like Higgs bosons in the Large Hadron Collider. The LHC smashes bunches of energetic protons into one another, and scientists study those collisions.

“There’s an analogy that’s often used here,” Carroll said, “that doing particle physics is like smashing two watches together and trying to figure out how watches work by watching all the pieces fall apart.

“This analogy is terrible for many reasons,” he said. “The primary one is that what’s coming out when you smash particles together is not what was inside the original particles. … [Instead,] it’s like you smash two Timex watches together and a Rolex pops out.”

What’s really happening in LHC collisions is that especially excited excitations of a field—the energetic protons—are vibrating together and transferring their energy to adjacent fields, forming new excitations that we see as new particles—such as Higgs bosons.

Thinking in fields can also better explain how the Higgs works. Higgs bosons themselves do not give other particles mass by, say, sticking to them in clumps. Instead, the Higgs field interacts with other fields, giving them—and, by extension, their particles—mass.

Read the entire article here.

Image: iron filing magnetic field lines between two bar magnets. Courtesy of Wikimedia.

Is Your Company Catholic or Baptist?

Is your business Jewish? Does your corporation follow the Book of Tao or the Book of Mormon or the Book of Shadows (Wicca) or the Yasna (Zoroastrianism)? Or is your company Baptist, Muslim, Hindu or atheist, or a practitioner of one of the remaining estimated 4,200 belief systems?

In mid-2012 the U.S. Supreme Court affirmed that corporations are indeed people when it reaffirmed its Citizens United ruling against the State of Montana, allowing unlimited corporate spending in state elections. Now we await another contentious and perplexing ruling from the justices, one that may assign spirituality to a corporation alongside personhood.

Inventors of board games take note: there is surely a game to be made from matching one’s favorite companies with religions of the world.

From Slate:

Remember the big dustup last summer over the contraception mandate in President Obama’s health reform initiative? It required companies with more than 50 employees to provide insurance, including for contraception, as part of their employees’ health care plans. The constitutional question was whether employers with religious objections to providing coverage for birth control could be forced to do so under the new law. The Obama administration tweaked the rules a few times to try to accommodate religious employers, first exempting some religious institutions—churches and ministries were always exempt—and then allowing companies that self-insure to use a separate insurance plan to pay and provide for the contraception. Still, religious employers objected, and lawsuits were filed, all 60 of them.

A year later, the courts have begun to weigh in, and the answer has slowly begun to emerge: maybe yes, maybe no. It all depends on whether corporations—which already enjoy significant free-speech rights—can also invoke religious freedom rights enshrined in the First Amendment.

Last Friday, the 3rd U.S. Circuit Court of Appeals upheld the contraception mandate, rejecting a challenge from a Pennsylvania-based cabinetmaker who claimed that as a Mennonite he should not be compelled to provide contraceptive coverage to his 950 employees because the mandate violates the company’s rights under the free exercise clause of the First Amendment and the Religious Freedom Restoration Act. The owner considers some of the contraception methods at issue—specifically, the morning-after and week-after pills—abortifacients.

The appeals court looked carefully to the precedent created by Citizens United—the 2010 case affording corporations free-speech rights when it came to election-related speech—to determine whether corporations also enjoy constitutionally protected religious freedom. Writing for the two judges in the majority, Judge Robert Cowen found that although there was “a long history of protecting corporations’ rights to free speech,” there was no similar history of protection for the free exercise of religion. “We simply cannot understand how a for-profit, secular corporation—apart from its owners—can exercise religion,” he concluded. “A holding to the contrary … would eviscerate the fundamental principle that a corporation is a legally distinct entity from its owners.”

Cowen also flagged the absolute novelty of the claims, noting that there was almost no case law suggesting that corporations can hold religious beliefs. “We are not aware of any case preceding the commencement of litigation about the Mandate, in which a for-profit, secular corporation was itself found to have free exercise rights.” Finally, he took pains to distinguish the corporation, Conestoga, from its legal owners. “Since Conestoga is distinct from the Hahns, the Mandate does not actually require the Hahns to do anything. … It is Conestoga that must provide the funds to comply with the Mandate—not the Hahns.”

Judge Kent Jordan, dissenting at length in the case, said that for-profit, secular corporations can surely avail themselves of the protections of the religion clauses. “To recognize that religious convictions are a matter of individual experience cannot and does not refute the collective character of much religious belief and observance … Religious opinions and faith are in this respect akin to political opinions and passions, which are held and exercised both individually and collectively.”

The 3rd Circuit decision creates a significant split between the appeals courts, because a few short weeks earlier, the Colorado-based 10th U.S. Circuit Court of Appeals ruled in favor of Hobby Lobby Stores Inc., finding by a 5–3 margin that corporations can be persons entitled to assert religious rights. Hobby Lobby is a chain of crafts supply stores located in 41 states. The 10th Circuit upheld an injunction blocking the contraception requirement because it offended the company owners’ religious beliefs. The majority in the 3rd Circuit wrote that it “respectfully disagrees” with the 10th Circuit. A split of this nature makes Supreme Court review almost inevitable.

The Supreme Court has long held the free exercise clause of the First Amendment to prohibit governmental regulation of religious beliefs, but a long line of cases holds that not every regulation that impinges upon your religious beliefs is unconstitutional. The Religious Freedom Restoration Act bars the federal government from imposing a “substantial burden” on anyone’s “exercise of religion” unless it is “the least restrictive means of furthering [a] compelling governmental interest.” The Obama administration and the judges who have refused to grant injunctions contend that the burden here is insignificant, amounting to a few dollars borne indirectly by the employer to facilitate independent, private decisions made by their female employees. They also argue that they are promoting a compelling government interest in providing preventive health care to Americans. The employers and the judges who have enjoined the birth-control provision claim that the employers are being forced to choose between violating protected religious beliefs and facing crippling fines, and that free or inexpensive birth control is available at community health centers and public clinics.

Basically, the constitutional question will come down to whether a for-profit, secular corporation can hold religious beliefs and convictions, or whether—as David Gans explains here —“the Court’s cases recognize a basic, common-sense difference between living, breathing, individuals—who think, possess a conscience, and a claim to human dignity—and artificial entities, which are created by the law for a specific purpose, such as to make running a business more efficient and lucrative.” Will Baude takes the opposite view, explaining that the 3rd Circuit’s reasoning—that “ ‘corporations have no consciences, no beliefs, no feelings, no thoughts, no desires’ … would all prove too much, because they are technically true of any organizational association, including … a church!” Baude likens the claim that corporations can never have religious freedom rights to the claim that corporations—including the New York Times—can never have free-speech rights.

Part of the problem, at least in the case of Hobby Lobby and Conestoga, is that neither corporation was designed to do business as a religious entity. It has been clear since the nation’s founding that corporations enjoy rights in connection to the purposes for which they were created—which is why the administration already exempts religious employers whose purpose is to inculcate religious values and chiefly employ and serve people who share their religious tenets. This is about companies that don’t meet those criteria. As the dissenters at the 10th Circuit observed, the fact that some “spiritual corporations” have some religious purposes doesn’t make every corporation a religious entity. And as professor Elizabeth Sepper of Washington University puts it in a new law-review article on the subject: “Corporations, as conglomerate entities, exist indefinitely and independently of their shareholders. They carry out acts and affect individual lives, and have an identity that is larger than their constituent parts. Walmart is Walmart, even when Sam Walton resigns.”

The rest of the problem is self-evident. Where does it stop? Why does your boss’ religious freedom allow her to curtail your own? The dangers in allowing employers to exercise a religious veto over employee health care are obvious. Can an employer deny you access to psychiatric care if he opposes it on religious grounds? To AIDS medications? To gelatin-covered pills? Constitutional protections of a single employer’s individual rights of conscience and belief become a bludgeon by which he can dictate the most intimate health decisions of his workers, whose own religious rights and constitutional freedoms become immaterial.

Read the entire article here.

Image courtesy of ThinkProgress.

The View From Saturn

As Carl Sagan would no doubt have had us remember, we are still collectively residents of a very small, very pale blue dot. This image of planet Earth was taken by the Cassini spacecraft, which has been busy orbiting and mapping the Saturnian system for the last several years. Cassini turned its cameras toward our home on July 19, 2013 for this portrait.

From NASA:

Color and black-and-white images of Earth taken by two NASA interplanetary spacecraft on July 19 show our planet and its moon as bright beacons from millions of miles away in space.

NASA’s Cassini spacecraft captured the color images of Earth and the moon from its perch in the Saturn system nearly 900 million miles (1.5 billion kilometers) away. MESSENGER, the first probe to orbit Mercury, took a black-and-white image from a distance of 61 million miles (98 million kilometers) as part of a campaign to search for natural satellites of the planet.

In the Cassini images Earth and the moon appear as mere dots — Earth a pale blue and the moon a stark white, visible between Saturn’s rings. It was the first time Cassini’s highest-resolution camera captured Earth and its moon as two distinct objects.

It also marked the first time people on Earth had advance notice their planet’s portrait was being taken from interplanetary distances. NASA invited the public to celebrate by finding Saturn in their part of the sky, waving at the ringed planet and sharing pictures over the Internet. More than 20,000 people around the world participated.

“We can’t see individual continents or people in this portrait of Earth, but this pale blue dot is a succinct summary of who we were on July 19,” said Linda Spilker, Cassini project scientist, at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “Cassini’s picture reminds us how tiny our home planet is in the vastness of space, and also testifies to the ingenuity of the citizens of this tiny planet to send a robotic spacecraft so far away from home to study Saturn and take a look-back photo of Earth.”

Pictures of Earth from the outer solar system are rare because from that distance, Earth appears very close to our sun. A camera’s sensitive detectors can be damaged by looking directly at the sun, just as a human being can damage his or her retina by doing the same. Cassini was able to take this image because the sun had temporarily moved behind Saturn from the spacecraft’s point of view and most of the light was blocked.

A wide-angle image of Earth will become part of a multi-image picture, or mosaic, of Saturn’s rings, which scientists are assembling. This image is not expected to be available for several weeks because of the time-consuming challenges involved in blending images taken in changing geometry and at vastly different light levels, with faint and extraordinarily bright targets side by side.

“It thrills me to no end that people all over the world took a break from their normal activities to go outside and celebrate the interplanetary salute between robot and maker that these images represent,” said Carolyn Porco, Cassini imaging team lead at the Space Science Institute in Boulder, Colo. “The whole event underscores for me our ‘coming of age’ as planetary explorers.”

Read the entire article here.

Image: In this rare image taken on July 19, 2013, the wide-angle camera on NASA’s Cassini spacecraft has captured Saturn’s rings and our planet Earth and its moon in the same frame. Courtesy: NASA/JPL-Caltech/Space Science Institute.

Our Beautiful Galaxy

We should post stunning images of the night sky like these more often. For most of us, unfortunately, light pollution from our surroundings hides beautiful vistas like these from the naked eye.

Image: Receiving the Galactic Beam. The Milky Way appears to line up with the giant 64-m dish of the radio telescope at Parkes Observatory in Australia. As can be seen from the artificial lights around the telescope, light pollution is not a problem for radio astronomers. Radio and microwave interference is a big issue, however, as it masks the faint natural emissions from distant objects in space. For this reason many radio observatories ban mobile phone use on their premises. Courtesy: Wayne England / The Royal Observatory Greenwich / Telegraph.

Fifty Shades of Red

Many of us once in a while lose our marbles, go off our trolleys, join the funny farm. We are sometimes just plain wacko, bonkers, nuts, loony, certifiable, batty, bonzo, daft, as mad as a hatter. For proof, we turn to the London Fire Brigade (fire department, to our North American readers). The service has just issued a list of the 1,300 unusual incidents it has been called out to since 2010, in addition to much more serious events such as building fires and other man-made and natural disasters. The list makes for some very embarrassing reading, and includes: head stuck in toilet, hands stuck in blender, genitals (male) stuck in toaster.

From the Guardian:

It sounds barmy doesn’t it, the London Fire Brigade telling people about men putting their genitals where they shouldn’t? But the fact of the matter is people put body parts in strange places all the time, get stuck, and then call us out to release them. We’re not just talking one or two; our crews have been called out to over 1,300 “unusual” incidents since 2010 – that’s more than one a day.

Granted, they’re not all penis-related, but some are very silly: people with loo seats on their heads, a man with his arm trapped in a portable toilet, adults stuck in children’s toys, someone with a test tube on his finger. And a lot of handcuffs. More than 25 people call us out every year to release them from these. I don’t know whether it’s the Fifty Shades effect or not, but I can tell you this, most are Fifty Shades of Red by the time we turn up in a big, red fire engine with our equipment to cut them out.

We launched our campaign, #FiftyShadesofRed, in a bid to highlight some of the less conventional incidents we’ve attended over the past few years. We tweeted about the incidents from our account, @LondonFire, which certainly raised a few eyebrows, not least among some of my international firefighting colleagues who were surprised to see us putting it all out there, so to speak. This included nine instances of men with rings stuck in awkward places; nine people with their hands stuck in blenders and shredders; numerous people with their hands stuck in letterboxes; a child with a tambourine on its head … the list goes on. We’ve even been called out to rescue a man whose penis was stuck in a toaster. The mind boggles but the message is serious: use some common sense and remember we’re an emergency service and should be treated as such.

It all seems like a bit of fun, but actually when people call us out in these circumstances, they perhaps don’t realise that our firefighters are then not available to attend genuine emergencies, such as fires. Yes, accidents do happen, and sometimes situations can’t be avoided, but I think an awful lot of these incidents could be prevented if people applied some good, old-fashioned common sense. Using handcuffs? Wear the key round your neck. Potty training a toddler? Watch them like a hawk so they don’t end up with it stuck on their head. Like wearing rings? Lovely, but if they’re too small, don’t force them on.

As well as attending each call being time-consuming, it is also pretty expensive, with each costing just shy of £300 of public money. Yet despite many of these call-outs being a bit wacky, they can also be very stressful and painful to those trapped, and some are potentially life-threatening. People getting trapped in machinery, or falling on to fences and getting impaled spring to mind. I’d like to reassure everyone that if there is a genuine emergency, and someone’s in need of our help, we will of course always attend.

Read the entire article here.

Image: Handcuffs. Courtesy of Wikipedia.

Earth as the New Venus

New research models show just how precarious our planet’s climate really is. Runaway greenhouse warming would make the predicted 2- to 6-foot rise in average sea levels over the next 50-100 years seem like a puddle at the local splash pool.

From ars technica:

With the explosion of exoplanet discoveries, researchers have begun to seriously revisit what it takes to make a planet habitable, defined as being able to support liquid water. At a basic level, the amount of light a planet receives sets its temperature. But real worlds aren’t actually basic—they have atmospheres, reflect some of that light back into space, and experience various feedbacks that affect the temperature.

Attempts to incorporate all those complexities into models of other planets have produced some unexpected results. Some even suggest that Earth teeters on the edge of experiencing a runaway greenhouse, one that would see its oceans boil off. The fact that large areas of the planet are covered in ice may make that conclusion seem a bit absurd, but a second paper looks at the problem from a somewhat different angle—and comes to the same conclusion. If it weren’t for clouds and our nitrogen-rich atmosphere, the Earth might be an uninhabitable hell right now.

The new work focuses on a very simple model of an atmosphere: a linear column of nothing but water vapor. This clearly doesn’t capture the complex dynamics of weather and the different amounts of light to reach the poles, but it does include things like the amount of light scattered back out into space and the greenhouse impact of the water vapor. These sorts of calculations are simple enough that they were first done decades ago, but the authors note that this particular problem hadn’t been revisited in 25 years. Our knowledge of how water vapor absorbs both visible and infrared light has improved over that time.

Water vapor, like other greenhouse gasses, allows visible light to reach the surface of a planet, but it absorbs most of the infrared light that gets emitted back toward space. Only a narrow window, centered around 10 micrometer wavelengths, makes it back out to space. Once the incoming energy gets larger than the amount that can escape, the end result is a runaway greenhouse: heat evaporates more surface water, which absorbs more infrared, trapping even more heat. At some point, the atmosphere gets so filled with water vapor that light no longer even reaches the surface, instead getting absorbed by the atmosphere itself.
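To make the feedback concrete, here is a minimal toy energy-balance sketch in Python. It is not the authors’ model; every number, including the cap on outgoing infrared that stands in for the water-vapor window, is invented for illustration.

```python
# Toy energy balance: outgoing infrared is capped at an assumed
# "radiation limit" to mimic the water-vapor absorption described above.
SIGMA = 5.67e-8          # Stefan-Boltzmann constant, W m^-2 K^-4
RADIATION_LIMIT = 282.0  # assumed cap on outgoing infrared, W m^-2

def step(temp_k, absorbed_solar, heat_capacity=1e7, dt=1e6):
    """Advance surface temperature by one crude time step."""
    outgoing = min(SIGMA * temp_k**4, RADIATION_LIMIT)
    return temp_k + (absorbed_solar - outgoing) * dt / heat_capacity

for absorbed in (240.0, 300.0):  # W m^-2: below, then above, the limit
    temp = 288.0                 # start near Earth's mean temperature
    for _ in range(2000):
        temp = step(temp, absorbed)
    print(f"absorbed {absorbed:.0f} W/m2 -> final temperature {temp:.0f} K")
# Absorbing 240 W/m2 settles near 255 K; 300 W/m2 exceeds the cap, so the
# loop never equilibrates and the temperature keeps climbing: the runaway.
```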

The model shows that, once temperatures reach 1,800K, a second window through the water vapor opens up at about four microns, which allows additional energy to escape into space. The authors suggest that this could be used when examining exoplanets, as high emissions in this region could be taken as an indication that the planet was undergoing a runaway greenhouse.

The authors also used the model to look at what Earth would be like if it had a cloud-free, water atmosphere. The surprise was that the updated model indicated that this alternate-Earth atmosphere would absorb 30 percent more energy than previous estimates suggested. That’s enough to make a runaway greenhouse atmosphere stable at the Earth’s distance from the Sun.

So, why is the Earth so relatively temperate? The authors added a few additional factors to their model to find out. Additional greenhouse gasses like carbon dioxide and methane made runaway heating more likely, while nitrogen scattered enough light to make it less likely. The net result is that, under an Earth-like atmosphere composition, our planet should experience a runaway greenhouse. (In fact, greenhouse gasses can lower the barrier between a temperate climate and a runaway greenhouse, although only at concentrations much higher than we’ll reach even if we burn all the fossil fuels available.) But we know it hasn’t. “A runaway greenhouse has manifestly not occurred on post-Hadean Earth,” the authors note. “It would have sterilized Earth (there is observer bias).”

So, what’s keeping us cool? The authors suggest two things. The first is that our atmosphere isn’t uniformly saturated with water; some areas are less humid and allow more heat to radiate out into space. The other factor is the existence of clouds. Depending on their properties, clouds can either insulate or reflect sunlight back into space. On balance, however, it appears they are key to keeping our planet’s climate moderate.

But clouds won’t help us out indefinitely. Long before the Sun expands and swallows the Earth, the amount of light it emits will rise enough to make a runaway greenhouse more likely. The authors estimate that, with an all-water atmosphere, we’ve got about 1.5 billion years until the Earth is sterilized by skyrocketing temperatures. If other greenhouse gasses are present, then that day will come even sooner.

The authors don’t expect that this will be the last word on exoplanet conditions—in fact, they revisited waterlogged atmospheres in the hopes of stimulating greater discussion of them. But the key to understanding exoplanets will ultimately involve adapting the planetary atmospheric models we’ve built to understand the Earth’s climate. With full, three-dimensional circulation of the atmosphere, these models can provide a far more complete picture of the conditions that could prevail under a variety of circumstances. Right now, they’re specialized to model the Earth, but work is underway to change that.

Read the entire article here.

Image: Venus shrouded in perennial clouds of carbon dioxide, sulfur dioxide and sulfuric acid, as seen by the Messenger probe, 2004. Courtesy of Wikipedia.

Night Owl? You Are Evil

New research — probably conducted by a group of early-risers — shows that people who prefer to stay up late, and rise late, are more likely to be narcissistic, insensitive, manipulative and psychopathic.

That said, previous research has suggested that night owls are generally more intelligent and wealthier than their early-rising, but nicer, cousins.

From the Telegraph:

Psychologists have found that people who are often described as “night owls” display more signs of narcissism, Machiavellianism and psychopathic tendencies than those who are “morning larks”.

The scientists suggest that the reason these traits, known as the Dark Triad, are more prevalent in those who do better at night may be linked to our evolutionary past.

They claim that the hours of darkness may have helped to conceal those who adopted a “cheater’s strategy” while living in groups.

Some social animals will use the cover of darkness to steal females away from more dominant males. This behaviour was also recently spotted in rhinos in Africa.

Dr Peter Jonason, a psychologist at the University of Western Sydney, said: “It could be adaptively effective for anyone pursuing a fast life strategy like that embodied in the Dark Triad to occupy and exploit a lowlight environment where others are sleeping and have diminished cognitive functioning.

“Such features of the night may facilitate the casual sex, mate-poaching, and risk-taking the Dark Triad traits are linked to.

“In short, those high on the Dark Triad traits, like many other predators such as lions, African hunting dogs and scorpions, are creatures of the night.”

Dr Jonason and his colleagues, whose research is published in the journal Personality and Individual Differences, surveyed 263 students, asking them to complete a series of standard personality tests designed to measure their scores on the Dark Triad traits.

They were rated on scales for narcissism, the tendency to seek admiration and special treatment; Machiavellianism, a desire to manipulate others; and psychopathy, an inclination towards callousness and insensitivity.

To test each, they were asked to rate their agreement with statements like: “I have a natural talent for influencing people”, “I could beat a lie detector” and “people suffering from incurable diseases should have the choice of being put painlessly to death”.

The volunteers were also asked to complete a questionnaire about how alert they felt at different times of the day and how late they stayed up at night.

The study revealed that those with a darker personality score tended to say they functioned more effectively in the evening.

They also found that those who stayed up later tended to have a higher sense of entitlement and seemed to be more exploitative.

They could find no evidence, however, that the traits were linked to the participants’ gender, ruling out the possibility that the tendency to plot and act in the night time had its roots in sexual evolution.

Previous research has suggested that people who thrive at night tend also to be more intelligent.

Combined with the other darker personality traits, this could be a dangerous mix.

Read the entire article here.

Image: Portrait of Niccolò Machiavelli, by Santi di Tito. Courtesy of Wikipedia.

Carlos Danger and Other Pseudonyms

Your friendly editor at theDiagonal, also known as Salvador Gamble, is always game for some sardonic wit. So, we are very proud to point you to Slate’s online pseudonym generator. If, like New York mayoral candidate and ex-U.S. Congressman Anthony Weiner, you need a mysterious persona to protect your (lewd) stream of consciousness online, then this is the tool for you!

We used the generator to come up with online alter egos for a few of our favorite, trending personalities:

– Chris Froome: Ronaldo Stealth

– Lance Armstrong: Ignacio Death

– Vladimir Putin: Ronaldo Kill

– Mitch McConnell: Inigo Peril

– Ben Bernanke: Pascual Menace
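Slate keeps its actual rules to itself, but a toy imitation is easy to sketch in Python. The name lists below are invented; seeding the random generator with the real name makes each alter ego repeatable.

```python
# Toy pseudonym generator in the spirit of Slate's tool; the lists are
# invented, and Slate's real generator presumably works differently.
import random

FIRST_NAMES = ["Carlos", "Ronaldo", "Ignacio", "Inigo", "Pascual", "Salvador"]
SURNAMES = ["Danger", "Stealth", "Death", "Kill", "Peril", "Menace", "Gamble"]

def pseudonym(real_name: str) -> str:
    rng = random.Random(real_name)  # same input always yields the same alias
    return f"{rng.choice(FIRST_NAMES)} {rng.choice(SURNAMES)}"

print(pseudonym("Anthony Weiner"))  # deterministic for a given name
```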

MondayMap: Feeding the Mississippi

The system of streams and tributaries that feeds the great Mississippi river is a complex interconnected web covering around half of the United States. A new mapping tool puts it all in one intricate chart.

From Slate:

A new online tool released by the Department of the Interior this week allows users to select any major stream and trace it up to its sources or down to its watershed. The above map, exported from the tool, highlights all the major tributaries that feed into the Mississippi River, illustrating the river’s huge catchment area of approximately 1.15 million square miles, or 37 percent of the land area of the continental U.S. Use the tool to see where the streams around you are getting their water (and pollution).
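Under the hood, tracing a stream to its sources is a walk over a directed graph of stream segments. Here is a minimal sketch in Python; the tiny network below is invented for illustration and is not the Interior Department tool’s actual data or code.

```python
# Toy upstream trace over a river network modeled as a directed graph:
# each entry maps a stream segment to the segments that flow into it.
FLOWS_INTO = {
    "Mississippi": ["Missouri", "Ohio", "Arkansas"],
    "Missouri": ["Platte", "Yellowstone"],
    "Ohio": ["Tennessee", "Wabash"],
}

def trace_upstream(segment, network):
    """Collect every segment that ultimately feeds `segment`."""
    upstream = []
    for tributary in network.get(segment, []):
        upstream.append(tributary)
        upstream.extend(trace_upstream(tributary, network))
    return upstream

print(trace_upstream("Mississippi", FLOWS_INTO))
# ['Missouri', 'Platte', 'Yellowstone', 'Ohio', 'Tennessee', 'Wabash', 'Arkansas']
```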

See a larger version of the map here.

Image: Map of the Mississippi river system. Courtesy of Nationalatlas.gov.

Warp Factor

To date, the fastest speed ever traveled by humans is just under 25,000 miles per hour. This milestone was set by the reentry capsule from the Apollo 10 moon mission, which reached 24,961 mph as it hurtled through Earth’s upper atmosphere. Yet this pales in comparison to the speed of light, which clocks in at 186,282 miles per second in a vacuum. A quick visit to the calculator puts Apollo 10 at 6.93 miles per second, or 0.0037 percent of the speed of light!
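For the skeptical, the arithmetic is easy to check in a few lines of Python:

```python
# Check the speeds quoted above.
apollo_mph = 24_961       # Apollo 10 reentry speed, miles per hour
light_mi_s = 186_282      # speed of light, miles per second

apollo_mi_s = apollo_mph / 3600        # convert to miles per second
fraction = apollo_mi_s / light_mi_s    # fraction of light speed

print(f"{apollo_mi_s:.2f} miles per second")           # 6.93
print(f"{fraction * 100:.4f} percent of light speed")  # 0.0037
```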

Despite our very pedestrian speeds, many dream of a future in which humans might reach the stars, powered by some kind of “warp drive” (yes, Star Trek comes to mind). A handful of researchers at NASA are actively pondering this today. However, our poor level of technology, combined with our incomplete understanding of the workings of the universe, suggests that an Alcubierre-like approach is still centuries from our grasp.

From the New York Times:

Beyond the security gate at the Johnson Space Center’s 1960s-era campus here, inside a two-story glass and concrete building with winding corridors, there is a floating laboratory.

Harold G. White, a physicist and advanced propulsion engineer at NASA, beckoned toward a table full of equipment there on a recent afternoon: a laser, a camera, some small mirrors, a ring made of ceramic capacitors and a few other objects.

He and other NASA engineers have been designing and redesigning these instruments, with the goal of using them to slightly warp the trajectory of a photon, changing the distance it travels in a certain area, and then observing the change with a device called an interferometer. So sensitive is their measuring equipment that it was picking up myriad earthly vibrations, including people walking nearby. So they recently moved into this lab, which floats atop a system of underground pneumatic piers, freeing it from seismic disturbances.

The team is trying to determine whether faster-than-light travel — warp drive — might someday be possible.

Warp drive. Like on “Star Trek.”

“Space has been expanding since the Big Bang 13.7 billion years ago,” said Dr. White, 43, who runs the research project. “And we know that when you look at some of the cosmology models, there were early periods of the universe where there was explosive inflation, where two points would’ve went receding away from each other at very rapid speeds.”

“Nature can do it,” he said. “So the question is, can we do it?”

Einstein famously postulated that, as Dr. White put it, “thou shalt not exceed the speed of light,” essentially setting a galactic speed limit. But in 1994, a Mexican physicist, Miguel Alcubierre, theorized that faster-than-light speeds were possible in a way that did not contradict Einstein, though Dr. Alcubierre did not suggest anyone could actually construct the engine that could accomplish that.

His theory involved harnessing the expansion and contraction of space itself. Under Dr. Alcubierre’s hypothesis, a ship still couldn’t exceed light speed in a local region of space. But a theoretical propulsion system he sketched out manipulated space-time by generating a so-called “warp bubble” that would expand space on one side of a spacecraft and contract it on another.

“In this way, the spaceship will be pushed away from the Earth and pulled towards a distant star by space-time itself,” Dr. Alcubierre wrote. Dr. White has likened it to stepping onto a moving walkway at an airport.

But Dr. Alcubierre’s paper was purely theoretical, and suggested insurmountable hurdles. Among other things, it depended on large amounts of a little understood or observed type of “exotic matter” that violates typical physical laws.

Dr. White believes that advances he and others have made render warp speed less implausible. Among other things, he has redesigned the theoretical warp-traveling spacecraft — and in particular a ring around it that is key to its propulsion system — in a way that he believes will greatly reduce the energy requirements.

Read the entire article here.

Sounds of Extinction

Camera aficionados will find themselves lamenting the demise of the film advance. Now that the world has moved on from film to digital, you will no longer hear that distinctive mechanical sound as you wind on the film, hoping the teeth on the spool engage the sprocket holes of the film.

Hardcore computer buffs will no doubt miss the beep-beep-hiss sound of the 56K modem — that now seemingly ancient box that once connected us to… well, who knows what it actually connected us to at that speed.

Our favorite arcane sounds, soon to become relegated to the audio graveyard: the telephone handset slam, the click and carriage return of the typewriter, the whir of reel-to-reel tape, the crackle of the diamond stylus as it first hits an empty groove on a 33.

More sounds you may (or may not) miss below.

From Wired:

The forward march of technology has a drum beat. These days, it’s custom text-message alerts, or your friend saying “OK, Glass” every five minutes like a tech-drunk parrot. And meanwhile, some of the most beloved sounds are falling out of the marching band.

The boops and beeps of bygone technology can be used to chart its evolution. From the zzzzzzap of the Tesla coil to the tap-tap-tap of Morse code being sent via telegraph, what were once the most important nerd sounds in the world are now just historical signposts. But progress marches forward, and for every irritatingly smug Angry Pigs grunt we have to listen to, we move further away from the sound of the Defender ship exploding.

Let’s celebrate the dying cries of technology’s past. The following sounds are either gone forever or definitely on their way out. Bow your heads in silence and bid them a fond farewell.

The Telephone Slam

Ending a heated telephone conversation by slamming the receiver down in anger was so incredibly satisfying. There was no better way to punctuate your frustration with the person on the other end of the line. And when that receiver hit the phone, the clack of plastic against plastic was accompanied by a slight ringing of the phone’s internal bell. That’s how you knew you were really pissed — when you slammed the phone so hard, it rang.

There are other sounds we’ll miss from the phone. The busy signal died with the rise of voicemail (although my dad refuses to get voicemail or call waiting, so he’s still OG), and the rapid click-click-click of the dial on a rotary phone is gone. But none of those compare with hanging up the phone with a forceful slam.

Tapping a touchscreen just does not cut it. So the closest thing we have now is throwing the pitifully fragile smartphone against the wall.

The CRT Television

The only TVs left that still use cathode-ray tubes are stashed in the most depressing places — the waiting rooms of hospitals, used car dealerships, and the dusty guest bedroom at your grandparents’ house. But before we all fell prey to the magical resolution of zeros and ones, boxy CRT televisions warmed (literally) the living rooms of every home in America. The sounds they made when you turned them on warmed our hearts, too — the gentle whoosh of the degaussing coil as the set was brought to life with the heavy tug of a pull-switch, or the satisfying mechanical clunk of a power button. As the tube warmed up, you’d see the visuals slowly brighten on the screen, giving you ample time to settle into the couch to enjoy the latest episode of Seinfeld.

Read the entire article here.

Image courtesy of Wired.

Dolphins Use Names

From Wired:

For decades, scientists have been fascinated by dolphins’ so-called signature whistles: distinctive vocal patterns learned early and used throughout life. The purpose of these whistles is a matter of debate, but new research shows that dolphins respond selectively to recorded versions of their personal signatures, much as a person might react to someone calling their name.

Combined with earlier findings, the results “present the first case of naming in mammals, providing a clear parallel between dolphin and human communication,” said biologist Stephanie King of Scotland’s University of St. Andrews, an author of the new study.

Earlier research by Janik and King showed that bottlenose dolphins call each other’s signature whistles while temporarily restrained in nets, but questions had remained over how dolphins used them at sea, in their everyday lives. King’s new experiment, conducted with fellow St. Andrews biologist Vincent Janik and described July 22 in Proceedings of the National Academy of Sciences, involved wild bottlenose groups off Scotland’s eastern coast.

Janik and King recorded their signature whistles, then broadcast computer-synthesized versions through a hydrophone. They also played back recordings of unfamiliar signature whistles. The dolphins ignored signatures belonging to other individuals in their groups, as well as unfamiliar whistles.

To their own signatures, however, they usually whistled back, suggesting that dolphins may use the signatures to address one another.

The new findings are “clearly a landmark,” said biologist Shane Gero of Dalhousie University, whose own research suggests that sperm whales have names. “I think this study puts to bed the argument of whether signature whistles are truly signatures.”

Gero is especially interested in the different ways that dolphins responded to hearing their signature called. Sometimes they simply repeated their signature — a bit, perhaps, like hearing your name called and shouting back, “Yes, I’m here!” Some dolphins, however, followed their signatures with a long string of other whistles.

“It opens the door to syntax, to how and when it’s ‘appropriate’ to address one another,” said Gero, who wonders if the different response types might be related to social roles or status. Referring to each other by name suggests that dolphins may recall past experiences with other individual dolphins, Gero said.

“The concept of ‘relationship’ as we know it may be more relevant than just a sequence of independent selfish interactions,” said Gero. “We likely underestimate the complexity of their communication system, cognitive abilities, and the depth of meaning in their actions.”

King and Janik have also observed that dolphins often make their signature whistles when groups encounter one another, as if to announce exactly who is present.

To Peter Tyack, a Woods Hole Oceanographic Institution biologist who has previously studied dolphin signature whistle-copying, the new findings support the possibility of dolphin names, but more experiments would help illuminate the meanings they attach to their signatures.

Read the entire article here.

Image: Bottlenose dolphin with young. Courtesy of Wikipedia.

Portrait of a Royal Baby

Royal-watchers from all corners of the globe, especially the British one, have been agog over the arrival of the latest royal earlier this week. The overblown media circus got us thinking about baby pictures. Will the Prince of Cambridge be the first heir to the throne to have his portrait enshrined via Instagram? Or, as is more likely, will his royal essence be captured in oil on canvas, as with the 35 or more generations that preceded him?

From Jonathan Jones over at the Guardian:

Royal children have been portrayed by some of the greatest artists down the ages, preserving images of childhood that are still touching today. Will this royal baby fare better than its mother in the portraits that are sure to come? Are there any artists out there who can go head to head with the greats of royal child portraiture?

Agnolo Bronzino has to be first among those greats, because he painted small children in a way that set the tone for many royal images to come. Some might say the Medici rulers of Florence, for whom he worked, were not properly royal – but they definitely acted like a royal family, and the artists who worked for them set the tone of court art all over Europe. In Giovanni de’ Medici As a Child, Bronzino expresses the joy of children and the pleasure of parents in a way that was revolutionary in the 16th century. Chubby-cheeked and jolly, Giovanni clutches a pet goldfinch. In paintings of the Holy Family you know that if Jesus has a pet bird it probably has some dire symbolic meaning. But this pet is just a pet. Giovanni is just a happy kid. Actually, a happy baby: he was about 18 months old.

Hans Holbein took more care to clarify the regal uniqueness of his subject when he portrayed Edward, only son of King Henry VIII of England, in about 1538. Holbein, too, captures the face of early childhood brilliantly. But how old is Edward meant to be? In fact, he was two. Holbein expresses his infancy – his baby face, his baby hands – while having him stand holding out a majestic hand, dressed like his father, next to an inscription that praises the paternal glory of Henry. Who knows, perhaps he really stood like that for a second or two, long enough for Holbein to take a mental photograph.

Diego Velázquez recorded a more nuanced, even anxious, view of royal childhood in his paintings of the royal princesses of 17th-century Spain. In the greatest of them, Las Meninas, the five-year-old Infanta Margarita Teresa stands looking at us, accompanied by her ladies in waiting (meninas) and two dwarves, while Velázquez works on a portrait of her parents, the king and queen. The infanta is beautiful and confident, attended by her own micro-court – but as she looks out of the painting at her parents (who are standing where the spectator of the painting stands) she is performing. And she is under pressure to look and act like a little princess.

The 19th-century painter Stephen Poyntz Denning may not be in the league of these masters. In fact, let’s be blunt: he definitely isn’t. But his painting Queen Victoria, Aged 4 is a fascinating curiosity. Like the Infanta, this royal princess is not allowed to be childlike. She is dressed in an oppressively formal way, in dark clothes that anticipate her mature image – a childhood lost to royal destiny.

Read the entire article here.

Image: Princess Victoria aged Four, Denning, Stephen Poyntz (c. 1787 – 1864). Courtesy of Wikimedia.

Dopamine on the Mind

Dopamine is one of the brain’s key signalling chemicals. And, because of its central role in the risk-reward structures of the brain, it often gets much attention — both in neuroscience research and in the public consciousness.

From Slate:

In a brain that people love to describe as “awash with chemicals,” one chemical always seems to stand out. Dopamine: the molecule behind all our most sinful behaviors and secret cravings. Dopamine is love. Dopamine is lust. Dopamine is adultery. Dopamine is motivation. Dopamine is attention. Dopamine is feminism. Dopamine is addiction.

My, dopamine’s been busy.

Dopamine is the one neurotransmitter that everyone seems to know about. Vaughn Bell once called it the Kim Kardashian of molecules, but I don’t think that’s fair to dopamine. Suffice it to say, dopamine’s big. And every week or so, you’ll see a new article come out all about dopamine.

So is dopamine your cupcake addiction? Your gambling? Your alcoholism? Your sex life? The reality is dopamine has something to do with all of these. But it is none of them. Dopamine is a chemical in your body. That’s all. But that doesn’t make it simple.

What is dopamine? Dopamine is one of the chemical signals that pass information from one neuron to the next in the tiny spaces between them. When it is released from the first neuron, it floats into the space (the synapse) between the two neurons, and it bumps against receptors for it on the other side that then send a signal down the receiving neuron. That sounds very simple, but when you scale it up from a single pair of neurons to the vast networks in your brain, it quickly becomes complex. The effects of dopamine release depend on where it’s coming from, where the receiving neurons are going and what type of neurons they are, what receptors are binding the dopamine (there are five known types), and what role both the releasing and receiving neurons are playing.

And dopamine is busy! It’s involved in many different important pathways. But when most people talk about dopamine, particularly when they talk about motivation, addiction, attention, or lust, they are talking about the dopamine pathway known as the mesolimbic pathway, which starts with cells in the ventral tegmental area, buried deep in the middle of the brain, which send their projections out to places like the nucleus accumbens and the cortex. Increases in dopamine release in the nucleus accumbens occur in response to sex, drugs, and rock and roll. And dopamine signaling in this area is changed during the course of drug addiction.  All abused drugs, from alcohol to cocaine to heroin, increase dopamine in this area in one way or another, and many people like to describe a spike in dopamine as “motivation” or “pleasure.” But that’s not quite it. Really, dopamine is signaling feedback for predicted rewards. If you, say, have learned to associate a cue (like a crack pipe) with a hit of crack, you will start getting increases in dopamine in the nucleus accumbens in response to the sight of the pipe, as your brain predicts the reward. But if you then don’t get your hit, well, then dopamine can decrease, and that’s not a good feeling. So you’d think that maybe dopamine predicts reward. But again, it gets more complex. For example, dopamine can increase in the nucleus accumbens in people with post-traumatic stress disorder when they are experiencing heightened vigilance and paranoia. So you might say, in this brain area at least, dopamine isn’t addiction or reward or fear. Instead, it’s what we call salience. Salience is more than attention: It’s a sign of something that needs to be paid attention to, something that stands out. This may be part of the mesolimbic role in attention deficit hyperactivity disorder and also a part of its role in addiction.
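That notion of “feedback for predicted rewards” is usually formalized as a prediction-error signal. Here is a minimal delta-rule sketch in Python; the learning rate and reward values are invented, and this is a textbook illustration rather than a claim about any particular study.

```python
# Toy reward-prediction-error sketch of the cue-reward learning described
# above: the dopamine-like signal is the reward received minus the reward
# expected. Learning rate and reward values are invented.

def update(expected, reward, learning_rate=0.3):
    prediction_error = reward - expected        # dopamine-like signal
    new_expected = expected + learning_rate * prediction_error
    return new_expected, prediction_error

expected = 0.0
for trial in range(1, 6):                       # cue reliably followed by reward
    expected, delta = update(expected, reward=1.0)
    print(f"trial {trial}: prediction error {delta:+.2f}")  # shrinks toward 0

# Once the cue predicts the reward, omitting the reward drives the signal
# negative, matching the "that's not a good feeling" dip described above.
expected, delta = update(expected, reward=0.0)
print(f"omitted reward: prediction error {delta:+.2f}")     # negative
```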

But dopamine itself? It’s not salience. It has far more roles in the brain to play. For example, dopamine plays a big role in starting movement, and the destruction of dopamine neurons in an area of the brain called the substantia nigra is what produces the symptoms of Parkinson’s disease. Dopamine also plays an important role as a hormone, inhibiting prolactin to stop the release of breast milk. Back in the mesolimbic pathway, dopamine can play a role in psychosis, and many antipsychotics for treatment of schizophrenia target dopamine. Dopamine is involved in the frontal cortex in executive functions like attention. In the rest of the body, dopamine is involved in nausea, in kidney function, and in heart function.

With all of these wonderful, interesting things that dopamine does, it gets my goat to see dopamine simplified to things like “attention” or “addiction.” After all, it’s so easy to say “dopamine is X” and call it a day. It’s comforting. You feel like you know the truth at some fundamental biological level, and that’s that. And there are always enough studies out there showing the role of dopamine in X to leave you convinced. But simplifying dopamine, or any chemical in the brain, down to a single action or result gives people a false picture of what it is and what it does. If you think that dopamine is motivation, then more must be better, right? Not necessarily! Because if dopamine is also “pleasure” or “high,” then too much is far too much of a good thing. If you think of dopamine as only being about pleasure or only being about attention, you’ll end up with a false idea of some of the problems involving dopamine, like drug addiction or attention deficit hyperactivity disorder, and you’ll end up with false ideas of how to fix them.

Read the entire article here.

Image: 3D model of dopamine. Courtesy of Wikipedia.

Gnarly Names

By most accounts the internet is home to around 650 million websites, of which around 200 million are active. About 8,000 new websites go live every hour of every day.

These are big numbers and the continued phenomenal growth means that it’s increasingly difficult to find a unique and unused domain name (think website). So, web entrepreneurs are getting creative with website and company names, with varying degrees of success.

From Wall Street Journal:

The New York cousins who started a digital sing-along storybook business have settled on the name Mibblio.

The Australian founder of a startup connecting big companies to big-data scientists has dubbed his service Kaggle.

The former toy executive behind a two-year-old mobile screen-sharing platform is going with the name Shodogg.

And the Missourian who founded a website giving customers access to local merchants and service providers? He thinks it should be called Zaarly.

Quirky names for startups first surfaced about 20 years ago in Silicon Valley, with the birth of search engines such as Yahoo, which stands for “Yet Another Hierarchical Officious Oracle,” and Google, a misspelling of googol, the almost unfathomably high number represented by a 1 followed by 100 zeroes.

By the early 2000s, the trend had spread to startups outside the Valley, including the Vancouver-based photo-sharing site Flickr and New York-based blogging platform Tumblr, to name just two.

The current crop of startups boasts even wackier spellings. The reason, they say, is that practically every new business—be it a popsicle maker or a furniture retailer—needs its own website. With about 252 million domain names currently registered across the Internet, the short, recognizable dot-com Web addresses, or URLs, have long been taken.

The only practical solution, some entrepreneurs say, is to invent words, like Mibblio, Kaggle, Shodogg and Zaarly, to avoid paying as much as $2 million for a concise, no-nonsense dot-com URL.

The rights to Investing.com, for example, sold for about $2.5 million last year.

Choosing a name that’s a made-up word also helps entrepreneurs steer clear of trademark entanglements.

The challenge is to come up with something that conveys meaning, is memorable, and isn’t just alphabet soup. Most founders don’t have the budget to hire naming advisers.

Founders tend to favor short names of five to seven letters, because they worry that potential customers might forget longer ones, according to Steve Manning, founder of Igor, a name-consulting company.

Linguistically speaking, there are only a few methods of forming new words. They include misspelling, compounding, blending and scrambling.
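Two of those methods are mechanical enough to sketch in a few lines of Python; the inputs are invented examples, not any company’s actual naming process.

```python
# Toy sketches of "misspelling" (dropping a vowel, Flickr/Tumblr style)
# and "compounding" (gluing two fragments together).
def drop_vowel(word: str) -> str:
    """Remove the next-to-last letter if it is a vowel."""
    return word[:-2] + word[-1] if word[-2] in "aeiou" else word

def compound(a: str, b: str) -> str:
    """Join two fragments into one name."""
    return a + b

print(drop_vowel("flicker"), drop_vowel("tumbler"))  # flickr tumblr
print(compound("zaar", "ly"))                        # zaarly
```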

At Mibblio, the naming process was “the length of a human gestation period,” says the company’s 28-year-old co-founder David Leiberman, “but only more painful,” adds fellow co-founder Sammy Rubin, 35.

The two men made several trips back to the drawing board; early contenders included Babethoven, Yipsqueak and Canarytales, but none was a perfect fit. One they both loved, Squeakbox, was taken.

Read the entire article here.

Rewriting Memories

Important new research suggests that traumatic memories can be rewritten. Timing is critical.

From Technology Review:

It was a Saturday night at the New York Psychoanalytic Institute, and the second-floor auditorium held an odd mix of gray-haired, cerebral Upper East Side types and young, scruffy downtown grad students in black denim. Up on the stage, neuroscientist Daniela Schiller, a riveting figure with her long, straight hair and impossibly erect posture, paused briefly from what she was doing to deliver a mini-lecture about memory.

She explained how recent research, including her own, has shown that memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.

And then Schiller went back to what she had been doing, which was providing a slamming, rhythmic beat on drums and backup vocals for the Amygdaloids, a rock band composed of New York City neuroscientists. During their performance at the institute’s second annual “Heavy Mental Variety Show,” the band blasted out a selection of its greatest hits, including songs about cognition (“Theory of My Mind”), memory (“A Trace”), and psychopathology (“Brainstorm”).

“Just give me a pill,” Schiller crooned at one point, during the chorus of a song called “Memory Pill.” “Wash away my memories …”

The irony is that if research by Schiller and others holds up, you may not even need a pill to strip a memory of its power to frighten or oppress you.

Schiller, 40, has been in the vanguard of a dramatic reassessment of how human memory works at the most fundamental level. Her current lab group at Mount Sinai School of Medicine, her former colleagues at New York University, and a growing army of like-minded researchers have marshaled a pile of data to argue that we can alter the emotional impact of a memory by adding new information to it or recalling it in a different context. This hypothesis challenges 100 years of neuroscience and overturns cultural touchstones from Marcel Proust to best-selling memoirs. It changes how we think about the permanence of memory and identity, and it suggests radical nonpharmacological approaches to treating pathologies like post-traumatic stress disorder, other fear-based anxiety disorders, and even addictive behaviors.

In a landmark 2010 paper in Nature, Schiller (then a postdoc at New York University) and her NYU colleagues, including Joseph E. LeDoux and Elizabeth A. Phelps, published the results of human experiments indicating that memories are reshaped and rewritten every time we recall an event. And, the research suggested, if mitigating information about a traumatic or unhappy event is introduced within a narrow window of opportunity after its recall—during the few hours it takes for the brain to rebuild the memory in the biological brick and mortar of molecules—the emotional experience of the memory can essentially be rewritten.

“When you affect emotional memory, you don’t affect the content,” Schiller explains. “You still remember perfectly. You just don’t have the emotional memory.”

Fear training

The idea that memories are constantly being rewritten is not entirely new. Experimental evidence to this effect dates back at least to the 1960s. But mainstream researchers tended to ignore the findings for decades because they contradicted the prevailing scientific theory about how memory works.

That view began to dominate the science of memory at the beginning of the 20th century. In 1900, two German scientists, Georg Elias Müller and Alfons Pilzecker, conducted a series of human experiments at the University of Göttingen. Their results suggested that memories were fragile at the moment of formation but were strengthened, or consolidated, over time; once consolidated, these memories remained essentially static, permanently stored in the brain like a file in a cabinet from which they could be retrieved when the urge arose.

It took decades of painstaking research for neuroscientists to tease apart a basic mechanism of memory to explain how consolidation occurred at the level of neurons and proteins: an experience entered the neural landscape of the brain through the senses, was initially “encoded” in a central brain apparatus known as the hippocampus, and then migrated—by means of biochemical and electrical signals—to other precincts of the brain for storage. A famous chapter in this story was the case of “H.M.,” a young man whose hippocampus was removed during surgery in 1953 to treat debilitating epileptic seizures; although physiologically healthy for the remainder of his life (he died in 2008), H.M. was never again able to create new long-term memories, other than to learn new motor skills.

Subsequent research also made clear that there is no single thing called memory but, rather, different types of memory that achieve different biological purposes using different neural pathways. “Episodic” memory refers to the recollection of specific past events; “procedural” memory refers to the ability to remember specific motor skills like riding a bicycle or throwing a ball; fear memory, a particularly powerful form of emotional memory, refers to the immediate sense of distress that comes from recalling a physically or emotionally dangerous experience. Whatever the memory, however, the theory of consolidation argued that it was an unchanging neural trace of an earlier event, fixed in long-term storage. Whenever you retrieved the memory, whether it was triggered by an unpleasant emotional association or by the seductive taste of a madeleine, you essentially fetched a timeless narrative of an earlier event. Humans, in this view, were the sum total of their fixed memories. As recently as 2000 in Science, in a review article titled “Memory—A Century of Consolidation,” James L. McGaugh, a leading neuroscientist at the University of California, Irvine, celebrated the consolidation hypothesis for the way that it “still guides” fundamental research into the biological process of long-term memory.

As it turns out, Proust wasn’t much of a neuroscientist, and consolidation theory couldn’t explain everything about memory. This became apparent during decades of research into what is known as fear training.

Schiller gave me a crash course in fear training one afternoon in her Mount Sinai lab. One of her postdocs, Dorothee Bentz, strapped an electrode onto my right wrist in order to deliver a mild but annoying shock. She also attached sensors to several fingers on my left hand to record my galvanic skin response, a measure of physiological arousal and fear. Then I watched a series of images—blue and purple cylinders—flash by on a computer screen. It quickly became apparent that the blue cylinders often (but not always) preceded a shock, and my skin conductivity readings reflected what I’d learned. Every time I saw a blue cylinder, I became anxious in anticipation of a shock. The “learning” took no more than a couple of minutes, and Schiller pronounced my little bumps of anticipatory anxiety, charted in real time on a nearby monitor, a classic response of fear training. “It’s exactly the same as in the rats,” she said.

In the 1960s and 1970s, several research groups used this kind of fear memory in rats to detect cracks in the theory of memory consolidation. In 1968, for example, Donald J. Lewis of Rutgers University led a study showing that you could make the rats lose the fear associated with a memory if you gave them a strong electroconvulsive shock right after they were induced to retrieve that memory; the shock produced an amnesia about the previously learned fear. Giving a shock to animals that had not retrieved the memory, in contrast, did not cause amnesia. In other words, a strong shock timed to occur immediately after a memory was retrieved seemed to have a unique capacity to disrupt the memory itself and allow it to be reconsolidated in a new way. Follow-up work in the 1980s confirmed some of these observations, but they lay so far outside mainstream thinking that they barely received notice.

Moment of silence

At the time, Schiller was oblivious to these developments. A self-described skateboarding “science geek,” she grew up in Rishon LeZion, Israel’s fourth-largest city, on the coastal plain a few miles southeast of Tel Aviv. She was the youngest of four children of a mother from Morocco and a “culturally Polish” father from Ukraine—“a typical Israeli melting pot,” she says. As a tall, fair-skinned teenager with European features, she recalls feeling estranged from other neighborhood kids because she looked so German.

Schiller remembers exactly when her curiosity about the nature of human memory began. She was in the sixth grade, and it was the annual Holocaust Memorial Day in Israel. For a school project, she asked her father about his memories as a Holocaust survivor, and he shrugged off her questions. She was especially puzzled by her father’s behavior at 11 a.m., when a simultaneous eruption of sirens throughout Israel signals the start of a national moment of silence. While everyone else in the country stood up to honor the victims of genocide, he stubbornly remained seated at the kitchen table as the sirens blared, drinking his coffee and reading the newspaper.

“The Germans did something to my dad, but I don’t know what because he never talks about it,” Schiller told a packed audience in 2010 at The Moth, a storytelling event.

During her compulsory service in the Israeli army, she organized scientific and educational conferences, which led to studies in psychology and philosophy at Tel Aviv University; during that same period, she procured a set of drums and formed her own Hebrew rock band, the Rebellion Movement. Schiller went on to receive a PhD in psychobiology from Tel Aviv University in 2004. That same year, she recalls, she saw the movie Eternal Sunshine of the Spotless Mind, in which a young man undergoes treatment with a drug that erases all memories of a former girlfriend and their painful breakup. Schiller heard (mistakenly, it turns out) that the premise of the movie had been based on research conducted by Joe LeDoux, and she eventually applied to NYU for a postdoctoral fellowship.

In science as in memory, timing is everything. Schiller arrived in New York just in time for the second coming of memory reconsolidation in neuroscience.

Altering the story

The table had been set for Schiller’s work on memory modification in 2000, when Karim Nader, a postdoc in LeDoux’s lab, suggested an experiment testing the effect of a drug on the formation of fear memories in rats. LeDoux told Nader in no uncertain terms that he thought the idea was a waste of time and money. Nader did the experiment anyway. It ended up getting published in Nature and sparked a burst of renewed scientific interest in memory reconsolidation (see “Manipulating Memory,” May/June 2009).

The rats had undergone classic fear training—in an unpleasant twist on Pavlovian conditioning, they had learned to associate an auditory tone with an electric shock. But right after the animals retrieved the fearsome memory (the researchers knew they had done so because they froze when they heard the tone), Nader injected a drug that blocked protein synthesis directly into their amygdala, the part of the brain where fear memories are believed to be stored. Surprisingly, that appeared to pave over the fearful association. The rats no longer froze in fear of the shock when they heard the sound cue.

Decades of research had established that long-term memory consolidation requires the synthesis of proteins in the brain’s memory pathways, but no one knew that protein synthesis was required after the retrieval of a memory as well—which implied that the memory was being consolidated then, too. Nader’s experiments also showed that blocking protein synthesis prevented the animals from recalling the fearsome memory only if they received the drug at the right time, shortly after they were reminded of the fearsome event. If Nader waited six hours before giving the drug, it had no effect and the original memory remained intact. This was a big biochemical clue that at least some forms of memories essentially had to be neurally rewritten every time they were recalled.

When Schiller arrived at NYU in 2005, she was asked by Elizabeth Phelps, who was spearheading memory research in humans, to extend Nader’s findings and test the potential of a drug to block fear memories. The drug used in the rodent experiment was much too toxic for human use, but a class of antianxiety drugs known as beta-adrenergic antagonists (or, in common parlance, “beta blockers”) had potential; among these drugs was propranolol, which had previously been approved by the FDA for the treatment of panic attacks and stage fright. Schiller immediately set out to test the effect of propranolol on memory in humans, but she never actually performed the experiment because of prolonged delays in getting institutional approval for what was then a pioneering form of human experimentation. “It took four years to get approval,” she recalls, “and then two months later, they took away the approval again. My entire postdoc was spent waiting for this experiment to be approved.” (“It still hasn’t been approved!” she adds.)

While waiting for the approval that never came, Schiller began to work on a side project that turned out to be even more interesting. It grew out of an offhand conversation with a colleague about some anomalous data described at a meeting of LeDoux’s lab: a group of rats “didn’t behave as they were supposed to” in a fear experiment, Schiller says.

The data suggested that a fear memory could be disrupted in animals even without the use of a drug that blocked protein synthesis. Schiller used the kernel of this idea to design a set of fear experiments in humans, while Marie-H. Monfils, a member of the LeDoux lab, simultaneously pursued a parallel line of experimentation in rats. In the human experiments, volunteers were shown a blue square on a computer screen and then given a shock. Once the blue square was associated with an impending shock, the fear memory was in place. Schiller went on to show that if she repeated the sequence that produced the fear memory the following day but broke the association within a narrow window of time—that is, showed the blue square without delivering the shock—this new information was incorporated into the memory.

Here, too, the timing was crucial. If the blue square that wasn’t followed by a shock was shown within 10 minutes of the initial memory recall, the human subjects reconsolidated the memory without fear. If it happened six hours later, the initial fear memory persisted. Put another way, intervening during the brief window when the brain was rewriting its memory offered a chance to revise the initial memory itself while diminishing the emotion (fear) that came with it. By mastering the timing, the NYU group had essentially created a scenario in which humans could rewrite a fearsome memory and give it an unfrightening ending. And this new ending was robust: when Schiller and her colleagues called their subjects back into the lab a year later, they were able to show that the fear associated with the memory was still blocked.
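
The timing rule at the heart of the study is simple enough to state precisely. Here is a minimal Python sketch that encodes the reported conditions (the 10-minute and six-hour delays come from the passage above; the code is a toy summary, not the study’s analysis):

    # Toy restatement of the reconsolidation timing rule reported above.
    # Delays are minutes between memory reactivation and the no-shock
    # (extinction) presentation; outcomes follow the excerpt, not real data.
    RECONSOLIDATION_WINDOW_MIN = 10

    def predicted_outcome(extinction_delay_min):
        """Predict whether the fear response returns for a given delay."""
        if extinction_delay_min is None:
            return "fear persists (memory never updated)"
        if extinction_delay_min <= RECONSOLIDATION_WINDOW_MIN:
            return "fear blocked (update lands inside the window)"
        return "fear returns (memory had already reconsolidated)"

    for delay in (10, 6 * 60, None):
        print(delay, "->", predicted_outcome(delay))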

The study, published in Nature in 2010, made clear that reconsolidation of memory didn’t occur only in rats.

Read the entire article here.

Hyperloop: Not Your Father’s High-Speed Rail

Europe and Japan have been leading the way with their 200-300 mph bullet trains for several decades. While the United States still tries to play catch-up, one serial entrepreneur has other ideas. For Elon Musk, the bullet train is so, well, yesterday. He has in mind a ground-based system that would hurtle people along at around 800 mph. Welcome to Hyperloop.

From Slate:

High-speed rail is so 20th century. Well, perhaps not in the United States, where we still haven’t gotten around to building any true bullet trains. After 30 years of dithering, California is finally working on one that would get people from Los Angeles to San Francisco in a little under 2 1/2 hours, but it could cost on the order of $100 billion and won’t be ready until at least 2028.

Enter Tesla and SpaceX visionary Elon Musk with one of the craziest-sounding ideas in transportation history. For a while now, Musk has been hinting at an idea he calls the Hyperloop—a ground-based transportation technology that would get people from Los Angeles to San Francisco in under half an hour, for less than 1/10 the cost of building the high-speed rail line. Oh, and this 800-mph system would be self-powered, immune to weather, and would never crash.

What is the Hyperloop? So far Musk hasn’t gotten very specific, though he once called it “a cross between a Concorde and a railgun and an air hockey table.” But we’ll soon find out more. On Monday, Musk tweeted that he will publish an “alpha design” for the Hyperloop by Aug. 12. Responding to questions on Twitter, he indicated that the plans would be open-source, and that he would consider a partnership with someone who shared his vision. Perhaps the best clue came when he responded to an engineer named John Gardi, who published a diagram of his best guess as to how the Hyperloop might work.

It sounds fanciful, and maybe it is. But Musk is not the only one working on ultra-fast land-based transportation systems. And if anyone can turn an idea like this into reality, it might just be the man who has spent the past decade revolutionizing electric cars and space transport. Don’t be surprised if the biggest obstacles to the Hyperloop turn out to be bureaucratic rather than technological. After all, we’ve known how to build bullet trains for half a century, and look how far that has gotten us. Still, a nation can dream—and as long as we’re dreaming, why not dream about something way cooler than what Japan and China are already working on?

Read the entire article here.

Highbrow or Lowbrow?

Do you prefer the Beatles to Beethoven? Do you prefer Rembrandt over the Sunday comics or the latest Marvel? Do you read Patterson or Proust? Gary Gutting, a professor of philosophy, argues that objective standards of aesthetic value should drive us to appreciate fine art over popular work. So, you had better dust off those volumes of Shakespeare.

From the New York Times:

Our democratic society is uneasy with the idea that traditional “high culture” (symphonies, Shakespeare, Picasso) is superior to popular culture (rap music, TV dramas, Norman Rockwell). Our media often make a point of blurring the distinction: newspapers and magazines review rock concerts alongside the Met’s operas and “Batman” sequels next to Chekhov plays. Sophisticated academic critics apply the same methods of analysis and appreciation to Proust and to comic books. And at all levels, claims of objective artistic superiority are likely to be met with smug assertions that all such claims are merely relative to subjective individual preferences.

Our democratic unease is understandable, since the alleged superiority of high culture has often supported the pretensions of an aristocratic class claiming to have privileged access to it. For example, Virginia Woolf’s classic essay — arch, snobbish, and very funny — reserved the appreciation of great art to “highbrows”: those “thoroughbreds of the mind” who combine innate taste with sufficient inherited wealth to sustain a life entirely dedicated to art. Lowbrows were working-class people who had neither the taste nor the time for the artistic life. Woolf claimed to admire lowbrows, who did the work highbrows like herself could not and accepted their cultural inferiority. But she expresses only disdain for a third class — the “middlebrows”— who have earned (probably through trade) enough money to purchase the marks of a high culture that they could never properly appreciate. Middlebrows pursue “no single object, neither art itself nor life itself, but both mixed indistinguishably, and rather nastily, with money, fame, power, or prestige.”

There is, however, no need to tie a defense of high art to Woolf’s “snobocracy.” We can define the high/popular distinction directly in terms of aesthetic quality, without tendentious connections to social status or wealth. Moreover, we can appropriate Woolf’s term “middlebrow,” using it to refer to those, not “to the manner born,” who, admirably, employ the opportunities of a democratic society to reach a level of culture they were not born into.

At this point, however, we can no longer avoid the hovering relativist objection: How do we know that there are any objective criteria that authorize claims that one kind of art is better than another?

Centuries of unresolved philosophical debate show that there is, in fact, little hope of refuting someone who insists on a thoroughly relativist view of art. We should not expect, for example, to provide a definition of beauty (or some other criterion of artistic excellence) that we can use to prove to all doubters that, say, Mozart’s 40th Symphony is objectively superior as art to “I Want to Hold Your Hand.” But in practice there is no need for such a proof, since hardly anyone really holds the relativist view. We may say, “You can’t argue about taste,” but when it comes to art we care about, we almost always do.

For example, fans of popular music may respond to the elitist claims of classical music with a facile relativism. But they abandon this relativism when arguing, say, the comparative merits of the early Beatles and the Rolling Stones. You may, for example, maintain that the Stones were superior to the Beatles (or vice versa) because their music is more complex, less derivative, and has greater emotional range and deeper intellectual content. Here you are putting forward objective standards from which you argue for a band’s superiority. Arguing from such criteria implicitly rejects the view that artistic evaluations are simply matters of personal taste. You are giving reasons for your view that you think others ought to accept.

Further, given the standards fans use to show that their favorites are superior, we can typically show by those same standards that works of high art are overall superior to works of popular art. If the Beatles are better than the Stones in complexity, originality, emotional impact, and intellectual content, then Mozart’s operas are, by those standards, superior to the Beatles’ songs. Similarly, a case for the superiority of one blockbuster movie over another would most likely invoke standards of dramatic power, penetration into character, and quality of dialogue by which almost all blockbuster movies would pale in comparison to Sophocles or Shakespeare.

On reflection, it’s not hard to see why — keeping to the example of music — classical works are in general capable of much higher levels of aesthetic value than popular ones. Compared to a classical composer, someone writing a popular song can utilize only a very small range of musical possibilities: a shorter time span, fewer kinds of instruments, a lower level of virtuosity and a greatly restricted range of compositional techniques. Correspondingly, classical performers are able to supply whatever the composers need for a given piece; popular performers seriously restrict what composers can ask for. Of course, there are sublime works that make minimal performance demands. But constant restriction of resources reduces the opportunities for greater achievement.

Read the entire article here.

Image: Detail of the face of Wolfgang Amadeus Mozart. Cropped version of the painting where Mozart is seen with Anna Maria (Mozart’s sister) and father, Leopold, on the wall a portrait of his deceased mother, Anna Maria. By Johann Nepomuk della Croce (1736-1819). Courtesy of Wikipedia.

Atlas Shrugs

He, or she, stands 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.

[tube]zkBnFPBV3f0[/tube]

Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person, not yet anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.

From the New York Times:

Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.

C3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.

Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.

Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.

“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.

The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.

Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.

Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.

“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”

Dr. Pratt equated the current version of Atlas to a 1-year-old.

“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”

But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have the talents that are closer to those of a 2-year-old.

The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back revealing a machine that looked like a metallic bodybuilder, with an oversized chest and powerful long arms.

Read the entire article here.

Helping the Honeybees

Agricultural biotechnology giant Monsanto is joining efforts to help the honeybee. Honeybees the world over have been suffering from a widespread and catastrophic condition often referred to as colony collapse disorder.

From Technology Review:

Beekeepers are desperately battling colony collapse disorder, a complex condition that has been killing bees in large swaths and could ultimately have a massive effect on people, since honeybees pollinate a significant portion of the food that humans consume.

A new weapon in that fight could be RNA molecules that kill a troublesome parasite by disrupting the way its genes are expressed. Monsanto and others are developing the molecules as a means to kill the parasite, a mite that feeds on honeybees.

The killer molecule, if it proves to be efficient and passes regulatory hurdles, would offer welcome respite. Bee colonies have been dying in alarming numbers for several years, and many factors are contributing to this decline. But while beekeepers struggle with malnutrition, pesticides, viruses, and other issues in their bee stocks, one problem that seems to be universal is the Varroa mite, an arachnid that feeds on the blood of developing bee larvae.

“Hives can survive the onslaught of a lot of these insults, but with Varroa, they can’t last,” says Alan Bowman, a University of Aberdeen molecular biologist in Scotland, who is studying gene silencing as a means to control the pest.

The Varroa mite debilitates colonies by hampering the growth of young bees and increasing the lethality of the viruses that it spreads. “Bees can quite happily survive with these viruses, but now, in the presence of Varroa, these viruses become lethal,” says Bowman. Once a hive is infested with Varroa, it will die within two to four years unless a beekeeper takes active steps to control it, he says.

One of the weapons beekeepers can use is a pesticide that kills mites, but “there’s always the concern that mites will become resistant to the very few mitocides that are available,” says Tom Rinderer, who leads research on honeybee genetics at the U.S. Department of Agriculture Research Service in Baton Rouge, Louisiana. And new pesticides to kill mites are not easy to come by, in part because mites and bees are found in neighboring branches of the animal tree. “Pesticides are really difficult for chemical companies to develop because of the relatively close relationship between the Varroa and the bee,” says Bowman.

RNA interference could be a more targeted and effective way to combat the mites. It is a natural process in plants and animals that normally defends against viruses and potentially dangerous bits of DNA that move within genomes. Based upon their nucleotide sequence, interfering RNAs signal the destruction of the specific gene products, thus providing a species-specific self-destruct signal. In recent years, biologists have begun to explore this process as a possible means to turn off unwanted genes in humans (see “Gene-Silencing Technique Targets Scarring”) and to control pests in agricultural plants (see “Crops that Shut Down Pests’ Genes”). Using the technology to control pests in agricultural animals would be a new application.

In 2011 Monsanto, the maker of herbicides and genetically engineered seeds, bought an Israeli company called Beeologics, which had developed an RNA interference technology that can be fed to bees through sugar water. The idea is that when a nurse bee spits this sugar water into each cell of a honeycomb where a queen bee has laid an egg, the resulting larvae will consume the RNA interference treatment. With the right sequence in the interfering RNA, the treatment will be harmless to the larvae, but when a mite feeds on it, the pest will ingest its own self-destruct signal.

The RNA interference technology would not be carried from generation to generation. “It’s a transient effect; it’s not a genetically modified organism,” says Bowman.

Monsanto says it has identified a few self-destruct triggers to explore by looking at genes that are fundamental to the biology of the mite. “Something in reproduction or egg laying or even just basic housekeeping genes can be a good target provided they have enough difference from the honeybee sequence,” says Greg Heck, a researcher at Monsanto.
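
That design constraint, enough sequence difference from the honeybee, is at bottom a string-matching problem. Here is a minimal Python sketch of the idea, using made-up placeholder sequences rather than real Varroa or Apis mellifera genes (real screens rely on genome-wide alignment tools):

    # Minimal sketch of a sequence-specificity screen for an RNAi trigger.
    # All sequences are hypothetical placeholders, not real genes.

    def longest_shared_run(trigger, transcript):
        """Length of the longest substring of `trigger` found in `transcript`."""
        best = 0
        for i in range(len(trigger)):
            for j in range(i + best + 1, len(trigger) + 1):
                if trigger[i:j] in transcript:
                    best = j - i
                else:
                    break  # a longer run starting at i cannot match either
        return best

    candidate = "AUGGCUAACGGAUUCCAGUAA"      # hypothetical 21-nt trigger
    mite_gene = "GGAUGGCUAACGGAUUCCAGUAACC"  # hypothetical Varroa target
    bee_gene  = "GGAUCGCUAACGGUUUCGAGUCACC"  # hypothetical honeybee gene

    print(longest_shared_run(candidate, mite_gene))  # 21: the mite gene is hit
    print(longest_shared_run(candidate, bee_gene))   # 8: the bee gene is spared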

Read the entire article here.

Image: Honeybee, Apis mellifera. Courtesy of Wikipedia.

Of Mice and Men

Biomolecular and genetic engineering continue apace. This time researchers have inserted artificially constructed human chromosomes into the cells of living mice.

From the Independent:

Scientists have created genetically-engineered mice with artificial human chromosomes in every cell of their bodies, as part of a series of studies showing that it may be possible to treat genetic diseases with a radically new form of gene therapy.

In one of the unpublished studies, researchers made a human artificial chromosome in the laboratory from chemical building blocks rather than chipping away at an existing human chromosome, indicating the increasingly powerful technology behind the new field of synthetic biology.

The development comes as the Government announces today that it will invest tens of millions of pounds in synthetic biology research in Britain, including an international project to construct all the 16 individual chromosomes of the yeast fungus in order to produce the first synthetic organism with a complex genome.

A synthetic yeast with man-made chromosomes could eventually be used as a platform for making new kinds of biological materials, such as antibiotics or vaccines, while human artificial chromosomes could be used to introduce healthy copies of genes into the diseased organs or tissues of people with genetic illnesses, scientists said.

Researchers involved in the synthetic yeast project emphasised at a briefing in London earlier this week that there are no plans to build human chromosomes and create synthetic human cells in the same way as the artificial yeast project. A project to build human artificial chromosomes is unlikely to win ethical approval in the UK, they said.

However, researchers in the US and Japan are already well advanced in making “mini” human chromosomes called HACs (human artificial chromosomes), by either paring down an existing human chromosome or making them “de novo” in the lab from smaller chemical building blocks.

Natalay Kouprina of the US National Cancer Institute in Bethesda, Maryland, is part of the team that has successfully produced genetically engineered mice with an extra human artificial chromosome in their cells. It is the first time such an advanced form of a synthetic human chromosome made “from scratch” has been shown to work in an animal model, Dr Kouprina said.

“The purpose of developing the human artificial chromosome project is to create a shuttle vector for gene delivery into human cells to study gene function in human cells,” she told The Independent. “Potentially it has applications for gene therapy, for correction of gene deficiency in humans. It is known that there are lots of hereditary diseases due to the mutation of certain genes.”

Read the entire article here.

Image courtesy of Science Daily.

Cosmic Portrait

Make a note in your calendar if you are so inclined: you’ll be photographed from space on July 19, 2013, sometime between 9.27 and 9.42 pm (GMT).

No, this is not another wacky mapping stunt courtesy of Google. Rather, NASA’s Cassini spacecraft, which will be somewhere in the vicinity of Saturn, will train its cameras on us for a global family portrait.

From NASA:

NASA’s Cassini spacecraft, now exploring Saturn, will take a picture of our home planet from a distance of hundreds of millions of miles on July 19. NASA is inviting the public to help acknowledge the historic interplanetary portrait as it is being taken.

Earth will appear as a small, pale blue dot between the rings of Saturn in the image, which will be part of a mosaic, or multi-image portrait, of the Saturn system Cassini is composing.

“While Earth will be only about a pixel in size from Cassini’s vantage point 898 million miles [1.44 billion kilometers] away, the team is looking forward to giving the world a chance to see what their home looks like from Saturn,” said Linda Spilker, Cassini project scientist at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “We hope you’ll join us in waving at Saturn from Earth, so we can commemorate this special opportunity.”
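
Spilker’s “about a pixel” figure is easy to sanity-check with small-angle arithmetic. In the Python sketch below, the distance and Earth’s diameter are standard values; the roughly six-microradian-per-pixel scale assumed for Cassini’s narrow-angle camera (a field of view of about 0.35 degrees spread across 1024 pixels) is our assumption, not a figure from the text:

    # Sanity check of the "about a pixel" claim via small-angle arithmetic.
    import math

    earth_diameter_km = 12_742
    distance_km = 1.44e9                       # Cassini to Earth on July 19

    angular_size_rad = earth_diameter_km / distance_km   # ~8.8e-6 rad
    pixel_scale_rad = math.radians(0.35) / 1024          # assumed NAC scale

    print(angular_size_rad / pixel_scale_rad)  # ~1.5 pixels: "about a pixel"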

Cassini will start obtaining the Earth part of the mosaic at 2:27 p.m. PDT (5:27 p.m. EDT or 21:27 UTC) and end about 15 minutes later, all while Saturn is eclipsing the sun from Cassini’s point of view. The spacecraft’s unique vantage point in Saturn’s shadow will provide a special scientific opportunity to look at the planet’s rings. At the time of the photo, North America and part of the Atlantic Ocean will be in sunlight.

Unlike the two previous Cassini eclipse mosaics of the Saturn system (one in 2006, which captured Earth, and another in 2012), the July 19 image will be the first to capture the Saturn system with Earth in natural color, as human eyes would see it. It also will be the first to capture Earth and its moon with Cassini’s highest-resolution camera. The probe’s position will allow it to turn its cameras in the direction of the sun, where Earth will be, without damaging the spacecraft’s sensitive detectors.

“Ever since we caught sight of the Earth among the rings of Saturn in September 2006 in a mosaic that has become one of Cassini’s most beloved images, I have wanted to do it all over again, only better,” said Carolyn Porco, Cassini imaging team lead at the Space Science Institute in Boulder, Colo. “This time, I wanted to turn the entire event into an opportunity for everyone around the globe to savor the uniqueness of our planet and the preciousness of the life on it.”

Porco and her imaging team associates examined Cassini’s planned flight path for the remainder of its Saturn mission in search of a time when Earth would not be obstructed by Saturn or its rings. Working with other Cassini team members, they found the July 19 opportunity would permit the spacecraft to spend time in Saturn’s shadow to duplicate the views from earlier in the mission to collect both visible and infrared imagery of the planet and its ring system.

“Looking back towards the sun through the rings highlights the tiniest of ring particles, whose width is comparable to the thickness of hair and which are difficult to see from ground-based telescopes,” said Matt Hedman, a Cassini science team member based at Cornell University in Ithaca, N.Y., and a member of the rings working group. “We’re particularly interested in seeing the structures within Saturn’s dusty E ring, which is sculpted by the activity of the geysers on the moon Enceladus, Saturn’s magnetic field and even solar radiation pressure.”

This latest image will continue a NASA legacy of space-based images of our fragile home, including the 1968 “Earthrise” image taken by the Apollo 8 moon mission from about 240,000 miles (380,000 kilometers) away and the 1990 “Pale Blue Dot” image taken by Voyager 1 from about 4 billion miles (6 billion kilometers) away.

Read the entire article here.

Image: This simulated view from NASA’s Cassini spacecraft shows the expected positions of Saturn and Earth on July 19, 2013, around the time Cassini will take Earth’s picture. Cassini will be about 898 million miles (1.44 billion kilometers) away from Earth at the time. That distance is nearly 10 times the distance from the sun to Earth. Courtesy: NASA/JPL-Caltech

The Past is Good For You

From time to time you will no doubt feel nostalgic about a past event, a special place or a treasured object. Of course, our sentimental feelings vary tremendously from person to person. But why do we feel this way, and why is nostalgia important? Not too long ago nostalgia was commonly believed to be a neurological disorder (no doubt treatable with prescription medication). However, new research shows that feelings of sentimentality are indeed good for us, individually and as a group.

From the New York Times:

Not long after moving to the University of Southampton, Constantine Sedikides had lunch with a colleague in the psychology department and described some unusual symptoms he’d been feeling. A few times a week, he was suddenly hit with nostalgia for his previous home at the University of North Carolina: memories of old friends, Tar Heel basketball games, fried okra, the sweet smells of autumn in Chapel Hill.

His colleague, a clinical psychologist, made an immediate diagnosis. He must be depressed. Why else live in the past? Nostalgia had been considered a disorder ever since the term was coined by a 17th-century Swiss physician who attributed soldiers’ mental and physical maladies to their longing to return home — nostos in Greek, and the accompanying pain, algos.

But Dr. Sedikides didn’t want to return to any home — not to Chapel Hill, not to his native Greece — and he insisted to his lunch companion that he wasn’t in pain.

“I told him I did live my life forward, but sometimes I couldn’t help thinking about the past, and it was rewarding,” he says. “Nostalgia made me feel that my life had roots and continuity. It made me feel good about myself and my relationships. It provided a texture to my life and gave me strength to move forward.”

The colleague remained skeptical, but ultimately Dr. Sedikides prevailed. That lunch in 1999 inspired him to pioneer a field that today includes dozens of researchers around the world using tools developed at his social-psychology laboratory, including a questionnaire called the Southampton Nostalgia Scale. After a decade of study, nostalgia isn’t what it used to be — it’s looking a lot better.

Nostalgia has been shown to counteract loneliness, boredom and anxiety. It makes people more generous to strangers and more tolerant of outsiders. Couples feel closer and look happier when they’re sharing nostalgic memories. On cold days, or in cold rooms, people use nostalgia to literally feel warmer.

Nostalgia does have its painful side — it’s a bittersweet emotion — but the net effect is to make life seem more meaningful and death less frightening. When people speak wistfully of the past, they typically become more optimistic and inspired about the future.

“Nostalgia makes us a bit more human,” Dr. Sedikides says. He considers the first great nostalgist to be Odysseus, an itinerant who used memories of his family and home to get through hard times, but Dr. Sedikides emphasizes that nostalgia is not the same as homesickness. It’s not just for those away from home, and it’s not a sickness, despite its historical reputation.

Nostalgia was originally described as a “neurological disease of essentially demonic cause” by Johannes Hoffer, the Swiss doctor who coined the term in 1688. Military physicians speculated that its prevalence among Swiss mercenaries abroad was due to earlier damage to the soldiers’ ear drums and brain cells by the unremitting clanging of cowbells in the Alps.

A Universal Feeling

In the 19th and 20th centuries nostalgia was variously classified as an “immigrant psychosis,” a form of “melancholia” and a “mentally repressive compulsive disorder” among other pathologies. But when Dr. Sedikides, Tim Wildschut and other psychologists at Southampton began studying nostalgia, they found it to be common around the world, including in children as young as 7 (who look back fondly on birthdays and vacations).

“The defining features of nostalgia in England are also the defining features in Africa and South America,” Dr. Wildschut says. The topics are universal — reminiscences about friends and family members, holidays, weddings, songs, sunsets, lakes. The stories tend to feature the self as the protagonist surrounded by close friends.

Most people report experiencing nostalgia at least once a week, and nearly half experience it three or four times a week. These reported bouts are often touched off by negative events and feelings of loneliness, but people say the “nostalgizing” — researchers distinguish it from reminiscing — helps them feel better.

To test these effects in the laboratory, researchers at Southampton induced negative moods by having people read about a deadly disaster and take a personality test that supposedly revealed them to be exceptionally lonely. Sure enough, the people depressed about the disaster victims or worried about being lonely became more likely to wax nostalgic. And the strategy worked: They subsequently felt less depressed and less lonely.

Read the entire article here.

Image: Still from “I Love Lucy” U.S. television show. 1955. Courtesy of Wikipedia.

Asteroid 5099

Iain (M.) Banks is now where he rightfully belongs — hurtling through space. Though we fear he may well not be traveling as fast as he would have wished.

From the Minor Planet Center:

In early April of this year we learnt from Iain Banks himself that he was sick, very sick. Cancer that started in the gall bladder spread quickly and precluded any cure, though he still hoped to be around for a while and see his upcoming novel, The Quarry, hit store shelves in late June. He never did—Iain Banks died on June 9th.

I was introduced to Iain M. Banks’s Sci-Fi novels in graduate school by a good friend who also enjoyed Sci-Fi; he couldn’t believe I’d never even heard of him and remedied what he saw as a huge lapse in my Sci-Fi culture by lending me a couple of his novels. After that I read a few more novels of my own volition because Mr Banks truly was a gifted story teller.

When I heard of his sickness I immediately asked myself what I could do for Mr Banks, and the answer was obvious: Give him an asteroid!

The Minor Planet Center only has the authority to designate new asteroid discoveries (e.g., “1971 TD1”) and assign numbers to those whose orbits are of a high enough accuracy (e.g., “(5099)”), but names for numbered asteroids must be submitted to, and approved by, the Committee for Small Body Nomenclature (CSBN) of the IAU (International Astronomical Union). With the help of Dr Gareth Williams, the MPC’s representative on the CSBN, we submitted a request to name an asteroid after Iain Banks with the hope that it would be approved soon enough for Mr Banks to enjoy it. Sadly, that has not been possible. Nevertheless, I am here to announce that on June 23rd, 2013, asteroid (5099) was officially named Iainbanks by the IAU, and will be referred to as such for as long as Earth Culture may endure.

The official citation for the asteroid reads:

Iain M. Banks (1954-2013) was a Scottish writer best known for the Culture series of science fiction novels; he also wrote fiction as Iain Banks. An evangelical atheist and lover of whisky, he scorned social media and enjoyed writing music. He was an extra in Monty Python & The Holy Grail.

Asteroid Iainbanks resides in the Main Asteroid Belt of the Sol system; with a size of 6.1 km (3.8 miles), it takes 3.94 years to complete a revolution around the Sun. It is most likely of a stony composition.
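
The quoted period also pins down the orbit’s size. For a body circling the Sun, Kepler’s third law gives a^3 = T^2 with the semi-major axis a in astronomical units and the period T in years; a quick check in Python confirms the main-belt claim (the 2.1 to 3.3 AU extent of the belt is a standard figure assumed here, not taken from the text):

    # Kepler's third law: a**3 == T**2 for a in AU and T in years.
    period_years = 3.94                        # from the citation above
    semi_major_axis_au = period_years ** (2 / 3)

    print(round(semi_major_axis_au, 2))        # ~2.49 AU
    print(2.1 <= semi_major_axis_au <= 3.3)    # True: squarely in the main belt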

The Culture is an advanced society in whose midst most of Mr Banks’s Sci-Fi novels take place. Thanks to their technology they are able to hollow out asteroids and use them as ships capable of faster-than-light travel while providing a living habitat with centrifugally-generated gravity for their thousands of denizens. I’d like to think Mr Banks would have been amused to have his own rock.

Read the entire article here.

Image: Orbit diagram of asteroid (5099) Iainbanks. Cyan ellipses represent the orbits of the planets (from closest to furthest from the Sun) Mercury, Venus, Earth, Mars and Jupiter. The black ellipse represents the orbit of asteroid Iainbanks. The shaded region lies below the ecliptic plane; the non-shaded region lies above it. Courtesy of Minor Planet Center.

Impossible Chemistry in Space

Combine the vastness of the universe with the probabilistic behavior of quantum mechanics and you get some rather odd chemical results. This includes the spontaneous creation of some complex organic molecules in interstellar space — previously believed to be far too inhospitable for all but the lowliest forms of matter.

From the New Scientist:

Quantum weirdness can generate a molecule in space that shouldn’t exist by the classic rules of chemistry. If interstellar space is really a kind of quantum chemistry lab, that might also account for a host of other organic molecules glimpsed in space.

Interstellar space should be too cold for most chemical reactions to occur, as the low temperature makes it tough for molecules drifting through space to acquire the energy needed to break their bonds. “There is a standard law that says as you lower the temperature, the rates of reactions should slow down,” says Dwayne Heard of the University of Leeds, UK.

Yet we know there are a host of complex organic molecules in space. Some reactions could occur when different molecules stick to the surface of cosmic dust grains. This might give them enough time together to acquire the energy needed to react, which doesn’t happen when molecules drift past each other in space.

Not all reactions can be explained in this way, though. Last year astronomers discovered methoxy molecules – containing carbon, hydrogen and oxygen – in the Perseus molecular cloud, around 600 light years from Earth. But researchers couldn’t produce this molecule in the lab by allowing reactants to condense on dust grains, leaving a puzzle as to how it could have formed.

Molecular hang-out

Another route to methoxy is to combine a hydroxyl radical and methanol gas, both present in space. But this reaction requires hurdling a significant energy barrier – and the energy to do that simply isn’t available in the cold expanse of space.

Heard and his colleagues wondered if the answer lay in quantum mechanics: a process called quantum tunnelling might give the hydroxyl radical a small chance to cheat by digging through the barrier instead of going over it, they reasoned.

So, in another attempt to replicate the production of methoxy in space, the team chilled gaseous hydroxyl and methanol to 63 kelvin – and were able to produce methoxy.

The idea is that at low temperatures, the molecules slow down, increasing the likelihood of tunnelling. “At normal temperatures they just collide off each other, but when you go down in temperature they hang out together long enough,” says Heard.

Impossible chemistry

The team also found that the reaction occurred 50 times faster via quantum tunnelling than if it occurred normally at room temperature by hurdling the energy barrier. Empty space is much colder than 63 kelvin, but dust clouds near stars can reach this temperature, adds Heard.
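
The Arrhenius law makes plain why classical, over-the-barrier chemistry shuts down at 63 kelvin. In this Python sketch the 20 kJ/mol barrier is an illustrative guess, not the measured barrier for the hydroxyl-methanol reaction:

    # The Arrhenius factor exp(-Ea / RT) collapses at low temperature.
    import math

    R = 8.314      # gas constant, J/(mol*K)
    Ea = 20_000    # assumed barrier height, J/mol (illustrative only)

    def arrhenius_factor(temp_k):
        """Fraction of collisions energetic enough to hurdle the barrier."""
        return math.exp(-Ea / (R * temp_k))

    print(arrhenius_factor(295))   # ~2.9e-4 at room temperature
    print(arrhenius_factor(63))    # ~2.6e-17: classically, essentially nothing

Since the classical rate all but vanishes at 63 kelvin, a reaction that runs 50 times faster there than at room temperature must be going through the barrier rather than over it.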

“We’re showing there is organic chemistry in space of the type of reactions where it was assumed these just wouldn’t happen,” says Heard.

That means the chemistry of space may be richer than we had imagined. “There is maybe a suite of chemical reactions we hadn’t yet considered occurring in interstellar space,” agrees Helen Fraser of the University of Strathclyde, UK, who was not part of the team.

Read the entire article here.

Image: Amino-1-methoxy-4-methylbenzol, which features the methoxy group recently found in interstellar space. Courtesy of Wikipedia.

The Good and the Bad; The Black and the White

We humans are a most peculiar species — we are kind and we are dangerous. We can create the most sublime inventions with our minds, voices and hands, yet we are capable of the most heinous and destructive acts. We show empathy and compassion and grace, and yet, often just as easily, we wound and maim and murder. In the face of a common threat or danger we reach out to help all others, yet under normal circumstances we are capable of the most despicable racism, discrimination and hatred for our fellows.

Two recent polarizing events show our enormous failings and our inherent goodness. These are two stories of quiet and heroic action in the face of harm, danger and injustice.

First, in Mississippi, Willie Manning, a black man and convicted murderer, had his execution stayed four hours before lethal injection. His team of attorneys fought, quite rightly, to have false evidence discarded and dubious evidence revisited. As one of his attorneys, Robert Mink, a white man, stated to the State Supreme Court, “To pass on this issue and sanction the execution of Willie Manning, even in light of these revelations, would be counter to fundamental due process, the eighth and fourteenth amendments to the Constitution…”. Morality of the death penalty aside, we have a moral duty to fight injustice wherever it appears, including within our seemingly just judicial process. To date, the Innocence Project has recorded 306 post-conviction exonerations. These innocent people spent, on average, 13 years in prison; some were scheduled to be put to death by our judicial system. So, thank you to attorney Mr. Mink and his colleagues for keeping goodness alive in the face of an institutionalized rush to judgement and a corrupt process.

Read more on this story after the jump.

In the second case, from Cleveland, Ohio, the dichotomy of human behavior was on full display following the release of three women kidnapped, raped and imprisoned for close to 10 years. We’ll not discuss the actions of the accused, which should become clearer in due course. Rather, we focus on the actions of a neighbor: Charles Ramsey, a black man who lived across the street from the crime scene, helped the three white women escape their hellish ordeal. Like the attorneys above, Mr. Ramsey took action and is rightly hailed as a hero. When pressed by the media to explain his actions in rescuing the women, he said something quite poignant: “When a little, pretty white woman runs into the arms of a black man, you know something wrong.” Indeed.

Excerpts from an open letter to Charles Ramsey, put it in perspective:

From the Guardian:

Dear Mr Charles Ramsey,

First and foremost thank you. Thank you for being an up-stander versus a bystander. All too often we are quick to flee from the things that could land us in imminent danger, but you in your hearts of hearts knew that the right thing to do was to come to the aid of someone who was crying out. We as the members of this great city of Cleveland are forever beholden to you for finding three of our daughters who we thought we’d never see again. But through the grace of the Most High … they are now safe.

In plain speak, you said something so prolific. And I want to unpack the statement that you made: “When a little, pretty white woman runs into the arms of a black man, you know something wrong.”

What does this statement mean in 2013? For me, it spoke volumes. It says: in America, we are taught to fear black men. They are assumed to be violent, angry, and completely and utterly untrustworthy. This statement also says what we have always known to be true for this country: white women, specifically pretty white women, have no business in the same space as black men. For as long as we can remember American society has been the sustainer of white women and the slayer of black men.

We have seen it with the all too familiar story of Emmett Till. We have seen it with the less familiar story of George Stinney, the youngest person in the United States ever executed. At 14-years-old he was charged with the murder of two white girls in Alcolu, South Carolina. He was charged with this murder after being the last to see these two girls alive and even helping to search for them. With no evidence and no concrete witnesses he was sent to the electric chair, with a booster seat for his 90 pound body, his case never reopened despite a rumored culprit and so little evidence.

I write this letter with extreme gratefulness, because I know how this country has historically made a mockery of and torn down men like you. Black men who have been the fall guy, black men who are assumed guilty for wearing hoodies and having wallets that somehow get mistaken for guns. So we all know that you could have easily said that you would not put yourself in harm’s way.

And for your act of heroism, you are met with extreme scrutiny dredged in jest. Joke after joke for telling your truth, as plain as you knew how. You, Mr Ramsey, were made fun of for flinching when the sounds of police sirens struck an innate reaction of terror in you. We all know that the police weren’t made for the protection of black men. The 911 operator who engaged you with disdain, disbelief, and sheer aggravation reaffirmed that “you don’t have to be white to support white supremacy”. So if you don’t “look” like a hero, “speak” like a hero, “dress” like a hero, wear your “hair” like a hero … then you’re just another person used to build the comedic chops of aspiring YouTube/Twitter/Facebook/Instagram sensations.

Read the entire letter after the jump.

In what continues to be a sad repetition of our human history, we see that on the one hand there are those who perform unimaginable acts of cruelty or violence, and on the other there are those who counteract the bad with good.

On the one hand are those who blindly or hastily follow orders or rules without questioning their morality, and on the other are those who seek to inject morality and to improve our lot. But between the two poles many of us are mere bystanders; we go about our hectic, daily lives, but we take no action. Some of us raise our arms and voices in righteous indignation, but take no action beyond words. Many of us turn a blind eye to intolerance and racism, preferring the cocoons of our couches and social distance of our Facebook accounts.

The majority of us are just too tired, too frazzled, too busy. This group requires the most work; we all need to become better at doing and at being involved, to improve our very human race.

Building a Liver

In yet another breakthrough for medical science, researchers have succeeded in growing a prototypical human liver in the lab.

From the New York Times:

Researchers in Japan have used human stem cells to create tiny human livers like those that arise early in fetal life. When the scientists transplanted the rudimentary livers into mice, the little organs grew, made human liver proteins, and metabolized drugs as human livers do.

They and others caution that these are early days and this is still very much basic research. The liver buds, as they are called, did not turn into complete livers, and the method would have to be scaled up enormously to make enough replacement liver buds to treat a patient. Even then, the investigators say, they expect to replace only 30 percent of a patient’s liver. What they are making is more like a patch than a full liver.

But the promise, in a field that has seen a great deal of dashed hopes, is immense, medical experts said.

“This is a major breakthrough of monumental significance,” said Dr. Hillel Tobias, director of transplantation at the New York University School of Medicine. Dr. Tobias is chairman of the American Liver Foundation’s national medical advisory committee.

“Very impressive,” said Eric Lagasse of the University of Pittsburgh, who studies cell transplantation and liver disease. “It’s novel and very exciting.”

The study was published on Wednesday in the journal Nature.

Although human studies are years away, said Dr. Leonard Zon, director of the stem cell research program at Boston Children’s Hospital, this, to his knowledge, is the first time anyone has used human stem cells, created from human skin cells, to make a functioning solid organ, like a liver, as opposed to bone marrow, a jellylike organ.

Ever since they discovered how to get human stem cells — first from embryos and now, more often, from skin cells — researchers have dreamed of using the cells for replacement tissues and organs. The stem cells can turn into any type of human cell, and so it seemed logical to simply turn them into liver cells, for example, and add them to livers to fill in dead or damaged areas.

But those studies did not succeed. Liver cells did not take up residence in the liver; they did not develop blood supplies or signaling systems. They were not a cure for disease.

Other researchers tried making livers or other organs by growing cells on scaffolds. But that did not work well either. Cells would fall off the scaffolds and die, and the result was never a functioning solid organ.

Researchers have made specialized human cells in petri dishes, but not three-dimensional structures, like a liver.

The investigators, led by Dr. Takanori Takebe of the Yokohama City University Graduate School of Medicine, began with human skin cells, turning them into stem cells. By adding various stimulators and drivers of cell growth, they then turned the stem cells into human liver cells and began trying to make replacement livers.

They say they stumbled upon their solution. When they grew the human liver cells in petri dishes along with blood vessel cells from human umbilical cords and human connective tissue, that mix of cells, to their surprise, spontaneously assembled itself into three-dimensional liver buds, resembling the liver at about five or six weeks of gestation in humans.

Then the researchers transplanted the liver buds into mice, putting them in two places: on the brain and into the abdomen. The brain site allowed them to watch the buds grow. The investigators covered the hole in each animal’s skull with transparent plastic, giving them a direct view of the developing liver buds. The buds grew and developed blood supplies, attaching themselves to the blood vessels of the mice.

The abdominal site allowed them to put more buds in — 12 buds in each of two places in the abdomen, compared with one bud in the brain — which let the investigators ask if the liver buds were functioning like human livers.

They were. They made human liver proteins and also metabolized drugs that human livers — but not mouse livers — metabolize.

The approach makes sense, said Kenneth Zaret, a professor of cellular and developmental biology at the University of Pennsylvania. His research helped establish that blood and connective tissue cells promote dramatic liver growth early in development and help livers establish their own blood supply. On their own, without those other types of cells, liver cells do not develop or form organs.

Read the entire article here.

Image: Diagram of the human liver. Courtesy of Encyclopedia Britannica.

The Myth of Martyrdom

Unfortunately our world is still populated by a few people who will willingly shed the blood of others while destroying themselves. Understanding the personalities and motivations of these people may one day help eliminate this scourge. In the meantime, psychologists ponder whether they are psychologically normal but politically radicalized fanatics, or deeply troubled individuals.

Adam Lankford, a criminal justice professor, asserts that suicide terrorists are merely unhappy, damaged individuals who want to die. In his book, The Myth of Martyrdom, Lankford rejects the popular view of suicide terrorists as calculating, radicalized individuals who will do anything for a cause.

From the New Scientist:

In the aftermath of 9/11, terrorism experts in the US made a bold and counter-intuitive claim: the suicide terrorists were psychologically normal. When it came to their state of mind, they were not so different from US Special Forces agents. Just because they deliberately crashed planes into buildings, that didn’t make them suicidal – it simply meant they were willing to die for a cause they believed in.

This argument was stated over and over and became the orthodoxy. “We’d like to believe these are crazed fanatics,” said CIA terror expert Jerrold Post in 2006. “Not true… as individuals, this is normal behaviour.”

I disagree. Far from being psychologically normal, suicide terrorists are suicidal. They kill themselves to escape crises or unbearable pain. Until we recognise this, attempts to stop the attacks are doomed to fail.

When I began studying suicide terrorists, I had no agenda, just curiosity. My hunch was that the official version was true, but I kept an open mind.

Then I began watching martyrdom videos and reading case studies, letters and diary entries. What I discovered was a litany of fear, failure, guilt, shame and rage. In my book The Myth of Martyrdom, I present evidence that far from being normal, these self-destructive killers have often suffered from serious mental trauma and always demonstrate at least a few behaviours on the continuum of suicidality, such as suicidal ideation, a suicide plan or previous suicide attempts.

Why did so many scholars come to the wrong conclusions? One key reason is that they believe what the bombers, their relatives and friends, and their terrorist recruiters say, especially when their accounts are consistent.

In 2007, for example, Ellen Townsend of the University of Nottingham, UK, published an influential article called Suicide Terrorists: Are they suicidal? Her answer was a resounding no (Suicide and Life-Threatening Behavior, vol 37, p 35).

How did she come to this conclusion? By reviewing five empirical reports: three that depended largely upon interviews with deceased suicide terrorists’ friends and family, and two based on interviews of non-suicide terrorists. She took what they said at face value.

I think this was a serious mistake. All of these people have strong incentives to lie.

Take the failed Palestinian suicide bomber Wafa al-Biss, who attempted to blow herself up at an Israeli checkpoint in 2005. Her own account and those of her parents and recruiters tell the same story: that she acted for political and religious reasons.

These accounts are highly suspect. Terrorist leaders have strategic reasons for insisting that attackers are not suicidal, but instead are carrying out glorious martyrdom operations. Traumatised parents want to believe that their children were motivated by heroic impulses. And suicidal people commonly deny that they are suicidal and are often able to hide their true feelings from the world.

This is especially true of fundamentalist Muslims. Suicide is explicitly condemned in Islam and guarantees an eternity in hell. Martyrs, on the other hand, can go to heaven.

Most telling of all, it later emerged that al-Biss had suffered from mental health problems most of her life and had made two previous suicide attempts.

Her case is far from unique. Consider Qari Sami, who blew himself up in a café in Kabul, Afghanistan, in 2005. He walked in – and kept on walking, past crowded tables and into the bathroom at the back where he closed the door and detonated his belt. He killed himself and two others, but could easily have killed more. It later emerged that he was on antidepressants.

Read the entire article here.

MondayMap: U.S. Interstate Highway System

It’s summer, which means lots of people driving every which way for family vacations.

So, this is a good time to refresh you with the map of the arteries that distribute lifeblood across the United States — the U.S. Interstate Highway System. The network of highways, stretching some 46,800 miles from coast to coast, is sometimes referred to as the Eisenhower Interstate System. President Eisenhower signed the Federal-Aid Highway Act on June 29, 1956, making the current system possible.

Thus the father of the Interstate System is also responsible for the never-ending choruses of “are we there yet?”, “how much further?”, “I need to go to the bathroom”, and “can we stop at the next Starbucks (from the adults) / McDonald’s (from the kids)?”.

Get a full-size map here.

Map courtesy of Wikimedia Commons.