Category Archives: BigBang

Ancient Aquifer

The Mars Curiosity rover is at it again. This time it has unearthed (or should that be “unmarsed”?) compelling evidence of an ancient lake on the Red Planet.

From Wired:

The latest discovery of NASA’s Mars Curiosity rover is evidence of an ancient freshwater lake on Mars that was part of an environment that could potentially have supported simple microbial life.

The lake is located inside the Gale Crater and is thought to have covered an area that is 31 miles long and three miles wide for more than 100,000 years.

According to a paper published yesterday in Science Magazine: “The Curiosity rover discovered fine-grained sedimentary rocks, which are inferred to represent an ancient lake and preserve evidence of an environment that would have been suited to support a Martian biosphere founded on chemolithoautotrophy.”

When analyzing two rock samples from an area known as Yellowknife Bay, researchers discovered smectite clay minerals, the chemical makeup of which showed that they had formed in water. Due to low salinity and the neutral pH, the water the minerals formed in was neither too acidic nor too alkaline for life to have once existed within it.

Chemolithoautotrophs, the form of life the researchers believed may have lived in the lake, can also be found on Earth, usually in caves or in vents on the ocean floor.

“If we put microbes from Earth and put them in this lake on Mars, would they survive? Would they survive and thrive? And the answer is yes,” said John Grotzinger, a Caltech planetary geologist and chief scientist of the Curiosity rover mission, at a press conference, as reported by the Washington Post.

Evidence of water was first discovered in Martian soil samples in September by Curiosity, which landed on the Red Planet in August 2012 with the hope of discovering whether it may once have offered a habitable environment. As studies increasingly find evidence that the planet’s environment interacted with water at some point, researchers believe that Mars could once have been a more Earth-like planet.

Curiosity cannot confirm that such organisms ever existed on Mars, only that the environment was once suited for them to flourish there.

Read the entire story here.

Image: Mars Curiosity Rover. Courtesy of NASA / JPL.

What of Consciousness?

As we dig into the traditional holiday fare surrounded by family and friends, it is worth pondering whether any of it is actually real or whether it is all inside the mind. The in-laws may be a figment of the brain, but the wine is probably real.

From the New Scientist:

Descartes might have been onto something with “I think therefore I am”, but surely “I think therefore you are” is going a bit far? Not for some of the brightest minds of 20th-century physics as they wrestled mightily with the strange implications of the quantum world.

According to prevailing wisdom, a quantum particle such as an electron or photon can only be properly described by a mathematical entity known as a wave function. Wave functions can exist as “superpositions” of many states at once. A photon, for instance, can circulate in two different directions around an optical fibre; or an electron can simultaneously spin clockwise and anticlockwise or be in two positions at once.

When any attempt is made to observe these simultaneous existences, however, something odd happens: we see only one. How do many possibilities become one physical reality?

This is the central question in quantum mechanics, and has spawned a plethora of proposals, or interpretations. The most popular is the Copenhagen interpretation, which says nothing is real until it is observed, or measured. Observing a wave function causes the superposition to collapse.
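
A toy numerical picture may help here, with the caveat that it is only an illustration of the bookkeeping, not an argument for any interpretation: a superposition can be written as two complex amplitudes, the measurement probabilities are their squared magnitudes, and “observation” records a single outcome with those weights.

```python
# Illustrative sketch only: a photon "circulating both ways at once" as two
# complex amplitudes; observing it yields one outcome, weighted by the
# squared magnitudes of those amplitudes (the Born rule).
import random

amp_cw = complex(0.6, 0.0)     # amplitude for clockwise circulation
amp_acw = complex(0.0, 0.8)    # amplitude for anticlockwise circulation

probs = {"clockwise": abs(amp_cw) ** 2, "anticlockwise": abs(amp_acw) ** 2}
assert abs(sum(probs.values()) - 1.0) < 1e-9   # amplitudes are normalised

# "Observation": the superposition gives way to a single recorded outcome.
outcome = random.choices(list(probs), weights=list(probs.values()))[0]
print(probs, "->", outcome)
```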

However, Copenhagen says nothing about what exactly constitutes an observation. John von Neumann broke this silence and suggested that observation is the action of a conscious mind. It’s an idea also put forward by Max Planck, the founder of quantum theory, who said in 1931, “I regard consciousness as fundamental. I regard matter as derivative from consciousness.”

That argument relies on the view that there is something special about consciousness, especially human consciousness. Von Neumann argued that everything in the universe that is subject to the laws of quantum physics creates one vast quantum superposition. But the conscious mind is somehow different. It is thus able to select out one of the quantum possibilities on offer, making it real – to that mind, at least.

Henry Stapp of the Lawrence Berkeley National Laboratory in California is one of the few physicists who still subscribe to this notion: we are “participating observers” whose minds cause the collapse of superpositions, he says. Before human consciousness appeared, there existed a multiverse of potential universes, Stapp says. The emergence of a conscious mind in one of these potential universes, ours, gives it a special status: reality.

There are many objectors. One problem is that many of the phenomena involved are poorly understood. “There’s a big question in philosophy about whether consciousness actually exists,” says Matthew Donald, a philosopher of physics at the University of Cambridge. “When you add on quantum mechanics it all gets a bit confused.”

Donald prefers an interpretation that is arguably even more bizarre: “many minds”. This idea – related to the “many worlds” interpretation of quantum theory, which has each outcome of a quantum decision happen in a different universe – argues that an individual observing a quantum system sees all the many states, but each in a different mind. These minds all arise from the physical substance of the brain, and share a past and a future, but cannot communicate with each other about the present.

Though it sounds hard to swallow, this and other approaches to understanding the role of the mind in our perception of reality are all worthy of attention, Donald reckons. “I take them very seriously,” he says.

Read the entire article here.

Image courtesy of Google Search.

The Universe of Numbers

There is no doubt that mathematics — some of it very complex — has been able to explain much of what we observe in the universe. In reality, and perhaps surprisingly, only a small set of equations is required to explain everything around us, from atoms and their constituents to the vast cosmos. Why is that? And what is the fundamental relationship between mathematics and our current physical understanding of all things great and small?

From the New Scientist:

When Albert Einstein finally completed his general theory of relativity in 1916, he looked down at the equations and discovered an unexpected message: the universe is expanding.

Einstein didn’t believe the physical universe could shrink or grow, so he ignored what the equations were telling him. Thirteen years later, Edwin Hubble found clear evidence of the universe’s expansion. Einstein had missed the opportunity to make the most dramatic scientific prediction in history.

How did Einstein’s equations “know” that the universe was expanding when he did not? If mathematics is nothing more than a language we use to describe the world, an invention of the human brain, how can it possibly churn out anything beyond what we put in? “It is difficult to avoid the impression that a miracle confronts us here,” wrote physicist Eugene Wigner in his classic 1960 paper “The unreasonable effectiveness of mathematics in the natural sciences” (Communications on Pure and Applied Mathematics, vol 13, p 1).

The prescience of mathematics seems no less miraculous today. At the Large Hadron Collider at CERN, near Geneva, Switzerland, physicists recently observed the fingerprints of a particle that was arguably discovered 48 years ago lurking in the equations of particle physics.

How is it possible that mathematics “knows” about Higgs particles or any other feature of physical reality? “Maybe it’s because math is reality,” says physicist Brian Greene of Columbia University, New York. Perhaps if we dig deep enough, we would find that physical objects like tables and chairs are ultimately not made of particles or strings, but of numbers.

“These are very difficult issues,” says philosopher of science James Ladyman of the University of Bristol, UK, “but it might be less misleading to say that the universe is made of maths than to say it is made of matter.”

Difficult indeed. What does it mean to say that the universe is “made of mathematics”? An obvious starting point is to ask what mathematics is made of. The late physicist John Wheeler said that the “basis of all mathematics is 0 = 0”. All mathematical structures can be derived from something called “the empty set”, the set that contains no elements. Say this set corresponds to zero; you can then define the number 1 as the set that contains only the empty set, 2 as the set containing the sets corresponding to 0 and 1, and so on. Keep nesting the nothingness like invisible Russian dolls and eventually all of mathematics appears. Mathematician Ian Stewart of the University of Warwick, UK, calls this “the dreadful secret of mathematics: it’s all based on nothing” (New Scientist, 19 November 2011, p 44). Reality may come down to mathematics, but mathematics comes down to nothing at all.
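
That construction is concrete enough to type out. The minimal sketch below (Python, purely to illustrate the set-theoretic idea) builds the von Neumann encoding of the natural numbers from nothing but the empty set: 0 is the empty set, and each successor is the previous set together with itself as a new member.

```python
# Building numbers out of "nothing": the von Neumann encoding.
# 0 is the empty set; the successor of n is n together with {n}.
def successor(n: frozenset) -> frozenset:
    return n | frozenset([n])

def encode(k: int) -> frozenset:
    """The natural number k as a nested set built from the empty set."""
    n = frozenset()            # 0 = {}
    for _ in range(k):
        n = successor(n)       # 1 = {0}, 2 = {0, 1}, 3 = {0, 1, 2}, ...
    return n

def decode(n: frozenset) -> int:
    return len(n)              # the encoding of k has exactly k members

assert decode(encode(0)) == 0
assert decode(encode(5)) == 5
assert encode(2) == frozenset([encode(0), encode(1)])
```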

That may be the ultimate clue to existence – after all, a universe made of nothing doesn’t require an explanation. Indeed, mathematical structures don’t seem to require a physical origin at all. “A dodecahedron was never created,” says Max Tegmark of the Massachusetts Institute of Technology. “To be created, something first has to not exist in space or time and then exist.” A dodecahedron doesn’t exist in space or time at all, he says – it exists independently of them. “Space and time themselves are contained within larger mathematical structures,” he adds. These structures just exist; they can’t be created or destroyed.

That raises a big question: why is the universe only made of some of the available mathematics? “There’s a lot of math out there,” Greene says. “Today only a tiny sliver of it has a realisation in the physical world. Pull any math book off the shelf and most of the equations in it don’t correspond to any physical object or physical process.”

It is true that seemingly arcane and unphysical mathematics does, sometimes, turn out to correspond to the real world. Imaginary numbers, for instance, were once considered totally deserving of their name, but are now used to describe the behaviour of elementary particles; non-Euclidean geometry eventually showed up as gravity. Even so, these phenomena represent a tiny slice of all the mathematics out there.

Not so fast, says Tegmark. “I believe that physical existence and mathematical existence are the same, so any structure that exists mathematically is also real,” he says.

So what about the mathematics our universe doesn’t use? “Other mathematical structures correspond to other universes,” Tegmark says. He calls this the “level 4 multiverse”, and it is far stranger than the multiverses that cosmologists often discuss. Their common-or-garden multiverses are governed by the same basic mathematical rules as our universe, but Tegmark’s level 4 multiverse operates with completely different mathematics.

All of this sounds bizarre, but the hypothesis that physical reality is fundamentally mathematical has passed every test. “If physics hits a roadblock at which point it turns out that it’s impossible to proceed, we might find that nature can’t be captured mathematically,” Tegmark says. “But it’s really remarkable that that hasn’t happened. Galileo said that the book of nature was written in the language of mathematics – and that was 400 years ago.”

Read the entire article here.

Meta-Research: Discoveries From Research on Discoveries

Discoveries through scientific research don’t just happen in the lab. Many, of course, do. But some discoveries now come through data analysis of the research papers themselves. Here, sophisticated data-mining tools and semantic software sift through hundreds of thousands of research papers, looking for patterns and links that would otherwise escape the eye of human researchers.

From Technology Review:

Software that read tens of thousands of research papers and then predicted new discoveries about the workings of a protein that’s key to cancer could herald a faster approach to developing new drugs.

The software, developed in a collaboration between IBM and Baylor College of Medicine, was set loose on more than 60,000 research papers that focused on p53, a protein involved in cell growth, which is implicated in most cancers. By parsing sentences in the documents, the software could build an understanding of what is known about enzymes called kinases that act on p53 and regulate its behavior; these enzymes are common targets for cancer treatments. It then generated a list of other proteins mentioned in the literature that were probably undiscovered kinases, based on what it knew about those already identified. Most of its predictions tested so far have turned out to be correct.
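
The article does not give implementation details, but the overall recipe (build a textual profile of the known kinases, then rank other proteins by how similar their literature context is) can be sketched in a few lines. Everything below, including the protein names and sentences, is invented for illustration; it is not IBM’s or Baylor’s actual system.

```python
# Toy literature-based candidate ranking: score each protein by how similar
# the sentences that mention it are to the sentences describing known p53
# kinases. Purely illustrative; real systems parse full papers and use far
# richer linguistic features.
from collections import Counter
import math

def context_profile(sentences):
    """Bag-of-words profile built from all sentences mentioning an entity."""
    profile = Counter()
    for sentence in sentences:
        profile.update(sentence.lower().split())
    return profile

def cosine(a, b):
    dot = sum(a[w] * b[w] for w in a)
    norm_a = math.sqrt(sum(v * v for v in a.values()))
    norm_b = math.sqrt(sum(v * v for v in b.values()))
    return dot / (norm_a * norm_b) if norm_a and norm_b else 0.0

# Hypothetical corpus: entity -> sentences that mention it.
corpus = {
    "KINASE_1":    ["kinase 1 phosphorylates p53 at serine 15"],
    "KINASE_2":    ["kinase 2 phosphorylates and stabilises p53"],
    "CANDIDATE_A": ["candidate a phosphorylates p53 in stressed cells"],
    "CANDIDATE_B": ["candidate b binds dna near the p53 promoter"],
}
known_kinases = ["KINASE_1", "KINASE_2"]
kinase_profile = context_profile(s for k in known_kinases for s in corpus[k])

ranked = sorted(
    (p for p in corpus if p not in known_kinases),
    key=lambda p: cosine(context_profile(corpus[p]), kinase_profile),
    reverse=True,
)
print(ranked)   # CANDIDATE_A should rank above CANDIDATE_B
```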

“We have tested 10,” Olivier Lichtarge of Baylor said Tuesday. “Seven seem to be true kinases.” He presented preliminary results of his collaboration with IBM at a meeting on the topic of Cognitive Computing held at IBM’s Almaden research lab.

Lichtarge also described an earlier test of the software in which it was given access to research literature published prior to 2003 to see if it could predict p53 kinases that have been discovered since. The software found seven of the nine kinases discovered after 2003.

“P53 biology is central to all kinds of disease,” says Lichtarge, and so it seemed to be the perfect way to show that software-generated discoveries might speed up research that leads to new treatments. He believes the results so far show that to be true, although the kinase-hunting experiments are yet to be reviewed and published in a scientific journal, and more lab tests are still planned to confirm the findings so far. “Kinases are typically discovered at a rate of one per year,” says Lichtarge. “The rate of discovery can be vastly accelerated.”

Lichtarge said that although the software was configured to look only for kinases, it also seems capable of identifying previously unidentified phosphatases, which are enzymes that reverse the action of kinases. It can also identify other types of protein that may interact with p53.

The Baylor collaboration is intended to test a way of extending a set of tools that IBM researchers already offer to pharmaceutical companies. Under the banner of accelerated discovery, text-analyzing tools are used to mine publications, patents, and molecular databases. For example, a company in search of a new malaria drug might use IBM’s tools to find molecules with characteristics that are similar to existing treatments. Because software can search more widely, it might turn up molecules in overlooked publications or patents that no human would otherwise find.

“We started working with Baylor to adapt those capabilities, and extend it to show this process can be leveraged to discover new things about p53 biology,” says Ying Chen, a researcher at IBM Research Almaden.

It typically takes between $500 million and $1 billion to develop a new drug, and 90 percent of candidates that begin the journey don’t make it to market, says Chen. The cost of failed drugs is cited as one reason that some drugs command such high prices (see “A Tale of Two Drugs”).

Lawrence Hunter, director of the Center for Computational Pharmacology at the University of Colorado Denver, says that careful empirical confirmation is needed for claims that the software has made new discoveries. But he says that progress in this area is important, and that such tools are desperately needed.

The volume of research literature both old and new is now so large that even specialists can’t hope to read everything that might help them, says Hunter. Last year over one million new articles were added to the U.S. National Library of Medicine’s Medline database of biomedical research papers, which now contains 23 million items. Software can crunch through massive amounts of information and find vital clues in unexpected places. “Crucial bits of information are sometimes isolated facts that are only a minor point in an article but would be really important if you can find it,” he says.

Read the entire article here.

Bert and Ernie and Friends

The universe is a very strange place, stranger than Washington D.C., stranger than most reality TV shows.

And it keeps getting stranger as astronomers and cosmologists continue to make ever more head-scratching discoveries. The latest: a pair of super-high-energy neutrinos, followed by another 28. It seems that these tiny, almost massless particles are reaching Earth from an unknown source, or sources, of immense power outside our own galaxy.

The neutrinos were spotted by the IceCube detector, which is buried beneath about a mile and a half of solid ice in an Antarctic glacier.

From io9:

By drilling a 1.5 mile hole deep into an Antarctic glacier, physicists working at the IceCube South Pole Observatory have captured 28 extraterrestrial neutrinos — those mysterious and extremely powerful subatomic particles that can pass straight through solid matter. Welcome to an entirely new age of astronomy.

Back in April of this year, the same team of physicists captured the highest energy neutrinos ever detected. Dubbed Bert and Ernie, the elusive subatomic particles likely originated from beyond our solar system, and possibly even our galaxy.

Neutrinos are extremely tiny and prolific subatomic particles that are born in nuclear reactions, including those that occur inside of stars. And because they’re practically massless (together they contain only a tiny fraction of the mass of a single electron), they can pass through normal matter, which is why they’re dubbed ‘ghost particles.’ Neutrinos are able to do this because they don’t carry an electric charge, so they’re immune to electromagnetic forces that influence charged particles like electrons and protons.

A Billion Times More Powerful

But not all neutrinos are the same. The ones discovered by the IceCube team are about a billion times more energetic than the ones coming out of our sun. A pair of them had energies above an entire petaelectronvolt. That’s more than 100 times the collision energy of protons smashed together at CERN’s Large Hadron Collider.

So whatever created them must have been extremely powerful. Like, mind-bogglingly powerful — probably the remnants of supernova explosions. Indeed, as a recent study has shown, these cosmic explosions are more powerful than we could have ever imagined — to the point where they’re defying known physics.

Other candidates for neutrino production include black holes, pulsars, galactic nuclei — or even the cataclysmic merger of two black holes.

That’s why the discovery of these 28 new neutrinos, and the construction of the IceCube facility, is so important. It’s still a mystery, but these new findings, and the new detection technique, will help.

Back in April, the IceCube project looked for neutrinos above one petaelectronvolt, which is how Bert and Ernie were detected. But the team went back and searched through their data and found 26 neutrinos with slightly lower energies, though still above 30 teraelectronvolts, that were detected between May 2010 and May 2012. While it’s possible that some of these lower-energy neutrinos could have been produced by cosmic rays in the Earth’s atmosphere, the researchers say that most of them likely came from space. In fact, the data was analyzed in such a way as to exclude neutrinos that didn’t come from space, as well as other types of particles that may have tripped off the detector.

The Dawn of a New Field

“This is a landmark discovery — possibly a Nobel Prize in the making,” said Alexander Kusenko, a UCLA astroparticle physicist who was not involved in the IceCube collaboration. Thanks to the remarkable IceCube facility, where neutrinos are captured in holes drilled 1.5 miles down into the Antarctic glacier, astronomers have a completely new way to scope out the cosmos. It’s both literally and figuratively changing the way we see the universe.

“It really is the dawn of a new field,” said Darren Grant, a University of Alberta physicist, and a member of the IceCube team.

Read the entire article here.

You May Be Just a Line of Code

Some very logical and rational people — scientists and philosophers — argue that we are no more than artificial constructs. They suggest that it is more likely that we are fleeting constructions in a simulated universe rather than organic beings in a real cosmos; that we are, in essence, like the oblivious Neo in the classic sci-fi movie The Matrix. One supposes that the minds proposing this notion are themselves simulations…

From Discovery:

In the 1999 sci-fi film classic The Matrix, the protagonist, Neo, is stunned to see people defying the laws of physics, running up walls and vanishing suddenly. These superhuman violations of the rules of the universe are possible because, unbeknownst to him, Neo’s consciousness is embedded in the Matrix, a virtual-reality simulation created by sentient machines.

The action really begins when Neo is given a fateful choice: Take the blue pill and return to his oblivious, virtual existence, or take the red pill to learn the truth about the Matrix and find out “how deep the rabbit hole goes.”

Physicists can now offer us the same choice, the ability to test whether we live in our own virtual Matrix, by studying radiation from space. As fanciful as it sounds, some philosophers have long argued that we’re actually more likely to be artificial intelligences trapped in a fake universe than we are organic minds in the “real” one.

But if that were true, the very laws of physics that allow us to devise such reality-checking technology may have little to do with the fundamental rules that govern the meta-universe inhabited by our simulators. To us, these programmers would be gods, able to twist reality on a whim.

So should we say yes to the offer to take the red pill and learn the truth — or are the implications too disturbing?

Worlds in Our Grasp

The first serious attempt to find the truth about our universe came in 2001, when an effort to calculate the resources needed for a universe-size simulation made the prospect seem impossible.

Seth Lloyd, a quantum-mechanical engineer at MIT, estimated the number of “computer operations” our universe has performed since the Big Bang — basically, every event that has ever happened. To repeat them, and generate a perfect facsimile of reality down to the last atom, would take more energy than the universe has.

“The computer would have to be bigger than the universe, and time would tick more slowly in the program than in reality,” says Lloyd. “So why even bother building it?”

But others soon realized that making an imperfect copy of the universe that’s just good enough to fool its inhabitants would take far less computational power. In such a makeshift cosmos, the fine details of the microscopic world and the farthest stars might only be filled in by the programmers on the rare occasions that people study them with scientific equipment. As soon as no one was looking, they’d simply vanish.

In theory, we’d never detect these disappearing features, however, because each time the simulators noticed we were observing them again, they’d sketch them back in.
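
In computing terms, the strategy described here is lazy evaluation with caching: expensive detail is generated only when something looks at it, and remembered so that repeated observations agree. A minimal sketch of that idea (my analogy, not taken from the article):

```python
# Lazy, cached "rendering": detail does not exist until it is observed, and
# once observed it stays consistent on every later look.
import functools
import random

@functools.lru_cache(maxsize=None)
def fine_detail(region: str) -> float:
    """Stand-in for costly microscopic or far-away detail, made on demand."""
    return random.random()

# Nothing has been computed yet. The first "observation" fills the detail in...
first_look = fine_detail("far side of a distant galaxy")
# ...and later observations return the cached value, so the world looks stable.
assert fine_detail("far side of a distant galaxy") == first_look
```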

That realization makes creating virtual universes eerily possible, even for us. Today’s supercomputers already crudely model the early universe, simulating how infant galaxies grew and changed. Given the rapid technological advances we’ve witnessed over the past few decades — your cell phone has more processing power than NASA’s computers had during the moon landings — it’s not a huge leap to imagine that such simulations will eventually encompass intelligent life.

“We may be able to fit humans into our simulation boxes within a century,” says Silas Beane, a nuclear physicist at the University of Washington in Seattle. Beane develops simulations that re-create how elementary protons and neutrons joined together to form ever larger atoms in our young universe.

Legislation and social mores could soon be all that keeps us from creating a universe of artificial, but still feeling, humans — but our tech-savvy descendants may find the power to play God too tempting to resist.

They could create a plethora of pet universes, vastly outnumbering the real cosmos. This thought led philosopher Nick Bostrom at the University of Oxford to conclude in 2003 that it makes more sense to bet that we’re delusional silicon-based artificial intelligences in one of these many forgeries, rather than carbon-based organisms in the genuine universe. Since there seemed no way to tell the difference between the two possibilities, however, bookmakers did not have to lose sleep working out the precise odds.

Learning the Truth

That changed in 2007 when John D. Barrow, professor of mathematical sciences at Cambridge University, suggested that an imperfect simulation of reality would contain detectable glitches. Just like your computer, the universe’s operating system would need updates to keep working.

As the simulation degrades, Barrow suggested, we might see aspects of nature that are supposed to be static — such as the speed of light or the fine-structure constant that describes the strength of the electromagnetic force — inexplicably drift from their “constant” values.

Last year, Beane and colleagues suggested a more concrete test of the simulation hypothesis. Most physicists assume that space is smooth and extends out infinitely. But physicists modeling the early universe cannot easily re-create a perfectly smooth background to house their atoms, stars and galaxies. Instead, they build up their simulated space from a lattice, or grid, just as television images are made up from multiple pixels.

The team calculated that the motion of particles within their simulation, and thus their energy, is related to the distance between the points of the lattice: the smaller the grid size, the higher the energy particles can have. That means that if our universe is a simulation, we’ll observe a maximum energy amount for the fastest particles. And as it happens, astronomers have noticed that cosmic rays, high-speed particles that originate in far-flung galaxies, always arrive at Earth with a specific maximum energy of about 10^20 electron volts.
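
As a rough restatement of that reasoning (mine, not the team’s full analysis): on a grid of spacing a, the largest momentum that fits is roughly pi*hbar/a, so an observed energy ceiling near 10^20 electron volts would point to a lattice spacing of order 10^-27 metres.

```python
# Back-of-the-envelope lattice-cutoff arithmetic, assuming E_max ~ pi*hbar*c/a.
import math

HBAR_C_EV_M = 1.97327e-7    # hbar * c in electron-volt metres
E_MAX_EV = 1e20             # observed cosmic-ray energy ceiling, in eV

a_metres = math.pi * HBAR_C_EV_M / E_MAX_EV
print(f"implied lattice spacing: {a_metres:.1e} m")   # roughly 6e-27 m
```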

The simulation’s lattice has another observable effect that astronomers could pick up. If space is continuous, then there is no underlying grid that guides the direction of cosmic rays — they should come in from every direction equally. If we live in a simulation based on a lattice, however, the team has calculated that we wouldn’t see this even distribution. If physicists do see an uneven distribution, it would be a tough result to explain if the cosmos were real.

Astronomers need much more cosmic ray data to answer this one way or another. For Beane, either outcome would be fine. “Learning we live in a simulation would make no more difference to my life than believing that the universe was seeded at the Big Bang,” he says. But that’s because Beane imagines the simulators as driven purely to understand the cosmos, with no desire to interfere with their simulations.

Unfortunately, our almighty simulators may instead have programmed us into a universe-size reality show — and are capable of manipulating the rules of the game, purely for their entertainment. In that case, maybe our best strategy is to lead lives that amuse our audience, in the hope that our simulator-gods will resurrect us in the afterlife of next-generation simulations.

The weird consequences would not end there. Our simulators may be simulations themselves — just one rabbit hole within a linked series, each with different fundamental physical laws. “If we’re indeed a simulation, then that would be a logical possibility, that what we’re measuring aren’t really the laws of nature, they’re some sort of attempt at some sort of artificial law that the simulators have come up with. That’s a depressing thought!” says Beane.

This cosmic ray test may help reveal whether we are just lines of code in an artificial Matrix, where the established rules of physics may be bent, or even broken. But if learning that truth means accepting that you may never know for sure what’s real — including yourself — would you want to know?

There is no turning back, Neo: Do you take the blue pill, or the red pill?

Read the entire article here.

Image: The Matrix, promotional poster for the movie. Courtesy of Silver Pictures / Warner Bros. Entertainment Inc.

Perovskites

No, these are not a new form of luxury cut glass from Europe, but something much more significant. First discovered in the mid-1800s in Russia’s Ural mountains, perovskite materials could lay the foundation for a significant improvement in the efficiency of solar power systems.

From Technology Review:

A new solar cell material has properties that might lead to solar cells more than twice as efficient as the best on the market today. An article this week in the journal Nature describes the materials—a modified form of a class of compounds called perovskites, which have a particular crystalline structure.

The researchers haven’t yet demonstrated a high efficiency solar cell with the material. But their work adds to a growing body of evidence suggesting perovskite materials could change the face of solar power. Researchers are making new perovskites using combinations of elements and molecules not seen in nature; many researchers see the materials as the next great hope for making solar power cheap enough to compete with fossil fuels.

Perovskite-based solar cells have been improving at a remarkable pace. It took a decade or more for the major solar cell materials used today—silicon and cadmium telluride—to reach efficiency levels that have been demonstrated with perovskites in just four years. The rapid success of the material has impressed even veteran solar researchers who have learned to be cautious about new materials after seeing many promising ones come to nothing (see “A Material that Could Make Solar Power ‘Dirt Cheap’”).

The perovskite material described in Nature has properties that could lead to solar cells that can convert over half of the energy in sunlight directly into electricity, says Andrew Rappe, co-director of Pennergy, a center for energy innovation at the University of Pennsylvania, and one of the new report’s authors. That’s more than twice as efficient as conventional solar cells. Such high efficiency would cut in half the number of solar cells needed to produce a given amount of power. Besides reducing the cost of solar panels, this would greatly reduce installation costs, which now account for most of the cost of a new solar system.

Unlike conventional solar cell materials, the new material doesn’t require an electric field to produce an electrical current. This reduces the amount of material needed and produces higher voltages, which can help increase power output, Rappe says. While other materials have been shown to produce current without the aid of an electric field, the new material is the first to also respond well to visible light, making it relevant for solar cells, he says.

The researchers also showed that it is relatively easy to modify the material so that it efficiently converts different wavelengths of light into electricity. It could be possible to form a solar cell with different layers, each designed for a specific part of the solar spectrum, something that could greatly improve efficiency compared to conventional solar cells (see “Ultra-Efficient Solar Power” and “Manipulating Light to Double Solar Power Output”).

Other solar cell experts note that while these properties are interesting, Rappe and his colleagues have a long way to go before they can produce viable solar cells. For one thing, the electrical current it produces so far is very low. Ramamoorthy Ramesh, a professor of materials science and engineering at Berkeley, says, “This is nice work, but really early stage. To make a solar cell, a lot of other things are needed.”

Perovskites remain a promising solar material. Michael McGehee, a materials science and engineering professor at Stanford University, recently wrote, “The fact that multiple teams are making such rapid progress suggests that the perovskites have extraordinary potential, and might elevate the solar cell industry to new heights.”

Read the entire article here.

Image: Perovskite mined in Magnet Cove, Arkansas. Courtesy of Wikimedia.

Biological Transporter

Molecular-biology entrepreneur and genomics pioneer Craig Venter is at it again. In his new book, Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, Venter explains his grand ideas and the coming era of discovery.

From ars technica:

J Craig Venter has been a molecular-biology pioneer for two decades. After developing expressed sequence tags in the 90s, he led the private effort to map the human genome, publishing the results in 2001. In 2010, the J Craig Venter Institute manufactured the entire genome of a bacterium, creating the first synthetic organism.

Now Venter, author of Life at the Speed of Light: From the Double Helix to the Dawn of Digital Life, explains the coming era of discovery.

Wired: In Life at the Speed of Light, you argue that humankind is entering a new phase of evolution. How so?

J Craig Venter: As the industrial age is drawing to a close, I think that we’re witnessing the dawn of the era of biological design. DNA, as digitized information, is accumulating in computer databases. Thanks to genetic engineering, and now the field of synthetic biology, we can manipulate DNA to an unprecedented extent, just as we can edit software in a computer. We can also transmit it as an electromagnetic wave at or near the speed of light and, via a “biological teleporter,” use it to recreate proteins, viruses, and living cells at another location, changing forever how we view life.

So you view DNA as the software of life?

All the information needed to make a living, self-replicating cell is locked up within the spirals of DNA’s double helix. As we read and interpret that software of life, we should be able to completely understand how cells work, then change and improve them by writing new cellular software.

The software defines the manufacture of proteins that can be viewed as its hardware, the robots and chemical machines that run a cell. The software is vital because the cell’s hardware wears out. Cells will die in minutes to days if they lack their genetic-information system. They will not evolve, they will not replicate, and they will not live.

Of all the experiments you have done over the past two decades involving the reading and manipulation of the software of life, which are the most important?

I do think the synthetic cell is my most important contribution. But if I were to select a single study, paper, or experimental result that has really influenced my understanding of life more than any other, I would choose one that my team published in 2007, in a paper with the title Genome Transplantation in Bacteria: Changing One Species to Another. The research that led to this paper in the journal Science not only shaped my view of the fundamentals of life but also laid the groundwork to create the first synthetic cell. Genome transplantation not only provided a way to carry out a striking transformation, converting one species into another, but would also help prove that DNA is the software of life.

What has happened since your announcement in 2010 that you created a synthetic cell, JCVI-syn1.0?

At the time, I said that the synthetic cell would give us a better understanding of the fundamentals of biology and how life works, help develop techniques and tools for vaccine and pharmaceutical development, enable development of biofuels and biochemicals, and help to create clean water, sources of food, textiles, bioremediation. Three years on that vision is being borne out.

Your book contains a dramatic account of the slog and setbacks that led to the creation of this first synthetic organism. What was your lowest point?

When we started out creating JCVI-syn1.0 in the lab, we had selected M. genitalium because of its extremely small genome. That decision we would come to really regret: in the laboratory, M. genitalium grows slowly. So whereas E. coli divides into daughter cells every 20 minutes, M. genitalium requires 12 hours to make a copy of itself. With logarithmic growth, it’s the difference between having an experimental result in 24 hours versus several weeks. It felt like we were working really hard to get nowhere at all. I changed the target to the M. mycoides genome. It’s twice as large as that of genitalium, but it grows much faster. In the end, that move made all the difference.
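
The gap Venter describes is easy to put numbers on. Growing a usable culture of around a billion cells from a single cell takes about 30 doublings, which is a matter of hours for E. coli but weeks for M. genitalium (illustrative arithmetic only):

```python
# Rough doubling-time arithmetic behind "24 hours versus several weeks".
import math

doublings = math.ceil(math.log2(1e9))          # ~30 doublings to reach ~1e9 cells

e_coli_hours = doublings * 20 / 60             # 20-minute doubling time
m_genitalium_days = doublings * 12 / 24        # 12-hour doubling time

print(f"E. coli: about {e_coli_hours:.0f} hours")            # ~10 hours
print(f"M. genitalium: about {m_genitalium_days:.0f} days")  # ~15 days
```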

Some of your peers were blown away by the synthetic cell; others called it a technical tour de force. But there were also those who were underwhelmed because it was not “life from scratch.”

They haven’t thought much about what they are actually trying to say when they talk about “life from scratch.” How about baking a cake “from scratch”? You could buy one and then ice it at home. Or buy a cake mix, to which you add only eggs, water and oil. Or combining the individual ingredients, such as baking powder, sugar, salt, eggs, milk, shortening and so on. But I doubt that anyone would mean formulating his own baking powder by combining sodium, hydrogen, carbon, and oxygen to produce sodium bicarbonate, or producing homemade corn starch. If we apply the same strictures to creating life “from scratch,” it could mean producing all the necessary molecules, proteins, lipids, organelles, DNA, and so forth from basic chemicals or perhaps even from the fundamental elements carbon, hydrogen, oxygen, nitrogen, phosphate, iron, and so on.

There’s a parallel effort to create virtual life, which you go into in the book. How sophisticated are these models of cells in silico?

In the past year we have really seen how virtual cells can help us understand the real things. This work dates back to 1996 when Masaru Tomita and his students at the Laboratory for Bioinformatics at Keio started investigating the molecular biology of Mycoplasma genitalium—which we had sequenced in 1995—and by the end of that year had established the E-Cell Project. The most recent work on Mycoplasma genitalium has been done in America, by the systems biologist Markus W Covert, at Stanford University. His team used our genome data to create a virtual version of the bacterium that came remarkably close to its real-life counterpart.

You’ve discussed the ethics of synthetic organisms for a long time—where is the ethical argument today?

The Janus-like nature of innovation—its responsible use and so on—was evident at the very birth of human ingenuity, when humankind first discovered how to make fire on demand. (Do I use it burn down a rival’s settlement, or to keep warm?) Every few months, another meeting is held to discuss how powerful technology cuts both ways. It is crucial that we invest in underpinning technologies, science, education, and policy in order to ensure the safe and efficient development of synthetic biology. Opportunities for public debate and discussion on this topic must be sponsored, and the lay public must engage. But it is important not to lose sight of the amazing opportunities that this research presents. Synthetic biology can help address key challenges facing the planet and its population. Research in synthetic biology may lead to new things such as programmed cells that self-assemble at the sites of disease to repair damage.

What worries you more: bioterror or bioerror?

I am probably more concerned about an accidental slip. Synthetic biology increasingly relies on the skills of scientists who have little experience in biology, such as mathematicians and electrical engineers. The democratization of knowledge, the rise of “open-source biology,” and the availability of kitchen-sink versions of key laboratory tools, such as the DNA-copying method PCR, make it easier for anyone — including those outside the usual networks of government, commercial, and university laboratories and the culture of responsible training and biosecurity — to play with the software of life.

Following the precautionary principle, should we abandon synthetic biology?

My greatest fear is not the abuse of technology, but that we will not use it at all, and turn our backs to an amazing opportunity at a time when we are over-populating our planet and changing environments forever.

You’re bullish about where this is headed.

I am—and a lot of that comes from seeing the next generation of synthetic biologists. We can get a view of what the future holds from a series of contests that culminate in a yearly event in Cambridge, Massachusetts—the International Genetically Engineered Machine (iGEM) competition. High-school and college students shuffle a standard set of DNA subroutines into something new. It gives me hope for the future.

You’ve been working to convert DNA into a digital signal that can be transmitted to a unit which then rebuilds an organism.

At Synthetic Genomics, Inc [which Venter founded with his long-term collaborator, the Nobel laureate Ham Smith], we can feed digital DNA code into a program that works out how to re-synthesize the sequence in the lab. This automates the process of designing overlapping pieces of DNA base-pairs, called oligonucleotides, adding watermarks, and then feeding them into the synthesizer. The synthesizer makes the oligonucleotides, which are pooled and assembled using what we call our Gibson-assembly robot (named after my talented colleague Dan Gibson). NASA has funded us to carry out experiments at its test site in the Mojave Desert. We will be using the JCVI mobile lab, which is equipped with soil-sampling, DNA-isolation and DNA sequencing equipment, to test the steps for autonomously isolating microbes from soil, sequencing their DNA and then transmitting the information to the cloud with what we call a “digitized-life-sending unit”. The receiving unit, where the transmitted DNA information can be downloaded and reproduced anew, has a number of names at present, including “digital biological converter,” “biological teleporter,” and—the preference of former US Wired editor-in-chief and CEO of 3D Robotics, Chris Anderson—”life replicator”.
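
To make the digital-to-biological step less abstract, here is a toy sketch of one piece of it: cutting a digital DNA string into overlapping synthesis fragments (“oligos”) whose shared ends let an assembly method such as Gibson assembly stitch them back together. Real design software also balances melting temperatures, avoids repeats, and adds watermarks; none of that is attempted here, and the sequence is made up.

```python
# Split a DNA string into overlapping oligos; each fragment shares `overlap`
# bases with the next, which is what assembly methods rely on.
def split_into_oligos(seq: str, length: int = 60, overlap: int = 20):
    step = length - overlap
    return [seq[start:start + length] for start in range(0, len(seq) - overlap, step)]

demo_sequence = "ATGGCCATTGTAATGGGCCGCTGAAAGGGTGCCCGATAG" * 3   # made-up sequence
for i, oligo in enumerate(split_into_oligos(demo_sequence)):
    print(i, oligo)
```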

Read the entire article here.

Image: J Craig Venter. Courtesy of Wikipedia.

The Large Hadron Collider is So Yesterday

CERN’s Large Hadron Collider (LHC) smashed countless particles into one another to reveal the Higgs boson. A great achievement for all concerned. Yet what of the big questions of physics that still remain, and how will we find the answers?

From Wired:

The current era of particle physics is over. When scientists at CERN announced last July that they had found the Higgs boson — which is responsible for giving all other particles their mass — they uncovered the final missing piece in the framework that accounts for the interactions of all known particles and forces, a theory known as the Standard Model.

And that’s a good thing, right? Maybe not.

The prized Higgs particle, physicists assumed, would help steer them toward better theories, ones that fix the problems known to plague the Standard Model. Instead, it has thrown the field into a confusing situation.

“We’re sitting on a puzzle that is difficult to explain,” said particle physicist Maria Spiropulu of Caltech, who works on one of the LHC’s main Higgs-finding experiments, CMS.

It may sound strange, but physicists were hoping, maybe even expecting, that the Higgs would not turn out to be as they predicted. At the very least, scientists hoped the properties of the Higgs would be different enough from those predicted under the Standard Model that they could show researchers how to build new models. But the Higgs’ mass proved stubbornly normal, almost exactly in the place the Standard Model said it would be.

To make matters worse, scientists had hoped to find evidence for other strange particles. These could have pointed in the direction of theories beyond the Standard Model, such as the current favorite, supersymmetry, which posits the existence of a heavy doppelganger to all the known subatomic bits like electrons, quarks, and photons.

Instead, they were disappointed by being right. So how do we get out of this mess? More data!

Over the next few years, experimentalists will be churning out new results, which may be able to answer questions about dark matter, the properties of neutrinos, the nature of the Higgs, and perhaps what the next era of physics will look like. Here we take a look at the experiments that you should be paying attention to. These are the ones scientists are the most excited about because they might just form the next cracks in modern physics.

ATLAS and CMS
The Large Hadron Collider isn’t smashing protons right now. Instead, engineers are installing upgrades to help it search at even higher energies. The machine may be closed for business until 2015, but the massive amount of data it has already collected is still wide open. The two main Higgs-searching experiments, ATLAS and CMS, could have plenty of surprises in store.

“We looked for the low-hanging fruit,” said particle physicist David Miller of the University of Chicago, who works on ATLAS. “All that we found was the Higgs, and now we’re going back for the harder stuff.”

What kind of other stuff might be lurking in the data? Nobody knows for sure but the collaborations will spend the next two years combing through the data they collected in 2011 and 2012, when the Higgs was found. Scientists are hoping to see hints of other, more exotic particles, such as those predicted under a theory known as supersymmetry. They will also start to understand the Higgs better.

See, scientists don’t have some sort of red bell that goes “ding” every time their detector finds a Higgs boson. In fact, ATLAS and CMS can’t actually see the Higgs at all. What they look for instead are the different particles that the Higgs decays into. The easiest-to-detect channels include when the Higgs decays to things like a quark and an anti-quark or two photons. What scientists are now trying to find out is exactly what percent of the time it decays to various different particle combinations, which will help them further pin down its properties.

It’s also possible that, with careful analysis, physicists would add up the percentages for each of the different decays and notice that they haven’t quite gotten to 100. There might be just a tiny remainder, indicating that the Higgs is decaying to particles that the detectors can’t see.
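
The bookkeeping behind that idea is simple: if the measured visible decay fractions do not add up to one, the shortfall is the candidate invisible fraction. The numbers below are placeholders for illustration, not measured values.

```python
# If visible branching fractions sum to less than 1, the remainder is the
# inferred "invisible" fraction. Placeholder numbers only.
visible_fractions = {
    "b quark pairs": 0.57,
    "W boson pairs": 0.21,
    "gluon pairs":   0.09,
    "tau pairs":     0.06,
    "Z boson pairs": 0.03,
    "photon pairs":  0.002,
}
invisible_fraction = 1.0 - sum(visible_fractions.values())
print(f"inferred invisible fraction: {invisible_fraction:.3f}")
```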

“We call that invisible decay,” said particle physicist Maria Spiropulu. The reason that might be exciting is that the Higgs could be turning into something really strange, like a dark matter particle.

We know from cosmological observations that dark matter has mass and, because the Higgs gives rise to mass, it probably has to somehow interact with dark matter. So the LHC data could tell scientists just how strong the connection is between the Higgs and dark matter. If found, these invisible decays could open up a whole new world of exploration.

“It’s fashionable to call it the ‘dark matter portal’ right now,” said Spiropulu.

NOvA and T2K
Neutrinos are oddballs in the Standard Model. They are tiny, nearly massless, and barely interact with the other members of the subatomic zoo. Historically, they have been the subject of many surprising results, and the future will probably reveal them to be even stranger. Physicists are currently trying to figure out some of their properties, which remain open questions.

“A very nice feature of these open questions is we know they all have answers that are accessible in the next round of experiments,” said physicist Maury Goodman of Argonne National Laboratory.

The US-based NOvA experiment will hopefully pin down some neutrino characteristics, in particular their masses. There are three types of neutrinos: electron, muon, and tau. We know that they have a very tiny mass — at least 10 billion times smaller than an electron — but we don’t know exactly what it is nor which of the three different types is heaviest or lightest.

NOvA will attempt to figure out this mass hierarchy by shooting a beam of neutrinos from Fermilab, near Chicago, to a detector 810 kilometers away in Ash River, Minnesota. A similar experiment in Japan, called T2K, is sending neutrinos across 295 kilometers. As they pass through the Earth, neutrinos oscillate between their three different types. By comparing how the neutrinos look when they are first shot out with how they appear at the distant detector, NOvA and T2K will be able to determine their properties with high precision.
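
For the curious, the comparison between the near and far ends of the beam is usually summarised with the standard two-flavour oscillation formula, P = sin^2(2*theta) * sin^2(1.27 * delta_m^2 * L / E), with delta_m^2 in eV^2, the baseline L in kilometres and the beam energy E in GeV. The parameter values below are only indicative; pinning them down precisely is exactly what these experiments are for.

```python
# Two-flavour oscillation probability in the standard approximation.
import math

def oscillation_probability(sin2_2theta, delta_m2_ev2, baseline_km, energy_gev):
    return sin2_2theta * math.sin(1.27 * delta_m2_ev2 * baseline_km / energy_gev) ** 2

# Roughly NOvA-like and T2K-like baselines and beam energies (assumed values).
print(oscillation_probability(0.09, 2.4e-3, 810, 2.0))   # NOvA: 810 km, ~2 GeV
print(oscillation_probability(0.09, 2.4e-3, 295, 0.6))   # T2K: 295 km, ~0.6 GeV
```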

T2K has been running for a couple of years, while NOvA is expected to begin taking data in 2014 and will run for six years. Scientists hope that the two experiments will help answer some of the last remaining questions about neutrinos.

Read the entire article here.

Image: A simulation of the decay of a Higgs boson in a linear collider detector. Courtesy of Norman Graf / CERN.

Chromosomal Chronometer

Researchers have found possible evidence of a DNA mechanism that keeps track of age. It is too early to tell whether the changes over time in specific elements of our chromosomes cause aging or are a consequence of it. Yet this is a tantalizing discovery that bodes well for a better understanding of the genetic and biological systems that underlie the aging process.

From the Guardian:

A US scientist has discovered an internal body clock based on DNA that measures the biological age of our tissues and organs.

The clock shows that while many healthy tissues age at the same rate as the body as a whole, some of them age much faster or slower. The age of diseased organs varied hugely, with some many tens of years “older” than healthy tissue in the same person, according to the clock.

Researchers say that unravelling the mechanisms behind the clock will help them understand the ageing process and hopefully lead to drugs and other interventions that slow it down.

Therapies that counteract natural ageing are attracting huge interest from scientists because they target the single most important risk factor for scores of incurable diseases that strike in old age.

“Ultimately, it would be very exciting to develop therapy interventions to reset the clock and hopefully keep us young,” said Steve Horvath, professor of genetics and biostatistics at the University of California in Los Angeles.

Horvath looked at the DNA of nearly 8,000 samples of 51 different healthy and cancerous cells and tissues. Specifically, he looked at how methylation, a natural process that chemically modifies DNA, varied with age.

Horvath found that the methylation of 353 DNA markers varied consistently with age and could be used as a biological clock. The clock ticked fastest in the years up to around age 20, then slowed down to a steadier rate. Whether the DNA changes cause ageing or are caused by ageing is an unknown that scientists are now keen to work out.
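
In outline, a methylation clock of this kind is a calibrated weighted sum over a fixed panel of markers: measure the methylation fraction at each of the 353 sites and combine them with fitted weights to get a predicted age. The weights and values in this sketch are invented, and Horvath’s published clock also applies a calibration transform that is omitted here.

```python
# Schematic methylation clock: predicted age as a weighted sum over a fixed
# panel of CpG markers. All weights and sample values below are invented.
import random

NUM_MARKERS = 353                                    # size of Horvath's panel
random.seed(0)
weights = [random.uniform(-10, 10) for _ in range(NUM_MARKERS)]   # hypothetical
intercept = 40.0                                                   # hypothetical

def predict_age(methylation_fractions):
    """methylation_fractions: one value between 0 and 1 per marker."""
    return intercept + sum(w * m for w, m in zip(weights, methylation_fractions))

sample = [random.uniform(0.0, 1.0) for _ in range(NUM_MARKERS)]   # fake sample
print(f"estimated biological age: {predict_age(sample):.1f} years")
```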

“Does this relate to something that keeps track of age, or is a consequence of age? I really don’t know,” Horvath told the Guardian. “The development of grey hair is a marker of ageing, but nobody would say it causes ageing,” he said.

The clock has already revealed some intriguing results. Tests on healthy heart tissue showed that its biological age – how worn out it appears to be – was around nine years younger than expected. Female breast tissue aged faster than the rest of the body, on average appearing two years older.

Diseased tissues also aged at different rates, with cancers speeding up the clock by an average of 36 years. Some brain cancer tissues taken from children had a biological age of more than 80 years.

“Female breast tissue, even healthy tissue, seems to be older than other tissues of the human body. That’s interesting in the light that breast cancer is the most common cancer in women. Also, age is one of the primary risk factors of cancer, so these types of results could explain why cancer of the breast is so common,” Horvath said.

Healthy tissue surrounding a breast tumour was on average 12 years older than the rest of the woman’s body, the scientist’s tests revealed.

Writing in the journal Genome Biology, Horvath showed that the biological clock was reset to zero when cells plucked from an adult were reprogrammed back to a stem-cell-like state. The process for converting adult cells into stem cells, which can grow into any tissue in the body, won the Nobel prize in 2012 for Sir John Gurdon at Cambridge University and Shinya Yamanaka at Kyoto University.

“It provides a proof of concept that one can reset the clock,” said Horvath. The scientist now wants to run tests to see how neurodegenerative and infectious diseases affect, or are affected by, the biological clock.

Read the entire article here.

Image: Artist rendition of DNA fragment. Courtesy of Zoonar GmbH/Alamy.

Left Brain, Right Brain or Top Brain, Bottom Brain?

Are you analytical and logical? If so, you are likely to be labeled “left-brained”. On the other hand, if you are emotional and creative, you are more likely to be labeled “right-brained”. And so the popular narrative of brain function continues. But this generalized distinction is a myth. Our brains’ hemispheres do specialize, but not in such an overarching way. Recent research points to another distinction: top brain and bottom brain.

From WSJ:

Who hasn’t heard that people are either left-brained or right-brained—either analytical and logical or artistic and intuitive, based on the relative “strengths” of the brain’s two hemispheres? How often do we hear someone remark about thinking with one side or the other?

A flourishing industry of books, videos and self-help programs has been built on this dichotomy. You can purportedly “diagnose” your brain, “motivate” one or both sides, indulge in “essence therapy” to “restore balance” and much more. Everyone from babies to elders supposedly can benefit. The left brain/right brain difference seems to be a natural law.

Except that it isn’t. The popular left/right story has no solid basis in science. The brain doesn’t work one part at a time, but rather as a single interactive system, with all parts contributing in concert, as neuroscientists have long known. The left brain/right brain story may be the mother of all urban legends: It sounds good and seems to make sense—but just isn’t true.

The origins of this myth lie in experimental surgery on some very sick epileptics a half-century ago, conducted under the direction of Roger Sperry, a renowned neuroscientist at the California Institute of Technology. Seeking relief for their intractable epilepsy, and encouraged by Sperry’s experimental work with animals, 16 patients allowed the Caltech team to cut the corpus callosum, the massive bundle of nerve fibers that connects the two sides of the brain. The patients’ suffering was alleviated, and Sperry’s postoperative studies of these volunteers confirmed that the two halves do, indeed, have distinct cognitive capabilities.

But these capabilities are not the stuff of popular narrative: They reflect very specific differences in function—such as attending to overall shape versus details during perception—not sweeping distinctions such as being “logical” versus “intuitive.” This important fine print got buried in the vast mainstream publicity that Sperry’s research generated.

There is a better way to understand the functioning of the brain, based on another, ordinarily overlooked anatomical division—between its top and bottom parts. We call this approach “the theory of cognitive modes.” Built on decades of unimpeachable research that has largely remained inside scientific circles, it offers a new way of viewing thought and behavior that may help us understand the actions of people as diverse as Oprah Winfrey, the Dalai Lama, Tiger Woods and Elizabeth Taylor.

Our theory has emerged from the field of neuropsychology, the study of higher cognitive functioning—thoughts, wishes, hopes, desires and all other aspects of mental life. Higher cognitive functioning is seated in the cerebral cortex, the rind-like outer layer of the brain that consists of four lobes. Illustrations of this wrinkled outer brain regularly show a top-down view of the two hemispheres, which are connected by thick bundles of neuronal tissue, notably the corpus callosum, an impressive structure consisting of some 250 million nerve fibers.

If you move the view to the side, however, you can see the top and bottom parts of the brain, demarcated largely by the Sylvian fissure, the crease-like structure named for the 17th-century Dutch physician who first described it. The top brain comprises the entire parietal lobe and the top (and larger) portion of the frontal lobe. The bottom comprises the smaller remainder of the frontal lobe and all of the occipital and temporal lobes.

Our theory’s roots lie in a landmark report published in 1982 by Mortimer Mishkin and Leslie G. Ungerleider of the National Institute of Mental Health. Their trailblazing research examined rhesus monkeys, which have brains that process visual information in much the same way as the human brain. Hundreds of subsequent studies in several fields have helped to shape our theory, by researchers such as Gregoire Borst of Paris Descartes University, Martha Farah of the University of Pennsylvania, Patricia Goldman-Rakic of Yale University, Melvin Goodale of the University of Western Ontario and Maria Kozhevnikov of the National University of Singapore.

This research reveals that the top-brain system uses information about the surrounding environment (in combination with other sorts of information, such as emotional reactions and the need for food or drink) to figure out which goals to try to achieve. It actively formulates plans, generates expectations about what should happen when a plan is executed and then, as the plan is being carried out, compares what is happening with what was expected, adjusting the plan accordingly.

The bottom-brain system organizes signals from the senses, simultaneously comparing what is being perceived with all the information previously stored in memory. It then uses the results of such comparisons to classify and interpret the object or event, allowing us to confer meaning on the world.

The top- and bottom-brain systems always work together, just as the hemispheres always do. Our brains are not engaged in some sort of constant cerebral tug of war, with one part seeking dominance over another. (What a poor evolutionary strategy that would have been!) Rather, they can be likened roughly to the parts of a bicycle: the frame, seat, wheels, handlebars, pedals, gears, brakes and chain that work together to provide transportation.

But here’s the key to our theory: Although the top and bottom parts of the brain are always used during all of our waking lives, people do not rely on them to an equal degree. To extend the bicycle analogy, not everyone rides a bike the same way. Some may meander, others may race.

Read the entire article here.

Image: Left-brain, right-brain cartoon. Courtesy of HuffingtonPost.

Why Sleep?

There are more theories on why we sleep than there are cable channels in the U.S. But that hasn’t prevented researchers from proposing yet another one — it’s all about flushing waste.

From the Guardian:

Scientists in the US claim to have a new explanation for why we sleep: in the hours spent slumbering, a rubbish disposal service swings into action that cleans up waste in the brain.

Through a series of experiments on mice, the researchers showed that during sleep, cerebral spinal fluid is pumped around the brain, and flushes out waste products like a biological dishwasher.

The process helps to remove the molecular detritus that brain cells churn out as part of their natural activity, along with toxic proteins that can lead to dementia when they build up in the brain, the researchers say.

Maiken Nedergaard, who led the study at the University of Rochester, said the discovery might explain why sleep is crucial for all living organisms. “I think we have discovered why we sleep,” Nedergaard said. “We sleep to clean our brains.”

Writing in the journal Science, Nedergaard describes how brain cells in mice shrank when they slept, making the space between them on average 60% greater. This made the cerebral spinal fluid in the animals’ brains flow ten times faster than when the mice were awake.

The scientists then checked how well mice cleared toxins from their brains by injecting traces of proteins that are implicated in Alzheimer’s disease. These amyloid beta proteins were removed faster from the brains of sleeping mice, they found.

Nedergaard believes the clean-up process is more active during sleep because it takes too much energy to pump fluid around the brain when awake. “You can think of it like having a house party. You can either entertain the guests or clean up the house, but you can’t really do both at the same time,” she said in a statement.

According to the scientist, the cerebral spinal fluid flushes the brain’s waste products into what she calls the “glymphatic system” which carries it down through the body and ultimately to the liver where it is broken down.

Other researchers were sceptical of the study, and said it was too early to know if the process goes to work in humans, and how to gauge the importance of the mechanism. “It’s very attractive, but I don’t think it’s the main function of sleep,” said Raphaelle Winsky-Sommerer, a specialist on sleep and circadian rhythms at Surrey University. “Sleep is related to everything: your metabolism, your physiology, your digestion, everything.” She said she would like to see other experiments that show a build up of waste in the brains of sleep-deprived people, and a reduction of that waste when they catch up on sleep.

Vladyslav Vyazovskiy, another sleep expert at Surrey University, was also sceptical. “I’m not fully convinced. Some of the effects are so striking they are hard to believe. I would like to see this work replicated independently before it can be taken seriously,” he said.

Jim Horne, professor emeritus and director of the sleep research centre at Loughborough University, cautioned that what happened in the fairly simple mouse brain might be very different to what happened in the more complex human brain. “Sleep in humans has evolved far more sophisticated functions for our cortex than that for the mouse, even though the present findings may well be true for us,” he said.

But Nedergaard believes she will find the same waste disposal system at work in humans. The work, she claims, could pave the way for medicines that slow the onset of dementias caused by the build-up of waste in the brain, and even help those who go without enough sleep. “It may be that we can reduce the need at least, because it’s so annoying to waste so much time sleeping,” she said.

Read the entire article here.

Image courtesy of Telegraph.

Me, Myself and I

It’s common sense — the frequency with which you use the personal pronoun “I” says a lot about you. Now there’s some great research that backs this up, but not in the way you might expect.

From WSJ:

You probably don’t think about how often you say the word “I.”

You should. Researchers say that your usage of the pronoun says more about you than you may realize.

Surprising new research from the University of Texas suggests that people who often say “I” are less powerful and less sure of themselves than those who limit their use of the word. Frequent “I” users subconsciously believe they are subordinate to the person to whom they are talking.

Pronouns, in general, tell us a lot about what people are paying attention to, says James W. Pennebaker, chair of the psychology department at the University of Texas at Austin and an author on the study. Pronouns signal where someone’s internal focus is pointing, says Dr. Pennebaker, who has pioneered this line of research. Often, people using “I” are being self-reflective. But they may also be self-conscious or insecure, in physical or emotional pain, or simply trying to please.

Dr. Pennebaker and colleagues conducted five studies of the way relative rank is revealed by the use of pronouns. The research was published last month in the Journal of Language and Social Psychology. In each experiment, people deemed to have higher status used “I” less.

The findings go against the common belief that people who say “I” a lot are full of themselves, maybe even narcissists.

“I” is more powerful than you may realize. It drives perceptions in a conversation so much so that marriage therapists have long held that people should use “I” instead of “you” during a confrontation with a partner or when discussing something emotional. (“I feel unheard.” Not: “You never listen.”) The word “I” is considered less accusatory.

“There is a misconception that people who are confident, have power, have high-status tend to use ‘I’ more than people who are low status,” says Dr. Pennebaker, author of “The Secret Life of Pronouns.” “That is completely wrong. The high-status person is looking out at the world and the low-status person is looking at himself.”

So, how often should you use “I”? More—to sound humble (and not critical when speaking to your spouse)? Or less—to come across as more assured and authoritative?

The answer is “mostly more,” says Dr. Pennebaker. (Although he does say you should try and say it at the same rate as your spouse or partner, to keep the power balance in the relationship.)

In the first language-analysis study Dr. Pennebaker led, business-school students were divided into 41 four-person, mixed-sex groups and asked to work as a team to improve customer service for a fictitious company. One person in each group was randomly assigned to be the leader. The result: The leaders used “I” in 4.5% of their words. Non-leaders used it in 5.6% of theirs. (The leaders also used “we” more than followers did.)
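As a rough illustration of the kind of measurement behind those percentages (not the researchers’ actual text-analysis pipeline), counting “I” as a share of all words in a transcript is straightforward:

```python
import re

def i_usage_percent(text):
    """Return the percentage of words in the text that are the pronoun 'I'."""
    words = re.findall(r"[A-Za-z']+", text)
    if not words:
        return 0.0
    return 100.0 * sum(1 for w in words if w.lower() == "i") / len(words)

transcript = "I think we should start. I can draft the plan if you want."
print(f"'I' makes up {i_usage_percent(transcript):.1f}% of the words")
```

In the study, a gap of only about one percentage point (4.5% versus 5.6%) was enough to separate leaders from non-leaders.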

In the second study, 112 psychology students were assigned to same-sex groups of two. The pairs worked to solve a series of complex problems. All interaction took place online. No one was assigned to a leadership role, but participants were asked at the end of the experiment who they thought had power and status. Researchers found that the higher the person’s perceived power, the less he or she used “I.”

In study three, 50 pairs of people chatted informally face-to-face, asking questions to get to know one another, as if at a cocktail party. When asked which person had more status or power, they tended to agree—and that person had used “I” less.

Study four looked at emails. Nine people turned over their incoming and outgoing emails with about 15 other people. They rated how much status they had in relation to each correspondent. In each exchange, the person with the higher status used “I” less.

The fifth study was the most unusual. Researchers looked at email communication that the U.S. government had collected (and translated) from the Iraqi military, made public for a period of time as the Iraqi Perspectives Project. They randomly selected 40 correspondences. In each case, the person with higher military rank used “I” less.

People curb their use of “I” subconsciously, Dr. Pennebaker says. “If I am the high-status person, I am thinking of what you need to do. If I am the low-status person, I am more humble and am thinking, ‘I should be doing this.’”

Dr. Pennebaker has found heavy “I” users across many people: Women (who are typically more reflective than men), people who are more at ease with personal topics, younger people, caring people as well as anxious and depressed people. (Surprisingly, he says, narcissists do not use “I” more than others, according to a meta-analysis of a large number of studies.)

And who avoids using “I,” other than the high-powered? People who are hiding the truth. Avoiding the first-person pronoun is distancing.

Read the entire article here.

Mr. Higgs

A fascinating profile of Peter Higgs, the theoretical physicist whose name has become associated with the most significant scientific finding of recent times.

From the Guardian:

For scientists of a certain calibre, these early days of October can bring on a bad case of the jitters. The nominations are in. The reports compiled. All that remains is for the Nobel committees to cast their final votes. There are no sure bets on who will win the most prestigious prize in science this year, but there are expectations aplenty. Speak to particle physicists, for example, and one name comes up more than any other. Top of their wishlist of winners – the awards are announced next Tuesday – is the self-deprecating British octogenarian, Peter Higgs.

Higgs, 84, is no household name, but he is closer to being one than any Nobel physics laureate since Richard Feynman, the Manhattan Project scientist who accepted the award reluctantly in 1965. But while Feynman was a showman who adored attention, Higgs is happy when eclipsed by the particle that bears his name, the elusive boson that scientists at Cern’s Large Hadron Collider triumphantly discovered last year.

“He’s modest and actually almost to a fault,” said Alan Walker, a fellow physicist at Edinburgh University, who sat next to Higgs at Cern when scientists revealed they had found the particle.

“You meet many physicists who will tell you how good they are. Peter doesn’t do that.”

Higgs, now professor emeritus at Edinburgh, made his breakthrough in 1964, the year before Feynman won his Nobel. It was an era when the tools of the trade were pencil and paper. He outlined what came to be known as the Higgs mechanism, an explanation for how elementary particles, which make up all that is around us, gained their masses in the earliest moments after the big bang. Before 1964, the question of why the simplest particles weighed anything at all was met with an embarrassed but honest shrug.

Higgs plays down his role in developing the idea, but there is no dismissing the importance of the theory itself. “He didn’t produce a great deal, but what he did produce is actually quite profound and is one of the keystones of what we now understand as the fundamental building blocks of nature,” Walker said.

Higgs was born in Newcastle in 1929. His father, a BBC sound engineer, brought the family south to Birmingham and then onwards to Bristol. There, Higgs enrolled at what is now Cotham School. He got off to a bad start. One of the first things he did was tumble into a crater left by a second world war bomb in the playground and fracture his left arm. But he was a brilliant student. He won prizes in a haul of subjects – although not, as it happens, in physics.

To the teenage Higgs, physics lacked excitement. The best teachers were off at war, and that no doubt contributed to his attitude. It changed through a chance encounter. While standing around at the back of morning assembly Higgs noticed a name that appeared more than once on the school’s honours board. Higgs wondered who PAM Dirac was and read up on the former pupil. He learned that Paul Dirac was a founding father of quantum theory, and the closest Britain had to an Einstein. Through Dirac, Higgs came to relish the arcane world of theoretical physics.

Higgs found that he was not cut out for experiments, a fact driven home by a series of sometimes dramatic mishaps, but at university he proved himself a formidable theorist. He was the first to sit a six-hour theory exam at King’s College London, and for the want of a better idea, his tutors posed him a question that had recently been solved in a leading physics journal.

“Peter sailed ahead, took it seriously, thought about it, and in that six-hour time scale had managed to solve it, had written it up and presented it,” said Michael Fisher, a friend from King’s.

But getting the right answer was only the start. “In the long run it turned out, when it was actually graded, that Peter had done a better paper than the original they took from the literature.”

Higgs’s great discovery came at Edinburgh University, where he was considered an outsider for plugging away at ideas that many physicists had abandoned. But his doggedness paid off.

At the time an argument was raging in the field over a way that particles might gain their masses. The theory in question was clearly wrong, but Higgs saw why and how to fix it. He published a short note in September 1964 and swiftly wrote a more expansive follow-up paper.

To his dismay the article was rejected, ironically by an editor at Cern. Indignant at the decision, Higgs added two paragraphs to the paper and published it in a rival US journal instead. In the penultimate sentence was the first mention of what became known as the Higgs boson.

At first, there was plenty of resistance to Higgs’s theory. Before giving a talk at Harvard in 1966, a senior physicist, the late Sidney Coleman, told his class some idiot was coming to see them. “And you’re going to tear him to shreds.” Higgs stuck to his guns. Eventually he won them over.

Ken Peach, an Oxford physics professor who worked with Higgs in Edinburgh, said the determination was classic Peter: “There is an inner toughness, some steely resolve, which is not quite immediately apparent,” he said.

It was on display again when Stephen Hawking suggested the Higgs boson would never be found. Higgs hit back, saying that Hawking’s celebrity status meant he got away with pronouncements that others would not.

Higgs was at one time deeply involved in the Campaign for Nuclear Disarmament, but left when the organisation extended its protests to nuclear power. He felt CND had confused controlled and uncontrolled release of nuclear energy. He also joined Greenpeace but quit that organisation, too, when he felt its ideologies had started to trump its science.

“The one thing you get from Peter is that he is his own person,” said Walker.

Higgs was not the only scientist to come up with the theory of particle masses in 1964. François Englert and Robert Brout at the Free University in Brussels beat him into print by two weeks, but failed to mention the crucial new particle that scientists would need to prove the theory right. Three others, Gerry Guralnik, Dick Hagen and Tom Kibble, had worked out the theory too, and published a month later.

Higgs is not comfortable taking all the credit for the work, and goes to great pains to list all the others whose work he built on. But in the community he is revered. When Higgs walked into the Cern auditorium last year to hear scientists tell the world about the discovery, he was welcomed with a standing ovation. He nodded off during the talks, but was awake at the end, when the crowd erupted as the significance of the achievement became clear. At that moment, he was caught on camera reaching for a handkerchief and dabbing his eyes. “He was tearful,” said Walker. “He was really deeply moved. I think he was absolutely surprised by the atmosphere of the room.”

Read the entire article here.

Image: Ken Currie, Portrait of Peter Higgs, 2008. Courtesy of Wikipedia.

Night Owls, Beware!

A new batch of research points to a higher incidence of depression in night owls than in early risers. Further studies will be required to determine a true causal link, but initial evidence seems to suggest that those who stay up late have structural differences in the brain leading to a form of chronic jet lag.

From Washington Post:

They say the early bird catches the worm, but night owls may be missing far more than just a tasty snack. Researchers have discovered evidence of structural brain differences that distinguish early risers from people who like to stay up late. The differences might help explain why night owls seem to be at greater risk of depression.

About 10 percent of people are morning people, or larks, and 20 percent are night owls, with the rest falling in between. Your status is called your chronotype.

Previous studies have suggested that night owls experience worse sleep, feel more tiredness during the day and consume greater amounts of tobacco and alcohol. This has prompted some to suggest that they are suffering from a form of chronic jet lag.

Jessica Rosenberg at RWTH Aachen University in Germany and colleagues used a technique called diffusion tensor imaging to scan the brains of 16 larks, 23 night owls and 20 people with intermediate chronotypes. They found a reduction in the integrity of night owls’ white matter — brain tissue largely made up of fatty insulating material that speeds up the transmission of nerve signals — in areas associated with depression.

“We think this could be caused by the fact that late chronotypes suffer from this permanent jet lag,” Rosenberg says, although she cautions that further studies are needed to confirm cause and effect.

Read the entire article here.

Image courtesy of Google search.

Water in Them Thar Hills

Curiosity, the latest rover to explore Mars, has found lots of water in the Martian soil. Now, it doesn’t run freely, but is chemically bound to other substances. Yet the large volume of H2O bodes well for future human exploration (and settlement).

From the Guardian:

Water has been discovered in the fine-grained soil on the surface of Mars, which could be a useful resource for future human missions to the red planet, according to measurements made by Nasa’s Curiosity rover.

Each cubic foot of Martian soil contains around two pints of liquid water, though the molecules are not freely accessible, but rather bound to other minerals in the soil.

The Curiosity rover has been on Mars since August 2012, landing in an area near the equator of the planet known as Gale Crater. Its target is to circle and climb Mount Sharp, which lies at the centre of the crater, a five-kilometre-high mountain of layered rock that will help scientists unravel the history of the planet.

On Thursday Nasa scientists published a series of five papers in the journal Science, which detail the experiments carried out by the various scientific instruments aboard Curiosity in its first four months on the martian surface. Though highlights from the year-long mission have been released at conferences and Nasa press conferences, these are the first set of formal, peer-reviewed results from the Curiosity mission.

“We tend to think of Mars as this dry place – to find water fairly easy to get out of the soil at the surface was exciting to me,” said Laurie Leshin, dean of science at Rensselaer Polytechnic Institute and lead author on the Science paper which confirmed the existence of water in the soil. “If you took about a cubic foot of the dirt and heated it up, you’d get a couple of pints of water out of that – a couple of water bottles’ worth that you would take to the gym.”

About 2% of the soil, by weight, was water. Curiosity made the measurement by scooping up a sample of the Martian dirt under its wheels, sieving it and dropping tiny samples into an oven in its belly, an instrument called Sample Analysis at Mars. “We heat [the soil] up to 835C and drive off all the volatiles and measure them,” said Leshin. “We have a very sensitive way to sniff those and we can detect the water and other things that are released.”
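Those two figures — roughly 2% water by weight and “a couple of pints” per cubic foot — are consistent with each other, as a quick back-of-the-envelope check shows. The soil bulk density of about 1.5 g/cm³ assumed here is an illustrative value, not one given in the article:

\[
1\ \mathrm{ft^3} \approx 28{,}300\ \mathrm{cm^3}, \qquad
28{,}300\ \mathrm{cm^3} \times 1.5\ \mathrm{g/cm^3} \approx 42\ \mathrm{kg\ of\ soil},
\]
\[
0.02 \times 42\ \mathrm{kg} \approx 0.85\ \mathrm{kg} \approx 0.85\ \mathrm{L} \approx 1.8\ \text{US pints}.
\]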

Aside from water, the heated soil released sulphur dioxide, carbon dioxide and oxygen as the various minerals within it were decomposed as they warmed up.

One of Curiosity’s main missions is to look for signs of habitability on Mars, places where life might once have existed. “The rocks and minerals are a record of the processes that have occurred and [Curiosity is] trying to figure out those environments that were around and to see if they were habitable,” said Peter Grindrod, a planetary scientist at University College London who was not involved in the analyses of Curiosity data.

Flowing water is thought to have once been abundant on the surface of Mars, but it has now all but disappeared. The only direct sources of water found so far have been as ice at the poles of the planet.

Read the entire article here.

Image: NASA’s Curiosity rover on the surface of Mars. Courtesy: Nasa/Getty Images

Biological Gears

We humans think we’re so smart. After all, we’ve invented, designed, built and continuously re-defined our surroundings. But if we look closely at nature’s wonderful inventions, we’ll find that nature more often than not beat us to it. Now biologists have found insects with working gears.

[tube]sNw5FwNd4GU[/tube]

From New Scientist:

For a disconcerting experience, consider how mechanical you are. Humans may be conscious beings with higher feelings, but really we’re just fancy machines with joints, motors, valves, and a whole lot of plumbing.

All animals are the same. Hundreds of gizmos have evolved in nature, many of which our engineers merely reinvented. Nature had rotating axles billions of years ago, in the shape of bacterial flagella. And weevil legs beat us to the screw-and-nut mechanism.

The insect Issus coleoptratus is another animal with an unexpected bit of machinery hidden in its body. Its larvae are the first animals known to have interlocking gears, just like in the gearbox of a car.

In high gear

I. coleoptratus is a type of planthopper – a group of insects known for their prodigious jumping. It takes off in just 2 milliseconds, and moves at 3.9 metres per second. “This is a phenomenal performance,” says Malcolm Burrows of the University of Cambridge. “How on earth do they do it?”
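Those two numbers alone convey the performance. Assuming a roughly constant acceleration over the 2-millisecond take-off (a simplification made here for illustration, not a figure from the article):

\[
a \approx \frac{\Delta v}{\Delta t} = \frac{3.9\ \mathrm{m/s}}{0.002\ \mathrm{s}} \approx 2000\ \mathrm{m/s^2} \approx 200\,g.
\]

An acceleration of roughly 200 times gravity over a couple of milliseconds also helps explain why the two hind legs need to be so tightly synchronised.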

Burrows first ran into the larvae of I. coleoptratus in a colleague’s garden. “We were poking around and there were these bugs, jumping around like crazy.” He took a closer look, and noticed that each larva had meshing gears connecting its two hind legs. The gears had been seen before, by a German biologist called K. Sander, but his 1957 paper isn’t even on the internet.

The bulb at the top of each hind leg has 10 to 12 teeth, each between 15 and 30 micrometres long. Effectively, each hind leg is topped by a biological cog, allowing the pair to interlock, and move in unison.

Working with Gregory Sutton of the University of Bristol, UK, Burrows filmed the gears at 5000 frames per second and confirmed that they mesh with each other (see video, top).

Great timing

The two hind legs moved within 30 microseconds of each other during a jump. Burrows and Sutton suspect that the gears evolved because they can synchronise the leg movements better and faster than neurons can.

Other animals have gears, but not gears that mesh, says Chris Lyal of the Natural History Museum in London. “When you look at [I. coleoptratus’s gears], you wonder, why can’t anything else do that?” he says.

The German study from 1957 claims that all 2000-odd planthoppers have gears. “I’ve looked at about half a dozen, and they all have them,” says Burrows. “I’d be hesitant to say no other animal has them. But they haven’t been described.”

Read the entire article here.

Video courtesy of New Scientist.

Above and Beyond

According to NASA, Voyager 1 officially left the protection of the solar system on or about August 25, 2012, and is now heading into interstellar space. It is the first and only human-made object to leave the solar system.

Perhaps, one day in the distant future real human voyagers — or their android cousins — will come across the little probe as it continues on its lonely journey.

From Space:

A spacecraft from Earth has left its cosmic backyard and taken its first steps in interstellar space.

After streaking through space for nearly 35 years, NASA’s robotic Voyager 1 probe finally left the solar system in August 2012, a study published today (Sept. 12) in the journal Science reports.

“Voyager has boldly gone where no probe has gone before, marking one of the most significant technological achievements in the annals of the history of science, and as it enters interstellar space, it adds a new chapter in human scientific dreams and endeavors,” NASA science chief John Grunsfeld said in a statement. “Perhaps some future deep-space explorers will catch up with Voyager, our first interstellar envoy, and reflect on how this intrepid spacecraft helped enable their future.”

A long and historic journey

Voyager 1 launched on Sept. 5, 1977, about two weeks after its twin, Voyager 2. Together, the two probes conducted a historic “grand tour” of the outer planets, giving scientists some of their first up-close looks at Jupiter, Saturn, Uranus, Neptune and the moons of these faraway worlds.

The duo completed its primary mission in 1989, and then kept on flying toward the edge of the heliosphere, the huge bubble of charged particles and magnetic fields that the sun puffs out around itself. Voyager 1 has now popped free of this bubble into the exotic and unexplored realm of interstellar space, scientists say.

They reached this historic conclusion with a little help from the sun. A powerful solar eruption caused electrons in Voyager 1’s location to vibrate significantly between April 9 and May 22 of this year. The probe’s plasma wave instrument detected these oscillations, and researchers used the measurements to figure out that Voyager 1’s surroundings contained about 1.3 electrons per cubic inch (0.08 electrons per cubic centimeter).

That’s far higher than the density observed in the outer regions of the heliosphere (roughly 0.03 electrons per cubic inch, or 0.002 electrons per cubic cm) and very much in line with the 1.6 electrons per cubic inch (0.10 electrons per cubic cm) or so expected in interstellar space.
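The conversion from oscillations to densities rests on the standard electron plasma frequency relation (textbook plasma physics rather than anything spelled out in the article):

\[
f_p \approx 8.98\ \mathrm{kHz} \times \sqrt{n_e\ [\mathrm{cm^{-3}}]}.
\]

So the interstellar-like density of about 0.08 electrons per cubic centimetre quoted above corresponds to oscillations near \(8.98 \times \sqrt{0.08} \approx 2.5\) kHz, while the heliospheric value of 0.002 per cubic centimetre would put them down near 0.4 kHz — a difference the plasma wave instrument can easily tell apart.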

“We literally jumped out of our seats when we saw these oscillations in our data — they showed us that the spacecraft was in an entirely new region, comparable to what was expected in interstellar space, and totally different than in the solar bubble,” study lead author Don Gurnett of the University of Iowa, the principal investigator of Voyager 1’s plasma wave instrument, said in a statement.

It may seem surprising that electron density is higher beyond the solar system than in its extreme outer reaches. Interstellar space is, indeed, emptier than the regions in Earth’s neighborhood, but the density inside the solar bubble drops off dramatically at great distances from the sun, researchers said.

Calculating a departure date

The study team wanted to know if Voyager 1 left the solar system sometime before April 2013, so they combed through some of the probe’s older data. They found a monthlong period of electron oscillations in October-November 2012 that translated to a density of 0.004 electrons per cubic inch (0.006 electrons per cubic cm).

Using these numbers and the amount of ground that Voyager 1 covers — about 325 million miles (520 million kilometers) per year — the researchers calculated that the spacecraft likely left the solar system in August 2012.

That time frame matches up well with several other important changes Voyager 1 observed. On Aug. 25, 2012, the probe recorded a 1,000-fold drop in the number of charged solar particles while also measuring a 9 percent increase in fast-moving galactic cosmic rays, which originate beyond the solar system.

“These results, and comparison with previous heliospheric radio measurements, strongly support the view that Voyager 1 crossed the heliopause into the interstellar plasma on or about Aug. 25, 2012,” Gurnett and his colleagues write in the new study.

At that point, Voyager 1 was about 11.25 billion miles (18.11 billion km) from the sun, or roughly 121 times the distance between Earth and the sun. The probe is now 11.66 billion miles (18.76 billion km) from the sun. (Voyager 2, which took a different route through the solar system, is currently 9.54 billion miles, or 15.35 billion km, from the sun.)

Read the entire article here.

Image: Voyager Gold Disk. Courtesy of Wikipedia.

Interstellar Winds of Change

First measured in the 1970s, the interstellar wind is far from a calm, consistent breeze. Rather, as new detailed measurements show, it’s a blustery, fickle gale.

From ars technica:

Interstellar space—the region between stars in our galaxy—is fairly empty. There are still enough atoms in that space to produce a measurable effect as the Sun orbits the galactic center, however. The flow of these atoms, known as the interstellar wind, provides a way to study interstellar gas, which moves independently of the Sun’s motion.

A new analysis of 40 years of data showed that the interstellar wind has changed direction and speed over time, demonstrating that the environment surrounding the Solar System changes measurably as well. Priscilla Frisch and colleagues compared the results from several spacecraft, both in Earth orbit and interplanetary probes. The different positions and times in which these instruments operated revealed that the interstellar wind has increased slightly in speed. Additional measurements revealed that the flow of atoms has shifted somewhere between 4.4 degrees and 9.2 degrees. Both these results indicate that the Sun is traveling through a changing environment, perhaps one shaped by turbulence in interstellar space.

The properties of the Solar System are dominated by the Sun’s gravity, magnetic field, and the flow of charged particles outward from its surface. However, a small number of electrically neutral particles—mostly light atoms—pass through the Solar System. These particles are part of the local interstellar cloud (LIC), a relatively hot region of space governed by its internal processes.

Neutral helium is the most useful product of the interstellar wind flowing through the Solar System. Helium is abundant, comprising roughly 25 percent of all interstellar atoms. In its electrically neutral form, helium is largely unaffected by magnetic fields, both from the Sun and within the LIC. The present study also considered neutral oxygen and nitrogen atoms, which are far less abundant but more massive, and therefore even less strongly jostled than helium.

When helium atoms flow through the Solar System, their paths are curved by the Sun’s gravity depending on how quickly they are moving. Slower atoms are more strongly affected than faster ones, so the effect is a cone of particle trajectories. The axis of that focusing cone is the dominant direction of the interstellar wind, while the width of the cone indicates how much variation in particle speeds is present, a measure of the speed and turbulence in the LIC.

The interstellar wind was first measured in the 1970s by missions such as the Mariner 10 (which flew by Venus and Mercury) from the United States and the Prognoz 6 satellite from the Soviet Union. More recently, the Ulysses spacecraft in solar orbit, the MESSENGER probe studying Mercury, and the IBEX (Interstellar Boundary EXplorer) mission collected data from several perspectives within the Solar System.

Read the entire article here.

Image: Local interstellar cloud. Courtesy of NASA.

Growing a Brain Outside of the Body

‘Tis the stuff of science fiction. And, it’s also quite real and happening in a lab near you.

From Technology Review:

Scientists at the Institute of Molecular Biotechnology in Vienna, Austria, have grown three-dimensional human brain tissues from stem cells. The tissues form discrete structures that are seen in the developing brain.

The Vienna researchers found that immature brain cells derived from stem cells self-organize into brain-like tissues in the right culture conditions. The “cerebral organoids,” as the researchers call them, grew to about four millimeters in size and could survive as long as 10 months. For decades, scientists have been able to take cells from animals including humans and grow them in a petri dish, but for the most part this has been done in two dimensions, with the cells grown in a thin layer in petri dishes. But in recent years, researchers have advanced tissue culture techniques so that three-dimensional brain tissue can grow in the lab. The new report from the Austrian team demonstrates that allowing immature brain cells to self-organize yields some of the largest and most complex lab-grown brain tissue, with distinct subregions and signs of functional neurons.

The work, published in Nature on Wednesday, is the latest advance in a field focused on creating more lifelike tissue cultures of neurons and related cells for studying brain function, disease, and repair. With a cultured cell model system that mimics the brain’s natural architecture, researchers would be able to look at how certain diseases occur and screen potential medications for toxicity and efficacy in a more natural setting, says Anja Kunze, a neuroengineer at the University of California, Los Angeles, who has developed three-dimensional brain tissue cultures to study Alzheimer’s disease.

The Austrian researchers coaxed cultured neurons to take on a three-dimensional organization using cell-friendly scaffolding materials in the cultures. The team also let the neuron progenitors control their own fate. “Stem cells have an amazing ability to self-organize,” said study first author Madeline Lancaster at a press briefing on Tuesday. Other groups have also recently seen success in allowing progenitor cells to self-organize, leading to reports of primitive eye structures, liver buds, and more (see “Growing Eyeballs” and “A Rudimentary Liver Is Grown from Stem Cells”).

The brain tissue formed discrete regions found in the early developing human brain, including regions that resemble parts of the cortex, the retina, and structures that produce cerebrospinal fluid. At the press briefing, senior author Juergen Knoblich said that while there have been numerous attempts to model human brain tissue in a culture using human cells, the complex human organ has proved difficult to replicate. Knoblich says the proto-brain resembles the developmental stage of a nine-week-old fetus’s brain.

While Knoblich’s group is focused on developmental questions, other groups are developing three-dimensional brain tissue cultures with the hopes of treating degenerative diseases or brain injury. A group at Georgia Institute of Technology has developed a three-dimensional neural culture to study brain injury, with the goal of identifying biomarkers that could be used to diagnose brain injury and potential drug targets for medications that can repair injured neurons. “It’s important to mimic the cellular architecture of the brain as much as possible because the mechanical response of that tissue is very dependent on its 3-D structure,” says biomedical engineer Michelle LaPlaca of Georgia Tech. Physical insults on cells in a three-dimensional culture will put stress on connections between cells and supporting material known as the extracellular matrix, she says.

Read the entire article here.

Image: Cerebral organoid derived from stem cells containing different brain regions. Courtesy of Japan Times.

Name a Planet

So, you’d like to name a planet, perhaps after your grandmother or a current girlfriend or boyfriend. Here’s how, below. But forget trying to name a celestial object after your pet: “Mr. Tiddles”, “Snowy” and “Rex” are out.

From the Independent:

The international institute responsible for naming planets, stars and other celestial bodies has announced that the public will now be able to submit their own suggestions on what to call new discoveries in space.

Founded in 1919, the Paris-based International Astronomical Union (IAU) has more than 11,000 members in more than 90 countries, making it the de facto authority in the field.

Without any official laws enforcing the use of planetary names, the decisions on what to call new discoveries are usually a matter of consensus.

The changes announced by the IAU aim to make the public’s involvement more streamlined, asking that submissions be “sent to iaupublic@iap.fr” and promising that they will be “handled on a case-by-case basis”.

“The IAU fully supports the involvement of the general public, whether directly or through an independent organised vote, in the naming of planetary satellites, newly discovered planets, and their host stars,” says the statement.

The following guidelines have been offered for submission by would-be planet-namers (a rough sketch of how the more mechanical rules could be checked follows the list):

• 16 characters or less in length;
• preferably one word;
• pronounceable (in as many languages as possible);
• non-offensive in any language or culture;
• not too similar to an existing name of an astronomical object;
• names of pet animals are discouraged;
• names of a purely or principally commercial nature are not allowed.
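Several of those rules are mechanical enough to check automatically. The sketch below is a hypothetical helper, not an IAU tool; in particular, the list of existing names is a stand-in for the IAU’s own registries, and judgement calls such as pronounceability, offensiveness and commercial intent are left to a human reviewer.

```python
def check_name(name, existing_names):
    """Flag obvious violations of the more mechanical naming guidelines.

    Illustrative only: pronounceability, offensiveness and commercial
    intent still need human judgement, and the registry is a stand-in.
    """
    problems = []
    if len(name) > 16:
        problems.append("longer than 16 characters")
    if len(name.split()) > 1:
        problems.append("more than one word")
    if name.lower() in (n.lower() for n in existing_names):
        problems.append("already the name of an astronomical object")
    return problems

existing = ["Styx", "Kerberos", "Vulcan"]            # stand-in for the IAU's registries
for candidate in ["Mr. Tiddles", "Vulcan", "Arrakis"]:
    issues = check_name(candidate, existing)
    print(candidate, "->", issues or "passes the mechanical checks")
```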

Despite this nod towards a democratic process, the IAU recently vetoed naming a newly discovered moon orbiting Pluto after Vulcan, the home-planet of Spock from the Star Trek franchise.

William Shatner, the actor who played Captain James Kirk in the show, launched a campaign via Twitter after the SETI Institute discovered the new moons and created an online poll to name them.

Submitted names had to be picked from classical mythology and have an association with the underworld. ‘Vulcan’ easily won the contest with 174,062 votes, followed by ‘Cerberus’ with 99,432 votes, and ‘Styx’ with 87,858 votes.

However, the IAU chose ‘Kerberos’ and ‘Styx’ as the names for the new moons, rejecting Vulcan as it “had already been used for a hypothetical planet between Mercury and the Sun.”

This planet was later found not to exist, but the term ‘vulcanoid’ is still used to refer to asteroids within the orbit of Mercury. Shatner responded to the IAU’s decision by tweeting, “They didn’t name the moon Vulcan. I’m sad. Who’d ever thought I’d be betrayed by geeks and nerds?”

Read the entire article here.

Fields from Dreams

It’s time to abandon the notion that you, and everything around you, are made up of tiny particles and their subatomic constituents. You are nothing more than perturbations in the field, or fields. Nothing more. Theoretical physicist Sean Carroll explains all.

From Symmetry:

When scientists talk to non-scientists about particle physics, they talk about the smallest building blocks of matter: what you get when you divide cells and molecules into tinier and tinier bits until you can’t divide them any more.

That’s one way of looking at things. But it’s not really the way things are, said Caltech theoretical physicist Sean Carroll in a lecture at Fermilab. And if physicists really want other people to appreciate the discovery of the Higgs boson, he said, it’s time to tell them the rest of the story.

“To understand what is going on, you actually need to give up a little bit on the notion of particles,” Carroll said in the June lecture.

Instead, think in terms of fields.

You’re already familiar with some fields. When you hold two magnets close together, you can feel their attraction or repulsion before they even touch—an interaction between two magnetic fields. Likewise, you know that when you jump in the air, you’re going to come back down. That’s because you live in Earth’s gravitational field.

Carroll’s stunner, at least to many non-scientists, is this: Every particle is actually a field. The universe is full of fields, and what we think of as particles are just excitations of those fields, like waves in an ocean. An electron, for example, is just an excitation of an electron field.

This may seem counterintuitive, but seeing the world in terms of fields actually helps make sense of some otherwise confusing facts of particle physics.

When a radioactive material decays, for example, we think of it as spitting out different kinds of particles. Neutrons decay into protons, electrons and neutrinos. Those protons, electrons and neutrinos aren’t hiding inside neutrons, waiting to get out. Yet they appear when neutrons decay.

If we think in terms of fields, this sudden appearance of new kinds of particles starts to make more sense. The energy and excitation of one field transfers to others as they vibrate against each other, making it seem like new types of particles are appearing.

Thinking in fields provides a clearer picture of how scientists are able to make massive particles like Higgs bosons in the Large Hadron Collider. The LHC smashes bunches of energetic protons into one another, and scientists study those collisions.

“There’s an analogy that’s often used here,” Carroll said, “that doing particle physics is like smashing two watches together and trying to figure out how watches work by watching all the pieces fall apart.

“This analogy is terrible for many reasons,” he said. “The primary one is that what’s coming out when you smash particles together is not what was inside the original particles. … [Instead,] it’s like you smash two Timex watches together and a Rolex pops out.”

What’s really happening in LHC collisions is that especially excited excitations of a field—the energetic protons—are vibrating together and transferring their energy to adjacent fields, forming new excitations that we see as new particles—such as Higgs bosons.

Thinking in fields can also better explain how the Higgs works. Higgs bosons themselves do not give other particles mass by, say, sticking to them in clumps. Instead, the Higgs field interacts with other fields, giving them—and, by extension, their particles—mass.

Read the entire article here.

Image: Iron filings tracing magnetic field lines between two bar magnets. Courtesy of Wikimedia.

The View From Saturn

As Carl Sagan would no doubt have had us remember, we are still collectively residents of a very small, very pale blue dot. The image of planet Earth was taken by the Cassini spacecraft, which has been busy circling and mapping the Saturnian system over the last several years. Cassini turned the attention of its cameras to our home on July 19, 2013 for this portrait.

From NASA:

Color and black-and-white images of Earth taken by two NASA interplanetary spacecraft on July 19 show our planet and its moon as bright beacons from millions of miles away in space.

NASA’s Cassini spacecraft captured the color images of Earth and the moon from its perch in the Saturn system nearly 900 million miles (1.5 billion kilometers) away. MESSENGER, the first probe to orbit Mercury, took a black-and-white image from a distance of 61 million miles (98 million kilometers) as part of a campaign to search for natural satellites of the planet.

In the Cassini images Earth and the moon appear as mere dots — Earth a pale blue and the moon a stark white, visible between Saturn’s rings. It was the first time Cassini’s highest-resolution camera captured Earth and its moon as two distinct objects.

It also marked the first time people on Earth had advance notice their planet’s portrait was being taken from interplanetary distances. NASA invited the public to celebrate by finding Saturn in their part of the sky, waving at the ringed planet and sharing pictures over the Internet. More than 20,000 people around the world participated.

“We can’t see individual continents or people in this portrait of Earth, but this pale blue dot is a succinct summary of who we were on July 19,” said Linda Spilker, Cassini project scientist, at NASA’s Jet Propulsion Laboratory in Pasadena, Calif. “Cassini’s picture reminds us how tiny our home planet is in the vastness of space, and also testifies to the ingenuity of the citizens of this tiny planet to send a robotic spacecraft so far away from home to study Saturn and take a look-back photo of Earth.”

Pictures of Earth from the outer solar system are rare because from that distance, Earth appears very close to our sun. A camera’s sensitive detectors can be damaged by looking directly at the sun, just as a human being can damage his or her retina by doing the same. Cassini was able to take this image because the sun had temporarily moved behind Saturn from the spacecraft’s point of view and most of the light was blocked.

A wide-angle image of Earth will become part of a multi-image picture, or mosaic, of Saturn’s rings, which scientists are assembling. This image is not expected to be available for several weeks because of the time-consuming challenges involved in blending images taken in changing geometry and at vastly different light levels, with faint and extraordinarily bright targets side by side.

“It thrills me to no end that people all over the world took a break from their normal activities to go outside and celebrate the interplanetary salute between robot and maker that these images represent,” said Carolyn Porco, Cassini imaging team lead at the Space Science Institute in Boulder, Colo. “The whole event underscores for me our ‘coming of age’ as planetary explorers.”

Read the entire article here.

Image: In this rare image taken on July 19, 2013, the wide-angle camera on NASA’s Cassini spacecraft has captured Saturn’s rings and our planet Earth and its moon in the same frame. Courtesy: NASA/JPL-Caltech/Space Science Institute.

Our Beautiful Galaxy

We should post stunning images of the night sky like these more often. For most of us, unfortunately, light pollution from our surroundings hides beautiful vistas like these from the naked eye.

Image: Receiving the Galactic Beam. The Milky Way appears to line up with the giant 64-m dish of the radio telescope at Parkes Observatory in Australia. As can be seen from the artificial lights around the telescope, light pollution is not a problem for radio astronomers. Radio and microwave interference is a big issue however, as it masks the faint natural emissions from distant objects in space. For this reason many radio observatories ban mobile phone use on their premises. Courtesy: Wayne England / The Royal Observatory Greenwich / Telegraph.

Earth as the New Venus

New research models show just how precarious our planet’s climate really is. Runaway greenhouse warming would make a predicted rise of two to six feet in average sea levels over the next 50-100 years seem like a puddle at the local splash pool.

From ars technica:

With the explosion of exoplanet discoveries, researchers have begun to seriously revisit what it takes to make a planet habitable, defined as being able to support liquid water. At a basic level, the amount of light a planet receives sets its temperature. But real worlds aren’t actually basic—they have atmospheres, reflect some of that light back into space, and experience various feedbacks that affect the temperature.

Attempts to incorporate all those complexities into models of other planets have produced some unexpected results. Some even suggest that Earth teeters on the edge of experiencing a runaway greenhouse, one that would see its oceans boil off. The fact that large areas of the planet are covered in ice may make that conclusion seem a bit absurd, but a second paper looks at the problem from a somewhat different angle—and comes to the same conclusion. If it weren’t for clouds and our nitrogen-rich atmosphere, the Earth might be an uninhabitable hell right now.

The new work focuses on a very simple model of an atmosphere: a linear column of nothing but water vapor. This clearly doesn’t capture the complex dynamics of weather and the different amounts of light to reach the poles, but it does include things like the amount of light scattered back out into space and the greenhouse impact of the water vapor. These sorts of calculations are simple enough that they were first done decades ago, but the authors note that this particular problem hadn’t been revisited in 25 years. Our knowledge of how water vapor absorbs both visible and infrared light has improved over that time.

Water vapor, like other greenhouse gasses, allows visible light to reach the surface of a planet, but it absorbs most of the infrared light that gets emitted back toward space. Only a narrow window, centered around 10 micrometer wavelengths, makes it back out to space. Once the incoming energy gets larger than the amount that can escape, the end result is a runaway greenhouse: heat evaporates more surface water, which absorbs more infrared, trapping even more heat. At some point, the atmosphere gets so filled with water vapor that light no longer even reaches the surface, instead getting absorbed by the atmosphere itself.
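In schematic terms (this is the standard planetary energy-balance argument, not the specific column model used in the paper), a planet with albedo \(\alpha\) receiving a stellar flux \(S\) absorbs \((1-\alpha)S/4\) per unit surface area, while a water-saturated atmosphere can radiate no more than some limiting infrared flux \(F_{\max}\) back to space. The runaway condition is simply

\[
\frac{(1-\alpha)\,S}{4} > F_{\max},
\]

because any extra absorbed energy can no longer be balanced by emission and instead goes into evaporating more water, closing the feedback loop described above.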

The model shows that, once temperatures reach 1,800K, a second window through the water vapor opens up at about four microns, which allows additional energy to escape into space. The authors suggest that this could be used when examining exoplanets, as high emissions in this region could be taken as an indication that the planet was undergoing a runaway greenhouse.

The authors also used the model to look at what Earth would be like if it had a cloud-free, water atmosphere. The surprise was that the updated model indicated that this alternate-Earth atmosphere would absorb 30 percent more energy than previous estimates suggested. That’s enough to make a runaway greenhouse atmosphere stable at the Earth’s distance from the Sun.

So, why is the Earth so relatively temperate? The authors added a few additional factors to their model to find out. Additional greenhouse gasses like carbon dioxide and methane made runaway heating more likely, while nitrogen scattered enough light to make it less likely. The net result is that, under an Earth-like atmosphere composition, our planet should experience a runaway greenhouse. (In fact, greenhouse gasses can lower the barrier between a temperate climate and a runaway greenhouse, although only at concentrations much higher than we’ll reach even if we burn all the fossil fuels available.) But we know it hasn’t. “A runaway greenhouse has manifestly not occurred on post-Hadean Earth,” the authors note. “It would have sterilized Earth (there is observer bias).”

So, what’s keeping us cool? The authors suggest two things. The first is that our atmosphere isn’t uniformly saturated with water; some areas are less humid and allow more heat to radiate out into space. The other factor is the existence of clouds. Depending on their properties, clouds can either insulate the planet or reflect sunlight back into space. On balance, however, it appears they are key to keeping our planet’s climate moderate.

But clouds won’t help us out indefinitely. Long before the Sun expands and swallows the Earth, the amount of light it emits will rise enough to make a runaway greenhouse more likely. The authors estimate that, with an all-water atmosphere, we’ve got about 1.5 billion years until the Earth is sterilized by skyrocketing temperatures. If other greenhouse gases are present, then that day will come even sooner.

The authors don’t expect that this will be the last word on exoplanet conditions—in fact, they revisited waterlogged atmospheres in the hopes of stimulating greater discussion of them. But the key to understanding exoplanets will ultimately involve adapting the planetary atmospheric models we’ve built to understand the Earth’s climate. With full, three-dimensional circulation of the atmosphere, these models can provide a far more complete picture of the conditions that could prevail under a variety of circumstances. Right now, they’re specialized to model the Earth, but work is underway to change that.

Read the entire article here.

Image: Venus shrouded in perennial clouds of carbon dioxide, sulfur dioxide and sulfuric acid, as seen by the Messenger probe, 2004. Courtesy of Wikipedia.

Dopamine on the Mind

Dopamine is one of the brain’s key signalling chemicals. And, because of its central role in the brain’s risk-reward circuitry, it attracts a great deal of attention — both in neuroscience research and in the public consciousness.

From Slate:

In a brain that people love to describe as “awash with chemicals,” one chemical always seems to stand out. Dopamine: the molecule behind all our most sinful behaviors and secret cravings. Dopamine is love. Dopamine is lust. Dopamine is adultery. Dopamine is motivation. Dopamine is attention. Dopamine is feminism. Dopamine is addiction.

My, dopamine’s been busy.

Dopamine is the one neurotransmitter that everyone seems to know about. Vaughan Bell once called it the Kim Kardashian of molecules, but I don’t think that’s fair to dopamine. Suffice it to say, dopamine’s big. And every week or so, you’ll see a new article come out all about dopamine.

So is dopamine your cupcake addiction? Your gambling? Your alcoholism? Your sex life? The reality is dopamine has something to do with all of these. But it is none of them. Dopamine is a chemical in your body. That’s all. But that doesn’t make it simple.

What is dopamine? Dopamine is one of the chemical signals that pass information from one neuron to the next in the tiny spaces between them. When it is released from the first neuron, it floats into the space (the synapse) between the two neurons, and it bumps against receptors for it on the other side that then send a signal down the receiving neuron. That sounds very simple, but when you scale it up from a single pair of neurons to the vast networks in your brain, it quickly becomes complex. The effects of dopamine release depend on where it’s coming from, where the receiving neurons are going and what type of neurons they are, what receptors are binding the dopamine (there are five known types), and what role both the releasing and receiving neurons are playing.

And dopamine is busy! It’s involved in many different important pathways. But when most people talk about dopamine, particularly when they talk about motivation, addiction, attention, or lust, they are talking about the dopamine pathway known as the mesolimbic pathway, which starts with cells in the ventral tegmental area, buried deep in the middle of the brain, which send their projections out to places like the nucleus accumbens and the cortex.

Increases in dopamine release in the nucleus accumbens occur in response to sex, drugs, and rock and roll. And dopamine signaling in this area is changed during the course of drug addiction. All abused drugs, from alcohol to cocaine to heroin, increase dopamine in this area in one way or another, and many people like to describe a spike in dopamine as “motivation” or “pleasure.” But that’s not quite it. Really, dopamine is signaling feedback for predicted rewards. If you, say, have learned to associate a cue (like a crack pipe) with a hit of crack, you will start getting increases in dopamine in the nucleus accumbens in response to the sight of the pipe, as your brain predicts the reward. But if you then don’t get your hit, well, then dopamine can decrease, and that’s not a good feeling. So you’d think that maybe dopamine predicts reward. But again, it gets more complex.

For example, dopamine can increase in the nucleus accumbens in people with post-traumatic stress disorder when they are experiencing heightened vigilance and paranoia. So you might say, in this brain area at least, dopamine isn’t addiction or reward or fear. Instead, it’s what we call salience. Salience is more than attention: It’s a sign of something that needs to be paid attention to, something that stands out. This may be part of the mesolimbic role in attention deficit hyperactivity disorder and also a part of its role in addiction.
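
The “feedback for predicted rewards” idea has a standard quantitative reading in the neuroscience literature: dopamine firing tracks a reward prediction error, the gap between what a cue led you to expect and what actually arrived. The toy sketch below illustrates that reading only; it is not from the Slate piece, and the function name and numbers are invented.

```python
# A toy reward-prediction-error calculation. This is one common formalization
# of the "feedback for predicted rewards" idea described above; the names and
# numbers are invented for illustration, not taken from the article.

def prediction_error(predicted_reward, actual_reward):
    """Positive when the outcome beats the prediction; negative when an
    expected reward never arrives (the 'not a good feeling' case)."""
    return actual_reward - predicted_reward

# A cue (the sight of the pipe, say) has come to predict a reward of 1.0.
predicted = 1.0

print(prediction_error(predicted, 1.0))  # 0.0: fully predicted reward, little change
print(prediction_error(predicted, 0.0))  # -1.0: the predicted hit never comes, signal dips
print(prediction_error(0.0, 1.0))        # 1.0: an unexpected reward, signal spikes
```

The three cases mirror the article’s examples: a fully predicted reward produces little change, a missing hit produces a dip, and an unexpected reward produces a spike.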

But dopamine itself? It’s not salience. It has far more roles in the brain to play. For example, dopamine plays a big role in starting movement, and the destruction of dopamine neurons in an area of the brain called the substantia nigra is what produces the symptoms of Parkinson’s disease. Dopamine also plays an important role as a hormone, inhibiting prolactin to stop the release of breast milk. Back in the mesolimbic pathway, dopamine can play a role in psychosis, and many antipsychotics for treatment of schizophrenia target dopamine. Dopamine is involved in the frontal cortex in executive functions like attention. In the rest of the body, dopamine is involved in nausea, in kidney function, and in heart function.

With all of these wonderful, interesting things that dopamine does, it gets my goat to see dopamine simplified to things like “attention” or “addiction.” After all, it’s so easy to say “dopamine is X” and call it a day. It’s comforting. You feel like you know the truth at some fundamental biological level, and that’s that. And there are always enough studies out there showing the role of dopamine in X to leave you convinced. But simplifying dopamine, or any chemical in the brain, down to a single action or result gives people a false picture of what it is and what it does. If you think that dopamine is motivation, then more must be better, right? Not necessarily! Because if dopamine is also “pleasure” or “high,” then too much is far too much of a good thing. If you think of dopamine as only being about pleasure or only being about attention, you’ll end up with a false idea of some of the problems involving dopamine, like drug addiction or attention deficit hyperactivity disorder, and you’ll end up with false ideas of how to fix them.

Read the entire article here.

Image: 3D model of dopamine. Courtesy of Wikipedia.

Rewriting Memories

Important new research suggests that traumatic memories can be rewritten. Timing is critical.

From Technology Review:

It was a Saturday night at the New York Psychoanalytic Institute, and the second-floor auditorium held an odd mix of gray-haired, cerebral Upper East Side types and young, scruffy downtown grad students in black denim. Up on the stage, neuroscientist Daniela Schiller, a riveting figure with her long, straight hair and impossibly erect posture, paused briefly from what she was doing to deliver a mini-lecture about memory.

She explained how recent research, including her own, has shown that memories are not unchanging physical traces in the brain. Instead, they are malleable constructs that may be rebuilt every time they are recalled. The research suggests, she said, that doctors (and psychotherapists) might be able to use this knowledge to help patients block the fearful emotions they experience when recalling a traumatic event, converting chronic sources of debilitating anxiety into benign trips down memory lane.

And then Schiller went back to what she had been doing, which was providing a slamming, rhythmic beat on drums and backup vocals for the Amygdaloids, a rock band composed of New York City neuroscientists. During their performance at the institute’s second annual “Heavy Mental Variety Show,” the band blasted out a selection of its greatest hits, including songs about cognition (“Theory of My Mind”), memory (“A Trace”), and psychopathology (“Brainstorm”).

“Just give me a pill,” Schiller crooned at one point, during the chorus of a song called “Memory Pill.” “Wash away my memories …”

The irony is that if research by Schiller and others holds up, you may not even need a pill to strip a memory of its power to frighten or oppress you.

Schiller, 40, has been in the vanguard of a dramatic reassessment of how human memory works at the most fundamental level. Her current lab group at Mount Sinai School of Medicine, her former colleagues at New York University, and a growing army of like-minded researchers have marshaled a pile of data to argue that we can alter the emotional impact of a memory by adding new information to it or recalling it in a different context. This hypothesis challenges 100 years of neuroscience and overturns cultural touchstones from Marcel Proust to best-selling memoirs. It changes how we think about the permanence of memory and identity, and it suggests radical nonpharmacological approaches to treating pathologies like post-traumatic stress disorder, other fear-based anxiety disorders, and even addictive behaviors.

In a landmark 2010 paper in Nature, Schiller (then a postdoc at New York University) and her NYU colleagues, including Joseph E. LeDoux and Elizabeth A. Phelps, published the results of human experiments indicating that memories are reshaped and rewritten every time we recall an event. And, the research suggested, if mitigating information about a traumatic or unhappy event is introduced within a narrow window of opportunity after its recall—during the few hours it takes for the brain to rebuild the memory in the biological brick and mortar of molecules—the emotional experience of the memory can essentially be rewritten.

“When you affect emotional memory, you don’t affect the content,” Schiller explains. “You still remember perfectly. You just don’t have the emotional memory.”

Fear training

The idea that memories are constantly being rewritten is not entirely new. Experimental evidence to this effect dates back at least to the 1960s. But mainstream researchers tended to ignore the findings for decades because they contradicted the prevailing scientific theory about how memory works.

That view began to dominate the science of memory at the beginning of the 20th century. In 1900, two German scientists, Georg Elias Müller and Alfons Pilzecker, conducted a series of human experiments at the University of Göttingen. Their results suggested that memories were fragile at the moment of formation but were strengthened, or consolidated, over time; once consolidated, these memories remained essentially static, permanently stored in the brain like a file in a cabinet from which they could be retrieved when the urge arose.

It took decades of painstaking research for neuroscientists to tease apart a basic mechanism of memory to explain how consolidation occurred at the level of neurons and proteins: an experience entered the neural landscape of the brain through the senses, was initially “encoded” in a central brain apparatus known as the hippocampus, and then migrated—by means of biochemical and electrical signals—to other precincts of the brain for storage. A famous chapter in this story was the case of “H.M.,” a young man whose hippocampus was removed during surgery in 1953 to treat debilitating epileptic seizures; although physiologically healthy for the remainder of his life (he died in 2008), H.M. was never again able to create new long-term memories, other than to learn new motor skills.

Subsequent research also made clear that there is no single thing called memory but, rather, different types of memory that achieve different biological purposes using different neural pathways. “Episodic” memory refers to the recollection of specific past events; “procedural” memory refers to the ability to remember specific motor skills like riding a bicycle or throwing a ball; fear memory, a particularly powerful form of emotional memory, refers to the immediate sense of distress that comes from recalling a physically or emotionally dangerous experience. Whatever the memory, however, the theory of consolidation argued that it was an unchanging neural trace of an earlier event, fixed in long-term storage. Whenever you retrieved the memory, whether it was triggered by an unpleasant emotional association or by the seductive taste of a madeleine, you essentially fetched a timeless narrative of an earlier event. Humans, in this view, were the sum total of their fixed memories. As recently as 2000 in Science, in a review article titled “Memory—A Century of Consolidation,” James L. McGaugh, a leading neuroscientist at the University of California, Irvine, celebrated the consolidation hypothesis for the way that it “still guides” fundamental research into the biological process of long-term memory.

As it turns out, Proust wasn’t much of a neuroscientist, and consolidation theory couldn’t explain everything about memory. This became apparent during decades of research into what is known as fear training.

Schiller gave me a crash course in fear training one afternoon in her Mount Sinai lab. One of her postdocs, Dorothee Bentz, strapped an electrode onto my right wrist in order to deliver a mild but annoying shock. She also attached sensors to several fingers on my left hand to record my galvanic skin response, a measure of physiological arousal and fear. Then I watched a series of images—blue and purple cylinders—flash by on a computer screen. It quickly became apparent that the blue cylinders often (but not always) preceded a shock, and my skin conductivity readings reflected what I’d learned. Every time I saw a blue cylinder, I became anxious in anticipation of a shock. The “learning” took no more than a couple of minutes, and Schiller pronounced my little bumps of anticipatory anxiety, charted in real time on a nearby monitor, a classic response of fear training. “It’s exactly the same as in the rats,” she said.

In the 1960s and 1970s, several research groups used this kind of fear memory in rats to detect cracks in the theory of memory consolidation. In 1968, for example, Donald J. Lewis of Rutgers University led a study showing that you could make the rats lose the fear associated with a memory if you gave them a strong electroconvulsive shock right after they were induced to retrieve that memory; the shock produced an amnesia about the previously learned fear. Giving a shock to animals that had not retrieved the memory, in contrast, did not cause amnesia. In other words, a strong shock timed to occur immediately after a memory was retrieved seemed to have a unique capacity to disrupt the memory itself and allow it to be reconsolidated in a new way. Follow-up work in the 1980s confirmed some of these observations, but they lay so far outside mainstream thinking that they barely received notice.

Moment of silence

At the time, Schiller was oblivious to these developments. A self-described skateboarding “science geek,” she grew up in Rishon LeZion, Israel’s fourth-largest city, on the coastal plain a few miles southeast of Tel Aviv. She was the youngest of four children of a mother from Morocco and a “culturally Polish” father from Ukraine—“a typical Israeli melting pot,” she says. As a tall, fair-skinned teenager with European features, she recalls feeling estranged from other neighborhood kids because she looked so German.

Schiller remembers exactly when her curiosity about the nature of human memory began. She was in the sixth grade, and it was the annual Holocaust Memorial Day in Israel. For a school project, she asked her father about his memories as a Holocaust survivor, and he shrugged off her questions. She was especially puzzled by her father’s behavior at 11 a.m., when a simultaneous eruption of sirens throughout Israel signals the start of a national moment of silence. While everyone else in the country stood up to honor the victims of genocide, he stubbornly remained seated at the kitchen table as the sirens blared, drinking his coffee and reading the newspaper.

“The Germans did something to my dad, but I don’t know what because he never talks about it,” Schiller told a packed audience in 2010 at The Moth, a storytelling event.

During her compulsory service in the Israeli army, she organized scientific and educational conferences, which led to studies in psychology and philosophy at Tel Aviv University; during that same period, she procured a set of drums and formed her own Hebrew rock band, the Rebellion Movement. Schiller went on to receive a PhD in psychobiology from Tel Aviv University in 2004. That same year, she recalls, she saw the movie Eternal Sunshine of the Spotless Mind, in which a young man undergoes treatment with a drug that erases all memories of a former girlfriend and their painful breakup. Schiller heard (mistakenly, it turns out) that the premise of the movie had been based on research conducted by Joe LeDoux, and she eventually applied to NYU for a postdoctoral fellowship.

In science as in memory, timing is everything. Schiller arrived in New York just in time for the second coming of memory reconsolidation in neuroscience.

Altering the story

The table had been set for Schiller’s work on memory modification in 2000, when Karim Nader, a postdoc in LeDoux’s lab, suggested an experiment testing the effect of a drug on the formation of fear memories in rats. LeDoux told Nader in no uncertain terms that he thought the idea was a waste of time and money. Nader did the experiment anyway. It ended up getting published in Nature and sparked a burst of renewed scientific interest in memory reconsolidation (see “Manipulating Memory,” May/June 2009).

The rats had undergone classic fear training—in an unpleasant twist on Pavlovian conditioning, they had learned to associate an auditory tone with an electric shock. But right after the animals retrieved the fearsome memory (the researchers knew they had done so because they froze when they heard the tone), Nader injected a drug that blocked protein synthesis directly into their amygdala, the part of the brain where fear memories are believed to be stored. Surprisingly, that appeared to pave over the fearful association. The rats no longer froze in fear of the shock when they heard the sound cue.

Decades of research had established that long-term memory consolidation requires the synthesis of proteins in the brain’s memory pathways, but no one knew that protein synthesis was required after the retrieval of a memory as well—which implied that the memory was being consolidated then, too. Nader’s experiments also showed that blocking protein synthesis prevented the animals from recalling the fearsome memory only if they received the drug at the right time, shortly after they were reminded of the fearsome event. If Nader waited six hours before giving the drug, it had no effect and the original memory remained intact. This was a big biochemical clue that at least some forms of memories essentially had to be neurally rewritten every time they were recalled.

When Schiller arrived at NYU in 2005, she was asked by Elizabeth Phelps, who was spearheading memory research in humans, to extend Nader’s findings and test the potential of a drug to block fear memories. The drug used in the rodent experiment was much too toxic for human use, but a class of antianxiety drugs known as beta-adrenergic antagonists (or, in common parlance, “beta blockers”) had potential; among these drugs was propranolol, which had previously been approved by the FDA for the treatment of panic attacks and stage fright. Schiller immediately set out to test the effect of propranolol on memory in humans, but she never actually performed the experiment because of prolonged delays in getting institutional approval for what was then a pioneering form of human experimentation. “It took four years to get approval,” she recalls, “and then two months later, they took away the approval again. My entire postdoc was spent waiting for this experiment to be approved.” (“It still hasn’t been approved!” she adds.)

While waiting for the approval that never came, Schiller began to work on a side project that turned out to be even more interesting. It grew out of an offhand conversation with a colleague about some anomalous data described at a meeting of LeDoux’s lab: a group of rats “didn’t behave as they were supposed to” in a fear experiment, Schiller says.

The data suggested that a fear memory could be disrupted in animals even without the use of a drug that blocked protein synthesis. Schiller used the kernel of this idea to design a set of fear experiments in humans, while Marie-H. Monfils, a member of the LeDoux lab, simultaneously pursued a parallel line of experimentation in rats. In the human experiments, volunteers were shown a blue square on a computer screen and then given a shock. Once the blue square was associated with an impending shock, the fear memory was in place. Schiller went on to show that if she repeated the sequence that produced the fear memory the following day but broke the association within a narrow window of time—that is, showed the blue square without delivering the shock—this new information was incorporated into the memory.

Here, too, the timing was crucial. If the blue square that wasn’t followed by a shock was shown within 10 minutes of the initial memory recall, the human subjects reconsolidated the memory without fear. If it happened six hours later, the initial fear memory persisted. Put another way, intervening during the brief window when the brain was rewriting its memory offered a chance to revise the initial memory itself while diminishing the emotion (fear) that came with it. By mastering the timing, the NYU group had essentially created a scenario in which humans could rewrite a fearsome memory and give it an unfrightening ending. And this new ending was robust: when Schiller and her colleagues called their subjects back into the lab a year later, they were able to show that the fear associated with the memory was still blocked.

The study, published in Nature in 2010, made clear that reconsolidation of memory didn’t occur only in rats.

Read the entire article here.

Atlas Shrugs

She or he is 6 feet 2 inches tall, weighs 330 pounds, and goes by the name Atlas.

[tube]zkBnFPBV3f0[/tube]

Surprisingly, this person is not the new draft pick for the Denver Broncos or Ronaldo’s replacement at Real Madrid. Well, it’s not really a person, not yet anyway. Atlas is a humanoid robot. Its primary “parents” are Boston Dynamics and DARPA (Defense Advanced Research Projects Agency), a unit of the U.S. Department of Defense. The collaboration unveiled Atlas to the public on July 11, 2013.

From the New York Times:

Moving its hands as if it were dealing cards and walking with a bit of a swagger, a Pentagon-financed humanoid robot named Atlas made its first public appearance on Thursday.

C-3PO it’s not. But its creators have high hopes for the hydraulically powered machine. The robot — which is equipped with both laser and stereo vision systems, as well as dexterous hands — is seen as a new tool that can come to the aid of humanity in natural and man-made disasters.

Atlas is being designed to perform rescue functions in situations where humans cannot survive. The Pentagon has devised a challenge in which competing teams of technologists program it to do things like shut off valves or throw switches, open doors, operate power equipment and travel over rocky ground. The challenge comes with a $2 million prize.

Some see Atlas’s unveiling as a giant — though shaky — step toward the long-anticipated age of humanoid robots.

“People love the wizards in Harry Potter or ‘Lord of the Rings,’ but this is real,” said Gary Bradski, a Silicon Valley artificial intelligence specialist and a co-founder of Industrial Perception Inc., a company that is building a robot able to load and unload trucks. “A new species, Robo sapiens, are emerging,” he said.

The debut of Atlas on Thursday was a striking example of how computers are beginning to grow legs and move around in the physical world.

Although robotic planes already fill the air and self-driving cars are being tested on public roads, many specialists in robotics believe that the learning curve toward useful humanoid robots will be steep. Still, many see them fulfilling the needs of humans — and the dreams of science fiction lovers — sooner rather than later.

Walking on two legs, they have the potential to serve as department store guides, assist the elderly with daily tasks or carry out nuclear power plant rescue operations.

“Two weeks ago 19 brave firefighters lost their lives,” said Gill Pratt, a program manager at the Defense Advanced Research Projects Agency, part of the Pentagon, which oversaw Atlas’s design and financing. “A number of us who are in the robotics field see these events in the news, and the thing that touches us very deeply is a single kind of feeling which is, can’t we do better? All of this technology that we work on, can’t we apply that technology to do much better? I think the answer is yes.”

Dr. Pratt equated the current version of Atlas to a 1-year-old.

“A 1-year-old child can barely walk, a 1-year-old child falls down a lot,” he said. “As you see these machines and you compare them to science fiction, just keep in mind that this is where we are right now.”

But he added that the robot, which has a brawny chest with a computer and is lit by bright blue LEDs, would learn quickly and would soon have the talents that are closer to those of a 2-year-old.

The event on Thursday was a “graduation” ceremony for the Atlas walking robot at the office of Boston Dynamics, the robotics research firm that led the design of the system. The demonstration began with Atlas shrouded under a bright red sheet. After Dr. Pratt finished his remarks, the sheet was pulled back revealing a machine that looked like a metallic body builder, with an oversized chest and powerful long arms.

Read the entire article here.

Helping the Honeybees

Agricultural biotechnology giant Monsanto is joining efforts to help the honeybee. Honeybees the world over have been suffering from a widespread and catastrophic condition often referred to as colony collapse disorder.

From Technology Review:

Beekeepers are desperately battling colony collapse disorder, a complex condition that has been killing bees in large swaths and could ultimately have a massive effect on people, since honeybees pollinate a significant portion of the food that humans consume.

A new weapon in that fight could be RNA molecules that kill a troublesome parasite by disrupting the way its genes are expressed. Monsanto and others are developing the molecules as a means to kill the parasite, a mite that feeds on honeybees.

The killer molecule, if it proves to be efficient and passes regulatory hurdles, would offer welcome respite. Bee colonies have been dying in alarming numbers for several years, and many factors are contributing to this decline. But while beekeepers struggle with malnutrition, pesticides, viruses, and other issues in their bee stocks, one problem that seems to be universal is the Varroa mite, an arachnid that feeds on the blood of developing bee larvae.

“Hives can survive the onslaught of a lot of these insults, but with Varroa, they can’t last,” says Alan Bowman, a University of Aberdeen molecular biologist in Scotland, who is studying gene silencing as a means to control the pest.

The Varroa mite debilitates colonies by hampering the growth of young bees and increasing the lethality of the viruses that it spreads. “Bees can quite happily survive with these viruses, but now, in the presence of Varroa, these viruses become lethal,” says Bowman. Once a hive is infested with Varroa, it will die within two to four years unless a beekeeper takes active steps to control it, he says.

One of the weapons beekeepers can use is a pesticide that kills mites, but “there’s always the concern that mites will become resistant to the very few mitocides that are available,” says Tom Rinderer, who leads research on honeybee genetics at the U.S. Department of Agriculture Research Service in Baton Rouge, Louisiana. And new pesticides to kill mites are not easy to come by, in part because mites and bees are found in neighboring branches of the animal tree. “Pesticides are really difficult for chemical companies to develop because of the relatively close relationship between the Varroa and the bee,” says Bowman.

RNA interference could be a more targeted and effective way to combat the mites. It is a natural process in plants and animals that normally defends against viruses and potentially dangerous bits of DNA that move within genomes. Based upon their nucleotide sequence, interfering RNAs signal the destruction of the specific gene products, thus providing a species-specific self-destruct signal. In recent years, biologists have begun to explore this process as a possible means to turn off unwanted genes in humans (see “Gene-Silencing Technique Targets Scarring”) and to control pests in agricultural plants (see “Crops that Shut Down Pests’ Genes”).  Using the technology to control pests in agricultural animals would be a new application.

In 2011 Monsanto, the maker of herbicides and genetically engineered seeds, bought an Israeli company called Beeologics, which had developed an RNA interference technology that can be fed to bees through sugar water. The idea is that when a nurse bee spits this sugar water into each cell of a honeycomb where a queen bee has laid an egg, the resulting larvae will consume the RNA interference treatment. With the right sequence in the interfering RNA, the treatment will be harmless to the larvae, but when a mite feeds on it, the pest will ingest its own self-destruct signal.

The RNA interference technology would not be carried from generation to generation. “It’s a transient effect; it’s not a genetically modified organism,” says Bowman.

Monsanto says it has identified a few self-destruct triggers to explore by looking at genes that are fundamental to the biology of the mite. “Something in reproduction or egg laying or even just basic housekeeping genes can be a good target provided they have enough difference from the honeybee sequence,” says Greg Heck, a researcher at Monsanto.
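
The target-selection criterion Heck describes, having “enough difference from the honeybee sequence,” is at heart a sequence-comparison screen. Purely as an illustration (the RNA sequences, the 19-nucleotide candidate length, and the mismatch threshold below are all invented assumptions, not data from the article or from Monsanto or Beeologics), such a screen might look like this:

```python
# Illustrative only: a tiny specificity screen in the spirit of the criterion
# Heck describes. The sequences, the candidate length, and the mismatch
# threshold are invented for this sketch.

def min_mismatches(candidate, transcript):
    """Smallest number of mismatches between `candidate` and any
    same-length window of `transcript`."""
    n = len(candidate)
    return min(
        sum(a != b for a, b in zip(candidate, transcript[i:i + n]))
        for i in range(len(transcript) - n + 1)
    )

varroa_target = "AUGGCUUACGAUUCGGCAAUCGUAAGGC"      # made-up fragment of a mite mRNA
honeybee_version = "AUGGCAUACGCUUCAGCUAUCGUGAGGC"   # made-up honeybee counterpart

candidate = varroa_target[3:22]  # a 19-nucleotide candidate drawn from the mite sequence

# Keep the candidate only if it matches the pest perfectly and differs
# from the bee sequence by at least a few bases.
is_specific = (min_mismatches(candidate, varroa_target) == 0
               and min_mismatches(candidate, honeybee_version) >= 3)
print(is_specific)
```

A real screen would work across whole genomes and account for partial matches that can still trigger silencing, but the principle is the same: a candidate must hit the mite and miss the bee.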

Read the entire article here.

Image: Honeybee, Apis mellifera. Courtesy of Wikipedia.

Of Mice and Men

Biomolecular and genetic engineering continue apace. This time researchers have inserted artificially constructed human genes into the cells of living mice.

From the Independent:

Scientists have created genetically-engineered mice with artificial human chromosomes in every cell of their bodies, as part of a series of studies showing that it may be possible to treat genetic diseases with a radically new form of gene therapy.

In one of the unpublished studies, researchers made a human artificial chromosome in the laboratory from chemical building blocks rather than chipping away at an existing human chromosome, indicating the increasingly powerful technology behind the new field of synthetic biology.

The development comes as the Government announces today that it will invest tens of millions of pounds in synthetic biology research in Britain, including an international project to construct all the 16 individual chromosomes of the yeast fungus in order to produce the first synthetic organism with a complex genome.

A synthetic yeast with man-made chromosomes could eventually be used as a platform for making new kinds of biological materials, such as antibiotics or vaccines, while human artificial chromosomes could be used to introduce healthy copies of genes into the diseased organs or tissues of people with genetic illnesses, scientists said.

Researchers involved in the synthetic yeast project emphasised at a briefing in London earlier this week that there are no plans to build human chromosomes and create synthetic human cells in the same way as the artificial yeast project. A project to build human artificial chromosomes is unlikely to win ethical approval in the UK, they said.

However, researchers in the US and Japan are already well advanced in making “mini” human chromosomes called HACs (human artificial chromosomes), by either paring down an existing human chromosome or making them “de novo” in the lab from smaller chemical building blocks.

Natalay Kouprina of the US National Cancer Institute in Bethesda, Maryland, is part of the team that has successfully produced genetically engineered mice with an extra human artificial chromosome in their cells. It is the first time such an advanced form of a synthetic human chromosome made “from scratch” has been shown to work in an animal model, Dr Kouprina said.

“The purpose of developing the human artificial chromosome project is to create a shuttle vector for gene delivery into human cells to study gene function in human cells,” she told The Independent. “Potentially it has applications for gene therapy, for correction of gene deficiency in humans. It is known that there are lots of hereditary diseases due to the mutation of certain genes.”

Read the entire article here.

Image courtesy of Science Daily.